TITLE: Wavefunction Amplitude Intuition
QUESTION [6 upvotes]: Reading the responses to this question: Contradiction in my understanding of wavefunction in finite potential well
it seems people are pretty confident that, e.g., the wavefunction of a particle in a slanted potential well:
makes physical sense, since the system is non-dissipative, so you are more likely to find the particle in a region of higher potential "where its kinetic energy would be lower" in loose terms.
So the probability of finding the particle in some small region near the minimum of the potential is lowest, got it.
How does this reconcile with e.g. the ground state of the quantum simple harmonic oscillator ($\psi \propto e^{-x^2}$)?
In that case we have a situation where the greatest probability of finding the particle is indeed at the minimum of the potential, and so using the idea of classical turning points to determine the maxima of ψ breaks down.
I can't wrap my head around why sometimes the responses to the linked question are fine and dandy, and other times they are manifestly wrong. Is it something to do with my assumption that any state with a given energy would have a higher probability amplitude at higher potential?
REPLY [7 votes]: Having small or large quantum numbers makes the difference.
See also Correspondence principle:
In physics, the correspondence principle states that the behavior
of systems described by the theory of quantum mechanics
(or by the old quantum theory) reproduces classical physics
in the limit of large quantum numbers.
The semi-classical behavior (i.e. the wave function has large amplitude
and long wavelength where the classical particle would move slowly)
is valid only for large quantum numbers $n$.
But usually it is not true for small quantum numbers $n$
(i.e. for the few lowest energy levels).
Then the wave function typically behaves very differently
from a classical particle.
You can see this trend both for the slanted potential well and
for the harmonic oscillator.
The particle in the slanted potential well
behaves very classically for $n=61$ and $n=35$,
but it does not for $n=1$.
The harmonic oscillator behaves quite classically for $n=10$,
but it does not for $n=0, 1, 2$.
TITLE: Plugging a circle into real polynomial
QUESTION [5 upvotes]: I have written some code (attached below) that generates a random real polynomial $P$ with degree and coefficients within some range.
I then plotted and looked at the image $P(S^1)$, with $S^1$ being the unit circle in the complex plane.
To my surprise I got pictures with some interesting properties. (Interesting at least for me; pictures are viewable below.)
I noticed:
The figure is connected (not too surprising)
The figure intersects itself, and each intersection point seems to be crossed exactly twice. (Might be wrong, considering rounding errors etc.)
The intersection points $z_i$ seem to have Im$(z_i) = 0$
Point 1 follows directly from $S^1$ being compact and $P$ continuous. However, I find it harder to justify 2 and 3; in particular, I am not sure I can make such a claim, maybe I was just lucky with my numbers. Therefore I would appreciate it if someone could clarify points 2 and 3 to me, whether those statements are correct and especially why. As always, thanks in advance.
'''
Created on 16 Sep 2017

@author: Imago
'''
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
import random as r

NUMBER_OF_POINTS = 0.0001  # actually the step size between sample points in [0, 1)
RADIUS = 2.8

N = NUMBER_OF_POINTS
R = RADIUS

# generate a random polynomial with deg integer coefficients (degree deg - 1)
# in the range [min, max]
def grp(min, max, deg):
    l = list()
    for i in range(deg):
        l.append(r.randint(min, max))
    return np.poly1d(np.array(l))

# give me Re(z), Im(z)
def split(z):
    return complex(z).real, complex(z).imag

# my polynomial
f = grp(-3, 3, 10)
print('Polynomial')
print(f)

# interval of numbers between 0 and 1
I = np.arange(0, 1, N)

# skip the next 6 lines if you do not want to expand the code
X = list()
Y = list()
n = 1
k = 0
X.append(list())
Y.append(list())

# create the points for plotting
for x in I:
    z = R * np.exp(x * 2 * np.pi * 1j)
    v = f(z)
    X[k].append(complex(v).real)  # k = 0
    Y[k].append(complex(v).imag)

# colour, plot and show the figure
colors = iter(cm.rainbow(np.linspace(0, 1, n)))  # n = 1
for c in colors:
    plt.scatter(X[k], Y[k], color=c)  # colour must be passed as a keyword, not as the size argument
    k = k + 1
plt.show()
REPLY [3 votes]: Ad 2: A polynomial with a threefold intersection would be
$$f(x)=(x^3-1)x.$$
Ad 3: It is clear that the image is symmetric with respect to the $x$-axis (since the coefficients are real, $P(\overline{z}) = \overline{P(z)}$), which automatically causes intersection points there as soon as the curve crosses the $x$-axis non-perpendicularly.
Nevertheless, $f(x)=x^2$ can be said to pass twice through each of its points (even though this might be considered cheating). Maybe try
$$ f(x)=x^3+(x^{9}-1)x$$
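A quick way to check these suggestions (my own sketch, using the same kind of plot as the question's code) is to plot the image of the unit circle under both polynomials:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 20000)
z = np.exp(1j * t)  # the unit circle S^1
for w, label in (((z**3 - 1) * z, "(x^3 - 1) x"),
                 (z**3 + (z**9 - 1) * z, "x^3 + (x^9 - 1) x")):
    plt.plot(w.real, w.imag, label=label)
plt.gca().set_aspect("equal")
plt.legend()
plt.show()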
\begin{document}
\maketitle
\begin{abstract}
A new implementation of the canonical polyadic decomposition (CPD) is presented. It features lower computational complexity and memory usage than the state of the art implementations available. The CPD of tensors is a challenging problem which has been approached in several manners. Alternating least squares algorithms were used for a long time, but their convergence properties are limited. Nonlinear least squares (NLS) algorithms - more precisely, damped Gauss-Newton (dGN) algorithms - are much better in this sense, but they require inverting large Hessians, and for this reason there are just a few implementations using this approach. In this paper, we make the case to always compress the tensor, and propose a fast damped Gauss-Newton implementation to compute the canonical polyadic decomposition.
\end{abstract}
\begin{keywords}
multilinear algebra, tensor decompositions, canonical polyadic decomposition, multilinear singular value decomposition, Gauss-Newton, algorithms
\end{keywords}
\begin{AMS}
15A69, 62H25, 65K05
\end{AMS}
\section{Introduction}
In this paper we introduce \emph{Tensor Fox}, a new tensor package that outperforms other well-known tensor packages with respect to the CPD. The main improvements of Tensor Fox over the other packages are: a preprocessing stage, where the multilinear singular value decomposition (MLSVD) is used to compress the tensor; and an approximate Hessian with a block structure which can be exploited to perform fast iterations. We will see that Tensor Fox is competitive, being able to handle every tested problem with high speed and accuracy. The other tensor packages considered are: \emph{Tensorlab} \cite{tensorlab, tensorlab2} (version 3.0), \emph{Tensor Toolbox} \cite{tensortoolbox, cp_opt} (version 3.1) and \emph{TensorLy} \cite{tensorly}. These packages are the state of the art in terms of performance and accuracy with respect to the problem of computing the CPD.
For decades tensor decompositions have been applied with success to general multidimensional data. Today they excel in several applications, including blind source separation, dimensionality reduction, pattern/image recognition, machine learning and data mining, see for instance \cite{bro, cichoki2009, cichoki2014, savas, smilde, anandkumar}. In the early days, the alternating least squares (ALS) algorithm was the workhorse for computing the CPD of a tensor. It is easy to implement, each iteration is cheap to compute, and it has low memory cost. However, Kruskal, Harshman and others started to notice that this algorithm has its limitations in the presence of \emph{bottlenecks} and \emph{swamps} \cite{kruskal-harshman, burdick}. Several attempts were made to improve the performance of ALS algorithms, but none was fully satisfactory. The dGN algorithm is a nonlinear least squares (NLS) algorithm which appears to be a better approach for the CPD: it is less sensitive to bottlenecks and swamps, and it appears to attain quadratic convergence when close to the solution. The main challenge with the dGN algorithm is the fact that we need to invert a large Hessian at each iteration. Tensor Fox relies on this second approach.
The paper is organized as follows. Notations and basic tensor algebra are briefly reviewed in \cref{sec:notations}. Tensor compression is discussed in \cref{sec:compression}, and the algorithm used to compute the CPD is explained in some detail in \cref{sec:alg}. Experiments to validate the claims about the performance of Tensor Fox (TFX) are described in \cref{sec:comp}. As an illustration of a 'real world' problem, more tests with Gaussian mixtures are conducted in the following section. The main outcomes of this paper are highlighted in the final section, where we conclude the paper.
\section{Notations and basic definitions}
\label{sec:notations}
A tensor is a real multidimensional array in $\mathbb{R}^{I_1 \times I_2 \times \ldots \times I_L}$, where $I_1, I_2, \ldots, I_L$ are positive integers. The \emph{order} of a tensor is the number of indexes associated with each entry of the tensor. For instance, an $L$-th order tensor is an element of the space $\mathbb{R}^{I_1 \times I_2 \times \ldots \times I_L}$. Each index is associated with a dimension of the tensor, and each dimension is called a \emph{mode} of the tensor. Tensors of order $1$ are vectors and tensors of order $2$ are matrices. Scalars are denoted by lower case letters. Vectors are denoted by bold lower case letters, e.g., $\textbf{x}$. Matrices are denoted by bold capital letters, e.g., $\textbf{X}$. Tensors are denoted by calligraphic capital letters, e.g., $\mathcal{T}$. The $i$-th entry of a vector $\textbf{x}$ is denoted by $x_i$, the entry $(i,j)$ of a matrix $\textbf{X}$ is denoted by $x_{ij}$, and the entry $(i,j,k)$ of a third order tensor $\mathcal{T}$ is denoted by $t_{ijk}$. Finally, any kind of sequence will be indicated by superscripts. For example, we write $\textbf{X}^{(1)}, \textbf{X}^{(2)}, \ldots$ for a sequence of matrices. We denote by $\ast$ the \emph{Hadamard product} (i.e., the coordinatewise product) between two matrices of the same shape.
Throughout most of the text we limit the discussion to third order tensors, so instead of using $I_1, I_2, I_3$ to denote the dimensions we will just use $m, n, p$. However, everything can be easily generalized to $L$-th order tensors.
\begin{definition} The \emph{inner product} of two tensors $\mathcal{T}, \mathcal{S} \in \mathbb{R}^{m \times n \times p}$ is defined as
$$\langle \mathcal{T}, \mathcal{S} \rangle = \sum_{i=1}^m \sum_{j=1}^n \sum_{k=1}^p t_{ijk} s_{ijk}.$$
\end{definition}
\begin{definition} The associate (Frobenius) \emph{norm} of $\mathcal{T}$ is given by
$$\| \mathcal{T} \| = \sqrt{ \langle \mathcal{T}, \mathcal{T} \rangle }.$$
\end{definition}
We will use the notation $\| \cdot \|$ also for the matrix and vector Frobenius norm.
\begin{definition} \label{tensprod} The \emph{tensor product} (also called \emph{outer product}) between three vectors $\textbf{x} \in \mathbb{R}^m, \textbf{y} \in \mathbb{R}^n, \textbf{z} \in \mathbb{R}^p$ is the tensor $\textbf{x} \otimes \textbf{y} \otimes \textbf{z} \in \mathbb{R}^{m \times n \times p}$ such that
$$(\textbf{x} \otimes \textbf{y} \otimes \textbf{z})_{ijk} = x_i y_j z_k.$$
\end{definition}
We can consider this definition for only two vectors $\textbf{x, y}$. By doing this we have the tensor $\textbf{x} \otimes \textbf{y} \in \mathbb{R}^{m \times n}$ given by
$$(\textbf{x} \otimes \textbf{y})_{ij} = x_i y_j.$$
This tensor is just the rank one matrix $\textbf{xy}^T$, which is the outer product between $\textbf{x}$ and $\textbf{y}$.
\begin{definition} A tensor $\mathcal{T} \in \mathbb{R}^{m \times n \times p}$ is said to have \emph{rank} $1$ if there exist vectors $\textbf{x} \in \mathbb{R}^m, \textbf{y} \in \mathbb{R}^n, \textbf{z} \in \mathbb{R}^p$ such that
$$\mathcal{T} = \textbf{x} \otimes \textbf{y} \otimes \textbf{z}.$$
\end{definition}
\begin{definition} A tensor $\mathcal{T} \in \mathbb{R}^{m \times n \times p}$ is said to have \emph{rank} $R$ if $\mathcal{T}$ can be written as a sum of $R$ rank one tensors and cannot be written as a sum of fewer than $R$ rank one tensors. In this case we write $rank(\mathcal{T}) = R$.
\end{definition}
When $\mathcal{T}$ has rank $R$, there exist vectors $\textbf{x}_1, \ldots, \textbf{x}_R \in \mathbb{R}^m$, $\textbf{y}_1, \ldots, \textbf{y}_R \in \mathbb{R}^n$ and $\textbf{z}_1, \ldots, \textbf{z}_R \in \mathbb{R}^p$ such that
$$\mathcal{T} = \sum_{r=1}^R \textbf{x}_r \otimes \textbf{y}_r \otimes \textbf{z}_r.$$
By definition we cannot write $\mathcal{T}$ with fewer than $R$ terms.
\begin{definition} A \emph{CPD of rank $R$} for a tensor $\mathcal{T}$ is a decomposition of $\mathcal{T}$ as a sum of $R$ rank one terms.
\end{definition}
In contrast to the SVD in the matrix case, the CPD of a tensor $\mathcal{T}$ of order $\geq 3$ and rank $R$ is typically ``unique''. The notion of uniqueness here is modulo permutations and scalings: one can arbitrarily permute the rank one terms, or rescale the vectors within a term as long as the product of the scaling factors is $1$. These trivial modifications do not change the tensor, and we consider the CPD to be unique up to them.
In practical applications one may not know in advance the rank $R$ of $\mathcal{T}$. In this case we look for a CPD with low rank \cite{comon} such that
$$\mathcal{T} \approx \sum_{r=1}^R \textbf{x}_r \otimes \textbf{y}_r \otimes \textbf{z}_r.$$
\begin{definition} The \emph{multilinear multiplication} between the tensor $\mathcal{T} \in \mathbb{R}^{m \times n \times p}$ and the matrices $\textbf{A} \in \mathbb{R}^{m' \times m}, \textbf{B} \in \mathbb{R}^{n' \times n}, \textbf{C} \in \mathbb{R}^{p' \times p}$ is the tensor $\mathcal{T}' \in \mathbb{R}^{m' \times n' \times p'}$ given by
$$\mathcal{T}'_{i' j' k'} = \sum_{i=1}^m \sum_{j=1}^n \sum_{k=1}^p a_{i' i} b_{j' j} c_{k' k} t_{ijk}.$$
\end{definition}
To be more succinct we denote $\mathcal{T}' = (\textbf{A}, \textbf{B}, \textbf{C}) \cdot \mathcal{T}$. Observe that in the matrix case we have $(\textbf{A}, \textbf{B}) \cdot \textbf{M} = \textbf{A} \textbf{M B}^T$.
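For concreteness, the multilinear multiplication is a one-line contraction in numpy (an illustrative sketch, not part of any of the packages discussed; the function name is ours):
\begin{verbatim}
import numpy as np

def multilinear_mult(A, B, C, T):
    # T'_{i'j'k'} = sum_{ijk} a_{i'i} b_{j'j} c_{k'k} t_{ijk}
    return np.einsum('ai,bj,ck,ijk->abc', A, B, C, T)

# sanity check of the matrix case: (A, B) . M = A M B^T
A, B, M = np.random.randn(4, 3), np.random.randn(5, 2), np.random.randn(3, 2)
assert np.allclose(np.einsum('ai,bj,ij->ab', A, B, M), A @ M @ B.T)
\end{verbatim}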
\begin{definition} Let $\mathcal{T} \in \mathbb{R}^{m \times n \times p}$ be any tensor. A \emph{Tucker decomposition} of $\mathcal{T}$ is any decomposition of the form
$$\mathcal{T} = (\textbf{A}, \textbf{B}, \textbf{C}) \cdot \mathcal{S}.$$
\end{definition}
$\mathcal{S}$ is called the \emph{core tensor} while $\textbf{A}, \textbf{B}, \textbf{C}$ are called the \emph{factors} of the decomposition. Usually they are taken to be orthogonal. One particular (and trivial) Tucker decomposition of $\mathcal{T}$ is $\mathcal{T} = (\textbf{I}_m, \textbf{I}_n, \textbf{I}_p) \cdot \mathcal{T}$, where $\textbf{I}_m$ is the $m \times m$ identity matrix. One way to see this is just by expanding $\mathcal{T}$ in canonical coordinates as
$$\mathcal{T} = \sum_{i=1}^m \sum_{j=1}^n \sum_{k=1}^p t_{ijk} \cdot \textbf{e}_i \otimes \textbf{e}_j \otimes \textbf{e}_k,$$
where $\textbf{e}_i, \textbf{e}_j, \textbf{e}_k$ are the canonical basis vectors of their corresponding spaces.
If $\mathcal{T}$ has rank $R$, let
$$\mathcal{T} = \sum_{r=1}^R \textbf{x}_r \otimes \textbf{y}_r \otimes \textbf{z}_r$$
be a CPD of rank $R$. Then a more useful Tucker decomposition is given by $\mathcal{T} = (\textbf{X}, \textbf{Y}, \textbf{Z}) \cdot \mathcal{I}_{R \times R \times R}$, where $\textbf{X} = [ \textbf{x}_1, \ldots, \textbf{x}_R ] \in \mathbb{R}^{m \times R}$, $\textbf{Y} = [ \textbf{y}_1, \ldots, \textbf{y}_R ] \in \mathbb{R}^{n \times R}$, $\textbf{Z} = [ \textbf{z}_1, \ldots, \textbf{z}_R ] \in \mathbb{R}^{p \times R}$ and $\mathcal{I}_{R \times R \times R}$ is the cubic tensor with ones at its diagonal and zeros elsewhere.
\section{Tensor compression}
\label{sec:compression}
Working with ``raw'' data is usually not advised because of the typically large data size. A standard approach is to compress the data before starting the actual work, and this is no different in the context of tensors. Compression is an efficient way of reducing the effects of the curse of dimensionality. In this section only, we consider tensors of general order.
Given a tensor $\mathcal{S} \in \mathbb{R}^{I_1 \times \ldots \times I_L}$, let $\mathcal{S}_{i_\ell = k} \in \mathbb{R}^{I_1 \times \ldots \times I_{\ell-1} \times I_{\ell+1} \times \ldots \times I_L}$ be the subtensor of $\mathcal{S}$ obtained by fixing the $\ell$-th index of $\mathcal{S}$ at the value $k$ and varying all the other indexes. More precisely, $\mathcal{S}_{i_\ell = k} = \mathcal{S}_{: \ldots : k : \ldots :}$, where the value $k$ is at the $\ell$-th index. We call these subtensors \emph{hyperslices}. In the case of a third order tensor, these subtensors are the slices of the tensor: $\mathcal{S}_{i_1 = k}$ are the horizontal slices, $\mathcal{S}_{i_2 = k}$ the lateral slices and $\mathcal{S}_{i_3 = k}$ the frontal slices. The main tool to compress a tensor is given by the following theorem.
\begin{theorem}[L. De Lathauwer, B. De Moor, J. Vandewalle, 2000] \label{MLSVD}
For any tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times \ldots \times I_L}$ there are orthogonal matrices $\textbf{U}^{(1)} \in \mathbb{R}^{I_1 \times R_1}, \ldots, \textbf{U}^{(L)} \in \mathbb{R}^{I_L \times R_L}$ and a tensor $\mathcal{S} \in \mathbb{R}^{R_1 \times \ldots \times R_L}$ such that
\begin{enumerate}
\item For all $\ell = 1 \ldots L$, we have $R_\ell \leq I_\ell$.
\item $\mathcal{T} = (\textbf{U}^{(1)}, \ldots, \textbf{U}^{(L)}) \cdot \mathcal{S}$.
\item For all $\ell = 1 \ldots L$, the subtensors $\mathcal{S}_{i_\ell = 1}, \ldots, \mathcal{S}_{i_\ell = I_\ell}$ are mutually orthogonal.
\item For all $\ell = 1 \ldots L$, we have $\|\mathcal{S}_{i_\ell = 1}\| \geq \ldots \geq \|\mathcal{S}_{i_\ell = I_\ell}\|$.
\end{enumerate}
\end{theorem}
We refer to this decomposition as the \emph{multilinear singular value decomposition} (MLSVD) of $\mathcal{T}$. The core tensor $\mathcal{S}$ of the MLSVD distributes the ``energy'' (i.e., the magnitude of its entries) in such a way that more energy is concentrated at the first entry $s_{11 \ldots 1}$, dispersing as we move along each dimension. Figure~\ref{fig:slices-energy} illustrates the energy distribution when $\mathcal{S}$ is a third order tensor. The red slices contain more energy, fading to white as the slices contain less energy. Note that the energy of the slices is given precisely by the singular values.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.11]{slices_energy}
\caption{Energy of the slices of a core third order tensor $\mathcal{S}$ obtained after a MLSVD.}
\label{fig:slices-energy}
\end{figure}
\begin{definition}
The \emph{multilinear rank of $\mathcal{T}$} is the $L$-tuple $\left( R_1, \ldots, R_L \right)$. We denote $rank_\boxplus(\mathcal{T}) = (R_1, \ldots, R_L)$.
\end{definition}
The multilinear rank is also often called the \emph{Tucker rank} of $\mathcal{T}$. Now we will see some important results regarding this rank. More details can be found in \cite{lim}.
\begin{theorem}[V. de Silva, and L.H. Lim, 2008]
Let $\mathcal{T} \in \mathbb{R}^{I_1} \otimes \ldots \otimes \mathbb{R}^{I_L}$ and $\textbf{M}^{(1)} \in \mathbb{R}^{I_1' \times I_1}, \ldots, \textbf{M}^{(L)} \in \mathbb{R}^{I_L' \times I_L}$. Then the following statements hold.
\begin{enumerate}
\item $\| rank_\boxplus(\mathcal{T}) \|_\infty \leq rank(\mathcal{T})$.
\item $rank_\boxplus((\textbf{M}^{(1)}, \ldots, \textbf{M}^{(L)})\cdot \mathcal{T}) \leq rank_\boxplus(\mathcal{T})$.
\item If $\textbf{M}^{(1)} \in GL(I_1'), \ldots, \textbf{M}^{(L)} \in GL(I_L')$, then $rank_\boxplus((\textbf{M}^{(1)}, \ldots, \textbf{M}^{(L)}) \cdot \mathcal{T}) = rank_\boxplus(\mathcal{T})$, where $GL(d)$ denotes the group of invertible $d \times d$ matrices.
\end{enumerate}
\end{theorem}
After computing the MLSVD of $\mathcal{T}$, notice that computing a CPD for $\mathcal{S}$ is equivalent to computing a CPD for $\mathcal{T}$. Indeed, if $\mathcal{S} = \displaystyle \sum_{r=1}^R \textbf{w}_r^{(1)} \otimes \ldots \otimes \textbf{w}_r^{(L)}$, then we can write $\mathcal{S} = (\textbf{W}^{(1)}, \ldots, \textbf{W}^{(L)}) \cdot \mathcal{I}_{R \times \ldots \times R}$, which implies that
$$\mathcal{T} = (\textbf{U}^{(1)}, \ldots, \textbf{U}^{(L)}) \cdot \left( (\textbf{W}^{(1)}, \ldots, \textbf{W}^{(L)}) \cdot \mathcal{I}_{R \times \ldots \times R} \right) = $$
$$ = (\textbf{U}^{(1)} \textbf{W}^{(1)}, \ldots, \textbf{U}^{(L)} \textbf{W}^{(L)}) \cdot \mathcal{I}_{R \times \ldots \times R}.$$
Let $rank_\boxplus(\mathcal{T}) = (R_1, \ldots, R_L)$. We use the algorithm introduced by N. Vannieuwenhoven, R. Vandebril and K. Meerbergen \cite{sthosvd} to compute a truncated MLSVD. Now let $(\tilde{R}_1, \ldots, \tilde{R}_L) \leq (R_1, \ldots, R_L)$ be a lower multilinear rank. We define $\tilde{\textbf{U}}^{(\ell)} = \left[ \textbf{U}_{:1}^{(\ell)}, \ldots, \textbf{U}_{:\tilde{R}_\ell}^{(\ell)} \right] \in \mathbb{R}^{I_\ell \times \tilde{R}_\ell}$ to be the matrix composed of the first $\tilde{R}_\ell$ columns of $\textbf{U}^{(\ell)}$, and $\tilde{\mathcal{S}} \in \mathbb{R}^{\tilde{R}_1 \times \ldots \times \tilde{R}_L}$ to be such that $\tilde{s}_{i_1 \ldots i_L} = s_{i_1 \ldots i_L}$ for $1 \leq i_1 \leq \tilde{R}_1, \ldots, 1 \leq i_L \leq \tilde{R}_L$. Figure~\ref{fig:trunc} illustrates such a truncation in the case of a third order tensor. The white part corresponds to $\mathcal{S}$ after we computed the full MLSVD, the gray tensor is the reduced format of $\mathcal{S}$, and the red tensor is the truncated tensor $\tilde{\mathcal{S}}$. Our goal is to find the smallest $(\tilde{R}_1, \ldots, \tilde{R}_L)$ such that $\| \mathcal{S} - \tilde{\mathcal{S}} \|$ is not too large\footnote{Actually, $\mathcal{S}$ and $\tilde{\mathcal{S}}$ belong to different spaces. We commit an abuse of notation and write $\| \mathcal{S} - \tilde{\mathcal{S}} \|$, considering the projection of $\tilde{\mathcal{S}}$ onto the space of $\mathcal{S}$, that is, we enlarge $\tilde{\mathcal{S}}$ so it has the same size as $\mathcal{S}$ and set the new entries to zero.}. Since reducing $(\tilde{R}_1, \ldots, \tilde{R}_L)$ too much causes $\| \mathcal{S} - \tilde{\mathcal{S}} \|$ to increase, there is a trade-off we have to manage in the best way possible.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.45]{trunc}
\caption{Truncated tensor $\tilde{\mathcal{S}}$.}
\label{fig:trunc}
\end{figure}
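To make the preprocessing concrete, the sketch below (illustrative only; TFX uses the sequentially truncated algorithm of \cite{sthosvd}, which is more efficient) computes a truncated MLSVD of a third order tensor from the SVDs of its unfoldings:
\begin{verbatim}
import numpy as np

def truncated_mlsvd(T, ranks):
    U = []
    for ell in range(3):
        # mode-ell unfolding: mode ell in front, remaining modes flattened
        T_ell = np.moveaxis(T, ell, 0).reshape(T.shape[ell], -1)
        u = np.linalg.svd(T_ell, full_matrices=False)[0]
        U.append(u[:, :ranks[ell]])
    # core tensor: multilinear multiplication with the transposed factors
    S = np.einsum('ia,jb,kc,ijk->abc', U[0], U[1], U[2], T)
    return U, S

T = np.random.randn(10, 11, 12)
U, S = truncated_mlsvd(T, (5, 5, 5))
T_approx = np.einsum('ia,jb,kc,abc->ijk', U[0], U[1], U[2], S)
print(np.linalg.norm(T - T_approx) / np.linalg.norm(T))
\end{verbatim}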
\section{Algorithm description and analysis}
\label{sec:alg}
Given a tensor $\mathcal{T} \in \mathbb{R}^{m \times n \times p}$ and a rank value $R$, we want to find a rank $R$ approximation for $\mathcal{T}$. More precisely, we want to find vectors $\textbf{x}_1, \ldots, \textbf{x}_R \in \mathbb{R}^m$, $\textbf{y}_1, \ldots, \textbf{y}_R \in \mathbb{R}^n$ and $\textbf{z}_1, \ldots, \textbf{z}_R \in \mathbb{R}^p$ such that
\begin{equation}
\mathcal{T} \approx \sum_{r=1}^R \textbf{x}_r \otimes \textbf{y}_r \otimes \textbf{z}_r. \label{eq:rank_approx}
\end{equation}
Finding the best approximation is known to be an ill-posed problem \cite{lim}, so we will be content with just a reasonable approximation. This also depends on the choice of $R$, but we won't address this issue here. For each problem at hand one may have a good guess of what the rank should be. In the worst case it is still possible to try several ranks and keep the best fit, although caution is necessary to not overfit the data.
Consider the factors $\textbf{X} = [ \textbf{x}_1, \ldots, \textbf{x}_R ] \in \mathbb{R}^{m \times R}$, $\textbf{Y} = [ \textbf{y}_1, \ldots, \textbf{y}_R ] \in \mathbb{R}^{n \times R}$, $\textbf{Z} = [ \textbf{z}_1, \ldots, \textbf{z}_R ] \in \mathbb{R}^{p \times R}$. Using the multilinear multiplication notation we can write ~\ref{eq:rank_approx} as $\mathcal{T} \approx (\textbf{X}, \textbf{Y}, \textbf{Z}) \cdot \mathcal{I}_{R \times R \times R}$. This view may lead us to consider minimizing the map
\begin{equation}
(\textbf{X}, \textbf{Y}, \textbf{Z}) \mapsto \frac{1}{2} \left\| \mathcal{T} - (\textbf{X}, \textbf{Y}, \textbf{Z}) \cdot \mathcal{I}_{R \times R \times R} \right\|^2. \label{eq:rank_approx_tucker}
\end{equation}
Since the matrix structure is not really useful in~\ref{eq:rank_approx_tucker}, we may just vectorize the matrices and concatenate them all to form the single vector
$$\textbf{w} =
\left[
\begin{array}{c}
vec(\textbf{X})\\
vec(\textbf{Y})\\
vec(\textbf{Z})
\end{array}
\right].$$
The map $vec$ just stacks all columns of a matrix, with the first column at the top and going on until the last column at the bottom. With this notation we consider the error function $F:\mathbb{R}^{R(m+n+p)} \to \mathbb{R}$ given by
$$F(\textbf{w}) = \frac{1}{2} \left\| \mathcal{T} - (\textbf{X}, \textbf{Y}, \textbf{Z}) \cdot \mathcal{I}_{R \times R \times R} \right\|^2 = $$
$$ = \frac{1}{2} \left\| \mathcal{T} - \sum_{r=1}^R \textbf{x}_r \otimes \textbf{y}_r \otimes \textbf{z}_r \right\|^2 = $$
$$ = \frac{1}{2} \sum_{i=1}^m \sum_{j=1}^n \sum_{k=1}^p \left( t_{ijk} - \sum_{r=1}^R x_{ir} y_{jr} z_{kr} \right)^2.$$
For each choice of $\textbf{w}$ we write the approximating tensor as $\tilde{\mathcal{T}} = \tilde{\mathcal{T}}(\textbf{w})$. In order to minimize $F$ we first introduce the Gauss-Newton algorithm (without damping). Each difference $t_{ijk} - \tilde{t}_{ijk} = t_{ijk} - \sum_{r=1}^R x_{ir} y_{jr} z_{kr}$ is the residual between $t_{ijk}$ and $\tilde{t}_{ijk}$. From these residuals we form the function $f:\mathbb{R}^{R(m+n+p)} \to \mathbb{R}^{mnp}$ defined by $f = (f_{111}, f_{112}, \ldots, f_{mnp})$, where $f_{ijk}(\textbf{w}) = t_{ijk} - \tilde{t}_{ijk}$. Then we can write
$$F(\textbf{w}) = \frac{1}{2} \sum_{i=1}^m \sum_{j=1}^n \sum_{k=1}^p f_{ijk}(\textbf{w})^2 = \frac{1}{2} \| f(\textbf{w}) \|^2.$$
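As a sanity check, $F$ is cheap to evaluate with numpy (an illustrative sketch, not TFX's code; einsum assembles the sum of rank one terms):
\begin{verbatim}
import numpy as np

def F(T, X, Y, Z):
    # (1/2) || T - (X, Y, Z) . I_{RxRxR} ||^2
    T_approx = np.einsum('ir,jr,kr->ijk', X, Y, Z)
    return 0.5 * np.linalg.norm(T - T_approx)**2

m, n, p, R = 6, 7, 8, 3
X, Y, Z = (np.random.randn(d, R) for d in (m, n, p))
T = np.einsum('ir,jr,kr->ijk', X, Y, Z)
print(F(T, X, Y, Z))  # zero (up to rounding) at an exact CPD
\end{verbatim}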
\begin{lemma} The partial derivatives of $f$ are the following.
$$\frac{\partial f_{ijk}}{\partial x_{i'r}} = \left\{
\begin{array}{cc}
- y_{jr} z_{kr}, & \text{if } i' = i\\
0, & \text{otherwise}
\end{array}
\right.$$
$$\frac{\partial f_{ijk}}{\partial y_{j'r}} = \left\{
\begin{array}{cc}
- x_{ir} z_{kr}, & \text{if } j' = j\\
0, & \text{otherwise}
\end{array}
\right.$$
$$\frac{\partial f_{ijk}}{\partial z_{k'r}} = \left\{
\begin{array}{cc}
- x_{ir} y_{jr}, & \text{if } k' = k\\
0, & \text{otherwise}
\end{array}
\right.$$
\end{lemma}
Denote the derivative of $f$ at $\textbf{w}$ (the Jacobian matrix) by $\textbf{J}_f = \textbf{J}_f(\textbf{w})$. Also denote the gradient of $F$ at $\textbf{w}$ by $\nabla F = \nabla F(\textbf{w})$ and the corresponding Hessian by $\textbf{H}_F = \textbf{H}_F(\textbf{w})$. We can relate the derivatives of $f$ and $F$ through the following result.
\begin{lemma} \label{relations} The following identities always hold.
\begin{enumerate}
\item $\displaystyle \nabla F = \textbf{J}_f^T \cdot f$.
\item $\displaystyle \textbf{H}_F = \textbf{J}_f^T \textbf{J}_f + \sum_{i=1}^m \sum_{j=1}^n \sum_{k=1}^p f_{ijk} \cdot \textbf{H}_{f_{ijk}}$, where $\textbf{H}_{f_{ijk}}$ is the Hessian matrix of $f_{ijk}$.
\end{enumerate}
\end{lemma}
As the algorithm converges we expect to have $f_{ijk} \approx 0$ for all $i,j,k$. Together with Lemma~\ref{relations}, this observation shows that $\textbf{H}_F \approx \textbf{J}_f^T \textbf{J}_f$ when close to an optimal point.
\subsection{Gauss-Newton algorithm}
The Gauss-Newton algorithm is obtained by first considering a first order approximation of $f$ at a point $\textbf{w}^{(0)} \in \mathbb{R}^{R(m+n+p)}$, that is,
\begin{equation}
f(\textbf{w}^{(0)} + \underbrace{(\textbf{w} - \textbf{w}^{(0)})}_{step} ) = f(\textbf{w}) \approx f(\textbf{w}^{(0)}) + \textbf{J}_f(\textbf{w}^{(0)}) \cdot (\textbf{w} - \textbf{w}^{(0)}). \label{eq:linear_approx}
\end{equation}
In order to minimize~\ref{eq:linear_approx} in a neighborhood of $\textbf{w}^{(0)}$ we can compute the minimum of $\| f(\textbf{w}^{(0)}) + \textbf{J}_f(\textbf{w}^{(0)}) \cdot (\textbf{w} - \textbf{w}^{(0)}) \|$ for $\textbf{w} \in \mathbb{R}^{R(m+n+p)}$. Note that minimizing~\ref{eq:linear_approx} is a least squares problem, since we can rewrite it as $\displaystyle \min_\textbf{x} \| \textbf{A}\textbf{x} - \textbf{b}\|$ for $\textbf{A} = \textbf{J}_f(\textbf{w}^{(0)})$, $\textbf{x} = \textbf{w} - \textbf{w}^{(0)}$, $\textbf{b} = - f(\textbf{w}^{(0)})$.
The solution gives us the next iterate $\textbf{w}^{(1)}$. More generally, we obtain $\textbf{w}^{(k+1)}$ from $\textbf{w}^{(k)}$ by defining $\textbf{w}^{(k+1)} = \textbf{x}^\ast + \textbf{w}^{(k)}$, where $\textbf{x}^\ast$ is the solution of the normal equations
\begin{equation}
\textbf{A}^T \textbf{A} \textbf{x} = \textbf{A}^T \textbf{b} \label{eq:normal_eq}
\end{equation}
for $\textbf{A} = \textbf{J}_f(\textbf{w}^{(k)})$, $\textbf{x} = \textbf{w} - \textbf{w}^{(k)}$, $\textbf{b} = - f(\textbf{w}^{(k)})$. This iteration process is the Gauss-Newton algorithm, and it is guaranteed to converge to a local minimum \cite{madsen}.
The Jacobian matrix $\textbf{J}_f$ is always rank deficient, so the problem is ill-posed. For this reason we want to regularize the problem. By introducing regularization we can avoid singularity and improve convergence. A common approach is to introduce a suitable regularization matrix $\textbf{L} \in \mathbb{R}^{R(m+n+p) \times R(m+n+p)}$ called the \emph{Tikhonov matrix}. Instead of solving equation~\ref{eq:normal_eq} we solve
\begin{equation}
\left( \textbf{A}^T \textbf{A} + \textbf{L}^T \textbf{L} \right) \textbf{x} = \textbf{A}^T \textbf{b}. \label{eq:reg_eq}
\end{equation}
When $\textbf{L}$ is diagonal, this is called the dGN algorithm. TFX works with a Tikhonov matrix of the form $\textbf{L} = \mu \textbf{D}$, where $\textbf{D}$ is a certain $R(m+n+p) \times R(m+n+p)$ matrix with positive diagonal and $\mu > 0$ is the \emph{damping parameter}. The important property of $\textbf{D}$ is that $\textbf{A}^T \textbf{A} + \textbf{D}$ is diagonally dominant. The damping parameter $\mu$ is usually updated at each iteration. These updates are very important since $\mu$ influences both the direction and the size of the step at each iteration. Let $\tilde{\mathcal{T}}^{(k)}$ be the approximating tensor at the $k$-th iteration and let $\tilde{f}(\textbf{w}^{(k)}) = f(\textbf{w}^{(k-1)}) + \textbf{J}_f(\textbf{w}^{(k-1)}) \cdot (\textbf{w}^{(k)} - \textbf{w}^{(k-1)})$ be the first order approximation of $f$ at $\textbf{w}^{(k)}$. TFX uses the update strategy
\begin{flalign*}
& \texttt{if } g < 0.75\\
& \hspace{.8cm} \mu \leftarrow \mu/2\\
& \texttt{else if } g > 0.9\\
& \hspace{.8cm} \mu \leftarrow 1.5 \cdot \mu
\end{flalign*}
where $g$ is the \emph{gain ratio}, defined as
$$g = \frac{ \| \mathcal{T} - \tilde{\mathcal{T}}^{(k-1)} \|^2 - \| \mathcal{T} - \tilde{\mathcal{T}}^{(k)} \|^2 }{ \| \mathcal{T} - \tilde{\mathcal{T}}^{(k-1)} \|^2 - \| \tilde{f}(\textbf{w}^{(k)}) \|^2 }.$$
The denominator is the predicted improvement from the $(k-1)$-th iteration to $k$-th iteration, whereas the numerator measures the actual improvement.
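In code, this update rule is a direct transcription (our sketch; the variable names are hypothetical, not TFX's actual identifiers):
\begin{verbatim}
def update_damping(mu, err_prev_sq, err_curr_sq, err_pred_sq):
    # err_prev_sq = ||T - T~^(k-1)||^2, err_curr_sq = ||T - T~^(k)||^2,
    # err_pred_sq = ||f~(w^(k))||^2 is the predicted squared error
    g = (err_prev_sq - err_curr_sq) / (err_prev_sq - err_pred_sq)
    if g < 0.75:
        mu = mu / 2
    elif g > 0.9:
        mu = 1.5 * mu
    return mu  # for 0.75 <= g <= 0.9 the damping is kept unchanged
\end{verbatim}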
\begin{theorem} With the notations above, the following holds.
\begin{enumerate}
\item $\textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D}$ is a positive definite matrix for all $\mu > 0$.
\item $\textbf{w}^{(k+1)} - \textbf{w}^{(k)}$ is a descent direction for $F$ at $\textbf{w}^{(k)}$.
\item If $\mu$ is big enough, then $\small \textbf{w}^{(k+1)} - \textbf{w}^{(k)} \approx - \frac{1}{\mu} \textbf{J}_f(\textbf{w}^{(k)})^T \cdot f(\textbf{w}^{(k)}) = - \frac{1}{\mu} \nabla F(\textbf{w}^{(k)})$.
\item If $\mu$ is small enough, then $\textbf{w}^{(k+1)} - \textbf{w}^{(k)} \approx \textbf{w}_{GN}^{(k+1)} - \textbf{w}^{(k)}$, where $\textbf{w}_{GN}^{(k+1)}$ is the point we would obtain using classic Gauss-Newton iteration (i.e., without regularization).
\end{enumerate}
\end{theorem}
\begin{remark}
Item 3 applies when the current iterate is far from the solution, since $ - \frac{1}{\mu} \nabla F(\textbf{w}^{(k)})$ is a short step in the descent direction; we want to be careful when far from the solution. This shows that dGN behaves as the gradient descent algorithm far from the solution. On the other hand, item 4 applies when the current iterate is close to the solution: since the step is closer to the classic Gauss-Newton step, we may attain quadratic convergence in the final iterations.
\end{remark}
\subsection{Exploiting the structure of $\textbf{A}^T \textbf{A}$}
At each iteration we must solve equation~\ref{eq:reg_eq}, and since $\textbf{A}^T \textbf{A} + \textbf{L}^T \textbf{L}$ is positive definite we are able to find an approximate solution by taking just a few iterations of the conjugate gradient algorithm. However, $\textbf{A}^T \textbf{A} + \textbf{L}^T \textbf{L}$ is a $R(m+n+p) \times R(m+n+p)$ matrix, so this may be costly: each matrix-vector multiplication costs $\mathcal{O}(R^2(m+n+p)^2)$ flops (floating point operations). We can work around this issue by exploiting the structure of $\textbf{A}^T \textbf{A}$.
\begin{theorem}
Let $\textbf{A} = \textbf{J}_f(\textbf{w}^{(k)})$ and let $\textbf{X}, \textbf{Y}, \textbf{Z}$ be the factor matrices of the approximating CPD at the $k$-th iteration, where $\textbf{X} = [ \textbf{x}_1, \ldots, \textbf{x}_R ]$, $\textbf{Y} = [ \textbf{y}_1, \ldots, \textbf{y}_R ]$, $\textbf{Z} = [ \textbf{z}_1, \ldots, \textbf{z}_R ]$. Then,
$$\textbf{A}^T \textbf{A} =
\left[ \begin{array}{ccc}
\textbf{B}_X & \textbf{B}_{XY} & \textbf{B}_{XZ}\\
\textbf{B}_{XY}^T & \textbf{B}_Y & \textbf{B}_{YZ}\\
\textbf{B}_{XZ}^T & \textbf{B}_{YZ}^T & \textbf{B}_Z
\end{array} \right],$$
where
$$\textbf{B}_X =
\left[ \begin{array}{ccc}
\langle \textbf{y}_1, \textbf{y}_1 \rangle \langle \textbf{z}_1, \textbf{z}_1 \rangle \textbf{I}_m & \ldots & \langle \textbf{y}_1, \textbf{y}_R \rangle \langle \textbf{z}_1, \textbf{z}_R \rangle \textbf{I}_m\\
\vdots & & \vdots\\
\langle \textbf{y}_R, \textbf{y}_1 \rangle \langle \textbf{z}_R, \textbf{z}_1 \rangle \textbf{I}_m & \ldots & \langle \textbf{y}_R, \textbf{y}_R \rangle \langle \textbf{z}_R, \textbf{z}_R \rangle \textbf{I}_m\\
\end{array} \right],$$
$$\textbf{B}_Y =
\left[ \begin{array}{ccc}
\langle \textbf{x}_1, \textbf{x}_1 \rangle \langle \textbf{z}_1, \textbf{z}_1 \rangle \textbf{I}_n & \ldots & \langle \textbf{x}_1, \textbf{x}_R \rangle \langle \textbf{z}_1, \textbf{z}_R \rangle \textbf{I}_n\\
\vdots & & \vdots\\
\langle \textbf{x}_R, \textbf{x}_1 \rangle \langle \textbf{z}_R, \textbf{z}_1 \rangle \textbf{I}_n & \ldots & \langle \textbf{x}_R, \textbf{x}_R \rangle \langle \textbf{z}_R, \textbf{z}_R \rangle \textbf{I}_n\\
\end{array} \right],$$
$$\textbf{B}_Z =
\left[ \begin{array}{ccc}
\langle \textbf{x}_1, \textbf{x}_1 \rangle \langle \textbf{y}_1, \textbf{y}_1 \rangle \textbf{I}_p & \ldots & \langle \textbf{x}_1, \textbf{x}_R \rangle \langle \textbf{y}_1, \textbf{y}_R \rangle \textbf{I}_p\\
\vdots & & \vdots\\
\langle \textbf{x}_R, \textbf{x}_1 \rangle \langle \textbf{y}_R, \textbf{y}_1 \rangle \textbf{I}_p & \ldots & \langle \textbf{x}_R, \textbf{x}_R \rangle \langle \textbf{y}_R, \textbf{y}_R \rangle \textbf{I}_p\\
\end{array} \right],$$
$$\textbf{B}_{XY} =
\left[ \begin{array}{ccc}
\langle \textbf{z}_1, \textbf{z}_1 \rangle \textbf{x}_1 \textbf{y}_1^T & \ldots & \langle \textbf{z}_1, \textbf{z}_R \rangle \textbf{x}_R \textbf{y}_1^T\\
\vdots & & \vdots\\
\langle \textbf{z}_R, \textbf{z}_1 \rangle \textbf{x}_1 \textbf{y}_R^T & \ldots & \langle \textbf{z}_R, \textbf{z}_R \rangle \textbf{x}_R \textbf{y}_R^T\\
\end{array} \right],$$
$$\textbf{B}_{XZ} =
\left[ \begin{array}{ccc}
\langle \textbf{y}_1, \textbf{y}_1 \rangle \textbf{x}_1 \textbf{z}_1^T & \ldots & \langle \textbf{y}_1, \textbf{y}_R \rangle \textbf{x}_R \textbf{z}_1^T\\
\vdots & & \vdots\\
\langle \textbf{y}_R, \textbf{y}_1 \rangle \textbf{x}_1 \textbf{z}_R^T & \ldots & \langle \textbf{y}_R, \textbf{y}_R \rangle \textbf{x}_R \textbf{z}_R^T\\
\end{array} \right],$$
$$\textbf{B}_{YZ} =
\left[ \begin{array}{ccc}
\langle \textbf{x}_1, \textbf{x}_1 \rangle \textbf{y}_1 \textbf{z}_1^T & \ldots & \langle \textbf{x}_1, \textbf{x}_R \rangle \textbf{y}_R \textbf{z}_1^T\\
\vdots & & \vdots\\
\langle \textbf{x}_R, \textbf{x}_1 \rangle \textbf{y}_1 \textbf{z}_R^T & \ldots & \langle \textbf{x}_R, \textbf{x}_R \rangle \textbf{y}_R \textbf{z}_R^T\\
\end{array} \right].$$
\end{theorem}
Now we can see how to retrieve $\textbf{A}^T \textbf{A}$ from the factor matrices $\textbf{X}, \textbf{Y}, \textbf{Z}$ with few computations and low memory cost. First we compute and store all the scalar products. It is easy to see we will need to store $9 R^2$ floats. The other parts of $\textbf{A}^T \textbf{A}$ can be obtained directly from the factor matrices so we are done with regard to memory costs. This is a big reduction in memory size since the original matrix would require $R^2 \left( m+n+p \right)^2$ floats in dense format. The overall cost to compute those scalar products is $\mathcal{O}\left( R^2 (m+n+p) \right)$ flops, which is also reasonable.
It is convenient to store the above products in matrix form, so we write
$$\Pi_X =
\left[ \begin{array}{ccc}
\langle \textbf{y}_1, \textbf{y}_1 \rangle \langle \textbf{z}_1, \textbf{z}_1 \rangle & \ldots & \langle \textbf{y}_1, \textbf{y}_R \rangle \langle \textbf{z}_1, \textbf{z}_R \rangle \\
\vdots & & \vdots\\
\langle \textbf{y}_R, \textbf{y}_1 \rangle \langle \textbf{z}_R, \textbf{z}_1 \rangle & \ldots & \langle \textbf{y}_R, \textbf{y}_R \rangle \langle \textbf{z}_R, \textbf{z}_R \rangle \\
\end{array} \right],$$
$$\Pi_Y =
\left[ \begin{array}{ccc}
\langle \textbf{x}_1, \textbf{x}_1 \rangle \langle \textbf{z}_1, \textbf{z}_1 \rangle & \ldots & \langle \textbf{x}_1, \textbf{x}_R \rangle \langle \textbf{z}_1, \textbf{z}_R \rangle \\
\vdots & & \vdots\\
\langle \textbf{x}_R, \textbf{x}_1 \rangle \langle \textbf{z}_R, \textbf{z}_1 \rangle & \ldots & \langle \textbf{x}_R, \textbf{x}_R \rangle \langle \textbf{z}_R, \textbf{z}_R \rangle \\
\end{array} \right],$$
$$\Pi_Z =
\left[ \begin{array}{ccc}
\langle \textbf{x}_1, \textbf{x}_1 \rangle \langle \textbf{y}_1, \textbf{y}_1 \rangle & \ldots & \langle \textbf{x}_1, \textbf{x}_R \rangle \langle \textbf{y}_1, \textbf{y}_R \rangle\\
\vdots & & \vdots\\
\langle \textbf{x}_R, \textbf{x}_1 \rangle \langle \textbf{y}_R, \textbf{y}_1 \rangle & \ldots & \langle \textbf{x}_R, \textbf{x}_R \rangle \langle \textbf{y}_R, \textbf{y}_R \rangle \\
\end{array} \right],$$
$$\Pi_{XY} =
\left[ \begin{array}{ccc}
\langle \textbf{z}_1, \textbf{z}_1 \rangle & \ldots & \langle \textbf{z}_1, \textbf{z}_R \rangle \\
\vdots & & \vdots\\
\langle \textbf{z}_R, \textbf{z}_1 \rangle & \ldots & \langle \textbf{z}_R, \textbf{z}_R \rangle \\
\end{array} \right],$$
$$\Pi_{XZ} =
\left[ \begin{array}{ccc}
\langle \textbf{y}_1, \textbf{y}_1 \rangle & \ldots & \langle \textbf{y}_1, \textbf{y}_R \rangle \\
\vdots & & \vdots\\
\langle \textbf{y}_R, \textbf{y}_1 \rangle & \ldots & \langle \textbf{y}_R, \textbf{y}_R \rangle \\
\end{array} \right],$$
$$\Pi_{YZ} =
\left[ \begin{array}{ccc}
\langle \textbf{x}_1, \textbf{x}_1 \rangle & \ldots & \langle \textbf{x}_1, \textbf{x}_R \rangle \\
\vdots & & \vdots\\
\langle \textbf{x}_R, \textbf{x}_1 \rangle & \ldots & \langle \textbf{x}_R, \textbf{x}_R \rangle \\
\end{array} \right].$$
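None of these matrices needs to be assembled entry by entry: all six follow from the three $R \times R$ Gram matrices $\textbf{X}^T \textbf{X}$, $\textbf{Y}^T \textbf{Y}$, $\textbf{Z}^T \textbf{Z}$, as the sketch below illustrates (our code, not TFX's):
\begin{verbatim}
import numpy as np

def pi_matrices(X, Y, Z):
    Gx, Gy, Gz = X.T @ X, Y.T @ Y, Z.T @ Z         # O(R^2 (m+n+p)) flops
    Pi_XY, Pi_XZ, Pi_YZ = Gz, Gy, Gx
    Pi_X, Pi_Y, Pi_Z = Gy * Gz, Gx * Gz, Gx * Gy   # Hadamard products
    return Pi_X, Pi_Y, Pi_Z, Pi_XY, Pi_XZ, Pi_YZ
\end{verbatim}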
As already mentioned, $\textbf{A}^T \textbf{A}$ will be used to solve the normal equations~\ref{eq:reg_eq}. The algorithm of choice to accomplish this is the conjugate gradient method. This classical algorithm is particularly efficient for solving normal equations where the matrix is positive definite, which is our case. Furthermore, our version of the conjugate gradient is \emph{matrix-free}, that is, we are able to compute matrix-vector products $\textbf{A}^T \textbf{A} \cdot \textbf{v}$ without actually constructing $\textbf{A}^T \textbf{A}$. By exploiting the block structure of $\textbf{A}^T \textbf{A}$ we save memory, and the computational cost is still lower than the naive cost of $R^2 \left( m+n+p \right)^2$ flops.
\begin{theorem} \label{Hv}
Given any vector $\textbf{v} \in \mathbb{R}^{R (m+n+p)}$, write
$$\textbf{v} =
\left[ \begin{array}{c}
vec(\textbf{V}_X)\\
vec(\textbf{V}_Y)\\
vec(\textbf{V}_Z)
\end{array} \right]$$
where $\textbf{V}_X \in \mathbb{R}^{m \times R}$, $\textbf{V}_Y \in \mathbb{R}^{n \times R}$, $\textbf{V}_Z \in \mathbb{R}^{p \times R}$. Then,
$$\textbf{A}^T \textbf{A} \cdot \textbf{v} = \footnotesize \left[ \begin{array}{c}
\displaystyle vec\left( \textbf{V}_X \cdot \Pi_X \right) + vec\left( \textbf{X} \cdot \Big( \Pi_{XY} \ast \big( \textbf{V}_Y^T \cdot \textbf{Y} \big) \Big) \right) + vec\left( \textbf{X} \cdot \Big( \Pi_{XZ} \ast \big( \textbf{V}_Z^T \cdot \textbf{Z} \big) \Big) \right)\\ \\
\displaystyle vec\left( \textbf{Y} \cdot \Big( \Pi_{XY} \ast \big( \textbf{V}_X^T \cdot \textbf{X} \big) \Big) \right) + vec\left( \textbf{V}_Y \cdot \Pi_Y \right) + vec\left( \textbf{Y} \cdot \Big( \Pi_{YZ} \ast \big( \textbf{V}_Z^T \cdot \textbf{Z} \big) \Big) \right)\\ \\
\displaystyle vec\left( \textbf{Z} \cdot \Big( \Pi_{XZ} \ast \big( \textbf{V}_X^T \cdot \textbf{X} \big) \Big) \right) + vec\left( \textbf{Z} \cdot \Big( \Pi_{YZ} \ast \big( \textbf{V}_Y^T \cdot \textbf{Y} \big) \Big) \right) + vec\left( \textbf{V}_Z \cdot \Pi_Z \right)
\end{array} \right].$$
\end{theorem}
\bigskip
\textbf{Proof:} We will prove only the equality for the first block
$$\displaystyle vec\left( \textbf{V}_X \cdot \Pi_X \right) + vec\left( \textbf{X} \cdot \Big( \Pi_{XY} \ast \big( \textbf{V}_Y^T \cdot \textbf{Y} \big) \Big) \right) + vec\left( \textbf{X} \cdot \Big( \Pi_{XZ} \ast \big( \textbf{V}_Z^T \cdot \textbf{Z} \big) \Big) \right),$$
the others are analogous. First notice that
$$\textbf{A}^T \textbf{A} \cdot \textbf{v} = \textbf{J}_f^T \textbf{J}_f \cdot \textbf{v} =
\left[ \begin{array}{ccc}
\displaystyle\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{X}} \ & \ \displaystyle\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{Y}} \ & \ \displaystyle\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{Z}}\\ \\
\displaystyle\frac{\partial f}{\partial \textbf{Y}}^T \frac{\partial f}{\partial \textbf{X}} \ & \ \displaystyle\frac{\partial f}{\partial \textbf{Y}}^T \frac{\partial f}{\partial \textbf{Y}} \ & \ \displaystyle\frac{\partial f}{\partial \textbf{Y}}^T \frac{\partial f}{\partial \textbf{Z}}\\ \\
\displaystyle\frac{\partial f}{\partial \textbf{Z}}^T \frac{\partial f}{\partial \textbf{X}} \ & \ \displaystyle\frac{\partial f}{\partial \textbf{Z}}^T \frac{\partial f}{\partial \textbf{Y}} \ & \ \displaystyle\frac{\partial f}{\partial \textbf{Z}}^T \frac{\partial f}{\partial \textbf{Z}}
\end{array} \right]
\left[ \begin{array}{c}
vec(\textbf{V}_X)\\
vec(\textbf{V}_Y)\\
vec(\textbf{V}_Z)
\end{array} \right] = $$
$$\left[ \begin{array}{c}
\displaystyle\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{X}} \cdot vec(\textbf{V}_X) + \displaystyle\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{Y}} \cdot vec(\textbf{V}_Y) + \displaystyle\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{Z}} \cdot vec(\textbf{V}_Z)\\ \\
\displaystyle\frac{\partial f}{\partial \textbf{Y}}^T \frac{\partial f}{\partial \textbf{X}} \cdot vec(\textbf{V}_X) + \displaystyle\frac{\partial f}{\partial \textbf{Y}}^T \frac{\partial f}{\partial \textbf{Y}} \cdot vec(\textbf{V}_Y) + \displaystyle\frac{\partial f}{\partial \textbf{Y}}^T \frac{\partial f}{\partial \textbf{Z}} \cdot vec(\textbf{V}_Z)\\ \\
\displaystyle\frac{\partial f}{\partial \textbf{Z}}^T \frac{\partial f}{\partial \textbf{X}} \cdot vec(\textbf{V}_X) + \displaystyle\frac{\partial f}{\partial \textbf{Z}}^T \frac{\partial f}{\partial \textbf{Y}} \cdot vec(\textbf{V}_Y) + \displaystyle\frac{\partial f}{\partial \textbf{Z}}^T \frac{\partial f}{\partial \textbf{Z}} \cdot vec(\textbf{V}_Z)
\end{array} \right], $$
where
$$\frac{\partial f}{\partial \textbf{X}} = \left[
\begin{array}{ccccccc}
\displaystyle\frac{\partial f_{111}}{\partial \textbf{X}_{11}} & \ldots & \displaystyle\frac{\partial f_{111}}{\partial \textbf{X}_{1R}} & \ldots & \displaystyle\frac{\partial f_{111}}{\partial \textbf{X}_{m1}} & \ldots & \displaystyle\frac{\partial f_{111}}{\partial \textbf{X}_{mR}} \\ \\
\displaystyle\frac{\partial f_{112}}{\partial \textbf{X}_{11}} & \ldots & \displaystyle\frac{\partial f_{112}}{\partial \textbf{X}_{1R}} & \ldots & \displaystyle\frac{\partial f_{112}}{\partial \textbf{X}_{m1}} & \ldots & \displaystyle\frac{\partial f_{112}}{\partial \textbf{X}_{mR}} \\ \\
\vdots & & \vdots & & \vdots & & \vdots\\ \\
\displaystyle\frac{\partial f_{mnp}}{\partial \textbf{X}_{11}} & \ldots & \displaystyle\frac{\partial f_{mnp}}{\partial \textbf{X}_{1R}} & \ldots & \displaystyle\frac{\partial f_{mnp}}{\partial \textbf{X}_{m1}} & \ldots & \displaystyle\frac{\partial f_{mnp}}{\partial \textbf{X}_{mR}}
\end{array}
\right],$$
and the other derivatives are defined similarly.
Now we simplify each term in the sum above. It is necessary to consider two separate cases. Again, we only prove one particular case since the others are proved similarly.\bigskip
\textbf{Case 1 (different modes):} Write $\textbf{V}_{Y} = [\textbf{v}_{Y_1}, \ldots, \textbf{v}_{Y_R}]$, where each $\textbf{v}_{Y_r} \in \mathbb{R}^n$ is a column of $\textbf{V}_{Y}$. Then
$$\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{Y}} \cdot vec(\textbf{V}_Y) = $$
$$ =
\small\left[ \begin{array}{ccc}
\displaystyle \langle \textbf{z}_1, \textbf{z}_1 \rangle \cdot \textbf{x}_1 \textbf{y}_1^T & \ldots & \displaystyle \langle \textbf{z}_1, \textbf{z}_R \rangle \cdot \textbf{x}_R \textbf{y}_1^T\\
\vdots & & \vdots\\
\displaystyle \langle \textbf{z}_R, \textbf{z}_1 \rangle \cdot \textbf{x}_1 \textbf{y}_R^T & \ldots & \displaystyle \langle \textbf{z}_R, \textbf{z}_R \rangle \cdot \textbf{x}_R \textbf{y}_R^T\\
\end{array} \right]
\left[ \begin{array}{c}
\textbf{v}_{Y_1}\\
\vdots\\
\textbf{v}_{Y_R}
\end{array} \right] =
\left[ \begin{array}{c}
\displaystyle \sum_{r=1}^R \langle \textbf{z}_1, \textbf{z}_r \rangle \cdot \textbf{x}_r \textbf{y}_1^T \cdot \textbf{v}_{Y_r}\\
\vdots\\
\displaystyle \sum_{r=1}^R \langle \textbf{z}_R, \textbf{z}_r \rangle \cdot \textbf{x}_r \textbf{y}_R^T \cdot \textbf{v}_{Y_r}
\end{array} \right] = $$
$$ = \left[ \begin{array}{c}
\displaystyle \sum_{r=1}^R \langle \textbf{z}_1, \textbf{z}_r \rangle \textbf{x}_r \langle \textbf{y}_1, \textbf{v}_{Y_r} \rangle\\
\vdots\\
\displaystyle \sum_{r=1}^R \langle \textbf{z}_R, \textbf{z}_r \rangle \textbf{x}_r \langle \textbf{y}_R, \textbf{v}_{Y_r} \rangle
\end{array} \right]$$
$$ =
\left[ \begin{array}{c}
\left[ \textbf{x}_1, \ldots, \textbf{x}_R \right]
\left[ \begin{array}{c}
\displaystyle \langle \textbf{z}_1, \textbf{z}_1 \rangle \langle \textbf{y}_1, \textbf{v}_{Y_1} \rangle\\
\vdots\\
\displaystyle \langle \textbf{z}_1, \textbf{z}_R \rangle \langle \textbf{y}_1, \textbf{v}_{Y_R} \rangle
\end{array}
\right]\\
\vdots\\
\left[ \textbf{x}_1, \ldots, \textbf{x}_R \right]
\left[ \begin{array}{c}
\displaystyle \langle \textbf{z}_R, \textbf{z}_1 \rangle \langle \textbf{y}_R, \textbf{v}_{Y_1} \rangle\\
\vdots\\
\displaystyle \langle \textbf{z}_R, \textbf{z}_R \rangle \langle \textbf{y}_R, \textbf{v}_{Y_R} \rangle
\end{array}
\right]
\end{array} \right] =
\left[ \begin{array}{c}
\textbf{X} \cdot \Big( (\Pi_{XY})_1 \ast \big( \textbf{V}_Y^T \cdot \textbf{y}_1 \big) \Big)\\
\vdots\\
\textbf{X} \cdot \Big( (\Pi_{XY})_R \ast \big( \textbf{V}_Y^T \cdot \textbf{y}_R \big) \Big)
\end{array} \right] = $$
$$ =
vec\left( \left[ \textbf{X} \cdot \Big( (\Pi_{XY})_1 \ast \big( \textbf{V}_Y^T \cdot \textbf{y}_1 \big) \Big), \ldots, \textbf{X} \cdot \Big( (\Pi_{XY})_R \ast \big( \textbf{V}_Y^T \cdot \textbf{y}_R \big) \Big) \right] \right) = $$
$$ =
vec\left( \textbf{X} \cdot \Big( \Pi_{XY} \ast \big( \textbf{V}_Y^T \cdot \textbf{Y} \big) \Big) \right)$$
where each $(\Pi_{XY})_r$ is the $r$-th column of $\Pi_{XY}$. Although the derivation refers to the rows of $\Pi_{XY}$, this is not a problem since this matrix is symmetric.\\
\textbf{Case 2 (equal modes):} In this case we have
$$\frac{\partial f}{\partial \textbf{X}}^T \frac{\partial f}{\partial \textbf{X}} \cdot vec(\textbf{V}_X) = $$
$$ =
\left[ \begin{array}{ccc}
\displaystyle \langle \textbf{y}_1, \textbf{y}_1 \rangle \langle \textbf{z}_1, \textbf{z}_1 \rangle \cdot \textbf{I}_m & \ldots & \displaystyle \langle \textbf{y}_1, \textbf{y}_R \rangle \langle \textbf{z}_1, \textbf{z}_R \rangle \cdot \textbf{I}_m\\
\vdots & & \vdots\\
\displaystyle \langle \textbf{y}_R, \textbf{y}_1 \rangle \langle \textbf{z}_R, \textbf{z}_1 \rangle \cdot \textbf{I}_m & \ldots & \displaystyle \langle \textbf{y}_R, \textbf{y}_R \rangle \langle \textbf{z}_R, \textbf{z}_R \rangle \cdot \textbf{I}_m
\end{array} \right]
\left[ \begin{array}{c}
\textbf{v}_{X_1}\\
\vdots\\
\textbf{v}_{X_R}
\end{array} \right] = $$
$$ = \left[ \begin{array}{ccc}
\displaystyle \sum_{r=1}^R \langle \textbf{y}_1, \textbf{y}_r \rangle \langle \textbf{z}_1, \textbf{z}_r \rangle \cdot \textbf{v}_{X_r}\\
\vdots\\
\displaystyle \sum_{r=1}^R \langle \textbf{y}_R, \textbf{y}_r \rangle \langle \textbf{z}_R, \textbf{z}_r \rangle \cdot \textbf{v}_{X_r}
\end{array} \right] = $$
$$\hspace{1cm} =
\left[ \begin{array}{c}
\textbf{V}_X \cdot (\Pi_X)_1\\
\vdots\\
\textbf{V}_X \cdot (\Pi_X)_R
\end{array} \right] = $$
$$\hspace{1.9cm} = vec\left( \left[ \textbf{V}_X \cdot (\Pi_X)_1, \ldots, \textbf{V}_X \cdot (\Pi_X)_R \right] \right) = vec\left( \textbf{V}_X \cdot \Pi_X \right). \hspace{1.9cm}\square$$
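Theorem~\ref{Hv} translates directly into a matrix-free routine. The sketch below (our illustration, not TFX's actual code) computes $\textbf{A}^T \textbf{A} \cdot \textbf{v}$ in $\mathcal{O}(R^2(m+n+p))$ flops instead of the naive $R^2(m+n+p)^2$:
\begin{verbatim}
# Vx, Vy, Vz are the blocks of v reshaped as m x R, n x R, p x R matrices.
def matvec(X, Y, Z, Vx, Vy, Vz, Pi_X, Pi_Y, Pi_Z, Pi_XY, Pi_XZ, Pi_YZ):
    Wx = Vx @ Pi_X + X @ (Pi_XY * (Vy.T @ Y)) + X @ (Pi_XZ * (Vz.T @ Z))
    Wy = Y @ (Pi_XY * (Vx.T @ X)) + Vy @ Pi_Y + Y @ (Pi_YZ * (Vz.T @ Z))
    Wz = Z @ (Pi_XZ * (Vx.T @ X)) + Z @ (Pi_YZ * (Vy.T @ Y)) + Vz @ Pi_Z
    # stacking vec(Wx), vec(Wy), vec(Wz) gives A^T A . v
    return Wx, Wy, Wz
\end{verbatim}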
\section{Computational experiments}
\label{sec:comp}
\subsection{Procedure}
We have selected a set of very distinct tensors to test the known tensor implementations. Given a tensor $\mathcal{T}$ and a rank $R$, we compute the CPD with TFX, using its default maximum number of iterations\footnote{The default is $\texttt{maxiter} = 200$ iterations.}, $100$ times and retain the best result, i.e., the CPD with the smallest relative error. Let $\varepsilon$ be this error. Now let ALG be any other algorithm implemented by one of the mentioned libraries. We set the maximum number of iterations to \verb|maxiter|, keep the other options at their defaults, and run ALG with these options $100$ times. The only accepted solutions are the ones with relative error smaller than $\varepsilon + \varepsilon/100$. Among all accepted solutions we return the one with the smallest running time. If no solution is accepted, we increase \verb|maxiter| by a certain amount and repeat. We try the values $\verb|maxiter| = 5, 10, 50, 100, 150, \ldots, 900, 950, 1000$, until there is an accepted solution. The running time associated with the accepted solution is the accepted time. This procedure favours all algorithms over TFX, since we are trying to find a solution close to the solution of TFX with the minimum number of iterations. We remark that the iteration process is always initiated at a random point. The option to generate a random initial point is offered by all libraries, and we use the one each offers (sometimes random initialization was already the default option). There is not much difference in their initializations, which basically amount to drawing random factor matrices from the standard Gaussian distribution. The time to perform the MLSVD or any other kind of preprocessing is included in the time measurements. The tests presented here can be reproduced from \url{https://github.com/felipebottega/Tensor-Fox/tree/master/tests}.
\subsection{Algorithms}
We used the Linux Mint operating system in our tests. All tests were conducted on an Intel Core i7-4510U processor - 2.00GHz (2 physical cores and 4 threads) with 8GB of memory. The libraries mentioned run in Python or Matlab. We used Python version 3.6.5 and Matlab version 2017a. On both platforms we used BLAS MKL-11.3.1. Finally, we want to mention that TFX runs in Python using Numba to accelerate computations; we used version 0.41 of Numba in these tests. The algorithms and implementations used in the experiments are the following.
\subsubsection{TFX}
The algorithm used in TFX's implementation is the nonlinear least squares scheme described in the previous section. There are more implementation details to discuss, but the interested reader can check \cite{tensorfox} for more information.
\subsubsection{TALS}
This is Tensorlab's implementation of the ALS algorithm. Although ALS is remarkably fast and easy to implement, it is not very accurate, especially in the presence of bottlenecks or swamps. It seems (see \cite{tensorlab2}) that this implementation is very robust while still fast.
\subsubsection{TNLS}
This is Tensorlab's implementation of the NLS algorithm, the approach we described in the previous section. We should remark that this implementation is similar to TFX's in some respects, but there are big differences when we look at the details. In particular, the compression procedure, the preconditioner, the damping parameter update rule and the number of iterations of the conjugate gradient are very different.
\subsubsection{TMINF}
This is Tensorlab's formulation of the problem as a general optimization problem. It uses a quasi-Newton method, the limited-memory BFGS, and treats equation~\ref{eq:rank_approx_tucker} just as the minimization of a function.
\subsubsection{OPT}
Just as the TMINF approach, the OPT algorithm is an implementation from the Tensor Toolbox which considers~\ref{eq:rank_approx_tucker} as the minimization of a function. The authors claim that the algorithm option 'lbfgs' is the preferred one\footnote{Check \url{https://www.tensortoolbox.org/cp_opt_doc.html} for more information.}, so we used it this way.
\subsubsection{TLALS}
TensorLy has only one way to compute the CPD, which is an implementation of the ALS algorithm. We denote it by TLALS; do not confuse it with TALS, which is Tensorlab's implementation.
\subsubsection{fLMa}
fLMa stands for \emph{fast Levenberg-Marquardt algorithm}, and it is a different version of the damped Gauss-Newton algorithm, implemented in TensorBox.\\
For all Tensorlab algorithm implementation we recommend reading \cite{tensorlab2}, for the Tensor Toolbox we recommend \cite{cp_opt}, for TensorLy we recommend \cite{tensorly}, and for TensorBox we recommend \cite{tensorbox, cichoki2013}. In these benchmarks we used Tensorlab version 3.0 and Tensor Toolbox version 3.1.
In Tensorlab it is possible to enable or disable refinement. The first action in Tensorlab is to compress the tensor and work with the compressed version. If we use refinement, the program uses the compressed solution to compute a refined solution in the original space. This can be more demanding but can improve the result considerably. In our experience, working in the original space is not a good idea because the computational cost increases drastically while the gain in accuracy is generally very small. Still, we tried all Tensorlab algorithms with and without refinement. We write TALSR, TNLSR and TMINFR for the algorithms TALS, TNLS and TMINF with refinement, respectively.
\subsection{Tensors}
Now we describe the tensors used in our benchmarking. The idea was to have a set of very distinct tensors so we could test the implementations in very different situations. This can give a good idea of how they should perform in general.
\subsubsection{Swimmer}
This tensor was constructed based on the paper \cite{shashua} as an example of a non-negative tensor. It is a set of 256 images of dimensions $32 \times 32$ representing a swimmer. Each image contains a torso (the invariant part) of 12 pixels in the center and four limbs of 6 pixels that can be in one of 4 positions. In that paper the authors proposed using a rank $R = 50$ tensor to approximate it, and we do the same in our test. In figure~\ref{fig:swimmer} we can see some frontal slices of this tensor.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.4]{swimmer}
\caption{Swimmer tensor.}
\label{fig:swimmer}
\end{figure}
\subsubsection{Handwritten digit}
This is a classic tensor in machine learning: the MNIST\footnote{\url{http://yann.lecun.com/exdb/mnist/}} database of handwritten digits. Each slice is an image of dimensions $20 \times 20$ of a handwritten digit, and each block of 500 consecutive slices corresponds to the same digit, so the first 500 slices correspond to the digit 0, slices 501 to 1000 correspond to the digit 1, and so on. We chose $R = 150$ as a good rank to construct the approximating CPD of this tensor. In figure~\ref{fig:handwritten} we can see some frontal slices of this tensor.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.4]{handwritten}
\caption{Handwritten digits tensor.}
\label{fig:handwritten}
\end{figure}
\subsubsection{Border rank}
We say a tensor $\mathcal{T}$ has \emph{border} rank $\underline{R}$ if there is a sequence of tensors of rank $\underline{R}$ converging to $\mathcal{T}$ and no sequence of tensors of lower rank satisfies the same property. If $\mathcal{T}$ has rank equal to $R$, then it is easy to see that $\underline{R} \leq R$. The strict inequality can happen too, and this means that the set of tensors of rank at most $R$ is not closed. This phenomenon makes the CPD computation a challenging problem. This subject was discussed by de Silva and Lim in \cite{lim}. In the same paper they showed that
$$\mathcal{T}_k = k \left( \textbf{x}_1 + \frac{1}{k} \textbf{y}_1 \right) \otimes \left( \textbf{x}_2 + \frac{1}{k} \textbf{y}_2 \right) \otimes \left( \textbf{x}_3 + \frac{1}{k} \textbf{y}_3 \right) - k \textbf{x}_1 \otimes \textbf{x}_2 \otimes \textbf{x}_3$$
is a sequence of rank 2 tensors converging to a tensor of rank 3, where each pair $\textbf{x}_i, \textbf{y}_i \in \mathbb{R}^m$ is linearly independent. The limit tensor is $\mathcal{T} = \textbf{x}_1 \otimes \textbf{x}_2 \otimes \textbf{y}_3 + \textbf{x}_1 \otimes \textbf{y}_2 \otimes \textbf{x}_3 + \textbf{y}_1 \otimes \textbf{x}_2 \otimes \textbf{x}_3$. We choose to compute a CPD of rank $R = 2$ to see how the algorithms behave when we try to approximate a problematic tensor by a tensor of lower rank. In theory it is possible to have arbitrarily good approximations.
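Indeed, expanding the first term of $\mathcal{T}_k$ and cancelling $k\, \textbf{x}_1 \otimes \textbf{x}_2 \otimes \textbf{x}_3$ gives
$$\mathcal{T}_k = \mathcal{T} + \frac{1}{k} \left( \textbf{x}_1 \otimes \textbf{y}_2 \otimes \textbf{y}_3 + \textbf{y}_1 \otimes \textbf{x}_2 \otimes \textbf{y}_3 + \textbf{y}_1 \otimes \textbf{y}_2 \otimes \textbf{x}_3 \right) + \frac{1}{k^2}\, \textbf{y}_1 \otimes \textbf{y}_2 \otimes \textbf{y}_3,$$
so $\| \mathcal{T}_k - \mathcal{T} \| = \mathcal{O}(1/k)$.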
\subsubsection{Matrix multiplication}
Let $\mathcal{M}_N \in \mathbb{R}^{N^2 \times N^2 \times N^2}$ be the tensor associated with the multiplication between two matrices in $\mathbb{R}^{N \times N}$. The classic form of $\mathcal{M}_N$ is given by
$$\mathcal{M}_N = \sum_{i=1}^N \sum_{j=1}^N \sum_{k=1}^N vec(\textbf{e}_j^i) \otimes vec(\textbf{e}_k^j) \otimes vec(\textbf{e}_k^i),$$
where $\textbf{e}_i^j$ is the $N \times N$ matrix with entry $(i,j)$ equal to 1 and the remaining entries equal to zero. Since Strassen \cite{strassen} it is known that multiplication of two matrices of dimensions $N \times N$ can be done with $\mathcal{O}(N^{\log_2 7})$ operations. Many improvements were made after Strassen but we won't enter into the details here. For the purpose of testing we choose the small value $N = 5$ and the rank $R = \lceil 5^{\log_2 7} \rceil = 92$. This value is the bound obtained by Strassen in \cite{strassen}. It is not necessarily the true rank of the tensor but it is close enough to make an interesting test.
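For concreteness, this construction can be sketched in a few lines of NumPy (the helper name \verb|vec_e| and the column-major $vec(\cdot)$ convention are our own choices; any fixed convention works as long as it is used consistently):
\begin{verbatim}
import numpy as np

def matmul_tensor(N):
    # vec_e(r, c): vectorization of the N x N matrix with a single
    # 1 at entry (r, c), using a column-major vec(.) convention.
    def vec_e(r, c):
        E = np.zeros((N, N))
        E[r, c] = 1.0
        return E.reshape(-1, order='F')
    T = np.zeros((N * N, N * N, N * N))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                T += np.einsum('a,b,c->abc',
                               vec_e(j, i), vec_e(k, j), vec_e(k, i))
    return T
\end{verbatim}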
\subsection{Collinear factors}
The phenomenon of swamps occurs when all factors in each mode are almost collinear. Their presence is a challenge for many algorithms because they can slow down convergence. Now we will create synthetic data to simulate various degrees of collinearity between the factors. We begin by generating three random matrices $\textbf{M}_X \in \mathbb{R}^{m \times R}, \textbf{M}_Y \in \mathbb{R}^{n \times R}, \textbf{M}_Z \in \mathbb{R}^{p \times R}$, where each entry is drawn from the normal distribution with mean 0 and variance 1. After that we perform a QR decomposition of each matrix, obtaining the decompositions $\textbf{M}_X = \textbf{Q}_X \textbf{R}_X, \textbf{M}_Y = \textbf{Q}_Y \textbf{R}_Y, \textbf{M}_Z = \textbf{Q}_Z \textbf{R}_Z$. The matrices $\textbf{Q}_X, \textbf{Q}_Y, \textbf{Q}_Z$ are orthogonal. Now fix one column of each of these matrices, $\textbf{q}_X^{(i')}, \textbf{q}_Y^{(j')}, \textbf{q}_Z^{(k')}$. The factors $\textbf{X} = [ \textbf{x}^{(1)}, \ldots, \textbf{x}^{(R)} ] \in \mathbb{R}^{m \times R}$, $\textbf{Y} = [ \textbf{y}^{(1)}, \ldots, \textbf{y}^{(R)} ] \in \mathbb{R}^{n \times R}$, $\textbf{Z} = [ \textbf{z}^{(1)}, \ldots, \textbf{z}^{(R)} ] \in \mathbb{R}^{p \times R}$ are generated by the equations below.
$$\textbf{x}^{(i)} = \textbf{q}_X^{(i')} + c \cdot \textbf{q}_X^{(i)}, \quad i=1 \ldots R$$
$$\textbf{y}^{(j)} = \textbf{q}_Y^{(j')} + c \cdot \textbf{q}_Y^{(j)}, \quad j=1 \ldots R$$
$$\textbf{z}^{(k)} = \textbf{q}_Z^{(k')} + c \cdot \textbf{q}_Z^{(k)}, \quad k=1 \ldots R$$
The parameter $c \geq 0$ defines the degree of collinearity between the vectors of each factor. A value of $c$ close to 0 indicates high degree of collinearity, while a high value of $c$ indicates low degree of collinearity.
Another phenomenon that occurs in practice is the presence of noise in the data, so we will treat these two phenomena at once in this benchmark. After generating the factors $\textbf{X}, \textbf{Y}, \textbf{Z}$ we have a tensor $\mathcal{T} = (\textbf{X}, \textbf{Y}, \textbf{Z}) \cdot \mathcal{I}_{R \times R \times R}$. That is, $\textbf{X}, \textbf{Y}, \textbf{Z} $ are the exact CPD of $\mathcal{T}$. Now consider a noise tensor $\mathcal{N} \in \mathbb{R}^{m \times n \times p}$ such that each entry of $\mathcal{N} $ is drawn from the normal distribution with mean 0 and variance 1. Thus we form the tensor $\hat{\mathcal{T}} = \mathcal{T} + \nu \cdot \mathcal{N}$, where $ \nu > 0 $ defines the magnitude of the noise. The idea is to compute a CPD of $ \hat{\mathcal{T}}$ of rank $R$ and then evaluate the relative error between this tensor and $\mathcal{T}$. We expect the computed CPD to filter out the noise and to be close to $\mathcal{T}$ (even if it is not close to $ \hat{\mathcal{T}}$). We will fix $\nu = 0.01$ and generate tensors for $c = 0.1, 0.5, 0.9$. In all cases we will be using $ m = n = p = 300 $ and $R = 15 $. This is a particularly difficult problem since we are considering swamps and noise at once. The same procedure to generate tensors was used for benchmarking in \cite{cichoki2013}.
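A minimal NumPy sketch of this data generation might look as follows (the function names, the random seed and the reduced dimensions are our own illustrative choices; the actual tests use $m = n = p = 300$):
\begin{verbatim}
import numpy as np

def collinear_factor(dim, R, c, rng, fixed_col=0):
    # Orthonormal columns from the QR decomposition of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((dim, R)))
    # Column i of the factor is q^(i') + c * q^(i).
    return Q[:, [fixed_col]] + c * Q

rng = np.random.default_rng(0)
m = n = p = 50                  # the tests in the text use 300
R, c, nu = 15, 0.5, 0.01
X = collinear_factor(m, R, c, rng)
Y = collinear_factor(n, R, c, rng)
Z = collinear_factor(p, R, c, rng)
T = np.einsum('ir,jr,kr->ijk', X, Y, Z)        # exact rank-R tensor
T_hat = T + nu * rng.standard_normal(T.shape)  # noisy tensor
\end{verbatim}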
\subsection{Double bottlenecks}
We proceed almost in the same way as before for swamps: we use the same procedure to generate the first two columns of each factor matrix, and the remaining columns are equal to the columns of the QR decomposition. After generating the factors $\textbf{X}, \textbf{Y}, \textbf{Z}$ we consider a noise tensor $\mathcal{N} \in \mathbb{R}^{m \times n \times p}$ such that each entry of $\mathcal{N} $ is drawn from the normal distribution with mean 0 and variance 1. Thus we form the tensor $\hat{\mathcal{T}} = \mathcal{T} + \nu \cdot \mathcal{N}$, where $ \nu > 0 $ defines the magnitude of the noise. The collinearity parameters used for the tests are $c = 0.1, 0.5$, and we fix $\nu = 0.01$ as before. The procedure before adding the noise is presented below.
$$\textbf{x}^{(i)} = \textbf{q}_X^{(i')} + c \cdot \textbf{q}_X^{(i)}, \quad i=1, 2$$
$$\textbf{y}^{(j)} = \textbf{q}_Y^{(j')} + c \cdot \textbf{q}_Y^{(j)}, \quad j=1, 2$$
$$\textbf{z}^{(k)} = \textbf{q}_Z^{(k')} + c \cdot \textbf{q}_Z^{(k)}, \quad k=1, 2$$
$$\textbf{x}^{(i)} = \textbf{q}_X^{(i)}, \quad i=3 \ldots R$$
$$\textbf{y}^{(j)} = \textbf{q}_Y^{(j)}, \quad j=3 \ldots R$$
$$\textbf{z}^{(k)} = \textbf{q}_Z^{(k)}, \quad k=3 \ldots R$$
\subsection{Results}
In figure~\ref{fig:benchs} there are some charts, each one showing the best running time of the algorithms on each one of the tensors described previously. If some algorithm is not included in a chart, it means that the algorithm was unable to achieve an acceptable error within the conditions described at the beginning of this section.
\begin{figure}[htbp]
\includegraphics[scale=0.3]{benchmarks}
\caption{Benchmarks of all tensors and all implementations.}
\label{fig:benchs}
\end{figure}
The first thing we should note is that TNLS is robust enough to deliver an acceptable solution in almost every test, with the exception of the border rank tensor and the collinear tensor with $c = 0.9$. Not only that, but it is also very fast, beating TFX in two tests. It should also be noted how well TALS performs in the collinear tests. Normally we would expect the opposite, since there are many swamps in these tests. This shows that Tensorlab did a good job with the ALS algorithm. Apart from that, none of the other algorithms stood out in any test. Finally, we want to add that, although no algorithm was able to deliver an acceptable solution for the border rank tensor test, the algorithm TNLS could compute solutions very close to the desired interval within a reasonable time (but still more time than TFX).
\section{Gaussian Mixture}
\label{sec:gmix}
Gaussian mixture models are more closely connected to real applications. We start by briefly describing the model, then we apply some algorithms to synthetic data and compare the results. The theory discussed here is based on \cite{anandkumar}.
\subsection{Gaussian mixture model}
Consider a mixture of $K$ Gaussian distributions with identical covariance matrices. We have lots of data drawn from this mixture, and both the means and the covariance matrix of the distributions are unknown. The problem at hand is to design an algorithm to \emph{learn} these parameters from the data given. We use $\mathbb{P}$ to denote probability and $\mathbb{E}$ to denote expectation (which we may also call the \emph{mean} or \emph{average}).
Let $\textbf{x}^{(1)}, \ldots, \textbf{x}^{(N)} \in \mathbb{R}^d$ be a set of collected data samples. Let $h$ be a discrete random variable with values in $\{1, 2, \ldots, K\}$ such that $\mathbb{P}[h = i]$ is the probability that a sample $\textbf{x}$ is a member of the $i$-th distribution. We denote $w^{(i)} = \mathbb{P}[h = i]$ and $\textbf{w} = [ w^{(1)}, \ldots, w^{(K)}]^T$, the vector of probabilities. Let $\textbf{u}^{(i)} \in \mathbb{R}^d$ be the mean of the $i$-th distribution and assume that all distributions have the same covariance matrix $\sigma^2 \textbf{I}_d$ for $\sigma > 0$. See figure~\ref{fig:gaussian_mix} for an illustration of a Gaussian mixture in the case where $d = 2$ and $K = 2$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.42]{gaussian_mix}
\caption{Gaussian mixture in the plane with 2 distributions. The first distribution has mean $\textbf{u}^{(1)} = [-0.34, \ 0.93]^T$ and the second has mean $\textbf{u}^{(2)} = [0.93, \ 0.34]^T$. The variance is $\sigma^2 = 0.0059$.}
\label{fig:gaussian_mix}
\end{figure}
To keep everything simple we also assume that the means $\textbf{u}^{(i)}$ form an orthonormal set, i.e., every $\textbf{u}^{(i)}$ is a unit vector and these vectors are orthogonal with respect to each other. To see how to proceed in the general case we refer to the previously mentioned paper.
Given a sample point $\textbf{x}$, note that we can write
$$\textbf{x} = \textbf{u}_h + \textbf{z},$$
where $\textbf{z}$ is a random vector with mean 0 and covariance $\sigma^2 \textbf{I}_d$. We summarise the main results in the next theorem whose proof can be found in \cite{anandkumar}.
\begin{theorem}[Hsu and Kakade, 2013] \label{moments}
Assume $d \geq K$. The variance $\sigma^2$ is the smallest eigenvalue of the covariance matrix $\mathbb{E}[ \textbf{x} \otimes \textbf{x} ] - \mathbb{E}[ \textbf{x} ] \otimes \mathbb{E}[ \textbf{x} ]$. Furthermore, if
\begin{flalign*}
& M_1 = \mathbb{E}[ \textbf{x} ],\\
& M_2 = \mathbb{E}[ \textbf{x} \otimes \textbf{x} ] - \sigma^2 \textbf{I}_d,\\
& M_3 = \mathbb{E}[ \textbf{x} \otimes \textbf{x} \otimes \textbf{x} ] - \sigma^2 \sum_{i=1}^d \left( \mathbb{E}[ \textbf{x} ] \otimes \textbf{e}_i \otimes \textbf{e}_i + \textbf{e}_i \otimes \mathbb{E}[ \textbf{x} ] \otimes \textbf{e}_i + \textbf{e}_i \otimes \textbf{e}_i \otimes \mathbb{E}[ \textbf{x} ] \right),
\end{flalign*}
then
\begin{flalign*}
& M_1 = \sum_{i=1}^K w^{(i)}\ \textbf{u}^{(i)},\\
& M_2 = \sum_{i=1}^K w^{(i)}\ \textbf{u}^{(i)} \otimes \textbf{u}^{(i)},\\
& M_3 = \sum_{i=1}^K w^{(i)}\ \textbf{u}^{(i)} \otimes \textbf{u}^{(i)} \otimes \textbf{u}^{(i)}.
\end{flalign*}
\end{theorem}
Theorem~\ref{moments} allows us to use the method of moments, which is a classical parameter estimation technique from statistics. This method consists in computing certain statistics of the data (often empirical moments) and use it to find model parameters that give rise to (nearly) the same corresponding population quantities. Now suppose that $N$ is large enough so we have a reasonable number of sample points to make useful statistics. First we compute the empirical mean
\begin{equation}
\hat{\mu} := \frac{1}{N} \sum_{j=1}^N \textbf{x}^{(j)} \approx \mathbb{E}[ \textbf{x} ]. \label{eq:empirical_mean}
\end{equation}
Now use this result to compute the empirical covariance matrix
\begin{equation}
\hat{\textbf{S}} := \frac{1}{N} \sum_{j=1}^N ( \textbf{x}^{(j)} \otimes \textbf{x}^{(j)} - \hat{\mu} \otimes \hat{\mu} ) \approx \mathbb{E}[ \textbf{x} \otimes \textbf{x} ] - \mathbb{E}[ \textbf{x} ] \otimes \mathbb{E}[ \textbf{x} ]. \label{eq:empirical_cov}
\end{equation}
The smallest eigenvalue of $\hat{\textbf{S}}$ is the empirical variance $\hat{\sigma}^2 \approx \sigma^2$. Now we compute the empirical third moment (empirical skewness)
\begin{equation}
\hat{\mathcal{S}} := \frac{1}{N} \sum_{j=1}^N \textbf{x}^{(j)} \otimes \textbf{x}^{(j)} \otimes \textbf{x}^{(j)} \approx \mathbb{E}[ \textbf{x} \otimes \textbf{x} \otimes \textbf{x} ] \label{eq:empirical_skew}
\end{equation}
and use it to get the empirical value of $M_3$,
\begin{equation}
\hat{\mathcal{M}}_3 := \hat{\mathcal{S}} - \hat{\sigma}^2 \sum_{i=1}^d \left( \hat{\mu} \otimes \textbf{e}_i \otimes \textbf{e}_i + \textbf{e}_i \otimes \hat{\mu} \otimes \textbf{e}_i + \textbf{e}_i \otimes \textbf{e}_i \otimes \hat{\mu} \right) \approx M_3.\label{eq:empirical_M3}
\end{equation}
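Putting equations~\ref{eq:empirical_mean}, \ref{eq:empirical_cov}, \ref{eq:empirical_skew} and \ref{eq:empirical_M3} together, a minimal NumPy sketch of this estimation might look as follows (the function name is our own choice):
\begin{verbatim}
import numpy as np

def empirical_moments(X):
    # X has shape (N, d): one sample x^(j) per row.
    N, d = X.shape
    mu = X.mean(axis=0)                         # empirical mean
    S = X.T @ X / N - np.outer(mu, mu)          # empirical covariance
    sigma2 = np.linalg.eigvalsh(S)[0]           # smallest eigenvalue
    skew = np.einsum('ja,jb,jc->abc', X, X, X) / N
    M3 = skew.copy()
    for i in range(d):                          # subtract the correction
        e = np.zeros(d)
        e[i] = 1.0
        M3 -= sigma2 * (np.einsum('a,b,c->abc', mu, e, e)
                        + np.einsum('a,b,c->abc', e, mu, e)
                        + np.einsum('a,b,c->abc', e, e, mu))
    return mu, sigma2, M3
\end{verbatim}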
By theorem~\ref{moments}, $M_3 = \displaystyle \sum_{i=1}^K w^{(i)}\ \textbf{u}^{(i)} \otimes \textbf{u}^{(i)} \otimes \textbf{u}^{(i)}$, which is a symmetric tensor containing all parameter information we want to find. The idea is, after computing a symmetric CPD for $\hat{\mathcal{M}}_3$, normalize the factors so each vector has unit norm. By doing this we have a tensor of the form
$$\sum_{i=1}^K \hat{w}^{(i)}\ \hat{\textbf{u}}^{(i)} \otimes \hat{\textbf{u}}^{(i)} \otimes \hat{\textbf{u}}^{(i)}$$
as a candidate solution. Note that it is easy to make all $\hat{w}^{(i)}$ positive: if any of them is negative, just multiply it by $-1$ and also multiply the associated vector $\hat{\textbf{u}}^{(i)}$ by $-1$. Since $(-1)^3 = -1$, the final tensor is unchanged but all $\hat{w}^{(i)}$ are now positive.
\subsection{Computational experiments}
Now we describe how the data is generated, which algorithms are used to compute the CPDs and how the comparisons are made. Here we restrict our attention to the implementations of TFX and Tensorlab.
To compute a symmetric CPD with TFX we just have to set the option \verb|symm| to\footnote{\url{https://github.com/felipebottega/Tensor-Fox/blob/master/tutorial/3-intermediate_options.ipynb}} \verb|True|. We also observed that it was necessary to let TFX perform more conjugate gradient iterations in order to obtain meaningful results, although this may increase the computational time. To compute a symmetric CPD with Tensorlab we need to create a model specifying that the result should be a symmetric tensor. Also, in this context it is only possible to use the NLS and MINF algorithms, with or without refinement. For more information we recommend reading sections 8.2 and 8.3 of the Tensorlab guide.\footnote{\url{https://www.tensorlab.net/userguide3.pdf}} In order to obtain meaningful results we set the parameters \verb|TolFun| and \verb|TolX| to $10^{-12}$.
\subsubsection{Procedure}
To work with Gaussian mixtures we generate datasets for $d = 20, 100$ and $K = 5, 15$. In particular, we have that $\mathcal{M}_3 \in \mathbb{R}^{d \times d \times d}$. For each example we generate the probabilities $w^{(i)}$ by taking random values in the interval $(0,1)$ such that all $w^{(i)} > 0$ and $w^{(1)} + \ldots + w^{(K)} = 1$. To generate the means we first generated a random Gaussian matrix $\textbf{M} \in \mathbb{R}^{d \times K}$ with full rank, computed its SVD, $\textbf{M} = \textbf{U} \Sigma \textbf{V}^T$, and used the first $K$ columns of $\textbf{U}$ as the means of the distributions.
For each example we generated a population with size $N = 10000$ by drawing samples $\textbf{x} \in \mathbb{R}^d$ using the following procedure:
\begin{enumerate}
\item Generate a random number $h \in \{1, 2, \ldots, K\}$ from the distribution given by the $w^{(i)}$. More precisely, $\mathbb{P}[h = i] = w^{(i)}$. The number obtained, $i$, is the distribution of $\textbf{x}$.
\item Generate a random vector $\textbf{z} \in \mathbb{R}^d$ with mean 0 and covariance $\sigma^2 \textbf{I}_d$.
\item Set $\textbf{x} = \textbf{u}^{(i)} + \textbf{z}$.
\item Repeat the previous steps $N$ times.
\end{enumerate}
With these samples we are able to estimate $\mathcal{M}_3$ using the empirical tensor $\hat{\mathcal{M}}_3$ defined in~\ref{eq:empirical_M3}.
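A minimal NumPy sketch of this sampling procedure might look as follows (the function name is our own choice, and the value of $\sigma$ is passed as a parameter):
\begin{verbatim}
import numpy as np

def sample_mixture(w, U, sigma, N, rng):
    # w: (K,) mixing probabilities; U: (d, K) orthonormal means.
    d, K = U.shape
    h = rng.choice(K, size=N, p=w)              # step 1
    Z = sigma * rng.standard_normal((N, d))     # step 2
    return U[:, h].T + Z                        # step 3: x = u^(h) + z
\end{verbatim}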
\subsubsection{Computational results}
For each example we computed 100 CPDs and retained the one with the best fit. In this case the best fit is defined not by the smallest CPD error but by the error corresponding to the parameters. If $\hat{w}^{(i)}$ and $\hat{\textbf{u}}^{(i)}, i=1 \ldots K$, are the approximated parameters, then the corresponding fit is the value
$$\frac{ \| \hat{\textbf{w}} - \textbf{w} \| }{ \| \textbf{w} \| } + \frac{ \| \hat{\textbf{U}} - \textbf{U} \| }{ \| \textbf{U} \| },$$
where $\hat{\textbf{w}} = [ \hat{w}^{(1)}, \ldots, \hat{w}^{(K)}]^T$ and $\hat{\textbf{U}} = [ \hat{\textbf{u}}^{(1)}, \ldots, \hat{\textbf{u}}^{(K)} ]$.
For this problem we took a different approach when comparing the algorithms. We let all programs run with the parameters mentioned before and compared the results in an \verb|accuracy| $\times$ \verb|time| plot. For each example we make two plots, the first with the errors of the probabilities and the second with the errors of the means.
For $d = 20$ we can see that both NLS and TFX achieve the same accuracy, with NLS being a little faster. When the dimension is increased to $d = 100$, TFX starts to be faster than NLS. Not only that, but for $K = 15$ we can see that NLS is less accurate than TFX. The difference in speed and accuracy becomes more evident as $d$ and $K$ increase. In all tests MINF performed poorly, being the slowest and least accurate. See figures~\ref{fig:d20_k5},~\ref{fig:d20_k15},~\ref{fig:d100_k5},~\ref{fig:d100_k15}.
\begin{figure}[htbp]
\includegraphics[scale=0.4]{d20_k5}
\caption{Computational results for $d = 20$ and $K = 5$. The horizontal axis represents the average time (in seconds) to compute the solutions. At the left, the vertical axis represents the best relative error of the probabilities, and at the right, the vertical axis represents the best relative error of the means.}
\label{fig:d20_k5}
\end{figure}
\begin{figure}[htbp]
\includegraphics[scale=0.415]{d20_k15}
\caption{Computational results for $d = 20$ and $K = 15$.}
\label{fig:d20_k15}
\end{figure}
\begin{figure}[htbp]
\includegraphics[scale=0.408]{d100_k5}
\caption{Computational results for $d = 100$ and $K = 5$.}
\label{fig:d100_k5}
\end{figure}
\begin{figure}[htbp]
\includegraphics[scale=0.4]{d100_k15}
\caption{Computational results for $d = 100$ and $K = 15$.}
\label{fig:d100_k15}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In the past years several implementations of algorithms to compute the CPD were proposed, usually based on alternating least squares, nonlinear least squares or optimization methods. Our work differs from others due to several implementation features not deeply investigated so far: the damping parameter update rule, preconditioners, regularization, compression, strategies for the number of conjugate gradient iterations, and others. In order to construct TensorFox we conducted a deep investigation of several possibilities for these features. After several attempts and tests we converged to this competitive algorithm for computing the CPD.
We introduced some of the main features of our implementation and performed a series of tests of it against other implementations. We then introduced the concept of Gaussian mixtures and performed more tests. This particular problem is harder than the others since we need to compute several CPDs in order to have a good fit. In these tests our algorithm also showed itself to be competitive and, in particular, it seemed to perform better as the problem gets harder.
\appendix
\section{Proofs and some generalization}
Although theorems 4.3 and 4.4 were stated for third order tensors, their generalizations are interesting and can be easily addressed. We remark that theorem 4.6 can also be generalized without much effort, but its third order version is the one of interest in this paper.
Denote $\textbf{w} = [vec(\textbf{W}^{(1)})^T, \ldots, vec(\textbf{W}^{(L)})^T]^T$, where each $\textbf{W}^{(\ell)} \in \mathbb{R}^{I_\ell \times R}$ is the factor matrix of a $L$-th order CPD given by $(\textbf{W}^{(1)}, \ldots, \textbf{W}^{(L)}) \cdot \mathcal{I}_{R \times \ldots \times R}$.\bigskip
\textbf{Proof of theorem 4.3:} To prove item 1, just take any nonzero $\textbf{w} \in \mathbb{R}^{R \sum_{\ell=1}^L I_\ell}$ and note that
$$\left\langle \left( \textbf{J}_f^T \textbf{J}_f + \mu \textbf{D} \right)\textbf{w}, \textbf{w} \right\rangle = $$
$$ = \left\langle \textbf{J}_f^T \textbf{J}_f \textbf{w}, \textbf{w} \right\rangle + \left\langle \mu \textbf{D} \textbf{w}, \textbf{w} \right\rangle = $$
$$ = \left\langle \textbf{J}_f \textbf{w}, \textbf{J}_f \textbf{w} \right\rangle + \left\langle \sqrt{\mu} \sqrt{\textbf{D}} \textbf{w}, \sqrt{\mu} \sqrt{\textbf{D}} \textbf{w} \right\rangle = $$
$$ = \| \textbf{J}_f \textbf{w} \|^2 + \| \sqrt{\mu} \sqrt{\textbf{D}} \textbf{w} \|^2 > 0.$$
The proof of item 2 is very similar to the corresponding proof for the classic Gauss-Newton method. From the iteration formula
$$\textbf{w}^{(k+1)} = \textbf{w}^{(k)} - \left( \textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D} \right)^{-1} \textbf{J}_f(\textbf{w}^{(k)})^T \cdot f(\textbf{w}^{(k)})$$
we can conclude that
$$- \left( \textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D} \right) \cdot \left( \textbf{w}^{(k+1)} - \textbf{w}^{(k)} \right) = \textbf{J}_f(\textbf{w}^{(k)})^T \cdot f(\textbf{w}^{(k)}).$$
Now, with this identity, note that
$$\left\langle \nabla F(\textbf{w}^{(k)}), \textbf{w}^{(k+1)} - \textbf{w}^{(k)} \right\rangle = $$
$$ = \left\langle \textbf{J}_f(\textbf{w}^{(k)})^T f(\textbf{w}^{(k)}), \textbf{w}^{(k+1)} - \textbf{w}^{(k)} \right\rangle = $$
$$ = - \left\langle \left( \textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D} \right) \cdot \left( \textbf{w}^{(k+1)} - \textbf{w}^{(k)} \right), \textbf{w}^{(k+1)} - \textbf{w}^{(k)} \right\rangle < 0. $$
The inequality above follows from the fact that $\textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D}$ is positive definite.
To prove item 3, take $\mu$ such that $\| \textbf{D}^{-1} \textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) \| \ll \mu$ (this is ``large enough'' in this context). We know $\textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D}$ is invertible since it is positive definite. Also, by the definition of $\mu$ we have that
$$\left( \textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \mu \textbf{D} \right)^{-1} = \left( \mu \textbf{D} \left( \displaystyle \frac{1}{\mu} \textbf{D}^{-1} \textbf{J}_f(\textbf{w}^{(k)})^T \textbf{J}_f(\textbf{w}^{(k)}) + \textbf{I} \right) \right)^{-1} \approx $$
$$ \approx \left( \mu \textbf{D} \left( \textbf{0} + \textbf{I} \right) \right)^{-1} = \frac{1}{\mu} \textbf{D}^{-1}.$$
Using the iteration formula with this approximation gives
$$\textbf{w}^{(k+1)} \approx \textbf{w}^{(k)} - \frac{1}{\mu} \textbf{D}^{-1} \textbf{J}_f(\textbf{w}^{(k)})^T f(\textbf{w}^{(k)}) = \textbf{w}^{(k)} - \frac{1}{\mu} \textbf{D}^{-1} \nabla F(\textbf{w}^{(k)}).$$
Finally, to prove item 4 just consider $\mu \approx 0$ and substitute in the iteration formula. Then we get the classical formula trivially.$\hspace{6.2cm}\square$\bigskip
Let $\textbf{A} \in \mathbb{R}^{k \times \ell}$ and $\textbf{B} \in \mathbb{R}^{m \times n}$ be two matrices. The \emph{Kronecker product} between $\textbf{A}$ and $\textbf{B}$ is defined by
$$\textbf{A} \tilde{\otimes} \textbf{B} = \left[
\begin{array}{cccc}
a_{11} \textbf{B} & a_{12} \textbf{B} & \ldots & a_{1\ell} \textbf{B}\\
a_{21} \textbf{B} & a_{22} \textbf{B} & \ldots & a_{2\ell} \textbf{B}\\
\vdots & \vdots & \ddots & \vdots\\
a_{k1} \textbf{B} & a_{k2} \textbf{B} & \ldots & a_{k\ell} \textbf{B}
\end{array}
\right].$$
The matrix given in the definition is a block matrix such that each block is an $m \times n$ matrix, so $\textbf{A} \tilde{\otimes} \textbf{B}$ is a $km \times \ell n$ matrix. We would like to point out that some texts use $\otimes$ for the Kronecker product and $\circ$ for the tensor product.\bigskip
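The key fact used in the computations below is the mixed-product property: whenever the matrix products are defined,
$$(\textbf{A} \tilde{\otimes} \textbf{B})^T (\textbf{C} \tilde{\otimes} \textbf{D}) = (\textbf{A}^T \textbf{C}) \tilde{\otimes} (\textbf{B}^T \textbf{D}),$$
and the Kronecker product of $1 \times 1$ matrices is just scalar multiplication. Applying this property factor by factor is what turns the products of Kronecker factors below into products of the inner products $\omega_{r' r''}^{(\ell)}$.\bigskip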
\textbf{Theorem 4.4 generalized:}
Denote $\omega_{r' r''}^{(\ell)} = \langle \textbf{w}_{r'}^{(\ell)}, \textbf{w}_{r''}^{(\ell)} \rangle$. Then we have that
$$\textbf{J}_f^T \textbf{J}_f =
\left[ \begin{array}{ccc}
\textbf{H}_{11} & \ldots & \textbf{H}_{1L}\\
\vdots & & \vdots\\
\textbf{H}_{L1} & \ldots & \textbf{H}_{LL}
\end{array} \right],$$
where
$$\textbf{H}_{\ell' \ell''} =
\left[ \begin{array}{ccc}
\displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{11}^{(\ell)} \cdot \textbf{w}_1^{(\ell')} \big( \textbf{w}_1^{(\ell'')} \big)^T & \ldots & \displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{1R}^{(\ell)} \cdot \textbf{w}_R^{(\ell')} \big( \textbf{w}_1^{(\ell'')} \big)^T\\
\vdots & & \vdots\\
\displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{R1}^{(\ell)} \cdot \textbf{w}_1^{(\ell')} \big( \textbf{w}_R^{(\ell'')} \big)^T & \ldots & \displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{RR}^{(\ell)} \cdot \textbf{w}_R^{(\ell')} \big( \textbf{w}_R^{(\ell'')} \big)^T
\end{array} \right]$$
for $\ell' \neq \ell''$, and
$$\textbf{H}_{\ell' \ell'} =
\left[ \begin{array}{ccc}
\displaystyle \prod_{\ell \neq \ell'} \omega_{11}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}} & \ldots & \displaystyle \prod_{\ell \neq \ell'} \omega_{1R}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}}\\
\vdots & & \vdots\\
\displaystyle \prod_{\ell \neq \ell'} \omega_{R1}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}} & \ldots & \displaystyle \prod_{\ell \neq \ell'} \omega_{RR}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}}
\end{array} \right].$$\bigskip
\textbf{Proof:} First note that
$$\textbf{J}_f^T \textbf{J}_f =
\left[ \begin{array}{c}
\displaystyle\frac{\partial f}{\partial \textbf{W}^{(1)}}^T\\
\vdots\\
\displaystyle\frac{\partial f}{\partial \textbf{W}^{(L)}}^T
\end{array} \right]
\left[ \frac{\partial f}{\partial \textbf{W}^{(1)}}, \ldots, \frac{\partial f}{\partial \textbf{W}^{(L)}} \right] = $$
$$ = \left[ \begin{array}{ccc}
\displaystyle\frac{\partial f}{\partial \textbf{W}^{(1)}}^T \frac{\partial f}{\partial \textbf{W}^{(1)}} & \ldots & \displaystyle\frac{\partial f}{\partial \textbf{W}^{(1)}}^T \frac{\partial f}{\partial \textbf{W}^{(L)}}\\
\vdots & & \vdots\\
\displaystyle\frac{\partial f}{\partial \textbf{W}^{(L)}}^T \frac{\partial f}{\partial \textbf{W}^{(1)}} & \ldots & \displaystyle\frac{\partial f}{\partial \textbf{W}^{(L)}}^T \frac{\partial f}{\partial \textbf{W}^{(L)}}\\
\end{array} \right],$$
where
$$\frac{\partial f}{\partial \textbf{W}^{(\ell')}}^T \frac{\partial f}{\partial \textbf{W}^{(\ell'')}} =
\left[ \begin{array}{c}
\displaystyle\frac{\partial f}{\partial \textbf{w}_1^{(\ell')}}^T\\
\vdots\\
\displaystyle\frac{\partial f}{\partial \textbf{w}_R^{(\ell')}}^T
\end{array} \right]
\left[ \frac{\partial f}{\partial \textbf{w}_1^{(\ell'')}}, \ldots, \frac{\partial f}{\partial \textbf{w}_R^{(\ell'')}} \right] = $$
$$ = \left[ \begin{array}{ccc}
\displaystyle\frac{\partial f}{\partial \textbf{w}_1^{(\ell')}}^T \frac{\partial f}{\partial \textbf{w}_1^{(\ell'')}} & \ldots & \displaystyle\frac{\partial f}{\partial \textbf{w}_1^{(\ell')}}^T \frac{\partial f}{\partial \textbf{w}_R^{(\ell'')}}\\
\vdots & & \vdots\\
\displaystyle\frac{\partial f}{\partial \textbf{w}_R^{(\ell')}}^T \frac{\partial f}{\partial \textbf{w}_1^{(\ell'')}} & \ldots & \displaystyle\frac{\partial f}{\partial \textbf{w}_R^{(\ell')}}^T \frac{\partial f}{\partial \textbf{w}_R^{(\ell'')}}
\end{array} \right].$$
Let $\omega_{r' r''}^{(\ell)} = \langle \textbf{w}_{r'}^{(\ell)}, \textbf{w}_{r''}^{(\ell)} \rangle$ and assume, without loss of generality, that $1 \leq \ell' < \ell'' \leq L$. Thus we have that
$$\frac{\partial f}{\partial \textbf{w}_{r'}^{(\ell')}}^T \frac{\partial f}{\partial \textbf{w}_{r''}^{(\ell'')}} = $$
$$ = \left( \textbf{w}_{r'}^{(1)} \tilde{\otimes} \ldots \tilde{\otimes} \textbf{w}_{r'}^{(\ell'-1)} \tilde{\otimes} \textbf{I}_{I_{\ell'}} \tilde{\otimes} \textbf{w}_{r'}^{(\ell'+1)} \tilde{\otimes} \ldots \tilde{\otimes} \textbf{w}_{r'}^{(L)} \right)^T \cdot $$
$$\left( \textbf{w}_{r''}^{(1)} \tilde{\otimes} \ldots \tilde{\otimes} \textbf{w}_{r''}^{(\ell''-1)} \tilde{\otimes} \textbf{I}_{I_{\ell''}} \tilde{\otimes} \textbf{w}_{r''}^{(\ell''+1)} \tilde{\otimes} \ldots \tilde{\otimes} \textbf{w}_{r''}^{(L)} \right) = $$
$$ = \prod_{\ell \neq \ell', \ell''} \omega_{r' r''}^{(\ell)} \cdot \textbf{w}_{r''}^{(\ell')} \big( \textbf{w}_{r'}^{(\ell'')} \big)^T.$$
In the case $\ell' = \ell''$ we have
$$\frac{\partial f}{\partial \textbf{w}_{r'}^{(\ell')}}^T \frac{\partial f}{\partial \textbf{w}_{r''}^{(\ell')}} = $$
$$ = \omega_{r' r''}^{(1)} \tilde{\otimes} \ldots \tilde{\otimes} \omega_{r' r''}^{(\ell'-1)} \tilde{\otimes} \ \textbf{I}_{I_{\ell'}}^2 \tilde{\otimes} \ \omega_{r' r''}^{(\ell'+1)} \tilde{\otimes} \ldots \tilde{\otimes} \omega_{r' r''}^{(L)} = $$
$$ = \prod_{\ell \neq \ell'} \omega_{r' r''}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}}.$$
Therefore,
$$\frac{\partial f}{\partial \textbf{W}^{(\ell')}}^T \frac{\partial f}{\partial \textbf{W}^{(\ell'')}} =
\left[ \begin{array}{ccc}
\displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{11}^{(\ell)} \cdot \textbf{w}_1^{(\ell')} \big( \textbf{w}_1^{(\ell'')} \big)^T & \ldots & \displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{1R}^{(\ell)} \cdot \textbf{w}_R^{(\ell')} \big( \textbf{w}_1^{(\ell'')} \big)^T\\
\vdots & & \vdots\\
\displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{R1}^{(\ell)} \cdot \textbf{w}_1^{(\ell')} \big( \textbf{w}_R^{(\ell'')} \big)^T & \ldots & \displaystyle \prod_{\ell \neq \ell', \ell''} \omega_{RR}^{(\ell)} \cdot \textbf{w}_R^{(\ell')} \big( \textbf{w}_R^{(\ell'')} \big)^T
\end{array} \right]$$
when $\ell' \neq \ell''$. Finally, we have that
$$\hspace{1.8cm} \frac{\partial f}{\partial \textbf{W}^{(\ell')}}^T \frac{\partial f}{\partial \textbf{W}^{(\ell')}} =
\left[ \begin{array}{ccc}
\displaystyle \prod_{\ell \neq \ell'} \omega_{11}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}} & \ldots & \displaystyle \prod_{\ell \neq \ell'} \omega_{1R}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}}\\
\vdots & & \vdots\\
\displaystyle \prod_{\ell \neq \ell'} \omega_{R1}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}} & \ldots & \displaystyle \prod_{\ell \neq \ell'} \omega_{RR}^{(\ell)} \cdot \textbf{I}_{I_{\ell'}}
\end{array} \right]. \hspace{1.8cm}\square$$\bigskip
\section*{Acknowledgments}
I would like to acknowledge my advisor, professor Gregorio Malajovich, for the fruitful conversations we had and the help to put this paper in a reasonable format.
\section{Robustness under Cascading Failure}
\label{sec:cascading-failure}
\input{./cascading-failure/cascading-failure-network-flow}
\input{./cascading-failure/cascading-failure-contagion}
\paragraph{Future Research Directions}
Very few formal stability and robustness analyses exist for cascading failure dynamics in physical networks. It would be natural to investigate whether the nonlinearities induced by cascading failure can be cast into canonical forms, similar in spirit to the capacity constraint in Section~\ref{sec:capacity-perturbation-dynamic}, so as to use tools from the piecewise linear or hybrid systems literature.
In the setup in Sections~\ref{sec:cascade-electrical} and \ref{sec:cascade-transport}, the vulnerability of a link is determined by its residual capacity with respect to the nominal equilibrium flow, which in turn could be the outcome of a process. This is to be contrasted with the setting in Section~\ref{sec:cascading-failure-contagion}, where the assignment of thresholds to nodes is independent of the initial network structure and of the network formation process which leads to the initial interconnection structure. Modeling this dependency and evaluating the resilience of the network formation process will be an interesting direction to pursue.
TITLE: How do I get the value of the structure sheaf on an arbitrary open subset of an affine scheme?
QUESTION [0 upvotes]: In Vakil's notes on the foundations of algebraic geometry, he defines the structure sheaf $\mathscr{O}_{\text{Spec}A}(D(f))$ for distinguished open sets $D(f)$ by $\mathscr{O}_{\text{Spec}A}(D(f)) := A_f$. He then defines the restriction map for $D(g) \subseteq D(f)$ by
$$\rho^{D(f)}_{D(g)} : \mathscr{O}_{\text{Spec}A}(D(f)) \rightarrow \mathscr{O}_{\text{Spec}A}(D(g))$$
"in the obvious way". I assume this means
$$\frac{a}{f^{n}} \mapsto \frac{b^{m}a}{g^{mn}}?$$
He then goes on and proves that this is indeed a sheaf of rings on the distinguished open sets, and claims that it then gives a sheaf of rings on the topological space Spec $A$. I know that every open set $U$ of Spec $A$ may be written as a union of distinguished open sets such that
$$U = \bigcup_{i \in I} D(f_{i}).$$
But how do I get the sheaf $\mathscr{O}_{\text{Spec}A}(U)$? I.e. is this the localization at the sum $\sum_{i \in I} f_i$ or the product of the $A_{f_{i}}$?
REPLY [0 votes]: It is not important to know what the sections are on a generic open set, as you can always work with basic open sets, but you can find a description of the sections of $\text{Spec} A$ over an arbitrary open set in Hartshorne's Algebraic Geometry, page 70.
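More explicitly, the sheaf condition forces, for an arbitrary open set $U = \bigcup_{i\in I} D(f_i)$,
$$\mathscr{O}_{\text{Spec}A}(U) = \varprojlim_{D(f) \subseteq U} A_f = \left\{ (s_i) \in \prod_{i \in I} A_{f_i} \;:\; s_i = s_j \text{ in } A_{f_i f_j} \text{ for all } i,j \right\},$$
so in general it is neither a single localization of $A$ nor the full product of the $A_{f_i}$, but the subring of the product consisting of compatible families.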
TITLE: Finding a critical point
QUESTION [2 upvotes]: Let $X=\{u\in C^2([0,1],\mathbb R),u(0)=u'(0)=0 \}$. Let $F : X \rightarrow \mathbb R$ be such that $F(u)=\frac{1}{2}\int_0^1(u')^2dt-\int_0^1\cos(u)dt$.
For which norm $X$ is a Banach space ?
Show that $F$ is differentiable in every point and calculate its differential.
Find the critical point of $F$ and determine its nature.
I had no problem solving this question: I know that $C^2([0,1],\mathbb R)$ is a Banach space with the norm $||f||=||f||_\infty+||f'||_\infty+||f''||_\infty$. Furthermore $X$ is closed in $C^2([0,1],\mathbb R)$ and is a subspace, so $X$ is a Banach space.
Let :
$f_1:C^1([0,1],\mathbb R)\rightarrow\mathbb R,u\mapsto \int_0^1 u dt$
$f_2:C^2([0,1],\mathbb R)\rightarrow C^1([0,1],\mathbb R),u\mapsto u'$
$f_3:C^1([0,1],\mathbb R)\rightarrow C^1([0,1],\mathbb R),u\mapsto u^2$
$f_4:C^2([0,1],\mathbb R)\rightarrow\mathbb R,u\mapsto \int_0^1 u dt$
$f_5:C^2([0,1],\mathbb R)\rightarrow C^2([0,1],\mathbb R),u\mapsto \cos(u)$
$f_1,f_2,f_4$ are linear and continuous : For example
$|f_1(u)|\le ||u||_\infty\le||u||_\infty+||u'||_\infty$ so $f_1$ is continuous.
Also $f_3$ is differentiable and we have : $f_3'(u)(h)=2uh$ and $f_5$ is differentiable with $f_5'(u)(h)=-h\sin(u)$.
Thus we have :
$$F'(u)(h)=\int_0^1u'h'dt+\int_0^1h\sin(u)dt$$
I need to find the $u \in X$ such that $\forall h, F'(u)(h)=0$. I am not really sure how to solve this question. I know that $u=0$ is such a critical point, but are there other such points?
To study the nature of the critical point at $u=0$, I want to show that it is a minimum: indeed, if $u$ is not constant then the first term of $F$ is $>0$. With a similar reasoning we need $u \equiv 0 \pmod{2\pi}$ to minimise the second term, and the condition $u\in X$ then imposes $u=0$ to minimise $F$.
All in all I want to know how to show that $0$ is the only critical point, and I would like to know if there is a simpler way to determine the nature of this critical point.
REPLY [0 votes]: With an integration by parts, using $h(0)=0$ for every $h \in X$, we have for all $u \in X$: $$F'(u)(h)= u'(1)h(1) + \int_0^1 (-u''+\sin(u))h\,dx$$ Taking first test functions $h$ with $h(1)=0$ and then arbitrary $h \in X$, we get $-u''+\sin(u)=0$ together with the natural boundary condition $u'(1)=0$.
Thus all critical points $u\in X$ must verify the Cauchy problem :
$$\begin{cases} -u''+\sin(u)=0 \\ u(0)=u'(0)=0\end{cases}$$
So, by the Cauchy–Lipschitz theorem, $u=0$ is the unique solution to this differential equation (and it indeed satisfies $u'(1)=0$), hence the unique critical point.
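To determine the nature of this critical point, one can compute the second derivative:
$$F''(u)(h,h)=\int_0^1 (h')^2dt+\int_0^1 h^2\cos(u)dt,$$
so at $u=0$,
$$F''(0)(h,h)=\int_0^1 (h')^2dt+\int_0^1 h^2dt>0 \quad \text{for all } h \neq 0,$$
which shows directly that $0$ is a strict local minimum of $F$.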
\begin{document}
\setcounter{page}{1}
\title{Coded Elastic Computing on Machines with Heterogeneous Storage and Computation Speed}
\author{Nicholas Woolsey,~\IEEEmembership{Student Member,~IEEE}, Rong-Rong Chen,~\IEEEmembership{Member,~IEEE},\\ and Mingyue Ji,~\IEEEmembership{Member,~IEEE}
\thanks{This manuscript was partially presented in the conference paper \cite{woolsey2019hetCEC}.}
\thanks{The authors are with the Department of Electrical Engineering,
University of Utah, Salt Lake City, UT 84112, USA. (e-mail: nicholas.woolsey@utah.edu, rchen@ece.utah.edu and mingyue.ji@utah.edu)}
}
\maketitle
\begin{abstract}
We study the optimal design of heterogeneous Coded Elastic Computing (CEC) where machines have varying computation speeds and storage. CEC, introduced by Yang et al. in 2018, is a framework that mitigates the impact of elastic events, where machines can join and leave at arbitrary times. In CEC, data is distributed among machines using a Maximum Distance Separable (MDS) code such that subsets of machines can perform the desired computations. However, state-of-the-art CEC designs only operate on homogeneous networks where machines have the same speeds and storage. This may not be practical. In this work, based on an MDS storage assignment, we develop a novel computation assignment approach for heterogeneous CEC networks to minimize the overall computation time. We first consider the scenario where machines have heterogeneous computing speeds but the same storage, and then the scenario where both heterogeneities are present. We propose a novel combinatorial optimization formulation and solve it exactly by decomposing it into a convex optimization problem for finding the optimal computation load and a ``filling problem" for finding the exact computation assignment. A low-complexity ``filling algorithm" is adapted and can be completed within a number of iterations that is at most the number of available machines.
\end{abstract}
\begin{IEEEkeywords}
Coded Elastic Computing (CEC), heterogeneous storage space, heterogeneous computing speeds, Maximum Distance Separable (MDS) codes, filling problem, optimal design, low-complexity
\end{IEEEkeywords}
\section{Introduction}
\label{section: intro}
Coding is an effective tool to speed up distributed computing networks and has attracted significant attention recently. Examples include Coded Distributed Computing (CDC) for MapReduce-like frameworks \cite{li2018fundamental,li2018cdc,konstantinidis2018leveraging,woolsey2018new,Srinivasavaradhan2018distributed,prakash2018coded,woolsey2019ccdc,xu2019cdc,woolsey2019coded,wan2020topological}
and coded data shuffling for distributed machine learning \cite{attia2019shuffling,elmahdy2018shuffling,wan2020fundamental}, where code designs minimize the communication load by trading increased computation resources and/or storage on each machine. Another example is to use codes to mitigate the straggler effect in applications such as linear operations
\cite{lee2017speeding,tandon2017gradient,dutta2016short,wan2020distributed}
and matrix multiplications \cite{yu2020straggler,dutta2020coding,wan2020cache}, where any subset of machines with a cardinality larger than the recovery threshold can recover the matrix multiplication. This eliminates the need to wait for the computation of slow machines. Moreover, coding has also been applied to the computing problems integrated with optimizations \cite{karakus2017straggler} and with the requirements of security \cite{bitar2017minimizing,aliasgari2020computing}, privacy \cite{hua2019privacy,obead2020private,aliasgari2020computing,chen2020gcsa} and robustness \cite{chen2018draco}. In this paper, we will consider another important and novel application of coding in order to cope with the {\em elasticity} in distributed and cloud computing systems \cite{jonas2017occupy}.
Coded Elastic Computing (CEC) was introduced by Yang {\it et al.} in 2018 to mitigate the impact of {\it preempted} machines on a storage limited distributed computing network \cite{yang2018coded}. As opposed to stragglers, whose identities are unknown when computations are assigned, the preempted (unavailable) machines are known. Hence, while it is necessary to bring computation redundancy into the straggler mitigation problem, it is desired to have no computation redundancy in the elastic computing problem. In elastic computing, computations are performed over many times steps and between each time step an {\it elastic event} may occur where machines become preempted or available again. Computations are performed on the same set of data, e.g., a matrix, while the computations change each time step. For example, in each time step the data matrix may be multiplied with a different vector. In each time step, the goal becomes to assign computations among the available machines such that the overall computation time can be minimized. A naive approach is to assign each machine a non-overlapping part of the data. However, this is inefficient as the storage placement has to be redefined with each elastic event.
In homogeneous CEC proposed in \cite{yang2018coded}, where all machines have the same storage space and computing speed, the storage of each machine is placed once using a Maximum Distance Separable (MDS) code and remains unchanged between elastic events. The data is split into $L$ equal sized, disjoint data sets and each machine stores a coded combination of these sets. For the state-of-the-art design \cite{yang2018coded}, each machine stores
a unique coded $\frac{1}{L}$ fraction (in size) of the original data library. Furthermore, since all the machines have the same computing speed, in order to minimize the overall computation time,
all the machines are assigned an equal number of computations (e.g., number of vector-vector multiplications). The requirement for CEC is to allow any computation task (e.g., matrix-vector multiplications) to be resolved by combining the coded computation results of $L$ machines.
This means that the computation tasks have to be assigned to these $L$ available machines while keeping the computation redundancy to a minimum.
The authors of \cite{yang2018coded} proposed a novel ``cyclic'' computation assignment such that each machine is assigned the same number of computations and no computation redundancy is present.
The recent work \cite{dau2019optimizing} also studies the homogeneous CEC and aims to maximize the overlap of the task assignments between computation time steps. With each elastic event, the computation assignment must change. In the cyclic approach proposed in \cite{yang2018coded}, the assignments in the current time step are independent of assignments in previous time steps. In \cite{dau2019optimizing}, the authors design assignment schemes to minimize the changes in the assignments between time steps. In some cases, the proposed assignment schemes were shown to achieve zero ``transition waste", or
no new local computations at the existing machines.
The works of \cite{yang2018coded} and \cite{dau2019optimizing} only study homogeneous CEC networks. However, in practice, the available computing machines in a cloud system are often heterogeneous due to the fact that even if all the machines are homogeneous, one machine could be used by multiple users simultaneously and each user may have different computation and storage demands. When machines have varying storage space and computing speeds or capabilities, the previous CEC designs are sub-optimal. For example, in \cite{yang2018coded} and \cite{dau2019optimizing}, each machine is assigned the same amount of computations. If one machine is faster it will be idle waiting for slower machines to finish. The problem of heterogeneous computation assignment is challenging because we must ensure that each computation is performed on $L$ coded data sets while meeting some optimal computation load of each machine based on the relative computing speeds of the machines such that the overall computation time can be minimized. Moreover, the previous designs assume each machine is capable of storing a $\frac{1}{L}$ fraction of the data, or one coded data set. If machines have varying storage requirements then to use the designs of \cite{yang2018coded} and \cite{dau2019optimizing}, either machines with less storage are excluded or machines with more storage do not utilize their entire storage.
In this paper, we propose a CEC framework optimized for a heterogeneous network where machines have varying computation speeds and storage requirements such that the overall computation time is minimized.
{We represent the relative speed of the machines by a speed vector $\boldsymbol{s}\in\mathbb{R}_{+}^N$ and the storage capacity of the machines by a storage vector $\boldsymbol{\sigma} \in \mathbb{Z}_+^N$. The latter represents an integer number of coded data sets each machine can store and imposes a limit on the amount of computations each machine can be assigned. Under this framework, as much as the storage allows, more computations are assigned to faster machines and less to slower machines in a systematic way to minimize the
overall computation time while maintaining MDS code requirements.
}
The main contributions of this paper are summarized as follows.
\begin{itemize}
\item To the best of our knowledge, this is the first work to adopt the performance metric of overall computation time to optimize the computation assignment for CEC networks while making use of MDS codes.
We introduce a class of computation assignments for CEC networks in which computations are distributed
over common row sets ({with possibly different sizes}) across all machines, and the optimization of computation assignment is performed through an iterative algorithm which identifies the row sets and the machines that compute them. These lead to a novel combinatorial optimization framework
that finds
the optimal assignment for minimal overall computation time.
\item As opposed to other existing works on CEC networks that focus on homogeneous networks, our study applies to general CEC networks in which machines can have both heterogeneous computation speed and storage capacity. We develop a novel approach to solve the optimal computation assignment by utilizing both types of heterogeneity under a unified optimization framework. Particularly, we show that the optimal computation assignment can be determined based on an ordering of each machine's storage capacity to computation speed ratio (SCR). The overall computation time of the optimal computation assignment is limited by machines with the largest SCR.
\item We solve the proposed combinatorial optimization problem exactly by decomposing it into two sub-problems. The first is a relaxed convex optimization problem to determine the computation load of each machine {without specifying the exact computation assignment}. The second is a computation assignment problem (or a ``filling problem") to find the exact computation assignment across machines while meeting the optimal computation load {obtained from the first sub-problem} and the MDS code requirement.
\item
Under the proposed optimization framework, after solving the first sub-problem for the optimal computation load, we can adapt an iterative filling algorithm (previously developed for the setting of private information retrieval \cite{woolsey2019optimal}) to find the exact computation assignment in the second sub-problem.
The adapted algorithm for the new setting of CEC networks
converges within a number of iterations no greater than the number of available machines. This leads to a low-complexity design of optimal computation assignment for large CEC networks with an arbitrary set of machine computing speeds and storage requirements.
\end{itemize}
The paper is organized as follows. In Section~\ref{sec: Network Model and Problem Formulation}, we present the network model and introduce the proposed problem formulation. Before going into details of the proposed solutions, in Section~\ref{sec: example}, we give examples of the proposed CEC algorithms for two scenarios, which are networks with 1) heterogeneous computing speeds and homogeneous storage space and 2) heterogeneous computing speeds and heterogeneous storage capacity. The proposed combinatorial optimization problem is solved in two steps in Section~\ref{sec: compload} and Section~\ref{sec: compassign}, where the proposed low-complexity CEC computation assignment algorithms are introduced. The paper is concluded in Section~\ref{sec: conclusions}.
\paragraph*{Notation Convention}
We use $|\cdot|$ to denote the cardinality of a set or the length of a vector.
Let $[n] := \{1,2,\ldots,n\}$ denote a set of integers from $1$ to $n$. A bold symbol such as $\boldsymbol{a}$ indicates a vector and $a[i]$ denotes the $i$-th element of $\boldsymbol{a}$.
$\mathbb{Z}^+$ denotes the set of all positive integers; $\mathbb{R}^+$ denotes the set of all positive real numbers and $\mathbb{Q}^+$ denotes the set of all positive rational numbers. Finally, let $\mathbb{R}_+^N$ be the set of all length-$N$ vectors of real, positive numbers.
\section{System Model and Problem Formulation}
\label{sec: Network Model and Problem Formulation}
\subsection{System Model }
We consider a set of $N$ machines. Each machine $n\in[N]$ stores an integer number, $\sigma[n]$, of coded sub-matrices, which we refer to as {\it cs-matrices}, derived from a $q\times r$ data matrix, $\boldsymbol{X}$, where $q$ can be large. Here, we denote the vector $\boldsymbol{\sigma} = (\sigma[1], \sigma[2], \ldots, \sigma[N]), \;\sigma[n] \in \mathbb{Z}^+,\; \forall n \in [N]$ as the storage vector and
\be
Z\triangleq\sum_{n=1}^{N}\sigma[n]
\ee
is the total number of cs-matrices stored among all machines.\footnote{Note that assuming $\sigma[n] \in \mathbb{Z}^+, \forall n \in [N]$ is for the ease of presentation. If any $\sigma[n]$ is a fractional number, then we can let $Z$ be large such that all $\sigma[n]$ are positive integers.} The cs-matrices are specified by a $Z\times L$ MDS code generator matrix $\boldsymbol{G}$ where $g_{i,\ell}$ denotes the element in the $i$-th row and $\ell$-th column, where $Z,L \in \mathbb{Z}^+$.
Any $L$ rows of $\boldsymbol{G}$ form an invertible $L \times L$ matrix. The data matrix, $\boldsymbol{X}$, is row-wise split into $L$ disjoint, $\frac{q}{L}\times r$ uncoded sub-matrices, $\boldsymbol{X}_1,\ldots ,\boldsymbol{X}_L$. Define a set of $Z$ cs-matrices
\be
\boldsymbol{\tilde{X}}_i = \sum_{\ell=1}^{L}g_{i,\ell}\boldsymbol{X}_\ell
\ee
for $i\in[Z]$. Each $ \boldsymbol{\tilde{X}}_i, i \in [Z]$ has $\frac{q}{L} $ rows.
Assume that machine $n$ will store $\sigma[n]$ of these cs-matrices, specified by an index set $\mathcal{Q}_n$.
In other words, machine $n$ stores the cs-matrices of $\{\boldsymbol{\tilde{X}}_i: i\in\mathcal{Q}_n\}$, where $|\mathcal{Q}_n|=\sigma[n]$.
We also assume that different machines store different cs-matrices: the sets $\mathcal{Q}_1, \cdots, \mathcal{Q}_N$ are disjoint.
Also, note that given a cs-matrix $ \boldsymbol{\tilde{X}}_i, i \in [Z]$, there is a unique machine that stores this cs-matrix.
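As a small illustration (this particular generator matrix is one possible choice, not prescribed by the scheme), let $L = 2$ and $Z = 4$ with the Vandermonde-type generator
$$\boldsymbol{G} = \left[ \begin{array}{cc} 1 & 1\\ 1 & 2\\ 1 & 3\\ 1 & 4 \end{array} \right], \quad \text{so that} \quad \boldsymbol{\tilde{X}}_i = \boldsymbol{X}_1 + i\, \boldsymbol{X}_2, \;\; i \in [4].$$
Any two rows of $\boldsymbol{G}$ form an invertible matrix since $\det \left[ \begin{array}{cc} 1 & i\\ 1 & j \end{array} \right] = j - i \neq 0$ for $i \neq j$, so $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ can be recovered from any two cs-matrices.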
The machines collectively perform matrix-vector computations\footnote{It can be shown that our CEC designs also apply to the other applications outlined in \cite{yang2018codedArxiv} beyond matrix-vector multiplications, including matrix-matrix multiplications, linear regression and so on.} over multiple time steps. In a given time step only a subset of the $N$ machines are available to perform matrix computations. Note that the available machines of each time step are known when we design the computation assignments. Specifically, in time step $t$, a set of available machines $\mathcal{N}_t \subseteq [N]$ aims to compute
\be
\boldsymbol{y}_t = \boldsymbol{X}\boldsymbol{w}_t
\ee
where $\boldsymbol{w}_t$ is some vector of length $r$. The machines of $[N]\setminus\mathcal{N}_t$ are preempted. The number of cs-matrices stored at the available machines is
\be
Z_t\triangleq\sum_{n\in\mathcal{N}_t}\sigma[n] \geq L,
\ee
which means that $Z_t$ is at least $L$ such that the desired computation can be recovered.
Machines of $\mathcal{N}_t$ do not compute $\boldsymbol{y}_t$ directly. Instead, each machine $n\in \mathcal{N}_t$ computes the set
\be
\mathcal{V}_{n} = \bigcup_{i\in\mathcal{Q}_n}\left\{ v = \boldsymbol{\tilde{X}}_i^{(j)}\boldsymbol{w}_t : j \in \mathcal{W}_{i} \right\}
\ee
where $\boldsymbol{\tilde{X}}_i^{(j)}$ is the $j$-th row of $\boldsymbol{\tilde{X}}_i$, the cs-matrix stored by machine $n$, and $\mathcal{W}_{i}\subseteq\left[ \frac{q}{L}\right]$ is the set of rows of $\boldsymbol{\tilde{X}}_i$ assigned to machine $n$ in time step $t$ for computing tasks.
\subsection{Key Definitions }
In the following, we introduce three key definitions that are useful to specify the computation assignments in a CDC network.
\begin{defn}
The computation load vector, $\boldsymbol{\mu}$, defined as
\be \label{eq: compload_vector}
\mu[n] = {\frac{|\mathcal{V}_n|}{\left( \frac{q}{L}\right)}=} \frac{\sum_{i\in\mathcal{Q}_n}|\mathcal{W}_{i}|}{
\left( \frac{q}{L} \right)}, \;\; \forall n \in \mathcal{N}_t,
\ee
is the normalized number of rows computed by machine $n$ in time step $t$.
\hfill $\Diamond$
\end{defn}
Note that while $\boldsymbol{\mu}$, $\mathcal{W}_{i}$, and $\mathcal{V}_{n}$ can change with each time step, reference to $t$ is omitted {for ease of presentation. } Moreover, assume that machines have varying computation speeds defined by a strictly positive vector $\boldsymbol{s} = [s[1], s[2], \ldots, s[N]], \;s[n] \in \mathbb{Q}^+,\; \forall n \in [N]$, fixed over all time steps. Here, computation speed is defined as the number of row multiplications per unit time.
\begin{defn}
Given a computation load vector $\boldsymbol{\mu}$, the overall computation time is dictated by the machine(s) that takes the most time to perform its assigned computations. Thus, we define the overall computation time as a function of the computation load $\boldsymbol{\mu}$, as
\be
c(\boldsymbol{\mu}) = \max_{n\in \mathcal{N}_t} \frac{\mu[n]}{s[n]}.
\ee
\hfill $\Diamond$
\end{defn}
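For instance, suppose $L = 2$, $\mathcal{N}_t = \{1,2\}$ with $\boldsymbol{\sigma} = (1,2)$ (so that $Z_t = 3$), and $\boldsymbol{s} = [1, 2]$. Since every row must be computed on $L$ cs-matrices, $\sum_{n\in\mathcal{N}_t}\mu[n] = L$, and the loads $\mu[1] = \frac{2}{3}$, $\mu[2] = \frac{4}{3}$ give $\frac{\mu[1]}{s[1]} = \frac{\mu[2]}{s[2]} = \frac{2}{3}$, so $c(\boldsymbol{\mu}) = \frac{2}{3}$. Assigning computation loads proportional to the speeds equalizes the finishing times, which is the intuition behind the optimal assignments developed later.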
At each time step $t$, for each $j\in\left[ \frac{q}{L}\right]$, the $j$-th row of $L$ cs-matrices undergoes a vector-vector multiplication with $\boldsymbol{w}_t$. The results are sent to a master node which can resolve the elements of $\boldsymbol{y}_t$ by the MDS code design. To ensure each row is assigned $L$ times,\footnote{When each machine stores just $1$ coded matrix, we say the computations are assigned to each machine as in the original CEC work \cite{yang2018codedArxiv}. Alternatively, when some machines store more than $1$ matrix, each row is computed for $L$ different cs-matrices which are stored across a number of machines less than or equal to $L$.} we will introduce a general framework for defining computation assignments.
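To see explicitly how the master node resolves $\boldsymbol{y}_t$, suppose row $j$ is computed from the cs-matrices indexed by some set $\mathcal{I} \subseteq [Z]$ with $|\mathcal{I}| = L$. Since $\boldsymbol{\tilde{X}}_i^{(j)}\boldsymbol{w}_t = \sum_{\ell=1}^{L} g_{i,\ell}\, \boldsymbol{X}_\ell^{(j)}\boldsymbol{w}_t$, stacking the received results gives
$$\left[ \boldsymbol{\tilde{X}}_i^{(j)}\boldsymbol{w}_t \right]_{i \in \mathcal{I}} = \boldsymbol{G}_{\mathcal{I}} \left[ \boldsymbol{X}_\ell^{(j)}\boldsymbol{w}_t \right]_{\ell \in [L]},$$
where $\boldsymbol{G}_{\mathcal{I}}$ denotes the $L \times L$ submatrix of $\boldsymbol{G}$ formed by the rows indexed by $\mathcal{I}$, which is invertible by the MDS property. The master can therefore recover the $L$ values $\boldsymbol{X}_\ell^{(j)}\boldsymbol{w}_t$, i.e., the corresponding entries of $\boldsymbol{y}_t$.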
Given a cs-matrix $\boldsymbol{\tilde{X}}_i$, $\mathcal{W}_{i}$ includes the subset of rows in $\boldsymbol{\tilde{X}}_i$ {assigned to be computed by the machine that stores $\boldsymbol{\tilde{X}}_i$}.
In our design, instead of trying to determine this assignment row by row, we make the assignment in ``blocks'' of rows. Namely, each $\mathcal{W}_{i}$ will include blocks of rows from $\boldsymbol{\tilde{X}}_i$. Furthermore, we will use a common set of blocks for the assignment of all $\mathcal{W}_{i}, i \in [Z]$, which we refer to as row sets. These are formally defined below.
\begin{defn}
Since each cs-matrix has $\frac{q}{L}$ rows, we partition the full index set of these rows $\{1,2, \cdots, \frac{q}{L} \}$ into $F$ consecutive disjoint subsets, possibly with varying sizes, called row sets, denoted by $\boldsymbol{\mathcal{M}}_t = (\mathcal{M}_{1},\ldots,\mathcal{M}_{F} )$, whose union gives the full index set.
\hfill $\Diamond$
\end{defn}
\begin{defn}
Given row sets $\boldsymbol{\mathcal{M}}_t = (\mathcal{M}_{1},\ldots,\mathcal{M}_{F} )$, we define cs-matrix sets
$\boldsymbol{\mathcal{P}}_t = (\mathcal{P}_{1} , \ldots , \mathcal{P}_{F} )$ where each $\mathcal{P}_{f}$ includes the indices of $L$ cs-matrices for which all rows in $\mathcal{M}_{f}$ are computed by the machines that store these cs-matrices.
Specifically, if $i \in \mathcal{P}_{f}$ and a machine $n$ stores $\boldsymbol{\tilde{X}}_i$ ($i \in \mathcal{Q}_n$), then machine $n$ will compute all rows in $\mathcal M_f$ from $\boldsymbol{\tilde{X}}_i$, i.e., these rows are included in $\mathcal{W}_{i}$. This ensures that each row set $\mathcal{M}_{f}$ is computed exactly $L$ times using the $L$ cs-matrices stored on these machines.
\hfill $\Diamond$
\end{defn}
From the above definitions, we see that the rows computed by machine $n\in \mathcal{N}_t$ in time step $t$ are in the set
\be
\bigcup_{i\in\mathcal{Q}_n}\mathcal{W}_{i} = \bigcup_{i\in\mathcal{Q}_n}\left\{ \mathcal{M}_f : f\in[F], i \in \mathcal{P}_{f}\right\}.
\ee
Note that the sets $\mathcal{M}_{1},\ldots,\mathcal{M}_{F}$ and $\mathcal{P}_{1} , \ldots , \mathcal{P}_{F}$ and $F$ may vary with each time step.
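For illustration, the mapping above from $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$ to the per-matrix assignments $\mathcal{W}_{i}$ can be written as a short Python sketch (our own names; each row set is represented as a Python set of row indices):
\begin{verbatim}
def rows_for_machine(Q_n, M_sets, P_sets):
    """W_i for each cs-matrix i in Q_n: the union of the row sets M_f
    whose cs-matrix set P_f contains i."""
    return {i: set().union(*[M for M, P in zip(M_sets, P_sets) if i in P])
            for i in Q_n}
\end{verbatim}
For instance, with the sets of Fig.~\ref{fig: exp1}(a), the machine storing $\boldsymbol{\tilde{X}}_5$ obtains $\mathcal{W}_5 = \mathcal{M}_1 \cup \mathcal{M}_3 \cup \mathcal{M}_4$.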
\begin{remark}
When each machine only stores one cs-matrix, there is a one-to-one mapping between a cs-matrix and a machine, and thus $\mathcal P_f$ also represents the set of machines that compute rows in $\mathcal M_f$. This is used in Example 1 and Algorithm 1, where we assume that the machines have heterogeneous speeds but homogeneous storage.
\end{remark}
\begin{remark}
The row set and cs-matrix set pair $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$ combine to determine the computation assignment. The computation load $\boldsymbol{\mu}$ is a function of $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$.
\end{remark}
One example to illustrate $\Mc_f$ and $\Pc_f$ is given in Fig.~\ref{fig: exp1}(a). In this example, we consider the case where all machines have heterogeneous computing speeds but homogeneous storage, i.e., $\sigma[n]=1, n \in [N]$. Each machine stores one cs-matrix, and the union of the $\Mc_f, f \in [4]$, shown in different colors, covers all the row indices $\left[ \frac{q}{L}\right]$. We let machine $n$ store $\boldsymbol{\tilde{X}}_n$; then we can see that $\Pc_1 = \{1,5,6\}$, $\Pc_2 = \{2,3,4\}$, $\Pc_3 = \{3,5,6\}$, $\Pc_4 = \{4,5,6\}$. In this case, since there is a one-to-one mapping between a machine and the cs-matrix that it stores, the cs-matrix set $\mathcal P_f$ can also be interpreted as the index set of the machines that compute the stored cs-matrices.
\subsection{Formulation of a Combinatorial Optimization Framework}
In a given time step $t$, our goal is to define the computation assignment, $\boldsymbol{\mathcal{M}}_t$ and $\boldsymbol{\mathcal{P}}_t $, such that the resulting computation load vector defined in (\ref{eq: compload_vector}) has the minimum computation time.
In time step $t$, given $\mathcal{N}_t$, $\mathcal{Q}_1,\ldots,\mathcal{Q}_N$ and $\boldsymbol{s}$, the optimal computation time, $c^*$, is the minimum computation time over all possible computation assignments, $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$.
Hence,
based on all the conditions discussed before and given the storage vector $\boldsymbol{\sigma}$,
we can formulate the following combinatorial optimization problem.
\begin{subequations} \label{eq: optprob_assign}
\begin{align}
\underset{{\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)} }{\text{minimize}} & \quad c\left(\boldsymbol{\mu}\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)\right) \label{eq: opt 1} \\
\text{subject to:} & \bigcup_{\mathcal{M}_f \in \boldsymbol{\mathcal{M}}_t}\mathcal{M}_f=\left[ \frac{q}{L}\right],\label{eq: optprob_assign 0}\\
& \mathcal{P}_f\subseteq\bigcup_{n\in\mathcal{N}_t}\mathcal{Q}_n,\;\; \forall \mathcal{P}_f \in \boldsymbol{\mathcal{P}}_t, \label{eq: optprob_assign 1}\\
& |\mathcal{P}_f|= L, \;\; \forall \mathcal{P}_f \in \boldsymbol{\mathcal{P}}_t,\label{eq: optprob_assign 2} \\
& |\boldsymbol{\mathcal{M}}_t|=|\boldsymbol{\mathcal{P}}_t|. \label{eq: optprob_assign 3}
\end{align}
\end{subequations}
The objective function (\ref{eq: opt 1}) is the overall computation time of a computation load vector $\boldsymbol{\mu}$, which is a function of $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$. Conditions (\ref{eq: optprob_assign 0})-(\ref{eq: optprob_assign 3}) specify the constraints on $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$, which are to be optimized over. Specifically, (\ref{eq: optprob_assign 0}) ensures that the union of the row sets $\mathcal M_f$ equals the full set of $\frac{q}{L}$ rows, i.e., all rows are assigned to be computed by some active machines at time $t$.
Condition (\ref{eq: optprob_assign 1}) ensures that the cs-matrices in $\mathcal P_f$ are stored by active machines only at time $t$, and hence each row set $\mathcal M_f$ is only assigned to be computed from these active machines.
Condition (\ref{eq: optprob_assign 2}) ensures that each row set is computed from exactly $L$ cs-matrices. Condition (\ref{eq: optprob_assign 3}) ensures that each row set has a corresponding cs-matrix set, i.e., the number of row sets equals the number of cs-matrix sets.
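As a sanity check, these constraints can be verified mechanically. The Python sketch below (our own illustrative code; since the row sets are disjoint by definition, each $\mathcal M_f$ is represented only by the fraction of rows it contains, and (\ref{eq: optprob_assign 0}) reduces to the fractions summing to $1$) tests whether a candidate pair $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$ is feasible:
\begin{verbatim}
def is_feasible(row_fracs, P_sets, Q, available, L):
    """Check a candidate assignment (M_t, P_t) against the constraints.
    row_fracs -- fraction of rows in each row set M_f; P_sets -- the
    cs-matrix sets P_f; Q -- Q_n per machine; available -- N_t."""
    stored = set().union(*[Q[n] for n in available])
    return (sum(row_fracs) == 1                        # row sets cover all rows
            and all(set(P) <= stored for P in P_sets)  # only active storage
            and all(len(P) == L for P in P_sets)       # each row set at L matrices
            and len(row_fracs) == len(P_sets))         # one P_f per M_f
\end{verbatim}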
\begin{remark}
In Sections~\ref{sec: compload} and \ref{sec: compassign}, we precisely solve the combinatorial optimization problem of (\ref{eq: optprob_assign}).
We decompose this problem into two sub-problems. First, we solve a convex optimization problem to find an optimal computation load vector $\boldsymbol{\mu}$ without the consideration of a specific computation assignment; that is, we solve the problem of (\ref{eq: optprob_assign}) by treating $\boldsymbol{\mu}$ as a real vector without considering whether such a computation assignment is feasible. Second, given the optimal $\boldsymbol{\mu}$ from the convex program, we solve a computation assignment problem, or ``filling problem", to find a $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$ that meets the optimal computation load. Moreover, we show that an optimal assignment, $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$, can be found via a low complexity algorithm that completes in at most $N_t$ iterations, so the number of computation assignments, $F$, is at most $N_t$.
\end{remark}
Before going into the details of the solution for the optimization problem (\ref{eq: optprob_assign}), we will first present two toy examples to illustrate the proposed algorithms.
\section{Two CEC Examples}
\label{sec: example}
In this section, we will discuss two CEC examples to illustrate the proposed approaches. Example 1 considers the scenario where machines have heterogeneous computing speeds but homogeneous storage constraints. Example 2 considers the more general scenario where machines have both heterogeneous storage space and computing speeds.
\subsection{Example 1:
CEC with Heterogeneous Computing Speeds/Homogeneous Storage Constraints}
\label{sec: Machines With Homogeneous Storage Constraints}
We consider a system with a total of $N=6$ machines where each has the storage capacity to store $\frac{1}{3}$ of the data matrix $\boldsymbol{X}$. In time step $t$, the machines have the collective goal of computing $\boldsymbol{y}_t=\boldsymbol{X}\boldsymbol{w}_t$ where $\boldsymbol{w}_t$ is some vector. In order to allow for preempted machines, $\boldsymbol{X}$ is split row-wise into $L=3$ sub-matrices, $\boldsymbol{X}_1$, $\boldsymbol{X}_2$ and $\boldsymbol{X}_3$ and an MDS code is used to construct the cs-matrices $\{\boldsymbol{\tilde{X}}_n : n\in[N] \}$ which are stored among the machines. In particular, when the storage space among machines is homogeneous, machine $n$ stores only one cs-matrix $\boldsymbol{\tilde{X}}_n$ and $\boldsymbol{\sigma}=[1,\ldots,1]$.
In this case, there is a one-to-one mapping between cs-matrices and machines.
This placement is designed such that any element of $\boldsymbol{y}_t$ can be recovered by obtaining the corresponding coded computation from any $L=3$ machines.
To recover the entirety of $\boldsymbol{y}_t$, we split the cs-matrices into row sets, such that each set is used for computation at $L=3$ machines.
The machines have relative computation speeds defined by
$\boldsymbol{s} = [\;2,\;\;2,\;\;3,\;\;3,\;\;4,\;\;4\;]$,
where machines $5$ and $6$ are the fastest and can perform row computations twice as fast as machines $1$ and $2$. Machines $3$ and $4$ are the next fastest and can perform matrix computations $1.5$ times as fast as machines $1$ and $2$. Our goal is to assign computations, or rows of the cs-matrices, to the machines to minimize the overall computation time with the constraint that each computation is assigned to $3$ machines.
\begin{figure*}
\centering \includegraphics[width=15cm]{ElasticFig1_v4.pdf}
\vspace{-0.6cm}
\caption{~\small An illustration of the optimal computation assignments in Example 1 over $4$ time steps on a heterogeneous CEC network where machines have heterogeneous computing speed and homogeneous storage space. In this example, $N=6$, $Z=6$, $L=3$ and $\Qc_n=\{ n \}, \forall n \in [6]$. At $t=1$, there are $F=4$ row sets $\mathcal M_1$ (green), $\mathcal M_2$ (blue), $\mathcal M_3$ (magenta), $\mathcal M_4$ (yellow), each assigned to $L=3$ cs-matrices. The number labeled in the center of each row set is the fraction of the rows in that row set. The row sets change over time. At $t=3$, there are $F=3$ row sets, and at $t=4$, there is only $F=1$ row set. }
\label{fig: exp1}
\end{figure*}
In time step $1$, there are no preempted machines which means that $\mathcal{N}_1=\{1,\ldots, 6 \}$ and $N_1=6$. We assign fractions of the rows to the machines defined by the computation load vector
$\boldsymbol{\mu} = \left[\;\frac{1}{3},\;\;\frac{1}{3},\;\;\frac{1}{2},\;\;\frac{1}{2},\;\;\frac{2}{3},\;\;\frac{2}{3}\;\right]$,
such that machines $1$ and $2$ are assigned $\frac{1}{3}$, machines $3$ and $4$ are assigned $\frac{1}{2}$ and machines $5$ and $6$ are assigned $\frac{2}{3}$ of the rows of their respective cs-matrices. We define $\boldsymbol{\mu}$ such that it sums to $L=3$ and each row can be assigned to $3$ machines.
Furthermore, based on the machine computation speeds, the machines finish at the same time to minimize the overall computation time.
In Section \ref{sec: compload}, we outline
the systematic approach to determine $\boldsymbol{\mu}$. Next, given $\boldsymbol{\mu}$, the rows of the cs-matrices must be assigned. We define row sets, $\mathcal{M}_{1}$, $\mathcal{M}_{2}$, $\mathcal{M}_{3}$, and $\mathcal{M}_{4}$ which are assigned to $F=4$ sets of $L=3$ cs-matrices $\mathcal{P}_{1}$, $\mathcal{P}_{2}$, $\mathcal{P}_{3}$, and $\mathcal{P}_{4}$. Since each machine $n$ stores one cs-matrix $\boldsymbol{\tilde{X}}_n$, it is equivalent to say that we assign computations to machines. In other words $\mathcal{P}_{1}$, $\mathcal{P}_{2}$, $\mathcal{P}_{3}$, and $\mathcal{P}_{4}$ represent the machines assigned to compute $\mathcal{M}_{1}$, $\mathcal{M}_{2}$, $\mathcal{M}_{3}$, and $\mathcal{M}_{4}$, respectively. These sets are depicted in Fig.~\ref{fig: exp1}(a) where, for example, $\mathcal{M}_{1}$ contains the first $\frac{1}{3}$ of the rows assigned to machines $\mathcal{P}_{1} = \{ 1,5,6\}$. Moreover, $\mathcal{M}_{2}$ contains the next $\frac{1}{3}$ of the rows assigned to machines $\mathcal{P}_{2}=\{2,3,4\}$, $\mathcal{M}_{3}$ contains the next $\frac{1}{6}$ of the rows assigned to machines $\mathcal{P}_{3}=\{3,5,6\}$ and $\mathcal{M}_{4}$ contains the final $\frac{1}{6}$ of the rows assigned to machines $\mathcal{P}_{4}=\{4,5,6\}$. In Section \ref{sec: compassign}, we present Algorithm \ref{algorithm:1} to determine the computation assignment for general $\boldsymbol{\mu}$. By this assignment, the fraction of rows assigned to machine $n$ sums to $\mu[n]$ and each row is assigned to $L=3$ machines to recover the entirety of $\boldsymbol{y}_1$.
In time step $2$, we find $N_2=5$ because machine $4$ is preempted and
no longer available to perform computations. Therefore, the computations must be re-assigned among $\mathcal{N}_2=\{1, 2, 3, 5, 6 \}$. First, we obtain
$\boldsymbol{\mu} = \left[\;\frac{2}{5},\;\;\frac{2}{5},\;\;\frac{3}{5},\;\;0,\;\;\frac{4}{5},\;\;\frac{4}{5}\;\right]$,
which sums to $L=3$ and minimizes the overall computation time. Given $\boldsymbol{\mu}$, we then use Algorithm \ref{algorithm:1} (see Section~\ref{sec: compassign}), which assigns computations to the machine with the least number of remaining rows to be assigned and to the $L-1=2$ machines with the most remaining rows to be assigned. For example, in the first iteration, $\mathcal{M}_{1}$ is defined to contain the first $\frac{2}{5}$ of the rows and is assigned to machines $\mathcal{P}_{1}= \{ 1,5,6\}$. After this iteration, machines $2$, $5$ and $6$ require $\frac{2}{5}$ of the total rows to still be assigned to them and machine $3$ requires $\frac{3}{5}$ of the total rows. In the next iteration, $\mathcal{M}_{2}$ contains the next $\frac{1}{5}$ of the rows and is assigned to $\mathcal{P}_{2}=\{ 2,3,6\}$. Note that only $\frac{1}{5}$ of the rows could be assigned in this iteration; otherwise, there would only be two machines, $3$ and $5$, still requiring assignments, and the remaining rows could not be assigned to three machines. In the final two iterations, $\mathcal{M}_{3}$ and $\mathcal{M}_{4}$ each contain $\frac{1}{5}$ of the previously unassigned rows and are assigned to the machines of $\mathcal{P}_{3} = \{2,3,5 \}$ and $\mathcal{P}_{4} = \{3,5,6 \}$, respectively. These assignments are depicted in Fig.~\ref{fig: exp1}(b).
Next, in time step $3$, we find $N_3=4$ because machines $4$ and $6$ are preempted. Hence, $\mathcal{N}_3=\{1, 2, 3, 5 \}$. Similar to previous time steps, it is ideal to have machines $3$ and $5$ compute $1.5\times$ and $2\times$ the number of computations, respectively, compared to machines $1$ and $2$. However, this is not possible since each machine can be assigned at most a number of rows equal to the number of rows of the cs-matrices. In this case, we assign all rows to the fastest machine, machine $5$, and assign fractions of the rows to the remaining machines which sum up to $2$. As a result, we let
$\boldsymbol{\mu} = \left[\;\frac{4}{7},\;\;\frac{4}{7},\;\;\frac{6}{7},\;\;0,\;\;1,\;\;0\;\right]$.
Then, Algorithm \ref{algorithm:1} defines $\mathcal{M}_{1}$, $\mathcal{M}_{2}$ and $\mathcal{M}_{3}$ as disjoint sets containing $\frac{3}{7}$, $\frac{1}{7}$ and $\frac{3}{7}$ of the rows, respectively. Moreover, these row sets are assigned to the machines of $\mathcal{P}_{1}= \{ 1,3,5\}$, $\mathcal{P}_{2}= \{ 1,2,5\}$ and $\mathcal{P}_{3}= \{ 2,3,5\}$, respectively. These assignments are depicted in Fig.~\ref{fig: exp1}(c).
Finally, in time step $4$, machines $1$, $4$ and $6$ are preempted, which means that $\Nc_4 = \{2,3,5\}$ and $N_4=3$. To assign all the rows to $L=3$ machines, each available machine is assigned all of the rows and
$\boldsymbol{\mu} = \left[\;0,\;\;1,\;\;1,\;\;0,\;\;1,\;\;0\;\right]$.
In other words, $\mathcal{M}_{1}$ contains all rows and $\mathcal{P}_{1}=\{2,3,5 \}$. This is depicted in Fig.~\ref{fig: exp1}(d).
Next, we present Example 2 with heterogeneous storage space and computing speeds. This example uses Algorithm 2, which is a generalization of Algorithm 1 discussed in Example 1.
\subsection{Example 2: CEC with Heterogeneous Computing Speeds and Storage Constraints}
\label{sec: Machines With Heterogeneous Storage Constraints}
Consider the case where $L=6$ and there are $N=6$ machines which each have distinct speed-storage pairs. For ease of presentation, we only focus on a single time step (or $t=1$) and assume there are no preempted machines ($N_1=6$).
The computing speeds of the available machines are defined by
$\boldsymbol{s} = \left[\;2,\;\; 3, \;\; 4, \;\; 2, \;\; 3, \;\; 4\; \right]$.
The total number of stored cs-matrices is $Z_1=9$ and the machines store a number of cs-matrices defined by
$\boldsymbol{\sigma} = \left[\;2,\;\; 2, \;\; 2, \;\; 1, \;\; 1, \;\; 1 \;\right]$.
For example, machine 1 has a speed of 2 and stores 2 cs-matrices and machine 5 has a speed of 3 and stores 1 cs-matrix.
The storage of the cs-matrices at each machine is shown in Fig.~\ref{fig: exp2}, where each block represents a cs-matrix and is labeled at the top with $\boldsymbol{\tilde{X}}_i$.
The machines are arranged in descending order of $\frac{\sigma[n]}{s[n]}$ so that Theorem 1 can be applied to find the optimal computation load vector, as described in Section \ref{sec: compload}. Based on Theorem 1, the computation load vector is
$\boldsymbol{\mu} = \left[\;\frac{8}{11},\;\;\frac{12}{11},\;\;\frac{16}{11},\;\;\frac{8}{11},\;\;1,\;\;1\;\right]$.
Notice that, different from Example 1, a machine here may have a computation load greater than $1$ if it performs computations on more than one stored cs-matrix. However, similar to Example 1, machines either have the same computation time or perform computations on all locally stored data. In this example, based on the computation speeds, machines $1$ through $4$ complete the assigned computing tasks at the same time, while machines $5$ and $6$ compute the entirety of their one locally available cs-matrix and finish before the other machines.
\begin{figure*}
\centering
\centering \includegraphics[width=16cm]{ElasticFig1_hetstorage_v2.pdf}
\vspace{-0.8cm}
\caption{~\small Example 2: Optimal computation assignments for a CEC network with machines that have heterogeneous storage requirements and varying computations speeds. Here, we have $N=6$, $Z=9$, $L=6$. The machines have varying computation speeds $\boldsymbol{s} = [2,\; 3, \; 4, \; 2, \; 3, \; 4\;]$ and storage capacity
$\boldsymbol{\sigma} = [2,\; 2, \; 2, \; 1, \; 1, \; 1 \;].$ Machines 1 to 3 each store 2 cs-matrices, and machines 4 to 6 each store only 1 cs-matrix. The machines are ordered in decreasing order of the SCR $\frac{\sigma[n]}{s[n]}$. There are 4 row sets $\mathcal M_1$ (green), $\mathcal M_2$ (blue), $\mathcal M_3$ (magenta), $\mathcal M_4$ (yellow), each assigned to $L=6$ cs-matrices. For instance, $\mathcal M_1$ is assigned to cs-matrices $\mathcal P_1=\{3,4,5,7,8,9\}$. The optimal computation load vector is $\boldsymbol{\mu} = [\;\frac{8}{11},\;\;\frac{12}{11},\;\;\frac{16}{11},\;\;\frac{8}{11},\;\;1,\;\;1\;]$. For instance, machine 3 computes one full cs-matrix $\boldsymbol{\tilde{X}}_5$ and partially computes $\frac{3}{11} +\frac{2}{11}=\frac{5}{11}$ of $\boldsymbol{\tilde{X}}_6$, which adds up to a computation load of $\mu[3]=1+\frac{5}{11}=\frac{16}{11}$.}
\label{fig: exp2}
\vspace{-0.5cm}
\end{figure*}
Next, we need to assign computations that yield the computation load vector, $\boldsymbol{\mu}$.
We use Algorithm 2 in Section~\ref{sec: compassign}, where
instead of assigning computations to machines, we assign computations to each cs-matrix at each machine.
In this algorithm, we need to first decide how much of each cs-matrix will be computed. For example, consider machine $3$, which has a computation load of $\mu[3]=\frac{16}{11}$ and locally stores $\sigma[3]=2$ cs-matrices. There is a choice of how much machine $3$ will compute from each of its cs-matrices. A solution which simplifies the assignment is for machine $3$ to compute the entirety of one cs-matrix and a $\frac{5}{11}$ fraction of the other. Similarly, machine $2$ will compute the entirety of one cs-matrix and a $\frac{1}{11}$ fraction of the other. In general, when $\sigma[n]>1$ and machine $n$ stores more than one cs-matrix, it will compute $\lfloor \mu[n] \rfloor$ whole cs-matrices and a $\mu[n]-\lfloor \mu[n] \rfloor$ fraction of the remaining cs-matrix.
The final computation assignments are shown in Fig. \ref{fig: exp2}. There are $F=4$ matrix row sets $\mathcal{M}_1$ through $\mathcal{M}_4$ which contain a $\frac{1}{11}$, $\frac{3}{11}$, $\frac{2}{11}$ and $\frac{5}{11}$ fraction of the rows, respectively. Each row set $\mathcal M_f$ is assigned to a cs-matrix set $\mathcal P_f$ that contains $L=6$ cs-matrices. These are given by the cs-matrix sets $\mathcal P_1=\{ 3,4,5,7,8,9\}$, $\mathcal P_2=\{ 1,3,5,6,8,9\}$, $\mathcal P_3=\{ 3,5,6,7,8,9\}$, and $\mathcal P_4=\{ 1,3,5,7,8,9\}$.
\section{First sub-problem: Optimal Computation Load Vector}
\label{sec: compload}
We decompose the optimization problem (\ref{eq: optprob_assign}) into two sub-problems.
In this section, we present the first sub-problem by
introducing a relaxed convex optimization problem
to find the optimal computation load vector $\boldsymbol{\mu^*}$ and its corresponding computation time ${\hat c}{^*}=c(\boldsymbol \mu^*)$
without considering an explicit computation assignment $(\boldsymbol{\mathcal{M}}_t,\boldsymbol{\mathcal{P}}_t)$. Due to the relaxed constraints, we have $\hat{c}^*\leq c^*$.
Next, in Section~\ref{sec: compassign}, we will present the second sub-problem in which
we show that it is possible to find a computation assignment $(\boldsymbol{\mathcal{M}}_t,\boldsymbol{\mathcal{P}}_t)$ that achieves the $\boldsymbol{\mu^*}$ that we found in the first step. Hence, there is no gap between the ``relaxed" convex optimization problem and (\ref{eq: optprob_assign}) and we have $c^* = \hat{c}^*$.
\subsection{The Proposed Relaxed Convex Optimization Problem}
Given a computation speed vector $\boldsymbol{s}$ and storage vector $\boldsymbol{\sigma}$, we let the optimal computation load vector $\boldsymbol{\mu}^*$ be the solution to the following convex
optimization problem:
\begin{subequations} \label{eq: optprob}
\begin{align}
\underset{\boldsymbol{\mu}}{\text{minimize }} \; & c(\boldsymbol{\mu})= \max_{n\in [N_t]}\frac{\mu[n]}{s[n]} \\
\text{subject to:} &\sum_{n\in [N_t]}\mu[n] = L,
\label{eq:sum_L}\\
&0 \leq \mu[n] \leq \sigma[n], \forall n \in [N_t], \\
& \mu[n] \in \mathbb{R}^+, \; \forall n\in [N_t]
\end{align}
\end{subequations}
which can be shown to be a convex optimization problem.
While computation assignments, $(\boldsymbol{\mathcal{M}}_t,\boldsymbol{\mathcal{P}}_t)$, are not explicitly considered in (\ref{eq: optprob}), we note that the key constraint $\sum_{n\in [N_t]}\mu[n] = L$ is a relaxed version of the requirement that each row set be assigned to $L$ cs-matrices.
It is important to note that the analytical solution to the optimization problem (\ref{eq: optprob}) can be explicitly found.
When $Z_t=L$, it can be seen that this optimal solution is given by $\boldsymbol{\mu}^* = \boldsymbol{\sigma}$.
When $Z_t>L$, the analytical optimal solution to (\ref{eq: optprob}) is presented in the following theorem.
\begin{theorem}\label{th: load_assignment}
Assume that $Z_t>L$ and the machines are labeled in the decreasing order of the storage capacity to computation speed ratio (SCR)
\be
\label{eq: theorem 1 1}
\frac{\sigma[1]}{s[1]} \geq \frac{\sigma[2]}{s[2]} \geq \cdots \geq\frac{\sigma[N_t]}{s[N_t]}.
\ee
The optimal solution $\boldsymbol \mu^*$ to the optimization problem of (\ref{eq: optprob}) must take the following form
\begin{equation}
\mu^*[n]=\begin{cases}
\hat c^* s[n] & \text{if } 1\le n \le k^*\\
\sigma[n] & \text{if } k^*+1 \le n \le N_t,
\end{cases}
\label{eq:optimal_form}
\end{equation}
where $k^*$ is the largest integer in $[N_t]$ such that
\be
\frac{\sigma[k^*+1]}{s[k^*+1]} < \hat c^* = \frac{L-\sum_{n=k^*+1}^{N_t}\sigma[n]}{\sum_{n=1}^{k^*}s[n]} \leq \frac{\sigma[k^*]}{s[k^*]},\quad \text{if $k^* < N_t$},
\label{eq: c_bounds-fork}
\ee
otherwise, $k^* = N_t$.
Here, ${\hat c}{^*}=c(\boldsymbol \mu^*)$ is the maximum computation time among the $N_t$ machines given the computation load assignment $\boldsymbol \mu^*$.
\end{theorem}
\subsection{Proof of Theorem~\ref{th: load_assignment}}
In the following, we first present two Claims that will lead to the proof of Theorem~\ref{th: load_assignment}.
\begin{claim}\label{cl: 1}
If $\mu^*[n]<\hat{c}^* s[n]$, then $\mu^*[n] = \sigma[n]$. Thus, in this case the optimal computation load assigned to machine $n$ is equal to its storage.
\end{claim}
\begin{IEEEproof}
We prove Claim \ref{cl: 1} by contradiction.
Since $\hat c^*=\max_{n \in [N_t]}\frac{\mu^*[n]}{s[n]}$, we define two disjoint sets $\mathcal{T}_0$ and $\mathcal{T}_1$, where $\mathcal{T}_0 \cup \mathcal{T}_1=[N_t]$, as follows.
\be
\mathcal{T}_0=\{n\in[N_t]:\mu^*[n]=\hat{c}^* s[n]\}
\ee
and
\be
\mathcal{T}_1=\{n\in[N_t]:\mu^*[n]<\hat{c}^* s[n]\}.
\ee
In the following, we will show that if there exists an $i \notin \mathcal{T}_0$, then we must have $i\in \mathcal{T}_1$ and $\mu^*[i]=\sigma[i]$. In order to do this, we will construct a new solution $\boldsymbol{\mu}'$ from the optimal solution $\boldsymbol{\mu}^*$ such that $c(\boldsymbol{\mu}') < \hat{c}^*$, contradicting the optimality of $\boldsymbol{\mu}^*$. The details are as follows. Assume that there exists some $i\in[N_t]$ such that $ i \in \mathcal{T}_1$ and $\mu^*[i] < \sigma[i]$.
Define $\boldsymbol{\mu}'$ such that
\begin{align}
\mu'[n] = \left\{ \begin{array}{cc}
\mu^*[n] + \epsilon \hspace{5mm} &\text{if }n=i, \\
\mu^*[n] - \frac{\epsilon}{|\mathcal{T}_0|} \hspace{5mm} &\text{if } n\in\mathcal{T}_0, \\
\mu^*[n] \hspace{5mm} &\text{if } n \in \mathcal{T}_1 \setminus \{i\}\\
\end{array} \right.
\end{align}
where $0<\epsilon<\sigma[i]-\mu^*[i]$ and $\epsilon$ is sufficiently small such that
\be
\frac{\mu'[i]}{s[i]} = \frac{\mu^*[i]+\epsilon}{s[i]} < \hat{c}^*,
\ee
and for all $n \in \mathcal{T}_0$
\be
\mu^*[n] - \frac{\epsilon}{|\mathcal{T}_0|} > 0.
\ee
One can verify that we have $\frac{\mu'[n]}{s[n]} < \hat{c}^*$ for any $n \in [N_t]$ and thus we obtain $c(\boldsymbol{\mu}') < \hat{c}^*$. This contradicts the assumption that $\boldsymbol{\mu}^*$ is optimal. Thus, it follows that if $n \notin \mathcal{T}_0$, then we must have $n\in \mathcal{T}_1$ and $\mu^*[n]=\sigma[n]$.
\end{IEEEproof}
\begin{claim}\label{cl: 2}
If $j \in \mathcal{T}_0 $ and $i \in \mathcal{T}_1$, then
$\frac{\sigma[j]}{s[j]} > \frac{\sigma[i]}{s[i]}$.
\end{claim}
\begin{IEEEproof}
This claim follows directly from
\be\label{eq: pfeq1}
\frac{\mu^*[i]}{s[i]}=\frac{\sigma[i]}{s[i]} < \hat{c}^*= \frac{\mu^*[j]}{s[j]} \leq \frac{\sigma[j]}{s[j]}.
\ee
\end{IEEEproof}
\indent {\it{Proof of Theorem~\ref{th: load_assignment}:}}
Combining Claims \ref{cl: 1} and \ref{cl: 2}, we find that the optimal solution must take the form of
\begin{equation}
\mu^*[n]=\begin{cases}
\hat c_k^* s[n] & \text{if } 1\le n \le k,\\
\sigma[n] & \text{if } k+1 \le n \le N_t,
\end{cases}
\label{eq:optimal_form_k}
\end{equation}
where $k=|\mathcal{T}_0|$. Next, we will optimize $k$ such that $\hat c_k^*$ is minimized.
Since $\sum_{n\in [N_t]}\mu^*[n] = L$, by using (\ref{eq:optimal_form_k}), we obtain (\ref{eq: c_bounds-fork}) because
\begin{align}
L = \sum_{n=1}^{N_t}\mu^*[n]
&= \sum_{n=1}^{k}\mu^*[n] + \sum_{n=k+1}^{N_t}\sigma[n] \\
&= \hat{c}_k^*\sum_{n=1}^{k}s[n] + \sum_{n=k+1}^{N_t}\sigma[n]
\end{align}
and
\be \label{eq: c_k}
\hat{c}_k^* = \frac{L - \sum_{n=k+1}^{N_t}\sigma[n]}{\sum_{n=1}^{k}s[n]}.
\ee
The right-most inequality of (\ref{eq: c_bounds-fork}) follows from $k \in \mathcal{T}_0$ and $\mu^*[k] \le \sigma[k]$. The left-most inequality of (\ref{eq: c_bounds-fork}) follows from $k+1 \in \mathcal{T}_1$ and $\mu^*[k+1] = \sigma[k+1]$.
Since $\frac{\sigma[n]}{s[n]}$ is a decreasing sequence, we see from (\ref{eq: c_bounds-fork}) that $\hat c_k^*$ is minimized when $k$ is chosen to be $k^*$, the largest value in $[N_t]$ such that (\ref{eq: c_bounds-fork}) is satisfied.
\subsection{Discussions on Theorem~\ref{th: load_assignment}}
From (\ref{eq: theorem 1 1}), (\ref{eq:optimal_form}) and (\ref{eq: c_bounds-fork}), we can observe that the optimal solution $\boldsymbol \mu^*$ to the optimization problem (\ref{eq: optprob}) is always rational, due to the fact that the entries of $\boldsymbol s$ are rational numbers and those of $\boldsymbol \sigma$ are integers. Hence, $\boldsymbol \mu^*$ is achievable for large enough $q$ provided that a corresponding computation assignment exists.
The following corollary presents the solution of the optimal computation load vector when the storage among machines is homogeneous, i.e., each machine stores exactly one cs-matrix. This storage design is equivalent to that used in the original CEC work of \cite{yang2018coded}, but here the machines have varying speeds as opposed to the homogeneous setting of \cite{yang2018coded}.
\begin{corollary}\label{crly: 1}
When $\boldsymbol{\sigma}=[1,1\cdots,1]$, we find
\begin{equation}
\mu^*[n]=\begin{cases}
\hat c^* s[n] & \text{if } 1\le n \le k^*\\
1 & \text{if } k^*+1 \le n \le N_t,
\end{cases}
\label{eq:optimal_form_hm}
\end{equation}
where $k^*$ is the largest integer in $[N_t]$ such that
\be
\frac{1}{s[k^*+1]} < \hat c^* = \frac{L-N_t+k^*}{\sum_{n=1}^{k^*}s[n]} \leq \frac{1}{s[k^*]}.
\label{eq: c_bounds-fork_hm}
\ee
\end{corollary}
\begin{IEEEproof}
Corollary \ref{crly: 1} is proved by substituting $\sigma[n]=1$ for $n\in[N_t]$ in equations (\ref{eq:optimal_form}) and (\ref{eq: c_bounds-fork}) and ordering the machines by speed in ascending order.
\end{IEEEproof}
\begin{remark}
The two cases in (\ref{eq:optimal_form}) are determined by whether a machine $n$ satisfies $ \mu^*[n]=\hat c^* s[n]$ or $ \mu^*[n]< \hat c^* s[n]$. For the first case when $1 \le n \le k^*$, the equality is achieved and we must have $0< \mu^*[n]\le \sigma[n]$. Among these $k^*$ machines, the computation load $ \mu^*[n]$ is proportional to the computation speed $s[n]$. For the second case when $k^*+1 \le n \le N_t$, we have the strict inequality and $ \mu^*[n]=\sigma[n]$: the computation load $\mu^*[n]$ equals (thus is limited by) the storage $\sigma[n]$.
The equality in (\ref{eq: c_bounds-fork}) ensures that $\sum_{n=1}^{N_t} \mu^*[n]=L$; the right-most inequality, together with the SCR ordering (\ref{eq: theorem 1 1}), ensures that $\mu^*[n] = \hat c^* s[n] \le \sigma[n]$ for any $1\le n \le k^*$; the left-most inequality ensures that for any $k^*+1 \le n \le N_t$, we have
$ \mu^*[n]< \hat c^* s[n]$. Hence, the computation time $\hat c^*$ is equal to the local computation time of any of the $k^*$ machines with the largest SCR.
\end{remark}
Since the optimization problem of (\ref{eq: optprob}) aims to minimize a convex function on a closed and convex set, the existence of an optimal solution is guaranteed. This ensures the existence of
some $k^* \in [N_t]$ such that (\ref{eq: c_bounds-fork}) is satisfied. In the following, we provide a numerical procedure to find $k^*$. First, it is straightforward to verify that if the right-hand-side (RHS) inequality ``$\le $'' of (\ref{eq: c_bounds-fork}) is violated for $k^*=i$, then the left-hand-side (LHS) inequality ``$<$'' of (\ref{eq: c_bounds-fork}) must hold for $k^*=i-1$. In other words, for any $i\in[N_t]$,
\be
\text{if } \hat c_i^* > \frac{\sigma[i]}{s[i]}, \text{ then } \frac{\sigma[i]}{s[i]} < \hat c_{i-1}^*,
\label{eq:equivalent}
\ee
where $\hat c_i^*$ (and $\hat c_{i-1}^*$) are defined by (\ref{eq: c_k}) for different values of $k^*$.
We first check $k^*=N_t$. If the RHS of (\ref{eq: c_bounds-fork}) holds, then we have $k^*=N_t$. Otherwise, it follows from (\ref{eq:equivalent}) that the LHS of (\ref{eq: c_bounds-fork}) must hold for $k^*=N_t-1$. If the RHS of (\ref{eq: c_bounds-fork}) also holds for $k^*=N_t-1$, then we have $k^*=N_t-1$. Otherwise, it follows from (\ref{eq:equivalent}) that the LHS of (\ref{eq: c_bounds-fork}) must hold for $k^*=N_t-2$. We continue this process by decreasing $k^*$ until we find one value of $k^*$ for which both sides of (\ref{eq: c_bounds-fork}) hold. This process is guaranteed to terminate, at the latest at $k^*=1$, for which the RHS of (\ref{eq: c_bounds-fork}) always holds. Hence, this establishes the procedure to find $k^*$ directly using (\ref{eq: c_bounds-fork}).
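This procedure is simple enough to state as code. The following Python sketch (our own illustration; exact rational arithmetic via \texttt{fractions} reflects the rationality of $\boldsymbol \mu^*$ noted above) searches downward from $k^*=N_t$ and, by (\ref{eq:equivalent}), may stop at the first $k$ for which the RHS of (\ref{eq: c_bounds-fork}) holds. It assumes the inputs are already sorted in decreasing SCR order as in (\ref{eq: theorem 1 1}):
\begin{verbatim}
from fractions import Fraction

def optimal_load(sigma, s, L):
    """Theorem 1: optimal computation load vector and time.
    sigma, s -- storage and speed vectors of the available machines,
    pre-sorted in decreasing order of sigma[n]/s[n]; L -- MDS parameter.
    Returns (mu, c_hat) with sum(mu) == L and c_hat = max mu[n]/s[n]."""
    sigma = [Fraction(x) for x in sigma]
    s = [Fraction(x) for x in s]
    if sum(sigma) == L:                        # Z_t = L: mu* = sigma
        return sigma, max(a / b for a, b in zip(sigma, s))
    for k in range(len(s), 0, -1):             # try k* = N_t, N_t - 1, ...
        c = (L - sum(sigma[k:])) / sum(s[:k])  # candidate hat-c_k
        if c <= sigma[k - 1] / s[k - 1]:       # RHS of the bracket holds
            return ([c * s[n] if n < k else sigma[n]
                     for n in range(len(s))], c)
    raise ValueError("infeasible: requires Z_t >= L")
\end{verbatim}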
Finally, we note that the solution in Theorem \ref{th: load_assignment} for the optimization problem of (\ref{eq: optprob}) has a ``water-filling" like visualization as shown in Fig. \ref{fig: waterfill}.
In Fig. \ref{fig: waterfill}(a), the storage of machine $n$ is represented by the area of the full rectangle (shaded with the peach color) that it corresponds to.
We make the width of the rectangle $s[n]$ because, if we ``fill'' part of the rectangle with an area of $\mu[n]$ (shaded in blue), then the height of the filled area, which is the water level at that rectangle, represents the computation time of machine $n$. Note that this filled area does not represent a specific computation assignment, but only the total computations assigned to machine $n$. In Fig. \ref{fig: waterfill}(b), in accordance with (\ref{eq: theorem 1 1}), we arrange the available machines in descending order of the rectangle height, which is $\frac{\sigma[n]}{s[n]}$ for machine $n$. Then, following (\ref{eq:sum_L}), we ``fill'' all the available machines with a total area of $L$. First, notice that machines with larger rectangle width, or speed, will have larger filled area, or more computation load, until they are completely filled. Machines $k^*+1,\ldots, N_t$ with smaller rectangle height are filled completely and have a computation time strictly less than $\hat{c}^*$. Machines $1,\ldots, k^*$ with larger rectangle height all have the same computation time of $\hat{c}^*$. Note that one or more of these machines may be completely filled, such as machine $k^*$ in Fig. \ref{fig: waterfill}(b), but will have the same ``water level" as the other machines in $1,\ldots,k^*$.
\begin{figure*}
\centering
\centering \includegraphics[width=16cm]{water_fill_fig.pdf}
\put(-162,22){{\large $\frac{\sigma[n]}{s[n]}$}}
\put(-152,-4){{\small $s[n]$}}
\put(-140,-4){{\small $s[n]$}}
\put(-145,-10){{\small (a)}}
\put(-68,-10){{\small (b)}}
\put(-151,12){\rotatebox{90}{Area$=\sigma[n]$}}
\put(-130.5,9){{\large $\frac{\mu[n]}{s[n]}$}}
\put(-138.5,7){\rotatebox{90}{\small $\mu[n]$}}
\put(-119,10.5){$\hat{c}^*$}
\put(-111,-4){{\small $s[1]$}}
\put(-103.5,-4){{\small $s[2]$}}
\put(-96.5,-4){{\small $s[3]$}}
\put(-74,-4){{\small $s[k^*]$}}
\put(-78.5,22){{\small machine $k^*$}}
\put(-38,24){{\small Total fill area}}
\put(-43,20){{\small (across all machines)}}
\put(-12,22){{ $=L$}}
\put(-61,-4){{\small $s[k^*+1]$}}
\put(-39,-4){{\small $s[N_t-1]$}}
\put(-17,-4){{\small $s[N_t]$}}
\vspace{-0.cm}
\caption{~\small A water-filling like representation of the storage and computation load of (a) for machine $n$ only and (b) for a set of available machines with an optimal computation load vector that solves the optimization of (\ref{eq: optprob}) with the solution of Theorem \ref{th: load_assignment}.
Machines in (b) are ordered in the decreasing order of $\frac{\sigma[n]}{s[n]}$.
The storage of machine $n$ is represented by the area of the full rectangle (in peach color). The computation load $\mu[n]$ assigned to machine $n$ is represented by the area of the filled blue region within the $n$-th rectangle. The height of each filled blue region represents $\frac{\mu[n]}{s[n]}$, which is the computation time of machine $n$.
The maximum height of these filled regions represents the computation time $\hat c^*=c(\boldsymbol{\mu^*})$.
}
\label{fig: waterfill}
\end{figure*}
\subsection{Computation Load of Example 1 and Example 2}
\label{sec: Computation Load Examples}
{\it Homogeneous storage}: We return to Example 1 presented in Section~\ref{sec: Machines With Homogeneous Storage Constraints} with homogeneous storage but heterogeneous computing speeds and explain how to find the optimal computation load vector. In this example, each machine stores exactly one cs-matrix. When $t=1$, we have $N_1=6$ and $L=3$.
Given $\boldsymbol{s} = [2,\;2,\;3,\;3,\;4,\;4]$, the largest $k^*$ that satisfies (\ref{eq: c_bounds-fork}) is $k^*=6$, and thus $\hat c^*=1/6$, $\boldsymbol{\mu^*}= \hat c^* \boldsymbol{s}=\left[\frac{1}{3},\frac{1}{3},\frac{1}{2},\frac{1}{2},\frac{2}{3},\frac{2}{3}\right]$.
Similarly, for $t=2$, since machine 4 is preempted, we have now $N_2=5$, $\Nc_2 = \{1,2,3,5,6\}$ and $\boldsymbol{s} = [2,\;2,\;3,\;4,\;4]$ (we ignore any preempted machines). In this case, we have $k^*=5$, and thus $\hat c^*=1/5$, $\boldsymbol{\mu^*}= \hat c^* \boldsymbol{s}=\left[\frac{2}{5},\frac{2}{5},\frac{3}{5}, \frac{4}{5},\frac{4}{5}\right].$
Similarly, for $t=3$, we have $N_3=4$, $\Nc_3 = \{1,2,3,5\}$ and $\boldsymbol{s} = [2,\;2,\;3,\;4]$ because machines 4 and 6 are preempted.
Here, we have $k^*=3$, $\hat c^*=2/7$, and $\boldsymbol{\mu^*}=\left[\frac{4}{7},\frac{4}{7},\frac{6}{7}, 1\right]$.\footnote{Note that,
as in the optimization problem of (\ref{eq: optprob}), the computation loads of the preempted machines are ignored since they are simply $0$, presenting a slight difference from the optimal computation load vectors presented in Section \ref{sec: example}.}
{\it Heterogeneous storage}: We illustrate this case using Example 2 presented in Section~\ref{sec: Machines With Heterogeneous Storage Constraints} with $L=N=6$ and no preempted machines. In this case, we order the machines in a descending order by $\frac{\sigma[n]}{s[n]}$, where $\boldsymbol s = [2, 3, 4, 2, 3, 4]$ and $\boldsymbol \sigma = [2, 2, 2, 1, 1, 1]$. Next, we need to determine $k^*$, and we start by checking $k^*=6$. However, we can observe that (\ref{eq: c_bounds-fork}) does not hold since
\be
\frac{L}{\sum_{n=1}^{N_t}s[n]}
= \frac{1}{3} > \frac{1}{4} = \frac{\sigma[6]}{s[6]}.
\ee
Similarly, if we try $k^*=5$, we see that (\ref{eq: c_bounds-fork}) does not hold since
\be
\frac{L-\sigma [6]}{\sum_{n=1}^{5}s[n]} = \frac{5}{14} > \frac{1}{3} = \frac{\sigma[5]}{s[5]}.
\ee
Finally, we see that $k^*=4$ is the solution that satisfies (\ref{eq: c_bounds-fork}) because
\be
\frac{\sigma[5]}{s[5]} = \frac{1}{3} < \frac{L-\sigma [5] - \sigma [6]}{\sum_{n=1}^{4}s[n]} = \frac{4}{11} \leq \frac{1}{2} = \frac{\sigma[4]}{s[4]}.
\ee
It follows that $\hat c^*=4/11$ and by using (\ref{eq:optimal_form}), we obtain $\boldsymbol{\mu^*}=\left[\frac{8}{11},\frac{12}{11},\frac{16}{11}, \frac{8}{11},1,1\right]$.
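Running the \texttt{optimal\_load} sketch given after Theorem~\ref{th: load_assignment} on these two examples reproduces the vectors derived above:
\begin{verbatim}
# Example 1, t = 1: homogeneous storage; s already in decreasing-SCR order
mu, c = optimal_load([1] * 6, [2, 2, 3, 3, 4, 4], L=3)
# mu = [1/3, 1/3, 1/2, 1/2, 2/3, 2/3],  c = 1/6

# Example 2: machines pre-sorted by decreasing SCR sigma[n]/s[n]
mu, c = optimal_load([2, 2, 2, 1, 1, 1], [2, 3, 4, 2, 3, 4], L=6)
# mu = [8/11, 12/11, 16/11, 8/11, 1, 1],  c = 4/11
\end{verbatim}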
In Section \ref{sec: compassign}, we will show that there always exists a computation assignment $(\boldsymbol{\mathcal{M}}_t,\boldsymbol{\mathcal{P}}_t)$ whose computation load vector equals $\boldsymbol{\mu^*}$, and that the assignment pair can be found using the proposed Algorithm 1 (for homogeneous storage) or Algorithm 2 (for heterogeneous storage) in no more than $N_t$ iterations.
\section{Second Sub-problem: Optimal Computation Assignment}
\label{sec: compassign}
In this section, we present a computation assignment $(\boldsymbol{\mathcal{M}}_t,\boldsymbol{\mathcal{P}}_t)$ that solves the optimization problem of (\ref{eq: optprob_assign}). First, we show the existence of a computation assignment that yields the computation load vector $\boldsymbol{\mu}^*$ and computation time $\hat{c}^*$. This shows that there is no gap between the combinatorial optimization problem (\ref{eq: optprob_assign}) and the ``relaxed" convex optimization problem of (\ref{eq: optprob}). Then, we provide a low-complexity iterative algorithm that converges to such an assignment in at most $N_t$ iterations. In the following, we start with the case of homogeneous storage and heterogeneous computing speeds, where each machine stores exactly one cs-matrix; we then move to the case of heterogeneous storage and computing speed requirements, where each machine may store any integer number of cs-matrices.
\subsection{Homogeneous Storage with Heterogeneous Computing Speeds}
Here, we focus on the case where $\sigma[n]=1$ for $n\in[N_t]$ such that each available machine stores exactly one cs-matrix. Our goal is to assign computations among the machines such that each row set in $\boldsymbol{\Mc}_t$
is assigned to $L$ machines and the assignments satisfy the $\boldsymbol{\mu}^*$ given by (\ref{eq:optimal_form_hm}).
{Interestingly, we find that once $\boldsymbol{\mu}^*$ is given, we can adapt the {\em filling problem (FP)} introduced in \cite{woolsey2019optimal} for private information retrieval (PIR) to solve our second sub-problem of finding the computation assignment for CEC networks. Note that our proposed formulation of the computation assignments based on row sets, together with the two-step approach to solve the proposed combinatorial optimization problem, are important to allow successful adaptation of the FP problem \cite{woolsey2019optimal} to the CEC setting. }
In particular, we refer to the following lemma (Theorem~2 in \cite{woolsey2019optimal}).
\begin{lemma}
\label{lemma: 1}
Given $\boldsymbol{\mu}^* \in \mathbb{R}_+^{N_t}$ and $L \in \mathbb{Z}^+$, a $(\boldsymbol{\mu}^*, L)$-FP solution exists {\it if and only if}
\begin{align}\label{eq: FP_exist}
\mu^*[n] \leq \frac{\sum_{i=1}^{N_t}\mu^*[i]}{L}
\end{align}
for all $n \in [N_t]$.
\hfill $\square$
\end{lemma}
In our problem setting, we have $\sum_{i=1}^{N_t}\mu^*[i]=L$ and $\mu^*[n]\leq 1$ for all $n \in [N_t]$. Therefore, by using Lemma~\ref{lemma: 1}, an optimal computation assignment exists. Moreover, by adapting Algorithm~1 in \cite{woolsey2019optimal}, we obtain an equivalent Algorithm~\ref{algorithm:1}
(see pseudo-codes of Algorithm~\ref{algorithm:1} for detailed operations) to explicitly provide an optimal computation assignment, $\left(\boldsymbol{\mathcal{M}}_t, \boldsymbol{\mathcal{P}}_t \right)$ where machine $n$ stores and performs computations on its stored cs-matrix $\boldsymbol{\tilde{X}}_n$.
\begin{algorithm}
\caption{Computation Assignment: Homogeneous Storage Capacity and Heterogeneous Computing Speeds}
\label{algorithm:1}
\begin{algorithmic}[1]
\item[ {\bf Input}: $\boldsymbol{\mu}^*$, $N_t$, $L$, and $q$ ] \quad $\%$ $\boldsymbol{\mu}^*$ is solution to first sub-problem (\ref{eq:optimal_form})-(\ref{eq: c_bounds-fork}).
\item $\boldsymbol{m} \leftarrow \boldsymbol{\mu}^*$ \quad $\%$ $\boldsymbol{m}$ represents the remaining computation load vector to be assigned. \hspace*{2.6cm} $\%$ Initialize $\boldsymbol{m} $ as the optimal computation load vector $\boldsymbol{\mu}^*$
\item $f \leftarrow 0$
\While {$\boldsymbol{m}$ contains a non-zero element}
\State $f \leftarrow f+1$
\State $L' \leftarrow \sum_{n=1}^{N_t}m[n]$ \quad \quad $\%$ $L'$ represents the sum of the remaining computation load
\State $N'\leftarrow$ number of non-zero elements in $\boldsymbol{m}$
\State $\boldsymbol{\ell} \leftarrow$ indices that sort the non-zero elements of $\boldsymbol{m}$ from smallest to largest\footnotemark[5]
\State $\mathcal{P}_{f} \leftarrow\{\ell [1], \ell [N'-L+2] , \ldots , \ell [N'] \}$ \quad $\%$ specify machines that will compute $\mathcal M_f$
\If {$N' \geq L+1$} \quad $\%$ $\alpha_f$ is the fraction of rows assigned to row set $\mathcal M_f$
\State $\alpha_f \leftarrow \min \left(\frac{L'}{L} - m[\ell[N' - L + 1]], m[\ell[1]]\right)$\footnotemark[6] \quad $\%$ assign only a fraction of remaining \hspace*{4.5cm} $\%$ rows to $\mathcal P_f$ to ensure a FP solution exists at next iteration.
\Else
\State $\alpha_f \leftarrow m[\ell[1]]$ \quad $\%$ assign all remaining un-assigned rows of machine $\ell[1]$ to $\mathcal P_f$.
\EndIf
\For {$n \in \mathcal{P}_{f}$}
\State $m[n] \leftarrow m[n] - \alpha_f$ \quad $\%$ update remaining computation load at each machine
\EndFor
\EndWhile
\item $F \leftarrow f$
\State Partition rows $[\frac{q}{L}]$ into $F$ disjoint row sets: $\mathcal{M}_{1}, \ldots , \mathcal{M}_{F}$ of size $\frac{\alpha_1 q}{L},\ldots,\frac{\alpha_{F}q}{L}$ rows respectively
\item[ {\bf Output}: $F$, $\mathcal{M}_{1}, \ldots , \mathcal{M}_{F}$ and $\mathcal{P}_{1}, \ldots , \mathcal{P}_{F}$ ]
\end{algorithmic}
\end{algorithm}
\footnotetext[5]{$\boldsymbol{\ell}$ is an $N'$-length vector and $0<m[\ell[1]]\leq m[\ell[2]]\leq \cdots \leq m[\ell[N']]$.}
\footnotetext[6]{This is the condition obtained by using Lemma~\ref{lemma: 1}.}
\begin{remark}
Using a similar approach to the proof of Lemma~2 in \cite{woolsey2019optimal}, we can show that $F \leq N_t$, i.e., Algorithm~1 needs at most $N_t$ iterations to complete; we therefore omit the proof of the correctness of Algorithm~1 here.
\end{remark}
\begin{remark}
The connection between Algorithm 1 of this work and that of \cite{woolsey2019optimal} lies in that, for the PIR storage placement problem, one places file sets at $L$ databases one at a time to fulfill a certain storage requirement; analogously, in the second sub-problem of the CEC computation assignment, we allocate computation row sets to $L$ cs-matrices one at a time to fulfill a computation load assignment.
\end{remark}
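For concreteness, a direct Python transcription of Algorithm~\ref{algorithm:1} is sketched below (our own code; machines are $0$-indexed and ties in the sorting of line 7 are broken by machine index). It returns, for each iteration $f$, the fraction $\alpha_f$ and the cs-matrix (equivalently, machine) set $\mathcal P_f$:
\begin{verbatim}
from fractions import Fraction

def assign_computations(mu, L):
    """Algorithm 1 sketch: returns [(alpha_f, P_f)] for f = 1, ..., F.
    mu -- computation load vector (rationals summing to L), 0-indexed;
    each P_f contains the machine with the least remaining load plus
    the L - 1 machines with the most remaining load."""
    m = [Fraction(x) for x in mu]       # remaining load per machine
    plan = []
    while any(m):
        L_rem = sum(m)                  # L' in Algorithm 1
        act = sorted((n for n in range(len(m)) if m[n] > 0),
                     key=lambda n: (m[n], n))   # ascending remaining load
        P_f = [act[0]] + act[len(act) - L + 1:]
        if len(act) >= L + 1:           # cap alpha_f so an FP solution survives
            alpha = min(L_rem / L - m[act[len(act) - L]], m[act[0]])
        else:                           # exactly L machines left: drain smallest
            alpha = m[act[0]]
        for n in P_f:
            m[n] -= alpha               # update remaining loads
        plan.append((alpha, P_f))
    return plan
\end{verbatim}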
In the following, we will present an example to illustrate how Algorithm~1 is applied to find the optimal computation assignment.
\subsection{An Example of Algorithm \ref{algorithm:1} for Homogeneous Storage and Heterogeneous Computing Speed}
\begin{figure}
\centering
\centering \includegraphics[width=8cm]{elastic_comp_table.pdf}
\put(-75,43.2){$f$}
\put(-67.5,43.2){$\alpha_f$}
\put(-61,43.2){$m[1]$}
\put(-52.5,43.2){$m[2]$}
\put(-44,43.2){$m[3]$}
\put(-35.5,43.2){$m[4]$}
\put(-27,43.2){$m[5]$}
\put(-18.5,43.2){$m[6]$}
\put(-8,43.2){$L'$}
\vspace{-0.5cm}
\caption{~\small Computation assignment following Algorithm \ref{algorithm:1} for Example 1 at time $t=2$. Here, $N=6$, $L=3, F=4$. $f$ is the iteration index; each row corresponds to an iteration $f$, where $\alpha_f$ is the fraction of rows assigned to row set $\mathcal M_f$, and $m[n]$ denotes the remaining computation load for machine $n$ at iteration $f$. The three red arrows from the first row to the second row represent that a fraction of $\alpha_1=\frac{2}{5}$ rows are assigned to $\mathcal M_1$, which are computed by machines (or equivalently, cs-matrices) $\mathcal P_1=\{1,5,6\}$. $L'$ represents the remaining total computation load at iteration $f$. At $f=1$, $L'=L=3$. After one iteration, we have $L'=3-3 \cdot \alpha_1=\frac{9}{5}$. Note that $\alpha_f$ is determined by lines 9 to 12 of Algorithm \ref{algorithm:1}.
At each iteration $f$, $\mathcal P_f$ includes the machine with the smallest remaining $m[n]$ and the $L-1=2$ machines with the largest remaining $m[n]$.}
\label{table: example}
\vspace{-0.4cm}
\end{figure}
We return to Example 1 presented in Sections~\ref{sec: Machines With Homogeneous Storage Constraints} and \ref{sec: Computation Load Examples}
and use Algorithm \ref{algorithm:1} to derive the computation (rows in each element of $\boldsymbol{\Mc}_t$) assignments for $t=2$, where machine 4 is preempted. The steps of the algorithm are shown in Fig.~\ref{table: example}. In this case, we showed that $\boldsymbol{\mu}^* = \left[\frac{2}{5},\frac{2}{5},\frac{3}{5}, \frac{4}{5},\frac{4}{5}\right]$ in Section~\ref{sec: Computation Load Examples}. In the first iteration, $f=1$, we have $L'=L$ and $\boldsymbol{m}=\boldsymbol{\mu}^*$ as no computations have been assigned yet. Rows of the respective cs-matrices are assigned to machines $1$, $5$, and $6$ because, among all machines, machine $1$ has the least remaining computations to be assigned, and machines $5$ and $6$ have the most remaining computations to be assigned. Moreover, note that
\be
\label{eq: m 1}
m[1] = \frac{2}{5} \leq \frac{L'}{L} - m[3] = 1 - \frac{3}{5}=\frac{2}{5},
\ee
where machine $3$ is the machine with the most remaining rows to be assigned that is not included in $\mathcal{P}_1=\{1,5,6\}$. Therefore, a fraction $\alpha_1=\frac{2}{5}$ of the rows is assigned to machines $\{1,5,6\}$.
Then, $\boldsymbol{m}$ is adjusted to reflect the remaining computations to be assigned and $L'=3-3\alpha_1 = \frac{9}{5}$. In general, the condition of (\ref{eq: m 1}), corresponding to line 10 of Algorithm 1, is motivated by the necessary and sufficient condition for the existence of a $(\boldsymbol{m}, L')$-FP solution given in Lemma~\ref{lemma: 1}. Here, we are interested in the FP solution for $\boldsymbol{m}$, which is updated after each iteration. In other words, for each iteration of a row set assignment in Algorithm 1, we ensure there exists a set of succeeding row assignments such that a final FP solution is obtained.
In the second iteration, $f=2$, machine $2$ is a machine with the least remaining rows to be assigned. Computations are assigned to machine $2$ and machines $3$ and $6$ which are a pair of machines with the most remaining computations to be assigned. Ideally, we would like to assign all the remaining rows to machine $2$. However, since
\be
m[2] = \frac{2}{5} > \frac{L'}{L} - m[5] = \frac{3}{5} - \frac{2}{5}=\frac{1}{5},
\ee
assigning $\frac{2}{5}$ of the rows to machine $2$ in this iteration will violate the condition of (\ref{eq: FP_exist}) in Lemma~\ref{lemma: 1} and as a consequence, there will be no valid filling solution going forward. Therefore, we set $\alpha_2=\frac{1}{5}$ instead and after this iteration $\boldsymbol{m}$ and $L'$ are adjusted accordingly.
In the third iteration, $f=3$, since
\be
m[2] = \frac{1}{5} \leq \frac{L'}{L} - m[6] = \frac{2}{5} - \frac{1}{5} = \frac{1}{5},
\ee
we assign $\alpha_3=\frac{1}{5}$ of the rows to machines $\{2,3,5\}$ and $\boldsymbol{m}$ and $L'$ are adjusted accordingly. Finally, in the fourth iteration, $f=4$, only three machines $\{3,5,6\}$ have non-zero computation assignment left and each is assigned $\alpha_4=\frac{1}{5}$ of the rows.
In this example, the algorithm converges in $F=4$ (fewer than $N_2=5$) iterations. The resulting computation assignment is shown in Fig.~\ref{fig: exp1}(b).
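Running the sketch of Algorithm~\ref{algorithm:1} given above on this time step (the available machines $\{1,2,3,5,6\}$ appear $0$-indexed as $0,\ldots,4$ in the code) reproduces Fig.~\ref{table: example}:
\begin{verbatim}
from fractions import Fraction as Fr

plan = assign_computations([Fr(2,5), Fr(2,5), Fr(3,5), Fr(4,5), Fr(4,5)], L=3)
# [(2/5, [0, 3, 4]),    -> M_1 computed by machines {1, 5, 6}
#  (1/5, [1, 4, 2]),    -> M_2 computed by machines {2, 3, 6}
#  (1/5, [1, 2, 3]),    -> M_3 computed by machines {2, 3, 5}
#  (1/5, [2, 3, 4])]    -> M_4 computed by machines {3, 5, 6}
\end{verbatim}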
\subsection{Heterogeneous Storage Capacity and Computing Speeds}
When both storage capacity and computing speeds among machines are heterogeneous, machines may store more than one cs-matrix. In this case, machine $n$ will pick $\lfloor\mu^*[n]\rfloor$ cs-matrices to compute entirely. Then, it will pick one of the remaining cs-matrices and compute a $\mu^*[n]-\lfloor\mu^*[n]\rfloor$ fraction of that cs-matrix. We will show that this strategy, implemented by Algorithm~2, requires $F\leq N_t$ iterations, i.e., the number of computation assignments $F$ is at most the number of available machines $N_t$. Overall, the assignment consists of two steps.
In the first step, the cs-matrices that are computed entirely are put into the cs-matrix sets of $\boldsymbol{\mathcal{P}}_t$.
In the second step, we assign row sets to the cs-matrices which are not entirely computed so that each row set in $\boldsymbol \Mc_t$ is guaranteed to be computed across $L$ cs-matrices. Next, we demonstrate that we can re-use Algorithm \ref{algorithm:1} for the second step of the computation assignment under a modified procedure described in Algorithm~2 (see pseudo-codes of Algorithm~\ref{algorithm:2}).
To explain the computation assignment process we introduce the following notation. For $n\in[N_t]$, let $\tilde{\mathcal{Q}}_n\subseteq\mathcal{Q}_n$ contain the indices of $\lfloor\mu^*[n]\rfloor$ randomly chosen cs-matrices in $\mathcal{Q}_n$ that machine $n$ computes entirely. Note that $|\tilde{\mathcal{Q}}_n|=\lfloor\mu^*[n]\rfloor$. If $\mu^*[n]<1$, then $\tilde{\mathcal{Q}}_n$ is empty. Next, machine $n$ randomly chooses one cs-matrix from
$\mathcal{Q}_n\setminus \tilde{\mathcal{Q}}_n$ to compute partially, and we denote its index by $\hat{\theta}[n]$.
Note that when $\mu^*[n]$ is an integer, $\hat{\theta}[n]$ is simply a dummy variable and is never referenced, i.e., $\hat{\theta}[n] = \varnothing$. In the following, we denote $\hat{\boldsymbol{\theta}} = [\hat{\theta}[1], \hat{\theta}[2], \ldots, \hat{\theta}[N_t]]$. Then we define the partial computation vector $\hat{\boldsymbol{\mu}}\in\mathbb{R}_+^{N_t}$ such that
\be
\label{eq: mu partial}
\hat{\mu}[n] = \mu^*[n]-\lfloor\mu^*[n]\rfloor, \;\; \forall n \in [N_t].
\ee
Hence, machine $n$ will entirely compute each cs-matrix $\boldsymbol{\tilde{X}}_i$ for $i\in\tilde{\mathcal{Q}}_n$ and compute a $\hat{\mu}[n]$ fraction of the cs-matrix $\boldsymbol{\tilde{X}}_{\hat{\theta}[n]}$.
\begin{algorithm}
\caption{ Computation Assignment: Heterogeneous Storage Capacity and Computing Speeds}
\label{algorithm:2}
\begin{algorithmic}[1]
\item[ {\bf Input}: $\boldsymbol{\mu}^*$, $N_t$, $L$, $q$, $\tilde{\mathcal{Q}}_1,\ldots,\tilde{\mathcal{Q}}_{N_t}$ and $\hat{\boldsymbol{\theta}}$ ]
\quad $\%$ $\boldsymbol{\mu}^*$ is solution to first sub-problem (\ref{eq:optimal_form})-(\ref{eq: c_bounds-fork}).
\hspace*{4cm} $\%$ $\tilde{\mathcal{Q}}_1,\ldots,\tilde{\mathcal{Q}}_{N_t}$ and $\hat{\boldsymbol{\theta}}$ include pre-chosen cs-matrices based on $\boldsymbol{\mu}^*$.
\For {$n \in [N_t]$}
\State $\hat{\mu}[n] \leftarrow \mu^*[n]-\lfloor\mu^*[n]\rfloor$ \quad $\%$ determine the computation load for each partially computed \hspace*{5cm} $\%$cs-matrix.
\EndFor
\item $\hat{L} \leftarrow \sum_{n=1}^{N_t}\hat{\mu}[n]$ \quad \quad $\%$ $\hat L$ is the sum of the computation load over partially computed \hspace*{3.8cm} $\%$ cs-matrices. Use Algorithm 1 next to find computation assignment \hspace*{3.8cm} $\%$ for partially computed matrices.
\item $\hat{F}$, $\hat{\mathcal{M}}_1,\ldots,\hat{\mathcal{M}}_{\hat{F}}$ and $\hat{\mathcal{P}}_1,\ldots,\hat{\mathcal{P}}_{\hat{F}}$ $\leftarrow$ Output of {\bf Algorithm \ref{algorithm:1}} with $\hat{\boldsymbol{\mu}}$, $N_t$, $\hat{L}$, and $q$ as input
\item $F \leftarrow \hat{F}$
\For {$f \in [F]$}
\State $\mathcal{M}_f\leftarrow \hat{\mathcal{M}}_f$ \hspace*{1.5cm} $\%$ use the output row sets from Algorithm 1 as final row sets.
\State $\mathcal{P}_f\leftarrow \bigcup_{n\in\hat{\mathcal{P}}_f} \hat{\theta}[n] \cup \bigcup_{n\in[N_t]} \tilde{\mathcal{Q}}_n$ \quad $\%$ combine outputs of Algorithm 1 with fully computed \hspace*{6.2cm} $\%$ cs-matrices to obtain the final cs-matrices assignment
\EndFor
\item[ {\bf Output}: $F$, $\mathcal{M}_{1}, \ldots , \mathcal{M}_{F}$ and $\mathcal{P}_{1}, \ldots , \mathcal{P}_{F}$ ]
\end{algorithmic}
\end{algorithm}
Finally, we define $\hat{L}$ as the sum of the entries of the partial computation load vector $\hat{\boldsymbol{\mu}}$:
\be
\hat{L} \triangleq \sum_{n=1}^{N_t}\hat{\mu}[n].
\ee
Note that $\hat{L}$ is an important parameter because it represents the number of cs-matrices to which each row set needs to be assigned, excluding those cs-matrices that are entirely computed. In other words, since the elements in $\boldsymbol{\mu}^*$ sum to $L$, there are $L-\hat{L}$ cs-matrices that are entirely computed by the machines. Therefore, in order to assign each row computation (or row set) in $\boldsymbol \Mc_t$ to $L$ cs-matrices, we assign each row set to $\hat{L}$ cs-matrices that are only partially computed. The detailed description of the proposed algorithm is given in Algorithm~2.
In Algorithm~2, we will perform Algorithm \ref{algorithm:1} on $\hat{\boldsymbol{\mu}}$, which iteratively fills some computations at $\hat{L}$ machines in each iteration. Following similar arguments as before, we need to ensure that such a filling problem solution exists and can be found using the proposed algorithm.
From Lemma~\ref{lemma: 1}, we see that a $(\hat{\boldsymbol{\mu}}, \hat L)$-FP solution exists because $\hat{\mu}[n] < 1$ for all $n\in[N_t]$ and $\hat{L} = \sum_{n=1}^{N_t}\hat{\mu}[n]$.
Then, similar to the previous analysis, it can be seen that Algorithm \ref{algorithm:1} will yield a $(\hat{\boldsymbol{\mu}}, \hat L)$-FP solution with $F\leq N_t$ iterations.
This means that, in order to use Algorithm \ref{algorithm:1}, instead of inputting $\boldsymbol{\mu}^*$ and $L$, we input $\hat{\boldsymbol{\mu}}$ and $\hat{L}$. Then, we label the output of Algorithm \ref{algorithm:1} as $\hat{F}$, $\hat{\mathcal{M}}_1,\ldots,\hat{\mathcal{M}}_{\hat{F}}$ and $\hat{\mathcal{P}}_1,\ldots,\hat{\mathcal{P}}_{\hat{F}}$. These variables represent the computation assignments at the cs-matrices that are partially computed, but computations are only assigned to $\hat{L}$, instead of $L$, cs-matrices. Note that, due to the one-to-one mapping between partially computed cs-matrices and machines, each $\hat{\mathcal{P}}_f$ represents the set of machines that are assigned to compute rows in $\mathcal{M}_f$ of the cs-matrix each is partially computing.
To complete the computation assignment, we must include the $L-\hat{L}$ cs-matrices that are entirely computed. Therefore,
\be
\mathcal{P}_f = \bigcup_{n\in\hat{\mathcal{P}}_f} \hat{\theta}[n] \; \cup \bigcup_{m\in[N_t]} \tilde{\mathcal{Q}}_m , \;\; \forall f\in[F],
\ee
where $|\bigcup_{n\in\hat{\mathcal{P}}_f} \hat{\theta}[n]| = \hat L$ and $| \bigcup_{m\in[N_t]} \tilde{\mathcal{Q}}_m| = L-\hat L$.
Since Algorithm \ref{algorithm:1} assigns computations to machines\footnote{Algorithm \ref{algorithm:1} is designed with the assumption that machine $n, \forall n \in [N]$ stores one cs-matrix, $\tilde{X}_n$.}, $\hat{\boldsymbol{\theta}}$ is needed to identify the cs-matrices that the machines partially compute. In addition, the number of computation assignments remains the same and $\hat{F}=F$. The row sets also remain the same, $\mathcal{M}_f=\hat{\mathcal{M}}_f$ for all $f\in[F]$.
Algorithm~2 will be illustrated using the following example.
\subsection{An Example of Algorithm \ref{algorithm:2} for Heterogeneous Storage and Computing Speed}
We consider Example 2 presented in Sections~\ref{sec: Machines With Heterogeneous Storage Constraints} and \ref{sec: Computation Load Examples}, where we have $L=6$ and $\boldsymbol \sigma = [2, 2, 2, 1, 1, 1]$. This means that machines $1$, $2$, and $3$ each store two cs-matrices while machines $4$, $5$, and $6$ each store one cs-matrix.
We assume no preempted machines at $t=1$. In this case, the optimal computation load vector is found to be $\boldsymbol{\mu}^* = \left[\;\frac{8}{11},\;\;\frac{12}{11},\;\;\frac{16}{11},\;\;\frac{8}{11},\;\;1,\;\;1\;\right]$ in Section~\ref{sec: Computation Load Examples}. Since $\mu^*[n] \geq 1, n \in \{2, 3, 5, 6\}$, it can be seen that machines $2$, $3$, $5$, and $6$ will compute all the row sets of $\boldsymbol\Mc_1$ for one cs-matrix (see Fig.~\ref{fig: exp2}). Next, machines $1$, $2$, $3$ and $4$ each have one cs-matrix to be partially computed, and each of them will compute a fraction of that cs-matrix.
Note that, since $\mu^*[5]=1$ and $\mu^*[6]=1$ are integers, by the algorithm design, no computations will be assigned to cs-matrices partially computed by machines $5$ and $6$. In other words, based on the optimal computation load vector $\boldsymbol{\mu}^*$, machines $5$ and $6$ only entirely compute cs-matrices. Using (\ref{eq: mu partial}), we can obtain the partial computation load vector as
$\hat{\boldsymbol{\mu}} = \left[\;\frac{8}{11},\;\;\frac{1}{11},\;\;\frac{5}{11},\;\;\frac{8}{11},\;\;0,\;\;0\;\right]$,
whose elements sum to $\hat{L}=2$. Our goal is to assign computations to cs-matrices partially computed by machines $1$ through $4$, where we assign the computations corresponding to each row set of $\boldsymbol\Mc_1$ to $\hat{L}=2$ cs-matrices at a time.
This will be done using Algorithm \ref{algorithm:1}.
In particular, let the indexes of the cs-matrices stored at the machines be $\mathcal{Q}_1=\{1,2 \}$, $\mathcal{Q}_2=\{3,4 \}$, $\mathcal{Q}_3=\{5,6 \}$, $\mathcal{Q}_4=\{7 \}$, $\mathcal{Q}_5=\{8 \}$ and $\mathcal{Q}_6=\{9 \}$. Each machine $n$ picks a set of $\lfloor \mu^*[n]\rfloor$ stored cs-matrices to be computed entirely, which could be, for instance, $\tilde{\mathcal{Q}}_1=\varnothing$, $\tilde{\mathcal{Q}}_2=\{3 \}$, $\tilde{\mathcal{Q}}_3=\{5 \}$, $\tilde{\mathcal{Q}}_4=\varnothing$, $\tilde{\mathcal{Q}}_5=\{8 \}$ and $\tilde{\mathcal{Q}}_6=\{9 \}$. Moreover, each machine selects the index of a stored cs-matrix to be partially computed; these indexes are denoted by
$\hat{\boldsymbol{\theta}}=\left[\;1,\;\;4,\;\;6,\;\;7,\;\;0,\;\;0\;\right]$.
In the first iteration of Algorithm \ref{algorithm:1} inside Algorithm 2 (line 5), we aim to assign some computations to the cs-matrix $\boldsymbol{\tilde{X}}_{\hat{\theta}[2]}$, since $\hat \mu[2]$ is the smallest non-zero element in $\hat{\boldsymbol{\mu}}$.
$\boldsymbol{\tilde{X}}_{\hat{\theta}[2]}$ will be partially computed by machine $2$. We also assign this computation to machine $4$ because it is a machine with the largest remaining computations to be assigned (line 8 in Algorithm 1). Therefore, we assign an $\alpha_1=\frac{1}{11}$ fraction of rows to the cs-matrices partially computed by machines $2$ and $4$ (line 10 in Algorithm 1). After this iteration $\boldsymbol{m}=\left[\;\frac{8}{11},\;\;0,\;\;\frac{5}{11},\;\;\frac{7}{11},\;\;0,\;\;0\;\right]$ (line 15 in Algorithm 1), and machines $1$ and $3$ are the machines with the most and the least remaining computations to be assigned, respectively. From line 10 in Algorithm~1, since
\be
m[3]=\frac{5}{11}> \frac{3}{11}=\frac{\hat{L}'}{\hat{L}}-m[4],
\ee
we assign an $\alpha_2=\frac{3}{11}$ fraction of rows to machines $1$ and $3$. Then,
after this iteration,
we find $\boldsymbol{m} = \left[ \; \frac{5}{11}, \;\;0,\;\;\frac{2}{11},\;\;\frac{7}{11},\;\;0,\;\;0\;\right]$. By a similar approach, we next assign an $\alpha_3=\frac{2}{11}$ fraction of rows to machines $3$ and $4$ and an $\alpha_4=\frac{5}{11}$ fraction of rows to machines $1$ and $4$. After this iteration, we can find that $\boldsymbol{m}=\boldsymbol{0}$.
Based on the above procedure in Algorithm \ref{algorithm:1}, we obtain the output $\hat{F}$, $\hat{\mathcal{M}}_1,\ldots,\hat{\mathcal{M}}_4$, which contain a $\frac{1}{11}$, $\frac{3}{11}$, $\frac{2}{11}$ and $\frac{5}{11}$ fraction of rows, respectively, and the machines assigned to compute row sets $\mathcal M_f$ are given by $\hat{\mathcal{P}}_1=\{ 2,4 \}$, $\hat{\mathcal{P}}_2=\{ 1,3 \}$, $\hat{\mathcal{P}}_3=\{ 3,4 \}$ and $\hat{\mathcal{P}}_4=\{ 1,4 \}$.
For the final solution, the number of assignments stays the same, $F=\hat{F}=4$, and the row sets stay the same, $\mathcal{M}_f=\hat{\mathcal{M}}_f,\;\forall f\in [F]$. However, using $\hat{\mathcal{P}}_f,\;\forall f\in [F]$, we need to specify which cs-matrices are computed for each row set. Note that $\hat{\mathcal{P}}_f$ is the set of machines that are assigned to compute rows in $\mathcal{M}_f$ of the corresponding partially computed cs-matrix. We use $\hat{\boldsymbol{\theta}}$ to resolve the indexes of the cs-matrices from $\hat{\mathcal{P}}_f$. Then, we also need to include the indexes of all cs-matrices that are entirely computed from $\tilde{\mathcal{Q}}_1,\ldots ,\tilde{\mathcal{Q}}_6$. Recalling that $\tilde{\mathcal{Q}}_1=\varnothing$, $\tilde{\mathcal{Q}}_2=\{3 \}$, $\tilde{\mathcal{Q}}_3=\{5 \}$, $\tilde{\mathcal{Q}}_4=\varnothing$, $\tilde{\mathcal{Q}}_5=\{8 \}$ and $\tilde{\mathcal{Q}}_6=\{9 \}$, we then obtain the cs-matrix sets
\be
\mathcal{P}_1=\{\hat{\theta}[2], \hat{\theta}[4],3,5,8,9 \}=\{3,4,5,7,8,9\}.
\ee
Similarly, we see that $\mathcal{P}_2=\{1,3,5,6,8,9\}$, $\mathcal{P}_3=\{3,5,6,7,8,9\}$ and $\mathcal{P}_4=\{1,3,5,7,8,9\}$.
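For illustration, the filling step in this example can also be reproduced by a classical wrap-around (McNaughton-style) construction, which solves the same $(\hat{\boldsymbol{\mu}},\hat{L})$-filling problem. The Python sketch below is not a reconstruction of Algorithm \ref{algorithm:1}, but for this example it happens to produce the same four row-set fractions and machine pairs:
\begin{verbatim}
from fractions import Fraction as F

def fill(mu_hat, L_hat):
    # Lay the loads end to end on [0, L_hat); the machines assigned to
    # the row-set fraction at offset t in [0, 1) are those whose segments
    # cover t, t+1, ..., t+L_hat-1.  A valid filling, not Algorithm 1.
    segs, pos = [], F(0)
    for m in mu_hat:
        segs.append((pos, pos + m))
        pos += m
    assert pos == L_hat
    bps = sorted({F(0), F(1)} | {s % 1 for s, e in segs}
                 | {e % 1 for s, e in segs})
    assignment = []
    for a, b in zip(bps, bps[1:]):
        t = (a + b) / 2   # representative offset inside this fraction
        machines = [n + 1 for n, (s, e) in enumerate(segs)
                    if any(s <= t + col < e for col in range(L_hat))]
        assignment.append((b - a, machines))
    return assignment

mu_hat = [F(8, 11), F(1, 11), F(5, 11), F(8, 11), F(0), F(0)]
for alpha, P in fill(mu_hat, 2):
    print(alpha, P)  # 3/11 [1,3], 5/11 [1,4], 1/11 [2,4], 2/11 [3,4]
\end{verbatim}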
\section{Conclusions}
\label{sec: conclusions}
In this paper, we study the heterogeneous coded elastic computing problem where computing machines store MDS coded data matrices and may have both varying computation speeds and storage capacities. The key to this problem is to design a fixed storage assignment scheme and a computation assignment strategy such that no redundant computations are present and the overall computation time can be minimized as long as there are at least $L$ cs-matrices stored among the available machines. Given a set of available machines with arbitrary relative computation speeds and storage capacities, we first proposed a novel combinatorial min-max problem formulation in order to minimize the overall computation time, which is determined by the machines that need the longest computation time. Based on the MDS coded storage assignment, the goal of this optimization problem is to assign computation tasks to machines such that the overall computation time is minimized.
In order to precisely solve this combinatorial problem, we decompose it into a convex optimization problem that determines the optimal computation load of each machine and a computation assignment problem that realizes the computation load resulting from the convex optimization problem. Then, we adapt low-complexity iterative algorithms to find the optimal solution to the original combinatorial problem, which require a number of iterations no greater than the number of available machines. The proposed heterogeneous coded elastic computing design has the potential to perform computations faster than the state-of-the-art design, which was developed for a homogeneous distributed computing system.
\appendices
\bibliographystyle{IEEEbib}
\bibliography{references_d2d}
\end{document} | {"config": "arxiv", "file": "2008.05141/Heterogeneous CEC_ArXiv_v2/CEC_HetStorage_v5.tex"} |
TITLE: Can the cardinality of a set $\mathbb{S}^n$ be written in terms of $\lvert S\rvert$?
QUESTION [0 upvotes]: The question is pretty simple:
Can the cardinality of a set $\mathbb{S}^n$ be written in terms of $\lvert \mathbb{S}\rvert$? This includes transfinite cardinals.
Here, $\mathbb{S}^n=\{(x_1,\cdots,x_n):x_i\in \mathbb{S}\}$ and $n\in\Bbb N$.
I’ve researched the answer and I can’t find anything. I understand set theory just fine so don’t hold back on any explanations or proofs. A well-justified answer would get the $\color{green}\checkmark$.
REPLY [2 votes]: I assume that by $n$, you mean a natural number $n\in\mathbb N$.
If $|\mathbb S|=m$ is finite, this is simply a matter of combinatorics: for each of the $n$ coordinates in the $n$-tuple $(x_1,\dots,x_n)$ we have $m$ options, so the total number of $n$-tuples is $m^n$.
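For finite $\mathbb S$ this is easy to check directly in a couple of lines of Python (illustrative only):

    from itertools import product

    S = {'a', 'b', 'c'}   # |S| = m = 3
    n = 4
    assert len(set(product(S, repeat=n))) == len(S) ** n   # m^n = 81 tuples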
If $\mathbb S$ is infinite, things get a little more complicated. For $\aleph$ numbers we have that $\aleph_\alpha\cdot\aleph_\alpha=\aleph_\alpha$. You can prove this using the canonical well-order on $Ord^2$, defined by letting $(\alpha,\beta)<(\gamma,\delta)$ iff
$\max\{\alpha,\beta\}<\max\{\gamma,\delta\}$, or
$\max\{\alpha,\beta\}=\max\{\gamma,\delta\}$ and $(\alpha,\beta)$ is less than $(\gamma,\delta)$ using the lexicographical order.
If there exists a bijection between $\mathbb S$ and an aleph number $\aleph_\alpha$, then we can see that $\mathbb S^n$ has cardinality $|(\aleph_\alpha)^n|=\prod_{i=1}^n \aleph_\alpha=\aleph_\alpha$. When the Axiom of Choice is true, any infinite $\mathbb S$ has a cardinality equal to an aleph number.
However, without choice, there exist sets $\mathbb S$ that have a cardinality unequal to any $\aleph$ number, and hence we cannot say $\mathbb S^n$ has the same cardinality as $\mathbb S$. In fact, without the Axiom of Choice it is consistent that $|\mathbb S|\neq |\mathbb S\times\mathbb S|$ for some infinite $\mathbb S$. The statement that $|\mathbb S|=|\mathbb S\times \mathbb S|$ for every infinite set $\mathbb S$ is in fact equivalent to the Axiom of Choice (a theorem of Tarski).
With or without choice, you could define the cardinality of $\mathbb S^n$ as the cardinality of the set of functions $f:\{0,1,\dots,n-1\}\to\mathbb S$. This is how $|\mathbb S^n|$ is defined. Since $n=\{0,1,\dots,n-1\}$ is a cardinal itself, and cardinal exponentiation $\kappa^\lambda$ is defined as the cardinality of the set of functions $\lambda\to\kappa$, we see that $|\mathbb S^n|=|\mathbb S|^n$. | {"set_name": "stack_exchange", "score": 0, "question_id": 3265911} |
\begin{document}
\maketitle
\begin{abstract}
For the Hermitian inexact Rayleigh quotient iteration (RQI), we
present a new general theory, independent of iterative solvers for
shifted inner linear systems. The theory shows that the method
converges at least quadratically under a new condition, called the
uniform positiveness condition, that may allow inner tolerance
$\xi_k\geq 1$ at outer iteration $k$ and can be considerably weaker
than the condition $\xi_k\leq\xi<1$ with $\xi$ a constant not near
one commonly used in literature. We consider the convergence of the
inexact RQI with the unpreconditioned and tuned preconditioned
MINRES method for the linear systems. Some attractive properties are
derived for the residuals obtained by MINRES. Based on them and the
new general theory, we make a more refined analysis and establish a
number of new convergence results. Let $\|r_k\|$ be the residual
norm of approximating eigenpair at outer iteration $k$. Then all the
available cubic and quadratic convergence results require
$\xi_k=O(\|r_k\|)$ and $\xi_k\leq\xi$ with a fixed $\xi$ not near
one, respectively. Fundamentally different from these, we prove that
the inexact RQI with MINRES generally converges cubically,
quadratically and linearly provided that $\xi_k\leq\xi$ with a
constant $\xi<1$ not near one, $\xi_k=1-O(\|r_k\|)$ and
$\xi_k=1-O(\|r_k\|^2)$, respectively. Therefore, the new convergence
conditions are much more relaxed than ever before.
The theory can be used to design practical stopping criteria to
implement the method more effectively.
Numerical experiments confirm our results.
\bigskip
\textbf{Keywords.} Hermitian, inexact RQI, uniform positiveness
condition, convergence, cubic, quadratic, inner iteration, outer
iteration, unprecondtioned MINRES, tuned preconditioned MINRES
\bigskip
{\bf AMS subject classifications.}\ \ 65F15, 65F10, 15A18
\end{abstract}
\section{Introduction} \label{SecIntro}
We consider the problem of computing an eigenvalue $\lambda$ and the
associated eigenvector $x$ of a large and possibly sparse Hermitian
matrix $A\in \mathbb{C}^{n\times n}$, i.e.,
\begin{equation}
Ax=\lambda x.
\end{equation}
Throughout the paper, we are interested in the eigenvalue
$\lambda_1$ closest to a target $\sigma$ and its corresponding
eigenvector $x_1$ in the sense that
\begin{equation}
|\lambda_1-\sigma| <|\lambda_2-\sigma| \le \cdots \le
|\lambda_n-\sigma|. \label{sigma}
\end{equation}
Suppose that $\sigma$ is between $\lambda_1$ and $\lambda_2$. Then we have
\begin{equation}
|\lambda_1-\sigma|<\frac{1}{2}|\lambda_1-\lambda_2|.\label{gap}
\end{equation}
We denote by $x_1, x_2,\ldots, x_n$ the unit length eigenvectors
associated with $\lambda_1,\lambda_2,\ldots,\lambda_n$. For brevity
we denote $(\lambda_1,x_1)$ by $(\lambda,x)$. There are a number of
methods for computing $(\lambda,x)$, such as the inverse iteration
\cite{ParlettSEP}, the Rayleigh quotient iteration (RQI)
\cite{ParlettSEP}, the Lanczos method and its shift-invert variant
\cite{ParlettSEP}, the Davidson method and the Jacobi--Davidson
method \cite{stewart,vorst}. However, except the standard Lanczos
method, these methods and shift-invert Lanczos require the exact
solution of a possibly ill-conditioned linear system at each
iteration. This is generally very difficult and even impractical by
a direct solver since a factorization of a shifted $A$ may be too
expensive. So one generally resorts to iterative methods to solve
the linear systems involved, called inner iterations. We call
updates of approximate eigenpairs outer iterations. A combination of
inner and outer iterations yields an inner-outer iterative eigensolver,
also called an inexact eigensolver.
Among the inexact eigensolvers available, the inexact inverse iteration
and the inexact RQI are the simplest and most basic ones. They not
only have their own rights but also are key ingredients of other more
sophisticated and practical inexact solvers, such as inverse
subspace iteration \cite{robbe}
and the Jacobi--Davidson method. So one must first
analyze their convergence. This is generally the first step towards
better understanding and analyzing other more practical inexact
solvers.
For $A$ Hermitian or non-Hermitian, the inexact inverse iteration
and the inexact RQI have been considered, and numerous convergence
results have been established in many papers, e.g.,
\cite{mgs06,MullerVariableShift,freitagspence,
golubye,hochnotay,Lailin,NotayRQI,SimonciniRQI,Smit,EshofJD,xueelman}
and the references therein. For the Hermitian eigenproblem, general
theory for the inexact RQI can be found in Berns-M\"uller and
Spence~\cite{mgs06}, Smit and Paadekooper \cite{Smit} and van den
Eshof~\cite{EshofJD}. They prove that the inexact RQI achieves cubic
and quadratic convergence with decreasing inner tolerance
$\xi_k=O(\|r_k\|)$ and $\xi_k\leq\xi<1$ with a constant $\xi$ not
near one. Supposing that the shifted linear systems are solved by
the minimal residual method (MINRES) \cite{paige,saad},
mathematically equivalent to the conjugate residual method
\cite{saad}, Simoncini and Eld$\acute{e}$n~\cite{SimonciniRQI} prove
the cubic and quadratic convergence of the inexact RQI with MINRES
and present a number of important results under the same assumption
on $\xi_k$. Simoncini and Eld$\acute{e}$n first observed that the
convergence of the inexact RQI may allow $\xi_k$ to almost stagnate,
that is, $\xi_k$ near one. Xue and Elman~\cite{xueelman} have
refined and extended some results due to Simoncini and
Eld$\acute{e}$n. They have proved that MINRES typically exhibits a
very slow residual decreasing property (i.e., stagnation in their
terminology) during initial steps but the inexact RQI may still
converge; it is the smallest harmonic Ritz value that determines the
convergence of MINRES for the shifted linear systems.
Furthermore, although Xue and Elman's results have indicated
that a very slow MINRES residual
decrease {\em may not} prevent the convergence of the inexact RQI, a too
slow residual decrease does matter and will make the method fail to
converge. Besides, for the inexact RQI, to the
author's best knowledge, there is no result available on linear
convergence and its conditions.
In this paper we first study the convergence of the inexact RQI,
independent of iterative solvers for inner linear systems. We
present new general convergence results under a certain
uniform positiveness condition, which takes into account the residual
directions obtained by iterative solvers for the inner linear
systems. Unlike the common condition $\xi_k\leq\xi<1$ with $\xi$ a constant,
it appears that the uniform positiveness condition critically
depends on iterative solvers for inner iterations and may allow
$\xi_k\approx 1$ and even $\xi_k>1$, much weaker than the common condition
in the existing literature.
We then focus on the inexact RQI with the unpreconditioned MINRES
used for solving inner shifted linear systems. Our key observation
is that one usually treats the residuals obtained by MINRES as general ones,
simply takes their norms but ignores their directions. As will be clear
from our general convergence results, residual directions of the
linear systems play a crucial role in refining convergence analysis of
the inexact RQI with MINRES. We first establish a few attractive properties of
the residuals obtained by MINRES for the shifted linear systems. By
combining them with the new general convergence theory, we derive a
number of new insightful results that are not only stronger than but
also fundamentally different from the known ones in the literature.
We show how the inexact RQI with MINRES meets the uniform
positiveness condition and how it behaves if the condition fails to
hold.
As will be clear, we trivially have $\xi_k\leq 1$ for MINRES at any
inner iteration step. We prove that the inexact RQI with MINRES
generally converges cubically if the uniform positiveness condition
holds. This condition is shown to be equivalent to $\xi_k\leq \xi$
with a fixed $\xi$ not near one, but the inexact RQI with MINRES now
has cubic convergence other than the familiar quadratic convergence.
Cubic convergence does not require decreasing inner tolerance
$\xi_k=O(\|r_k\|)$ any more. We will see that $\xi=0.1,\ 0.5$
are enough, that $\xi=0.8$ works well, and that a smaller $\xi$ is not
necessary. We prove that quadratic convergence only requires
$\xi_k=1-O(\|r_k\|)$, which tends to one as $\|r_k\|\rightarrow 0$
and is much weaker than the familiar condition $\xi_k\leq\xi$ with a
constant $\xi$ not near one. Besides, we show that a linear
convergence condition is $\xi_k= 1-O(\|r_k\|^2)$, closer to one than
$1-O(\|r_k\|)$. Therefore, if stagnation occurs during inner
iterations, the inexact RQI may converge quadratically or linearly;
if stagnation is too serious, that is, if $\xi_k$ is closer to one
than $1-O(\|r_k\|^2)$, the method may fail to converge. Note that,
for the inner linear systems, the smaller $\xi_k$ is, the more
costly it is to solve them using MINRES. As a result, in order to
achieve cubic and quadratic convergence, our new conditions are more
relaxed and easier to meet than the corresponding known ones.
Therefore, our results not only give new insights into the method
but also have impacts on its effective implementations. They allow
us to design practical criteria to best control inner tolerance to
achieve a desired convergence rate and to implement the method more
effectively than ever before. Numerical experiments demonstrate
that, in order to achieve cubic convergence, our new implementation
is about twice as fast as the original one with $\xi_k=O(\|r_k\|)$.
Besides, we establish a lower bound on the norms of approximate
solutions $w_{k+1}$'s of the linear systems obtained by MINRES. We
show that they are of $O(\frac{1}{\|r_k\|^2})$,
$O(\frac{1}{\|r_k\|})$ and $O(1)$ when the inexact MINRES converges
cubically, quadratically and linearly, respectively. So
$\|w_{k+1}\|$ can reflect how fast the inexact RQI converges and can
be used to control inner iteration, similar to those done in, e.g.,
\cite{SimonciniRQI,xueelman}. Making use of the bound, as a
by-product, we present a simpler but weaker convergence result on
the inexact RQI with MINRES. It and the bound for $\|w_{k+1}\|$ are
simpler and interpreted more clearly and easily than those obtained
by Simoncini and Eld$\acute{e}$n~\cite{SimonciniRQI}. However, we
will see that our by-product and their result are weaker than our
main results described above. An obvious drawback is that the cubic
convergence of the exact RQI and of the inexact RQI with MINRES
cannot be recovered when $\xi_k=0$ and $\xi_k=O(\|r_k\|)$,
respectively.
It appears \cite{freitagspence,freitag08b,xueelman} that it is often
beneficial to precondition each shifted inner linear system with a
tuned preconditioner, which can be much more effective than the corresponding
usual preconditioner. How to extend the main results on the inexact RQI with
the unpreconditioned MINRES case to the inexact RQI with
a tuned preconditioned MINRES turns out to be nontrivial. We will carry out
this task in the paper.
The paper is organized as follows. In Section \ref{SecIRQI}, we
review the inexact RQI and present new general convergence results,
independent of iterative solvers for the linear systems. In
Section~\ref{secminres}, we present cubic, quadratic and linear
convergence results on the inexact RQI with the unpreconditioned MINRES.
In Section~\ref{precondit}, we show that
the theory can be extended to the tuned preconditioned MINRES case.
We perform numerical experiments to confirm our results in
Section~\ref{testminres}. Finally, we end up with some concluding
remarks in Section~\ref{conc}.
Throughout the paper, denote by the superscript
* the conjugate transpose of a matrix or vector, by $I$ the
identity matrix of order $n$, by $\|\cdot\|$ the vector 2-norm and
the matrix spectral norm, and by $\lambda_{\min},\lambda_{\max}$
the smallest and largest eigenvalues of $A$, respectively.
\section{The inexact RQI and general convergence theory}\label{SecIRQI}
RQI is a famous iterative algorithm and its locally cubic
convergence for Hermitian problems is very attractive
\cite{ParlettSEP}. It plays a crucial role in some practical
effective algorithms, e.g., the QR algorithm
\cite{GolubMC,ParlettSEP}. Assume that the unit length $u_k$ is already a
reasonably good approximation to $ x $. Then the Rayleigh quotient
$\theta_k = u^*_k A u_k$ is a good approximation to $\lambda$ too.
RQI computes a new approximation $u_{k+1}$ to $x$ by solving the
shifted inner linear system
\begin{equation} \label{EqERQILinearEquation1}
( A - \theta_k I ) w = u_k
\end{equation}
for $w_{k+1}$ and updating $u_{k+1}=w_{k+1}/\|w_{k+1}\|$ and
iterates until convergence. It is known
\cite{mgs06,NotayRQI,ParlettSEP} that if
$$
|\lambda-\theta_0|
<\frac{1}{2}\min_{j=2,3,\ldots,n}|\lambda-\lambda_j|
$$
then RQI (asymptotically) converges to $ \lambda $ and $x$
cubically. So we can assume that the eigenvalues of $A$ are ordered
as
\begin{equation}
|\lambda-\theta_k| <|\lambda_2-\theta_k| \le \cdots \le
|\lambda_n-\theta_k|. \label{order}
\end{equation}
With this ordering and $\lambda_{\min}\leq\theta_k\leq\lambda_{\max}$,
we have
\begin{equation}
|\lambda-\theta_k| <\frac{1}{2}|\lambda-\lambda_2|. \label{sep1}
\end{equation}
An obvious drawback of RQI is that at each iteration $k$ we need the
exact solution $w_{k+1}$ of $( A - \theta_k I ) w= u_k$. For a large
$A$, it is generally very expensive and even impractical to solve it
by a direct solver due to excessive memory and/or computational
cost. So we must resort to iterative solvers to get an approximate
solution of it. This leads to the inexact RQI.
\eqref{EqERQILinearEquation1} is solved by an iterative solver
and an approximate solution $w_{k+1}$ satisfies
\begin{equation} \label{EqIRQILinearEquation1}
( A - \theta_k I )w_{k+1} = u_k + \xi_k d_k, \quad u_{k+1} =w_{k+1}
/ \|w_{k+1} \|
\end{equation}
with $ 0 < \xi_k \le \xi$, where $\xi_kd_k $ with $\|d_k\|=1$ is the
residual of $(A-\theta_k I)w=u_k$, $d_k$ is the residual direction
vector and $\xi_k$ is the {\em relative} residual norm (inner
tolerance) as $\|u_k\|=1$ and may change at every outer iteration
$k$. This process is summarized as Algorithm 1. If $\xi_k=0$ for all
$k$, Algorithm 1 becomes the exact RQI.
\begin{algorithm}
\caption{The inexact RQI} \label{AlgIRQI}
\begin{algorithmic}[1]
\STATE Choose a unit length $u_0$, an approximation to $x$.
\FOR{$k$=0,1, \ldots}
\STATE $\theta_k = u^*_k A u_k$.
\STATE Solve $(A-\theta_k I)w=u_k$ for $w_{k+1}$ by an
iterative solver with
$$
\|(A-\theta_k I)
w_{k+1}-u_k\|=\xi_k\leq\xi.
$$
\STATE $u_{k+1} = w_{k+1}/\|w_{k+1}\|$.
\STATE If convergence occurs, stop.
\ENDFOR
\end{algorithmic}
\end{algorithm}
It is always assumed in the literature that $\xi<1$ when making a
convergence analysis. This requirement seems very natural as
heuristically \eqref{EqERQILinearEquation1} should be solved with
some accuracy. Van den Eshof \cite{EshofJD} presents a quadratic
convergence bound that requires $\xi$ not near one, improving a
result of \cite{Smit} by a factor of two. Similar quadratic convergence
results on the inexact RQI have also been proved in some other
papers, e.g., \cite{mgs06,SimonciniRQI,Smit}, under the same
condition on $\xi$. To see a fundamental difference between the
existing results and ours (cf.
Theorem~\ref{ThmIRQIQuadraticConvergence}), we take the result of
\cite{EshofJD} as an example and restate it. Before proceeding, we
define
$$
\beta=\frac{\lambda_{\max}-\lambda_{\min}}{|\lambda_2-\lambda|}
$$
throughout the paper. We comment that
$\lambda_{\max}-\lambda_{\min}$ is the spectrum spread of $A$ and
$|\lambda_2-\lambda|$ is the gap or separation of $\lambda$ and the
other eigenvalues of $A$.
\begin{theorem}{\rm \cite{EshofJD}} \label{eshof}
Define $\phi_k=\angle(u_k, x)$ to be the acute angle between $u_k$
and $x$, and assume that $w_{k+1}$ is such that
\begin{equation} \label{EqIRQIXi}
\| ( A - \theta_k I ) w_{k+1} - u_k \|=\xi_k \le \xi<1.
\end{equation}
Then letting $\phi_{k+1}=\angle(u_{k+1},x)$ be the acute angle between
$u_{k+1}$ and $x$, the inexact RQI converges quadratically:
\begin{equation} \label{EqIRQIVan den Eshof}
\tan \phi_{k+1} \le \frac{\beta\xi}{\sqrt{ 1 - \xi^2 }} \sin^2
\phi_k + O( \sin^3 \phi_k ).
\end{equation}
\end{theorem}
However, we will soon see that the condition that $\xi$ be not near one
can be stringent and unnecessary for quadratic convergence. To see
this, let us decompose $ u_k $ and $ d_k $ into the orthogonal
direct sums
\begin{eqnarray}
&u_k = x \, \cos \phi_k + e_k \, \sin \phi_k, \quad e_k \perp x,
\label{EqIRQIDecompositoinOfu_k} \\
&d_k = x \, \cos \psi_k + f_k \, \sin \psi_k, \quad f_k \perp x
\label{EqIRQIDecompositoinOfd_k}
\end{eqnarray}
with $\|e_k\|=\|f_k\|=1$ and $\psi_k=\angle(d_k,x)$. Then
\eqref{EqIRQILinearEquation1} can be written as
\begin{eqnarray} \label{EqIRQILinearEquation2}
( A - \theta_k I ) w_{k+1} = ( \cos \phi_k + \xi_k \, \cos \psi_k ) \, x
+ ( e_k \, \sin \phi_k + \xi_k \, f_k \, \sin \psi_k ).
\end{eqnarray}
Inverting $ A - \theta_k I$ gives
\begin{equation} \label{EqIRQIwk}
w_{k+1} = ( \lambda - \theta_k )^{-1} ( \cos \phi_k + \xi_k \,
\cos \psi_k ) \, x + ( A - \theta_k I )^{-1} ( e_k \, \sin \phi_k
+ \xi_k \, f_k \, \sin \psi_k ).
\end{equation}
We now revisit the convergence of the inexact RQI and prove that it
is the size of $|\cos\phi_k + \xi_k\cos \psi_k|$ rather than
$\xi_k\leq\xi<1$ that is critical in affecting convergence.
\begin{theorem} \label{ThmIRQIQuadraticConvergence}
If the uniform positiveness condition
\begin{equation} \label{EqIRQIC}
|\cos \phi_k + \xi_k \cos \psi_k| \ge c
\end{equation}
is satisfied with a constant $c>0$ uniformly independent of $k$,
then
\begin{eqnarray}
\tan \phi_{k+1} &\le &2\beta\frac{\sin \phi_k + \xi_k \sin \psi_k}
{|\cos \phi_k + \xi_k \cos \psi_k|}\sin^2\phi_k \label{bound1}\\
&\le &\frac{2\beta\xi_k}{c} \sin^2 \phi_k + O( \sin^3 \phi_k ),
\label{EqIRQIQuadraticConvergence}
\end{eqnarray}
that is, the inexact RQI converges quadratically at least for
uniformly bounded $\xi_k\leq\xi$ with $\xi$ some moderate constant.
\end{theorem}
\begin{proof}
Note that (\ref{EqIRQIwk}) is an orthogonal direct sum decomposition
of $w_{k+1}$ since for a Hermitian $A$ the second term is orthogonal
to $x$. We then have
$$
\tan \phi_{k+1} = | \lambda - \theta_k | \frac{\| ( A - \theta_k I
)^{-1} ( e_k \, \sin \phi_k + \xi_k f_k \, \sin \psi_k ) \|} {|\cos
\phi_k + \xi_k \, \cos \psi_k|}.
$$
As $A$ is Hermitian and $e_k\perp x$, it is easy to verify (cf.
\cite[p. 77]{ParlettSEP}) that
$$
\lambda -\theta_k=(\lambda - e^*_k A e_k)\sin^2\phi_k,
$$
$$
|\lambda_2 - \lambda| \le | \lambda - e^*_k A e_k | \le
\lambda_{\max}-\lambda_{\min},
$$
\begin{equation}
|\lambda_2-\lambda|\sin^2\phi_k\leq |\lambda - \theta_k| \leq
(\lambda_{\max}-\lambda_{\min})\sin^2\phi_k. \label{error1}
\end{equation}
Since {\small
\begin{eqnarray*}
\|( A -\theta_k I )^{-1} ( e_k \,
\sin \phi_k + \xi_k f_k \, \sin \psi_k )\|& \le& \| ( A - \theta_k
I )^{-1} e_k \| \sin \phi_k +
\xi_k \| ( A - \theta_k I )^{-1} f_k \| \sin \psi_k \\
& \le& |\lambda_2 -\theta_k|^{-1}(\sin \phi_k + \xi_k \sin\psi_k
)\\
&\leq&2 |\lambda_2-\lambda|^{-1}(\sin \phi_k + \xi_k \sin\psi_k)
\end{eqnarray*}}
with the last inequality holding because of (\ref{order}), we get
\begin{eqnarray*}
\tan \phi_{k+1}& \le& | \lambda - e^*_k A e_k | \sin^2 \phi_k \frac{
2(\sin \phi_k + \xi_k \sin \psi_k)} { |\lambda_2 - \lambda||\cos
\phi_k + \xi_k\cos
\psi_k|}\\
&\leq&\frac{2|\lambda_n -\lambda|}{|\lambda_2 -\lambda|}\frac{\sin
\phi_k + \xi_k \sin \psi_k} {|\cos \phi_k + \xi_k \cos
\psi_k|}\sin^2\phi_k
\\
&\leq&\xi_k \frac{2(\lambda_{\max}-\lambda_{\min})}{c|\lambda_2 -
\lambda|} \sin^2 \phi_k + O( \sin^3 \phi_k ).
\end{eqnarray*}
\end{proof}
Define $\|r_k\|=\|(A-\theta_k I)u_k\|$. Then by (\ref{sep1}) we get
$|\lambda_2-\theta_k|>\frac{|\lambda_2-\lambda|}{2}$. So it is known
from \cite[Theorem 11.7.1]{ParlettSEP} that
\begin{equation}
\frac{\|r_k\|}{\lambda_{\max}-\lambda_{\min}}\leq\sin\phi_k\leq\frac{2\|r_k\|}
{|\lambda_2-\lambda|}.\label{parlett}
\end{equation}
We can present an alternative of (\ref{EqIRQIQuadraticConvergence})
when $\|r_k\|$ is concerned.
\begin{theorem}\label{resbound}
If the uniform positiveness condition {\rm (\ref{EqIRQIC})} holds,
then
\begin{equation}
\|r_{k+1}\|\leq\frac{8\beta^2\xi_k} {c|\lambda_2-\lambda|}\|r_k\|^2
+O(\|r_k\|^3).\label{resgeneral}
\end{equation}
\end{theorem}
\begin{proof}
Note from (\ref{parlett}) that
$$
\frac{\|r_{k+1}\|}{\lambda_{\max}-\lambda_{\min}}\leq\sin\phi_{k+1}\leq\tan\phi_{k+1}.
$$
Substituting it and the upper bound of (\ref{parlett}) into
(\ref{EqIRQIQuadraticConvergence}) establishes (\ref{resgeneral}).
\end{proof}
For the special case that the algebraically smallest eigenvalue is
of interest, Jia and Wang~\cite{jiawang} proved a slightly different
result from Theorem~\ref{ThmIRQIQuadraticConvergence}. Similar to
the literature, however, they still assumed that $\xi_k\leq\xi<1$ and did not
analyze the theorem further, though their proof did not use this
assumption. A striking insight from the theorem is that the
condition $\xi_k\leq\xi<1$ may be considerably relaxed. If
$\cos\psi_k$ is positive, the uniform positiveness condition holds
for any uniformly bounded $\xi_k\leq\xi$. So we can have $\xi\geq 1$
considerably. If $\cos\psi_k$ is negative, then
$|\cos\phi_k+\xi_k\cos\psi_k|\geq c$ means that
$$
\xi_k\leq\frac{c-\cos\phi_k}{\cos\psi_k}
$$
if $\cos\phi_k+\xi_k\cos\psi_k\geq c$ with $c<1$ is required or
$$
\xi_k\geq\frac{c+\cos\phi_k}{-\cos\psi_k}
$$
if $-\cos\phi_k-\xi_k\cos\psi_k\geq c$ is required. Keep in mind
that $\cos\phi_k\approx 1$. So the size of $\xi_k$ critically depends on
that of $\cos\psi_k$, and for a given $c$ we may have $\xi_k\approx 1$ and even
$\xi_k>1$. Obviously, without the information on $\cos\psi_k$, it
would be impossible to assess or estimate $\xi_k$. As a general
convergence result, however, its significance and importance consist
in that it reveals a new remarkable fact: it appears for the first time that (13)
may allow $\xi_k$ to be relaxed (much) more than in all
the known literature while preserving the same convergence rate
of the outer iteration. As a result, the condition $\xi_k\leq\xi<1$ with
constant $\xi$ not near one may be stringent and unnecessary for the
quadratic convergence of the inexact RQI, independent of iterative
solvers for the linear systems. The new condition has a strong impact on
practical implementations as we must use a
certain iterative solver, e.g., the very popular MINRES method and
the Lanczos method (SYMMLQ) for solving $(A-\theta_k I)w=u_k$. We will
see that $\cos\psi_k$ is critically iterative solver dependent. For
MINRES, $\cos\psi_k$ has some very attractive properties,
by which in Section~\ref{secminres} we can precisely determine bounds for $\xi_k$
that are much more relaxed than those in the literature. For the Lanczos
method, we refer to \cite{jia09} for $\cos\psi_k$ and $\xi_k$, where
$\cos\psi_k$ and $\xi_k$ are fundamentally different from those
obtained by MINRES and $\xi_k\geq 1$ considerably is allowed.
{\bf Remark 1}. If $ \xi_k = 0 $ for all $ k $, the inexact RQI
reduces to the exact RQI and
Theorems~\ref{ThmIRQIQuadraticConvergence}--\ref{resbound} show
(asymptotically) cubic convergence: $\tan \phi_{k+1}=O(\sin^3\phi_k)$ and
$\|r_{k+1}\|=O(\|r_k\|^3)$.
{\bf Remark 2}. If the linear systems are solved with decreasing
tolerance $\xi_k=O(\sin\phi_k)=O(\|r_k\|)$, then $\tan
\phi_{k+1}=O(\sin^3\phi_k)$ and $\|r_{k+1}\|=O(\|r_k\|^3)$. Such
(asymptotically) cubic convergence also appears in several papers, e.g.,
\cite{mgs06,Smit,EshofJD}, either explicitly or implicitly.
\section{Convergence of the inexact RQI with MINRES}\label{secminres}
The previous results and discussions are for general purpose,
independent of iterative solvers for $(A-\theta_k I)w=u_k$. Since
we have $\lambda_{\rm min}\leq\theta_k\leq\lambda_{\rm max}$,
the matrix $A-\theta_k I$ is Hermitian indefinite. One of the most
popular iterative solvers for $(A-\theta_k I)w=u_k$ is the MINRES
method as it has a very attractive residual monotonic decreasing
property \cite{paige,saad}. This leads to the inexact RQI with MINRES.
We briefly review MINRES for solving (\ref{EqERQILinearEquation1}).
At outer iteration $k$, taking the starting vector $v_1$ to be
$u_k$, the $m$-step Lanczos process on $A-\theta_k I$ can be written
as
\begin{equation}
( A - \theta_k I ) V_m =
V_mT_m+t_{m+1,m}v_{m+1}e_m^*=V_{m+1}\hat{T}_m, \label{lanczosp}
\end{equation}
where the columns of $ V_m =(v_1,\ldots,v_m)$ form an orthonormal
basis of the Krylov subspace $\mathcal{K}_m(A - \theta_k I, u_k )
= \mathcal{K}_m ( A, u_k )$, $V_{m+1}=(V_m, v_{m+1})$,
$T_m=(t_{ij})=V_m^*(A-\theta_kI)V_m$ and
$\hat{T}_m=V_{m+1}^*(A-\theta_kI)V_m$ \cite{ParlettSEP,saad}.
Taking the zero vector as an initial guess to the solution of
$(A-\theta_k I)w=u_k$, MINRES \cite{GolubMC,paige,saad} extracts the
approximate solution $w_{k+1}=V_m\hat y$ to $(A-\theta_k I)w=u_k$
from $\mathcal{K}_m (A, u_k)$, where $\hat y$ is the solution of the
least squares problem $\min\|e_1-\hat{T}_my\|$ with $e_1$ being the
first coordinate vector of dimension $m+1$.
By the residual monotonic decreasing property of MINRES, we
trivially have $\xi_k \le\|u_k \| = 1 $ for all $k$ and any inner
iteration steps $m$. Here we must take $m>1$; for $m=1$,
it is easily verified that $\hat y=0$ and thus $w_{k+1}=0$ and
$\xi_k=1$ by noting that $t_{11}=u_k^*(A-\theta_k I)u_k=0$. So
$u_{k+1}$ is undefined and the inexact RQI with MINRES breaks down
if $m=1$.
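To make the extraction step concrete, the following Python sketch performs the $m$-step Lanczos process and solves the small least squares problem $\min\|e_1-\hat{T}_my\|$; it is illustrative only (full reorthogonalization for clarity, no breakdown handling), and the returned residual norm equals $\xi_k$ since $\|u_k\|=1$:
\begin{verbatim}
import numpy as np

def minres_extraction(A, theta, u, m):
    # m Lanczos steps on A - theta*I with v_1 = u (unit length),
    # then the MINRES solution w_{k+1} = V_m y-hat.
    n = u.shape[0]
    V = np.zeros((n, m + 1))
    T_hat = np.zeros((m + 1, m))
    V[:, 0] = u
    for j in range(m):
        w = A @ V[:, j] - theta * V[:, j]
        for i in range(max(0, j - 1), j + 1):     # tridiagonal recurrence
            T_hat[i, j] = V[:, i] @ w
            w = w - T_hat[i, j] * V[:, i]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # reorthogonalize
        T_hat[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / T_hat[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = 1.0
    y = np.linalg.lstsq(T_hat, e1, rcond=None)[0]
    w_next = V[:, :m] @ y                         # approximate solution
    xi_d = A @ w_next - theta * w_next - u        # residual xi_k d_k
    return w_next, np.linalg.norm(xi_d)
\end{verbatim}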
The residual direction vectors $d_k$ obtained by MINRES
have some attractive features and we can precisely get subtle bounds
for $\sin\psi_k$ and $\cos\psi_k$, as the following results show.
\begin{theorem}\label{minres}
For MINRES, let the unit length vectors $e_k$ and $f_k$ be as in
{\rm (\ref{EqIRQIDecompositoinOfu_k})} and {\rm
(\ref{EqIRQIDecompositoinOfd_k})}, define the angle
$\varphi_k=\angle(f_k,(A-\theta_k I)e_k)$ and assume that
\begin{equation}
|\cos\varphi_k|\geq|\cos\varphi|>0 \label{assump1}
\end{equation}
holds uniformly for an angle $\varphi$ away from $\frac{\pi}{2}$
independent of $k$, i.e.,
\begin{equation}
|f_k^*(A-\theta_k I)e_k|=\|(A-\theta_k I)e_k\||\cos\varphi_k|\geq
\|(A-\theta_k I)e_k\||\cos\varphi|.\label{assump2}
\end{equation}
Then we have
\begin{eqnarray}
\sin\psi_k&\leq&\frac{2\beta}{|\cos\varphi|}
\sin\phi_k,\label{sinpsi}\\
\cos\psi_k&=&\pm (1-O(\sin^2\phi_k)). \label{cospsi}
\end{eqnarray}
Furthermore, if $\xi_k>\sin\phi_k$, then
\begin{eqnarray}
\cos\psi_k&=&-1+O(\sin^2\phi_k),\label{cospsik}\\
d_k&=&-x+O(\sin\phi_k).\label{dkerror}
\end{eqnarray}
\end{theorem}
\begin{proof}
Note that for MINRES its residual $\xi_kd_k$ satisfies
$\xi_kd_k\perp (A-\theta_k I)\mathcal{K}_m(A,u_k)$. Therefore, we
specially have $\xi_k d_k \perp (A-\theta_k I) u_k$, i.e., $d_k
\perp (A-\theta_k I) u_k$. Then from
(\ref{EqIRQIDecompositoinOfu_k}) and
(\ref{EqIRQIDecompositoinOfd_k}) we obtain
$$
(\lambda-\theta_k)\cos\phi_k\cos\psi_k+f_k^*(A-\theta_kI)e_k\sin\phi_k\sin\psi_k=0.
$$
So
\begin{equation}
\tan\psi_k=\frac{(\theta_k-\lambda)\cos\phi_k}{f_k^*(A-\theta_kI)e_k\sin\phi_k}.
\label{tanpsi}
\end{equation}
By $|\lambda-\theta_k|< \frac{|\lambda-\lambda_2|}{2}$, we get
\begin{eqnarray*}
|f_k^*(A-\theta_k I)e_k|&=&\|(A-\theta _k I)e_k\||\cos\varphi_k|\\
&\geq& \|(A-\theta _k I)e_k\||\cos\varphi|\\
&\geq&|\lambda_2-\theta_k||\cos\varphi|\\
&>&\frac{|\lambda_2-\lambda|}{2}|\cos\varphi|.
\end{eqnarray*}
Using (\ref{error1}), we obtain from (\ref{tanpsi})
\begin{eqnarray*}
|\tan\psi_k|&\leq&\frac{(\lambda_{\max}-\lambda_{\min})\sin\phi_k\cos\phi_k}
{|f_k^*(A-\theta_k I)e_k|}\\
&\leq&\frac{2(\lambda_{\max}-\lambda_{\min})}{|\lambda_2-\lambda||\cos\varphi|}
\sin\phi_k\cos\phi_k\\
&\leq&\frac{2\beta}{|\cos\varphi|}\sin\phi_k.
\end{eqnarray*}
Therefore, (\ref{sinpsi}) holds. Note that (\ref{sinpsi}) means
$\sin\psi_k=O(\sin\phi_k)$. So we get
$$
\cos\psi_k=\pm\sqrt{1-\sin^2\psi_k}=\pm
(1-\frac{1}{2}\sin^2\psi_k)+O(\sin^4\psi_k)=\pm (1-O(\sin^2\phi_k))
$$
by dropping the higher order term $O(\sin^4\phi_k)$.
Now we prove that $\cos\psi_k$ and $\cos\phi_k$ must have opposite
signs if $\xi_k>\sin\phi_k$. Since the MINRES residual
$$
\xi_kd_k=(A-\theta_k I)w_{k+1}-u_k,
$$
by its residual minimization property we know that $(A-\theta_k
I)w_{k+1}$ is just the orthogonal projection of $u_k$ onto
$(A-\theta_k I){\cal K}_m(A,u_k)$ and $\xi_kd_k$ is orthogonal to
$(A-\theta_k I)w_{k+1}$. Therefore, we get
\begin{equation}
\xi_k^2+\|(A-\theta_k I)w_{k+1}\|^2=\|u_k\|^2=1.\label{minpro}
\end{equation}
Note that
$$
(A-\theta_k I)w_{k+1}=u_k+\xi_k d_k=(\cos\phi_k+\xi_k\cos\psi_k)x
+(e_k\sin\phi_k+\xi_kf_k\sin\psi_k)
$$
is an orthogonal direct sum decomposition of $(A-\theta_k
I)w_{k+1}$. Therefore, we have
$$
\|(A-\theta_k I)w_{k+1}\|^2=(\cos\phi_k+\xi_k\cos\psi_k)^2
+\|e_k\sin\phi_k+\xi_kf_k\sin\psi_k\|^2,
$$
which, together with (\ref{minpro}), gives
$$
(\cos\phi_k+\xi_k\cos\psi_k)^2+\xi_k^2\leq 1.
$$
Solving it for $\xi_k\cos\psi_k$, we have
$$
-\cos\phi_k-\sqrt{1-\xi_k^2}\leq \xi_k\cos\psi_k\leq
-\cos\phi_k+\sqrt{1-\xi_k^2},
$$
in which the upper bound is negative provided that
$-\cos\phi_k+\sqrt{1-\xi_k^2}<0$, which means $\xi_k>\sin\phi_k$. So
$\cos\psi_k$ and $\cos\phi_k$ must have opposite signs if
$\xi_k>\sin\phi_k$. Hence, it follows from \eqref{cospsi} that
\eqref{cospsik} holds if $\xi_k>\sin\phi_k$. Combining
(\ref{cospsi}) with \eqref{EqIRQIDecompositoinOfd_k} and
\eqref{sinpsi} gives \eqref{dkerror}.
\end{proof}
Since $u_k$ is assumed to be a reasonably good approximation to $x$,
$\sin\phi_k$ is small. As a result, the condition $\xi_k>\sin\phi_k$
is easily satisfied unless the linear system is solved with very
high accuracy.
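Properties (\ref{cospsik}) and (\ref{dkerror}) are easy to observe numerically. The following illustrative Python sketch (random symmetric test matrix, moderate inner tolerance; \texttt{rtol} is named \texttt{tol} in older SciPy) typically prints $\cos\phi_k$ near one and $\cos\psi_k$ near minus one:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
x = np.linalg.eigh(A)[1][:, 0]              # target eigenvector
u = x + 1e-3 * rng.standard_normal(200)     # u_k with a small angle phi_k
u /= np.linalg.norm(u)
theta = u @ (A @ u)
w, _ = minres(A, u, shift=theta, rtol=0.5)  # xi_k well above sin(phi_k)
d = A @ w - theta * w - u                   # xi_k * d_k
d /= np.linalg.norm(d)
print(u @ x, d @ x)   # cos(phi_k) ~ 1 and cos(psi_k) ~ -1
\end{verbatim}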
Clearly, how general this theorem is depends on how general assumption
(\ref{assump1}) is. Next we give a qualitative analysis showing
that this assumption is very reasonable and holds generally, indicating that
the theorem is well founded and general.
Note that by definition we have
$$
e_k \mbox{ and }f_k\in {\rm span}\{x_2,x_3,\ldots,x_n\},
$$
so does
$$
(A-\theta_kI)e_k\in {\rm span}\{x_2,x_3,\ldots,x_n\}.
$$
We now justify the generality of $e_k$ and $f_k$. In the proof of
cubic convergence of RQI, which is the inexact RQI with $\xi_k=0$,
Parlett \cite[p. 78-79]{ParlettSEP} proves that $e_k$ will start to
converge to $x_2$ only after $u_k$ has converged to $x$ and
$e_k\rightarrow x_2$ holds for large enough $k$. In other words,
$e_k$ is a general combination of $x_2,x_3,\ldots,x_n$ and does not
start to converge before $u_k$ has converged. Following his proof
path, we have only two possibilities on $e_k$ in the inexact RQI
with MINRES: One is that $e_k$, at best, can possibly start to approach $x_2$
only if $u_k$ has converged to $x$; the other is that $e_k$ is nothing
but just still a general linear combination of $x_2,x_3,\ldots,x_n$ and
does not converge to any specific vector for any $\xi_k$. In either
case, $e_k$ is indeed a general linear combination of $x_2,x_3,\ldots,x_n$
before $u_k$ has converged.
Expand the unit length $e_k$ as
$$
e_k=\sum_{j=2}^n\alpha_jx_j
$$
with $\sum_{j=2}^n\alpha_j^2=1$. Then, based on the above arguments,
no $\alpha_j$ is small before $u_k$ has converged. Note that
$$
(A-\theta_k I)e_k=\sum_{j=2}^n\alpha_j (\lambda_j-\theta_k)x_j.
$$
Since $\theta_k$ is supposed to be a reasonably good approximation to
$\lambda$, for $j=2,3,\ldots,n$, $\lambda_j-\theta_k$ are not small,
so that $(A-\theta_k I)e_k$ is a general linear combination of
$x_2,x_3,\ldots,x_n$.
Let $p_m(z)$ be the residual polynomial of MINRES applied to
(\ref{EqERQILinearEquation1}). Then it is known \cite{paige} that
$p_m(0)=1$ and its $m$ roots are the harmonic Ritz values of $A-\theta_k I$
with respect to ${\cal K}_m(A,u_k)$. From
(\ref{EqIRQIDecompositoinOfu_k}) and
(\ref{EqIRQIDecompositoinOfd_k}) we can write the residual
$\xi_kd_k$ as
\begin{eqnarray*}
\xi_k d_k&=&p_m(A-\theta_k I)u_k=p_m(A-\theta_k I)(x\cos\phi_k+
e_k\sin\phi_k)\\
&=& \cos\phi_kp_m(\lambda-\theta_k)x+\sin\phi_k
\sum_{j=2}^n\alpha_jp_m(\lambda_j-\theta_k)x_j\\
&=&\xi_k(x\cos\psi_k+f_k\sin\psi_k).
\end{eqnarray*}
Noting that $\|f_k\|=1$, we get
$$
f_k=\frac{p_m(A-\theta_k
I)e_k}{\|p_m(A-\theta_kI)e_k\|}=\frac{\sum_{j=2}^n\alpha_j
p_m(\lambda_j-\theta_k)x_j}
{(\sum_{j=2}^n\alpha_j^2p_m^2 (\lambda_j-\theta_k))^{1/2}}.
$$
Since $u_k$ is rich in $x$ and has small components in
$x_2,x_3,\ldots,x_n$, ${\cal K}_m(A,u_k)$ contains not much
information on $x_2,x_3,\ldots,x_n$ unless $m$ is large enough.
Therefore, as approximations to the eigenvalues
$\lambda_2-\theta_k,\lambda_3-\theta_k,\ldots,\lambda_n-\theta_k$ of
the matrix $A-\theta_kI$, the harmonic Ritz values are generally of
poor quality unless $m$ is large enough. By continuity,
$p_m(\lambda_j-\theta_k),\,j=2,3,\ldots,n$ are generally not near
zero. This means that usually $f_k$ is a general linear combination of
$x_2,x_3,\ldots,x_n$, which is the case for $\xi_k$ not very small.
We point out that $p_m(\lambda_j - \theta_k),\ j = 2,3,\ldots,n$ can
possibly be near zero only if $\xi_k$ is small enough, but in this case
the cubic convergence of the outer iteration can be established
trivially, according to Theorem~\ref{ThmIRQIQuadraticConvergence}.
For other discussions on $\xi_k$ and harmonic Ritz values, we refer
to \cite{xueelman}.
In view of the above, it is very unlikely for $f_k$ and $(A-\theta_k
I)e_k$ to be nearly orthogonal, that is, $\varphi_k$ is rarely near
$\frac{\pi}{2}$. So, $|\cos\varphi_k|$ should be uniformly away from
zero in general, and assumption (\ref{assump1}) is very general and
reasonable.
Obviously, $|\cos\varphi_k|$ is an a priori quantity and cannot be computed in
practice. For test purposes, for each matrix in
Section~\ref{testminres} and some others, supposing the $x$'s are
known, we have computed $|\cos\varphi_k|$ for $\xi_k=O(\|r_k\|)$,
$\xi_k\leq \xi<1$ with $\xi$ a fixed constant, $\xi_k=1-O(\|r_k\|)$
and $\xi_k=1-O(\|r_k\|^2)$, respectively. As will be seen, the
latter three requirements on $\xi_k$ are our new cubic, quadratic
and linear convergence conditions for the inexact RQI with MINRES
that will be derived by combining this theorem with
Theorems~\ref{ThmIRQIQuadraticConvergence}--\ref{resbound}, and they
critically need the assumption that $|\cos\varphi_k|$ is uniformly
away from zero, independent of $k$. Among thousands of
$|\cos\varphi_k|$'s, we have found that most of the
$|\cos\varphi_k|$'s are considerably away from zero and only a very few of the
smallest ones are around $10^{-4}$. Furthermore, we have found
that their arithmetic mean is basically $0.016\sim 0.020$ for each
matrix and a given choice of $\xi_k$ above. To highlight these
results and to be more illustrative, we have extensively computed
$|\cos\angle(w,v)|=\frac{|w^*v|}{\|w\|\|v\|}$ for many vector pairs
$(w,v)$ of dimensions no less than 1000 that were generated
randomly from a normal distribution. We have observed that the values
of our $|\cos\varphi_k|$'s and their arithmetic mean for each matrix
and a given choice of $\xi_k$ above behave very similarly to
those of thousands of $|\cos\angle(w,v)|$'s and their arithmetic mean
for a given dimension. As a
consequence, this demonstrates that assumption~(\ref{assump1}) is
very much like requiring the uniform non-orthogonality of two normal
random vectors and thus should hold generally. In other words,
$f_k,e_k$ and $(A-\theta_k I)e_k$ are usually indeed general
linear combinations of $x_2,\ldots,x_n$ and behave like vectors
generated from a normal distribution, so assumption~(\ref{assump1})
should hold generally. On the other hand, our later
numerical experiments also confirm the cubic, quadratic and linear
convergence under our corresponding new conditions. This means that
the numerical experiments also justify the rationale and generality
of the assumption a posteriori.
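The random-vector experiment just described is easy to reproduce; an illustrative sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1000, 10000
w = rng.standard_normal((trials, n))
v = rng.standard_normal((trials, n))
cosines = np.abs(np.sum(w * v, axis=1)) / (
    np.linalg.norm(w, axis=1) * np.linalg.norm(v, axis=1))
# the mean is about sqrt(2/(pi*n)), i.e. roughly 0.025 for n = 1000
print(cosines.mean(), cosines.min())
\end{verbatim}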
Based on Theorem~\ref{minres} and
Theorems~\ref{ThmIRQIQuadraticConvergence}--\ref{resbound},
we now present new convergence results on the inexact RQI with MINRES.
\begin{theorem}\label{minrescubic}
With $\cos\varphi$ defined as in Theorem~{\rm \ref{minres}},
assuming that $|\cos\varphi|$ is uniformly away from zero, the
inexact RQI with MINRES asymptotically converges cubically:
\begin{equation}
\tan\phi_{k+1}\leq\frac{2\beta
(|\cos\varphi|+2\xi_k\beta)}{(1-\xi_k)|\cos\varphi|}\sin^3\phi_k
\label{cubic}
\end{equation}
if $\xi_k\leq\xi$ with a fixed $\xi$ not near one; it converges
quadratically:
\begin{equation}
\tan\phi_{k+1}\leq\eta\sin^2\phi_k \label{quadminres}
\end{equation}
if $\xi_k$ is near one and bounded by
\begin{equation}
\xi_k\leq 1-\frac{6\beta^2\sin\phi_k}{\eta|\cos\varphi|}
\label{minresquad}
\end{equation}
with $\eta$ a moderate constant; it converges linearly at least:
\begin{equation}
\tan\phi_{k+1}\leq\zeta\sin\phi_k \label{linminres}
\end{equation}
with a constant $\zeta<1$ independent of $k$ if $\xi_k$ is bounded
by
\begin{equation}
1-\frac{6\beta^2\sin\phi_k}{\eta|\cos\varphi|}<\xi_k\leq
1-\frac{6\beta^2\sin^2\phi_k}{\zeta|\cos\varphi|}. \label{minreslin}
\end{equation}
\end{theorem}
\begin{proof}
Based on Theorem~\ref{minres}, we have
\begin{eqnarray}
\cos\phi_k+\xi_k\cos\psi_k&=&1-\frac{1}{2}\sin^2\phi_k\pm\xi_k
(1-O(\sin^2\phi_k))\nonumber\\
&=&1\pm\xi_k+O(\sin^2\phi_k)\nonumber\\
&=&1\pm\xi_k\geq 1-\xi_k \label{denominres}
\end{eqnarray}
by dropping the higher order term $O(\sin^2\phi_k)$. Therefore,
the uniform positiveness condition holds provided that
$\xi_k\leq\xi$ with a fixed $\xi$ not near one. Combining
(\ref{bound1}) with \eqref{sinpsi} and (\ref{cospsi}), we get
$$
\tan\phi_{k+1}\leq 2\beta\frac{1+\xi_k \frac{2\beta}{|\cos\varphi|}}
{1-\xi_k}\sin^3\phi_k,
$$
which is just (\ref{cubic}) and shows the cubic convergence of the
inexact RQI with MINRES if $\xi_k\leq \xi$ with a fixed $\xi$ not
near one.
Next we prove the quadratic convergence result. Since $\xi_k<1$ and
$\beta\geq 1$, it follows from (\ref{sinpsi}) and (\ref{denominres})
that asymptotically
\begin{eqnarray*}
2\beta\frac{\sin \phi_k + \xi_k \sin \psi_k} {|\cos \phi_k + \xi_k
\cos \psi_k|}&<&2\beta\frac{(1+2\beta/|\cos\varphi|)\sin\phi_k}
{\cos\phi_k+\xi_k\cos\psi_k}\\
&\leq&\frac{6\beta^2\sin\phi_k}
{(\cos\phi_k+\xi_k\cos\psi_k)|\cos\varphi|}\\
&\leq&\frac{6\beta^2\sin\phi_k}{(1-\xi_k)|\cos\varphi|}.
\end{eqnarray*}
So from (\ref{bound1}) the inexact RQI with MINRES converges
quadratically and (\ref{quadminres}) holds if
$$
\frac{6\beta^2\sin\phi_k} {(1-\xi_k)|\cos\varphi|}\leq\eta
$$
for a moderate constant $\eta$ independent of $k$. Solving this
inequality for $\xi_k$ gives (\ref{minresquad}).
Finally, we prove the linear convergence result. Analogously, we
have
$$
2\beta\frac{\sin\phi_k + \xi_k \sin\psi_k}{|\cos \phi_k + \xi_k \cos
\psi_k|}\sin\phi_k
<\frac{6\beta^2\sin^2\phi_k}{(1-\xi_k)|\cos\varphi|}.
$$
So it follows from (\ref{bound1}) that the inexact RQI with MINRES
converges linearly at least and (\ref{linminres}) holds when
$$
\frac{6\beta^2\sin^2\phi_k}{(1-\xi_k)|\cos\varphi|}\leq \zeta<1
$$
with a constant $\zeta$ independent of $k$. Solving the above
inequality for $\xi_k$ gives (\ref{minreslin}).
\end{proof}
Theorem~\ref{minrescubic} presents the convergence results in terms
of the a priori uncomputable $\sin\phi_k$. We next derive their
counterparts in terms of the a posteriori computable $\|r_k\|$, so
that they are of as much practical value as possible and can be used
to best control inner-outer accuracy to achieve a desired
convergence rate.
\begin{theorem}\label{minres2}
With $\cos\varphi$ defined as in Theorem~{\rm \ref{minres}},
assuming that $|\cos\varphi|$ is uniformly away from zero, the
inexact RQI with MINRES asymptotically converges cubically:
\begin{equation}
\|r_{k+1}\|\leq\frac{16\beta^2
(|\cos\varphi|+2\xi_k\beta)}{(1-\xi_k)(\lambda_2-\lambda)^2|\cos\varphi|}
\|r_k\|^3 \label{cubicres}
\end{equation}
if $\xi_k\leq\xi$ with a fixed $\xi$ not near one; it converges
quadratically:
\begin{equation}
\|r_{k+1}\|\leq\frac{4\beta\eta}{|\lambda_2-\lambda|}\|r_k\|^2
\label{quadres}
\end{equation}
if $\xi_k$ is near one and bounded by
\begin{equation}
\xi_k\leq 1-\frac{6\beta\|r_k\|}{\eta
|\lambda_2-\lambda||\cos\varphi|} \label{quadracond}
\end{equation}
with $\eta$ a moderate constant; it converges linearly with a factor
$\zeta$ at least:
\begin{equation}
\|r_{k+1}\|\leq\zeta\|r_k\| \label{linresmono}
\end{equation}
if $\xi_k$ is bounded by
\begin{equation}
1-\frac{6\beta\|r_k\|}{\eta
|\lambda_2-\lambda||\cos\varphi|}<\xi_k\leq
1-\frac{48\beta^3\|r_k\|^2}{\zeta(\lambda_2-\lambda)^2|\cos\varphi|}
\label{linrescond}
\end{equation}
with a constant $\zeta<1$ independent of $k$.
\end{theorem}
\begin{proof}
From (\ref{parlett}) we have
$$
\frac{\|r_{k+1}\|}{\lambda_{\max}-\lambda_{\min}}\leq\sin\phi_{k+1}\leq\tan\phi_{k+1},\
\sin\phi_k\leq\frac{2\|r_k\|}{|\lambda_2-\lambda|}.
$$
So (\ref{cubicres}) is direct from (\ref{cubic}) by a simple
manipulation. Next we use
(\ref{parlett}) to write $\sin\phi_k=\frac{\|r_k\|}{C}$ with
$\frac{|\lambda_2-\lambda|}{2}\leq
C\leq\lambda_{\max}-\lambda_{\min}$. Note that
$$
2\beta\frac{\sin \phi_k + \xi_k \sin \psi_k} {|\cos \phi_k + \xi_k
\cos \psi_k|}<\frac{6\beta^2\sin\phi_k} {(1-\xi_k)|\cos\varphi|}=
\frac{6\beta^2\|r_k\|}{C(1-\xi_k)|\cos\varphi|}.
$$
Therefore, similarly to the proof of Theorem~\ref{minrescubic},
if
$$
\frac{6\beta^2\|r_k\|}{C(1-\xi_k)|\cos\varphi|}\leq \eta,
$$
the inexact RQI with MINRES converges quadratically.
Solving this inequality for $\xi_k$ gives
\begin{equation}
\xi_k\leq
1-\frac{6\beta^2\|r_k\|}{C\eta|\cos\varphi|}.\label{boundxi}
\end{equation}
Note that
\begin{eqnarray*}
1-\frac{6\beta^2\|r_k\|}{C\eta|\cos\varphi|} &\geq&
1-\frac{6\beta^2\|r_k\|}{\eta(\lambda_{\max}-\lambda_{\min})|\cos\varphi|}\\
&=&1-\frac{6\beta\|r_k\|} {\eta|\lambda_2-\lambda||\cos\varphi|}.
\end{eqnarray*}
So, if $\xi_k$ satisfies (\ref{quadracond}), then it satisfies
(\ref{boundxi}) too. Therefore, the inexact RQI with MINRES
converges quadratically if (\ref{quadracond}) holds. Furthermore,
from (\ref{parlett}) we have
$\frac{\|r_{k+1}\|}{\lambda_{\max}-\lambda_{\min}}\leq\sin\phi_{k+1}\leq\tan\phi_{k+1}$.
As a result, from (\ref{quadminres}) we obtain
\begin{eqnarray*}
\|r_{k+1}\|&\leq&(\lambda_{\max}-\lambda_{\min})\eta\frac{\|r_k\|^2}{C^2}\\
&\leq&\frac{4\beta\eta}{|\lambda_2-\lambda|}\|r_k\|^2,
\end{eqnarray*}
proving (\ref{quadres}).
In order to make $\|r_k\|$ monotonically converge to zero linearly,
by (\ref{cubicres}) we simply set
$$
\frac{16\beta^2
(|\cos\varphi|+2\xi_k\beta)\|r_k\|^2}{(1-\xi_k)(\lambda_2-\lambda)^2|\cos\varphi|}
\leq\frac{48\beta^3\|r_k\|^2}{(1-\xi_k)
(\lambda_2-\lambda)^2|\cos\varphi|}\leq\zeta<1
$$
with $\zeta$ independent of $k$. Solving it for $\xi_k$ gives
$$
\xi_k\leq
1-\frac{48\beta^3\|r_k\|^2}{\zeta(\lambda_2-\lambda)^2|\cos\varphi|}.
$$
Combining it with (\ref{quadracond}) proves (\ref{linresmono}) and
(\ref{linrescond}).
\end{proof}
First of all, we make a comment on Theorems~\ref{minrescubic}--\ref{minres2}.
As justified previously, assumption (\ref{assump1}) is of wide
generality. Note that
Theorems~\ref{ThmIRQIQuadraticConvergence}--\ref{resbound} hold as
we always have $\cos\phi_k+\xi_k\cos\psi_k\geq 1-\xi_k$
asymptotically, independent of $\varphi_k$. Therefore, in case
$|\cos\varphi|$ is occasionally near and even zero, the inexact RQI
with MINRES converges quadratically at least provided that
$\xi_k\leq \xi$ with a fixed $\xi$ not near one.
In order to judge cubic convergence quantitatively, we should rely on
Theorem~\ref{ThmIRQIQuadraticConvergence} and
Theorem~\ref{minrescubic} (equivalently, Theorem~\ref{resbound} and
Theorem~\ref{minres2}), in which cubic convergence precisely means
that the asymptotic convergence factor
\begin{equation}
\frac{\sin\phi_{k+1}}{\sin^3\phi_k}\leq2\beta \label{rate1}
\end{equation}
for RQI and the asymptotic convergence factor
\begin{equation}
\frac{\sin\phi_{k+1}}{\sin^3\phi_k}\leq\frac{2\beta
(|\cos\varphi|+2\xi_k\beta)}{(1-\xi_k)|\cos\varphi|}=
\frac{2\beta}{1-\xi_k}+\frac{4\xi_k\beta^2} {(1-\xi_k)|\cos\varphi|}
\label{rate2}
\end{equation}
for the inexact RQI with MINRES. The asymptotic factors in
(\ref{rate1})--(\ref{rate2}) do not affect the cubic convergence
rate itself, but a bigger asymptotic factor reduces the error reduction
at each outer iteration, so more outer iterations may be needed.
Some other comments are in order. First,
the bigger $\beta$ is, the bigger the asymptotic convergence factors
are, and the factor in
(\ref{rate2}) becomes considerably bigger than that in (\ref{rate1}) if $\xi_k$ is not near
zero. In this case, the cubic convergence of both RQI and the
inexact RQI may not be very visible; furthermore, RQI may need more
outer iterations, and the inexact RQI may need more outer iterations
than RQI. Second, if the factor in (\ref{rate1}) is not big, the
factor in (\ref{rate2}) differs little from it
provided $\xi_k\leq\xi<1$ with a fixed $\xi$ not near one, so that
the inexact RQI and RQI take (almost) the same number of outer iterations.
Third, it is worth noting that the factor in (\ref{rate2}) is an
estimate in the worst case, so it may often be conservative. Fourth,
as commented previously, the inexact RQI with MINRES may
behave more like quadratic in case $|\cos\varphi|$ is
occasionally very small or even zero.
Theorem~\ref{minres2} shows that $\|r_k\|$ can be used to control
$\xi_k$ in order to achieve a desired convergence rate. For cubic
convergence, at outer iteration $k$ we only need to solve the linear
system (\ref{EqERQILinearEquation1}) by MINRES with low accuracy
$\xi_k\leq\xi$. It is safe to do so with $\xi=0.1,0.5$ and even with
$\xi=0.8$. A smaller $\xi$ is not necessary and may cause considerable waste
at each outer iteration. Thus, we may save much computational cost,
compared with the inexact RQI with MINRES with decreasing tolerance
$\xi_k=O(\sin\phi_k)=O(\|r_k\|)$. Compared with Theorem~\ref{eshof},
another fundamental distinction is that the new quadratic
convergence results only require solving the linear system with
very little accuracy $\xi_k=1-O(\sin\phi_k)=1-O(\|r_k\|)\approx 1$
rather than with $\xi_k\leq\xi$ not near one. They indicate that the
inexact RQI with MINRES converges quadratically provided that
$\cos\phi_k+\xi_k\cos\psi_k\approx
1-\xi_k=O(\sin\phi_k)=O(\|r_k\|)$. The results also illustrate that
the method converges linearly provided that
$\xi_k=1-O(\sin^2\phi_k)=1-O(\|r_k\|^2)$. In this case, we have
$\cos\phi_k+\xi_k\cos\psi_k\approx
1-\xi_k=O(\sin^2\phi_k)=O(\|r_k\|^2)$. So $\xi_k$ can be
brought increasingly closer to one as the method converges when only quadratic
or linear convergence is required, and $\xi_k$ can be closer to one
for linear convergence than for quadratic convergence. These results
make it possible to design effective criteria on how to best control inner
tolerance $\xi_k$ in terms of the outer iteration accuracy $\|r_k\|$
to achieve a desired convergence rate. In addition, interestingly,
we comment that, instead of quadratic convergence as the authors of
\cite{mgs06} claimed, Table~1 in \cite{mgs06} actually showed the
same asymptotic cubic convergence of the inexact RQI with MINRES for
a fixed $\xi=0.1$ as that for decreasing tolerance
$\xi_k=O(\|r_k\|)$ in the sense of (\ref{rate1}) and (\ref{rate2}).
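To make the tolerance control above concrete, we include a minimal
sketch in Python with SciPy; it is an illustration only and not the
Matlab code used in our experiments in Section~\ref{testminres}. Here
{\sf minres} solves $(A-\theta_kI)w=u_k$ through its {\sf shift}
argument, the fixed input {\sf xi} plays the role of $\xi$, and the
tolerance keyword ({\sf rtol} in recent SciPy versions, {\sf tol} in
older ones) is an assumption about the installed version.
\begin{verbatim}
# A minimal sketch for a real symmetric A (illustration only).
import numpy as np
from scipy.sparse.linalg import minres

def inexact_rqi(A, u0, xi=0.5, tol=1e-14, maxit=50):
    u = u0 / np.linalg.norm(u0)
    normA1 = np.abs(A).sum(axis=0).max()       # ||A||_1
    for k in range(maxit):
        theta = u @ (A @ u)                    # Rayleigh quotient theta_k
        r = A @ u - theta * u                  # residual r_k
        if np.linalg.norm(r) <= normA1 * tol:  # outer stopping test
            break
        # inner solve of (A - theta_k I) w = u_k by MINRES with low
        # relative accuracy xi ('rtol' may be 'tol' in older SciPy)
        w, _ = minres(A, u, shift=theta, rtol=xi)
        u = w / np.linalg.norm(w)
    return theta, u
\end{verbatim}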
Below we estimate $\|w_{k+1}\|$ in (\ref{EqIRQILinearEquation1})
obtained by MINRES and establish a lower bound for it.
As a byproduct, we also present a simpler but weaker quadratic
convergence result. Note that the exact solution of
$(A-\theta_k I)w=u_k$ is
$w_{k+1}=(A-\theta_k I)^{-1}u_k$, which corresponds to $\xi_k=0$ in
(\ref{EqIRQIwk}). Therefore, with a reasonably good $u_k$,
from (\ref{EqIRQIwk}), (\ref{error1}) and (\ref{parlett}) we have
\begin{eqnarray*}
\|w_{k+1}\|&=&\frac{\cos\phi_k}{|\theta_k-\lambda|}+O(\sin\phi_k)\\
&\approx&\frac{1}{|\theta_k-\lambda|}=\|(A-\theta_kI)^{-1}\|\\
&=&O\left(\frac{1}{\sin^2\phi_k}\right)=O\left(\frac{1}{\|r_k\|^2}\right).
\end{eqnarray*}
\begin{theorem}\label{mimic}
Asymptotically we have
\begin{eqnarray}
\|w_{k+1}\|&\geq&\frac{(1-\xi_k)|\lambda_2-\lambda|}{4\beta\|r_k\|^2},\label{wk+1}\\
\|r_{k+1}\|&\leq&\sqrt{\frac{1+\xi_k}{1-\xi_k}}\frac{4\beta}{|\lambda_2-\lambda|}
\|r_k\|^2. \label{resrelation}
\end{eqnarray}
Thus, the inexact RQI
with MINRES converges at least quadratically provided that $\xi_k$ is
not near one.
\end{theorem}
\begin{proof}
By using (\ref{EqIRQIwk}), (\ref{error1}), (\ref{parlett}) and
(\ref{denominres}) in turn, we obtain
\begin{eqnarray*}
\|w_{k+1}\|&\geq&\frac{|\cos\phi_k+\xi_k\cos\psi_k|}{|\theta_k-\lambda|}\\
&\geq&\frac{|\cos\phi_k+\xi_k\cos\psi_k|}{(\lambda_{\max}-\lambda_{\min})\sin^2\phi_k}\\
&\geq&\frac{|\cos\phi_k+\xi_k\cos\psi_k||\lambda_2-\lambda|}{4\beta\|r_k\|^2}\\
&\geq&\frac{(1-\xi_k)|\lambda_2-\lambda|+O(\sin^2\phi_k)}{4\beta\|r_k\|^2}\\
&=&\frac{(1-\xi_k)|\lambda_2-\lambda|}{4\beta\|r_k\|^2}
\end{eqnarray*}
with the last equality holding asymptotically, since the $O(\sin^2\phi_k)$ term contributes only an $O(1)$ quantity, which is negligible relative to the $O(1/\|r_k\|^2)$ leading term. This
proves (\ref{wk+1}).
It follows from (\ref{minpro}) and $u_{k+1}=w_{k+1}/\|w_{k+1}\|$
that
\begin{equation}
\|(A-\theta_k I)u_{k+1}\|=\frac{\sqrt{1-\xi_k^2}}{\|w_{k+1}\|}.
\label{simoncini1}
\end{equation}
So from the optimality of the Rayleigh quotient we obtain
\begin{equation}
\|r_{k+1}\|=\|(A-\theta_{k+1}I)u_{k+1}\|\leq\|(A-\theta_kI)u_{k+1}\|
=\frac{\sqrt{1-\xi_k^2}}{\|w_{k+1}\|}.\label{simoncini2}
\end{equation}
Substituting (\ref{wk+1}) into it establishes (\ref{resrelation}).
\end{proof}
Simoncini and Eld\'{e}n \cite{SimonciniRQI} present an
important estimate on $\|w_{k+1}\|$:
\begin{equation}
\|w_{k+1}\|\geq\frac{|1-\varepsilon_m|\cos^3\phi_k}{\sin\phi_k}
\frac{1}{\|r_k\|},\label{simonwk+1}
\end{equation}
where $\varepsilon_m=p_m(\lambda-\theta_k)$ with $p_m$ the residual
polynomial of MINRES satisfying $p_m(0)=1$; see Proposition 5.3 there.
This relation involves $\varepsilon_m$ and is less easily interpreted
than (\ref{wk+1}). When $\varepsilon_m$ is not near one,
$\|w_{k+1}\|$ is bounded from below by a quantity of order $\frac{1}{\|r_k\|^2}$.
Based on this estimate, Simoncini and Eld\'{e}n have designed
a stopping criterion for inner iterations.
From (\ref{wk+1}) and Theorems~\ref{minrescubic}--\ref{minres2}, it
is instructive to observe the remarkable facts: $\|w_{k+1}\|$
increases as rapidly as $O(\frac{1}{\|r_k\|^2})$ and
$O(\frac{1}{\|r_k\|})$, respectively, if the inexact RQI with MINRES
converges cubically and quadratically; but it is $O(1)$ if the
method converges linearly. As (\ref{wk+1}) is sharp, the size of
$\|w_{k+1}\|$ can reveal both cubic and quadratic convergence.
We can control $\|w_{k+1}\|$ to make the method converge
cubically or quadratically, as done in
\cite{SimonciniRQI}. However, it is unlikely that this can be done for linear
convergence, as the method may either converge linearly or fail to converge when
$\|w_{k+1}\|$ remains $O(1)$.
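As a hedged, purely practical illustration of this observation (a
diagnostic we suggest here, not one taken from \cite{SimonciniRQI}),
one can record $\|w_{k+1}\|$ and $\|r_k\|$ along the iteration and
inspect the products below: near-constant $\|w_{k+1}\|\|r_k\|^2$ is
consistent with cubic convergence, near-constant
$\|w_{k+1}\|\|r_k\|$ with quadratic convergence, and
$\|w_{k+1}\|=O(1)$ with at best linear convergence.
\begin{verbatim}
# Hedged diagnostic sketch based on (wk+1): classify the observed
# convergence from recorded norms ||w_{k+1}|| and ||r_k||.
def convergence_signature(w_norms, r_norms):
    # triples (w*r^2, w*r, w); the component that stays nearly
    # constant indicates cubic / quadratic / linear behavior
    return [(w * r * r, w * r, w) for w, r in zip(w_norms, r_norms)]
\end{verbatim}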
Another main result of Simoncini and
Eld\'{e}n~\cite{SimonciniRQI} is Proposition 5.3 there:
\begin{equation}
\|r_{k+1}\|\leq\frac{\sin\phi_k}{\cos^3\phi_k}\frac{\sqrt{1-\xi_k^2}}
{|1-\varepsilon_m|}\|r_k\|. \label{simoncini3}
\end{equation}
Note $\sin\phi_k=O(\|r_k\|)$. The above result means quadratic
convergence if $\frac{\sqrt{1-\xi_k^2}} {|1-\varepsilon_m|}$ is
moderate, which is the case if $\xi_k$ and $\varepsilon_m$ are not
near one. We refer to \cite{paige,xueelman} for discussions on
$\varepsilon_m$. Since how $\xi_k$ and $\varepsilon_m$ affect each
other is complicated, (\ref{resrelation}) is simpler and more easily
understandable than (\ref{simoncini3}). However, both
(\ref{resrelation}) and (\ref{simoncini3}) are weaker than
Theorems~\ref{minrescubic}--\ref{minres2} since quadratic
convergence requires $\xi_k<1$ not near one and the cubic
convergence of RQI and of the inexact RQI cannot be recovered when
$\xi_k=0$ and $\xi_k=O(\|r_k\|)$, respectively.
\section{The inexact RQI with a tuned preconditioned
MINRES}\label{precondit}
We have found that even for $\xi_k$ near one we may still need quite
a few inner iteration steps at each outer iteration. This is especially
the case for difficult problems, i.e., those with big $\beta$'s, or for computing
an interior eigenvalue $\lambda$, since the latter
leads to a highly indefinite Hermitian matrix $(A-\theta_k I)$
at each outer iteration. So, in order to improve the
overall performance, preconditioning may be necessary to speed up
MINRES. Some preconditioning techniques have been proposed in, e.g.,
\cite{mgs06,SimonciniRQI}. In the unpreconditioned case, the
right-hand side $u_k$ of (\ref{EqERQILinearEquation1}) is rich in
the direction of the desired $x$. MINRES can benefit much from this
property when solving the linear system. Actually,
if the right-hand side is an eigenvector of the coefficient
matrix, Krylov subspace type methods will find the exact solution in one step.
However, a usual preconditioner loses this important
property, so that inner iteration steps may not be reduced
\cite{mgs06,freitagspence,freitag08b}. A preconditioner with tuning
can recover this property and meanwhile attempts to improve the
conditioning of the preconditioned linear system, so that considerable
improvement over a usual preconditioner is possible
\cite{freitagspence,freitag08b,xueelman}. In this section we show how to extend
our theory to the inexact RQI with a tuned preconditioned MINRES.
Let $Q=LL^*$ be a Cholesky factorization of some Hermitian positive
definite matrix which is an approximation to $A-\theta_k I$ in some
sense \cite{mgs06,freitag08b,xueelman}. A tuned preconditioner
${\cal Q}={\cal L}{\cal L}^*$ can be constructed by adding a rank-1
or rank-2 modification to $Q$, so that
\begin{equation}
{\cal Q}u_k=Au_k; \label{tune}
\end{equation}
see \cite{freitagspence,freitag08b,xueelman} for details. Using
the tuned preconditioner ${\cal Q}$, the shifted inner linear system
(\ref{EqERQILinearEquation1}) is equivalently transformed to
the preconditioned one
\begin{equation}
B\hat{w}={\cal L}^{-1}(A-\theta_k I){\cal L}^{-*}\hat{w}={\cal
L}^{-1}u_k \label{precond}
\end{equation}
with the original $w={\cal L}^{-*}\hat{w}$. Once MINRES is used to
solve it, we are led to the inexact RQI with a tuned preconditioned
MINRES. A strength of the tuned preconditioner ${\cal Q}$ is that the
right-hand side ${\cal L}^{-1}u_k$ is rich in the eigenvector of $B$
associated with its smallest eigenvalue and has the same quality as
$u_k$ as an approximation to the eigenvector $x$ of $A$, while for the
usual preconditioner $Q$ the right-hand side $L^{-1}u_k$ does not
possess this property.
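For concreteness, we add a minimal sketch of one standard rank-1
tuning in the real symmetric case; it enforces (\ref{tune}) exactly
but is an illustration only: it assumes $v^{T}u_k\neq 0$ for
$v=(A-Q)u_k$, and positive definiteness of the tuned ${\cal Q}$ is not
guaranteed by this construction alone, which is one reason rank-2
modifications are considered in \cite{freitagspence,freitag08b,xueelman}.
\begin{verbatim}
# Hedged sketch of a rank-1 tuned preconditioner (real symmetric case):
# with v = (A - Q) u,  Q_tuned u = Q u + v (v.T u)/(v.T u) = A u.
import numpy as np

def tune_rank1(Q, A, u):
    v = A @ u - Q @ u
    denom = v @ u       # assumed nonzero; otherwise use a rank-2 tuning
    return Q + np.outer(v, v) / denom
\end{verbatim}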
Take the zero vector as an initial guess to the solution of
(\ref{precond}) and let $\hat{w}_{k+1}$ be the approximate solution
obtained by the $m$-step MINRES applied to it. Then we have
\begin{equation}
{\cal L}^{-1}(A-\theta_k I){\cal L}^{-*}\hat{w}_{k+1}={\cal
L}^{-1}u_k+\hat{\xi}_k\hat{d}_k, \label{preresidual}
\end{equation}
where $\hat{w}_{k+1}\in {\cal K}_m(B,{\cal L}^{-1}u_k)$,
$\hat{\xi}_k\hat{d}_k$ with $\|\hat{d}_k\|=1$ is the residual and
$\hat{d}_k$ is the residual direction vector. Trivially, for any
$m$, we have $\hat{\xi}_k\leq\|{\cal L}^{-1}u_k\|$. Keep in mind
that $w_{k+1}={\cal L}^{-*}\hat{w}_{k+1}$. We then get
\begin{equation}
(A-\theta_k I)w_{k+1}=u_k+\hat{\xi}_k{\cal
L}\hat{d}_k=u_k+\hat{\xi}_k\|{\cal L}\hat{d}_k\|\frac{{\cal
L}\hat{d}_k}{\|{\cal L}\hat{d}_k\|}.\label{orig}
\end{equation}
So $\xi_k$ and $d_k$ in (\ref{EqIRQILinearEquation1}) are
$\hat{\xi}_k\|{\cal L}\hat{d}_k\|$ and $\frac{{\cal
L}\hat{d}_k}{\|{\cal L}\hat{d}_k\|}$, respectively, and our general
Theorems~\ref{ThmIRQIQuadraticConvergence}--\ref{resbound} apply
verbatim and are not repeated here. In practice, we require that the tuned
preconditioned MINRES solves the inner linear system with
$\hat{\xi}_k<1/\|{\cal L}\hat{d}_k\|$ such that $\xi_k<1$.
How to extend Theorem~\ref{minres} to the preconditioned case is
nontrivial and needs some work. Let $(\mu_i,y_i),\ i=1,2,\ldots,n$
be the eigenpairs of $B$ with
$$
|\mu_1|<|\mu_2|\leq\cdots\leq|\mu_n|.
$$
Define $\hat{u}_k={\cal L}^{-1}u_k/\|{\cal L}^{-1}u_k\|$. Similar to
{\rm (\ref{EqIRQIDecompositoinOfu_k})} and {\rm
(\ref{EqIRQIDecompositoinOfd_k})}, let
\begin{eqnarray}
\hat{u}_k&=&y_1\cos\hat{\phi}_k+\hat{e}_k\sin\hat{\phi}_k,\ \hat{e}_k\perp y_1,
\ \|\hat{e}_k\|=1, \label{predec1}\\
\hat{d}_k&=&y_1\cos\hat{\psi}_k+\hat{f}_k\sin\hat{\psi}_k, \
\hat{f}_k\perp y_1, \ \|\hat{f}_k\|=1 \label{predec2}
\end{eqnarray}
be the orthogonal direct sum decompositions of $\hat{u}_k$ and
$\hat{d}_k$. Then it is known
\cite{freitag08b} that
\begin{eqnarray}
|\mu_1|&=&O(\sin\phi_k),\label{mu1}\\
\sin\hat{\phi}_k&\leq& c_1\sin\phi_k \label{phihat}
\end{eqnarray}
with $c_1$ a constant.
Following the line of proof of Theorem~\ref{minres} and combining
it with (\ref{phihat}), the following results can be proved directly.
\begin{lemma}\label{minres4}
For the tuned preconditioned MINRES, let the unit length vectors $\hat{e}_k$ and
$\hat{f}_k$ be as in {\rm (\ref{predec1})} and {\rm
(\ref{predec2})}, define the angle
$\hat{\varphi}_k=\angle(\hat{f}_k, B\hat{e}_k)$ and assume that
\begin{equation}
|\cos\hat{\varphi}_k|\geq|\cos\hat{\varphi}|>0 \label{passump1}
\end{equation}
holds uniformly for an angle $\hat\varphi$ away from $\frac{\pi}{2}$
independent of $k$. Then we have
\begin{eqnarray}
\sin\hat{\psi}_k&\leq&\frac{c_1|\mu_n|}{|\mu_2||\cos\hat\varphi|}
\sin\phi_k,\label{psinpsi}\\
\cos\hat{\psi}_k&=&\pm (1-O(\sin^2\phi_k)), \label{pcospsi}\\
\hat{d}_k&=&\pm y_1+O(\sin\phi_k).\label{pdkerror}
\end{eqnarray}
\end{lemma}
We can make a qualitative analysis, similar to that done for
Theorem~\ref{minres} in the unpreconditioned case, and justify the
assumption on $\hat{\varphi}_k$. With this lemma, we now estimate
$\sin\psi_k$ and $\cos\psi_k$.
\begin{theorem}\label{pminres}
For the tuned preconditioned MINRES, under the assumptions of
Theorem~\ref{minres2}, it holds that
\begin{eqnarray}
\sin\psi_k&=&O(\sin\phi_k),\\
\cos\psi_k&=&\pm (1-O(\sin^2\phi_k)),\\
d_k&=&\pm x+O(\sin\phi_k).
\end{eqnarray}
\end{theorem}
\begin{proof}
By definition, we have
$$
{\cal L}^{-1}(A-\theta_k I){\cal L}^{-*}y_1=\mu_1y_1,
$$
from which it follows that
$$
(A-\theta_kI)\tilde{u}_k=\mu_1\frac{{\cal L}y_1}{\|{\cal L}^{-*}y_1\|}
$$
with $\tilde{u}_k={\cal L}^{-*}y_1/\|{\cal L}^{-*}y_1\|$. Therefore,
by standard perturbation theory and (\ref{mu1}), we get
$$
\sin\angle(\tilde{u}_k,x)=O\left(|\mu_1|\frac{\|{\cal L}y_1\|}
{\|{\cal L}^{-*}y_1\|}\right)=O(|\mu_1|)=O(\sin\phi_k).
$$
Since
$$
\angle(\tilde{u}_k,u_k)\leq\angle(\tilde{u}_k,x)+\angle(u_k,x),
$$
we get
$$
\sin\angle(\tilde{u}_k,u_k)=O(\sin\phi_k),
$$
i.e.,
$$
\tilde{u}_k=u_k+O(\sin\phi_k).
$$
As a result, from $Au_k={\cal Q}u_k$ and $\tilde{u}_k={\cal
L}^{-*}y_1/\|{\cal L}^{-*}y_1\|$ we have
\begin{eqnarray*}
{\cal L}y_1&=&\|{\cal L}^{-*}y_1\|{\cal L}{\cal L}^{*}\tilde{u}_k
=\|{\cal L}^{-*}y_1\|{\cal Q}\tilde{u}_k\\
&=&\|{\cal L}^{-*}y_1\|{\cal Q}
(u_k+O(\sin\phi_k))\\
&=&\|{\cal L}^{-*}y_1\|Au_k+O(\sin\phi_k)\\
&=&\|{\cal L}^{-*}y_1\|(\theta_ku_k+r_k)+O(\sin\phi_k)\\
&=&\theta_k\|{\cal L}^{-*}y_1\|u_k+\|{\cal L}^{-*}y_1\|r_k+O(\sin\phi_k),
\end{eqnarray*}
from which we obtain
$$
\sin\angle({\cal L}y_1,u_k)=O(\|r_k\|)+O(\sin\phi_k)=O(\sin\phi_k).
$$
Therefore, we have
\begin{equation}
\sin\angle({\cal L}y_1,x)\leq\sin\angle({\cal L}y_1,u_k)+
\sin\angle(u_k,x)=O(\sin\phi_k),\label{ly1}
\end{equation}
that is, the (unnormalized) ${\cal L}y_1$ has the same quality as
$u_k$ as an approximation to $x$. We have the orthogonal direct sum
decomposition
\begin{equation}
{\cal L}y_1=\|{\cal L}y_1\|\left(x\cos\angle({\cal
L}y_1,x)+g_k\sin\angle({\cal L}y_1,x)\right) \label{decly1}
\end{equation}
with $\|g_k\|=1$ and $g_k\perp x$. Also, make the orthogonal direct
sum decomposition
\begin{equation}
{\cal L}\hat{f}_k=\|{\cal L}\hat{f}_k\|\left(x\cos\angle({\cal
L}\hat{f}_k,x)+h_k\sin\angle({\cal
L}\hat{f}_k,x)\right)\label{declfk}
\end{equation}
with $\|h_k\|=1$ and $h_k\perp x$. Making use of (\ref{psinpsi}) and
(\ref{pcospsi}), we get from (\ref{predec2}) that
\begin{eqnarray*}
\|{\cal L}\hat{d}_k\|^2&=&\|{\cal L}y_1\|^2\cos^2\hat{\psi}_k
+\|{\cal L}\hat{f}_k\|^2\sin^2\hat{\psi}_k
+2Re({\cal L}y_1,{\cal L}\hat{f}_k)\sin\hat{\psi}_k\cos\hat{\psi}_k\\
&=&\|{\cal L}y_1\|^2+O(\sin^2\phi_k)+O(\sin\phi_k)\\
&=&\|{\cal L}y_1\|^2+O(\sin\phi_k).
\end{eqnarray*}
Hence we have
\begin{equation}
\|{\cal L}y_1\|=\|{\cal L}\hat{d}_k\|+O(\sin\phi_k).\label{length}
\end{equation}
Now, from (\ref{predec2}), we obtain
$$
d_k=\frac{{\cal L}\hat{d}_k}{\|{\cal L}\hat{d}_k\|}
=\frac{1}{\|{\cal L}\hat{d}_k\|}({\cal L}y_1\cos\hat{\psi}_k +{\cal
L}\hat{f}_k\sin\hat{\psi}_k).
$$
Substituting (\ref{decly1}) and
(\ref{declfk}) into the above relation yields
\begin{eqnarray*}
d_k&=&\frac{1}{\|{\cal L}\hat{d}_k\|}((\|{\cal
L}y_1\|\cos\angle({\cal L}y_1,x)\cos\hat{\psi}_k+\|{\cal
L}\hat{f}_k\|\cos\angle({\cal
L}\hat{f}_k,x)\sin\hat{\psi}_k)x\\
&&+(\|{\cal L}y_1\|g_k\sin\angle({\cal
L}y_1,x)\cos\hat{\psi}_k+\|{\cal L}\hat{f}_k\|h_k\sin\angle({\cal
L}\hat{f}_k,x)\sin\hat{\psi}_k)).
\end{eqnarray*}
This is the orthogonal direct sum decomposition of $d_k$.
Making use of $\sin\angle({\cal L}y_1,x)=O(\sin\phi_k)$,
$\sin\hat{\psi}_k=O(\sin\phi_k)$ and (\ref{length}) and comparing with
$d_k=x\cos\psi_k+e_k\sin\psi_k$, we get
$$
\sin\psi_k=O(\sin\phi_k).
$$
Thus, it holds that $\cos\psi_k=\pm (1-O(\sin^2\phi_k))$ and
$d_k=\pm x+O(\sin\phi_k)$.
\end{proof}
Compared with Theorem~\ref{minres}, this theorem indicates that the
directions of residuals
obtained by the unpreconditioned MINRES and the tuned preconditioned
MINRES have the same properties. With the theorem, we can
write $\sin\psi_k\leq c_2\sin\phi_k$ with $c_2$ a constant. Then, based on
Theorems~\ref{ThmIRQIQuadraticConvergence}--\ref{resbound}, it is
direct to extend Theorems~\ref{minrescubic}--\ref{minres2} to the
inexact RQI with the tuned preconditioned MINRES, respectively. We
have made preliminary experiments and confirmed the theory. Since
our main concern in this paper is the convergence theory, and the
pursuit of effective tuned preconditioners is beyond the scope of
the current paper, we only report numerical experiments on the
inexact RQI with the unpreconditioned MINRES in the next section.
\section{Numerical experiments}\label{testminres}
Throughout the paper, we perform numerical experiments on an
Intel(R) Core(TM)2 Quad CPU Q9400 $2.66$GHz with 2 GB main memory using
Matlab 7.8.0 with the machine precision $\epsilon_{\rm
mach}=2.22\times 10^{-16}$ under the Microsoft Windows XP operating
system.
We report numerical experiments on four symmetric (Hermitian)
matrices: BCSPWR08 of order 1624, CAN1054 of order 1054, DWT2680 of
order 3025 and LSHP3466 of order 3466 \cite{duff}. Note that the
bigger
$\beta=\frac{\lambda_{\max}-\lambda_{\min}}{|\lambda_2-\lambda|}$
is, the worse conditioned $x$ is. For a bigger $\beta$,
Theorem~\ref{ThmIRQIQuadraticConvergence} and
Theorems~\ref{minrescubic}--\ref{minres2} show that RQI and the
inexact RQI with MINRES may converge more slowly and use more outer
iterations though they can still converge cubically.
If an interior eigenpair is required, the shifted linear systems can
be highly indefinite (i.e., many positive and negative eigenvalues)
and may be hard to solve. As a reference, we use the Matlab function
{\sf eig.m} to compute $\beta$. To better illustrate the theory, we
compute both exterior and interior eigenpairs. We compute the
smallest eigenpair of BCSPWR08, the tenth smallest eigenpair of
CAN1054, the largest eigenpair of DWT2680 and the twentieth smallest
eigenpair of LSHP3466, respectively.
Remembering that the (asymptotic) cubic convergence of the inexact RQI for
$\xi_k=O(\|r_k\|)$ is independent of iterative solvers,
in the experiments we take
\begin{equation}
\xi_k=\min\{0.1,\frac{\|r_k\|}{\|A\|_1}\}.\label{decrease}
\end{equation}
We first test the inexact RQI with MINRES for $\xi_k\leq\xi<1$ with
a few constants $\xi$ not near one and illustrate its cubic
convergence. For each matrix we construct the same initial $u_0$,
taken as $x$ plus a reasonably small random perturbation drawn
from a uniform distribution, such that
$|\lambda-\theta_0|<\frac{|\lambda-\lambda_2|}{2}$. The algorithm
stops whenever $\|r_k\|=\|(A-\theta_k I)u_k\|\leq\|A\|_1tol$, and we
take $tol=10^{-14}$ unless stated otherwise. In experiments, we use
the Matlab function {\sf minres.m} to solve the inner linear
systems. Tables~\ref{BCSPWR08minres}--\ref{lshpminres} list the
computed results, where $iters$ denotes the number of total inner
iteration steps and $iter^{(k-1)}$ is the number of inner iteration
steps when computing $(\theta_k,u_k)$, the ``-'' indicates that MINRES
stagnates and stops at the $iter^{(k-1)}$-th step, and $res^{(k-1)}$
is the actual relative residual norm of the inner linear system when
computing $(\theta_k,u_k)$. Clearly, $iters$ is a reasonable measure
of the overall efficiency of the inexact RQI with MINRES. We comment
that in {\sf minres.m} the output $iter^{(k-1)}=m-1$, where $m$ is
the number of steps of the Lanczos process.
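For readers reproducing the experiments outside Matlab, the following
hedged sketch shows how $iter^{(k-1)}$ can be counted with SciPy's
{\sf minres} via its callback, together with the rule
(\ref{decrease}); the counting convention and the tolerance keyword
are assumptions about the SciPy version and need not match the
{\sf minres.m} output exactly.
\begin{verbatim}
# Hedged sketch: count inner MINRES steps per outer iteration
# (a SciPy analogue of iter^{(k-1)}; not the minres.m output).
from scipy.sparse.linalg import minres

def solve_inner_counted(A, theta, u, xi):
    steps = [0]
    w, _ = minres(A, u, shift=theta, rtol=xi,
                  callback=lambda xk: steps.__setitem__(0, steps[0] + 1))
    return w, steps[0]

def xi_decrease(rnorm, normA1):
    return min(0.1, rnorm / normA1)   # the rule (decrease)
\end{verbatim}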
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
$\xi_{k-1}\leq\xi$&$k$&$\|r_k\|$&$\sin\phi_k$&$res^{(k-1)}$
&$iter^{(k-1)}$&$iters$\\\hline
0 (RQI)&1& 0.0092&0.0025& & & \\
& 2&$4.4e-8$&$5.0e-8$& & & \\
& 3&$1.0e-15$&$2.2e-15$ & & &\\\hline
$\frac{\|r_{k-1}\|}{\|A\|_1}$&1 &0.0096&0.0036&0.0423&6&126\\
&2&$8.4e-8$&$1.3e-7$&$5.5e-4$&37&\\
&3&$9.4e-15$&$3.2e-15$&-&83&\\\hline
0.1&1&0.0105&0.0049&0.0707&5&68\\
&2&$2.9e-6$&$2.4e-6$&0.0784&21&\\
&3&$1.3e-13$&$2.7e-13$&0.0863&42&\\\hline
0.5&1&0.0218&0.0111&0.2503&3&88\\
&2&$8.7e-5$&$1.9e-4$&0.4190&11&\\
&3&$6.3e-9$&$2.1e-8$&0.4280&31&\\
&4&$1.1e-14$&$3.3e-15$&-&43&\\\hline
$1-\frac{c_1\|r_{k-1}\|}{\|A\|_1}$
&1&0.1409&0.0363&0.8824&1&75\\
$tol=10^{-13}$&2&0.0068&0.2274&0.9227&3&\\
&3&$1.3e-4$&$5.4e-4$&0.9284&11&\\
&4&$3.5e-7$&$1.1e-6$&0.9845&19&\\
&5&$3.4e-11$&$1.6e-11$&$1-3.7\times 10^{-5}$&30&\\
&6&$1.3e-13$&$4.6e-13$&$1-2.2\times 10^{-8}$&11&\\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|}\hline
$\xi_{k-1}$& $k \ (iter^{(k-1)})$& $iters$\\\hline
$1-\left(\frac{c_2\|r_{k-1}\|}{\|A\|_1}\right)^2$& &\\
$tol=10^{-10}$& 1 (1); 2 (3); 3 (9); 4 (13); 5 (15); 6 (16) &62
\\\hline
\end{tabular}
\caption{BCSPWR08, $\beta=40.19,\ \sin\phi_0=0.1134$,
$c_1=c_2=1000$. $k \ (iter^{(k-1)})$ denotes the number of inner
iteration steps used by MINRES when computing
$(\theta_k,u_k)$.}\label{BCSPWR08minres}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
$\xi_{k-1}\leq\xi$&$k$&$\|r_k\|$&$\sin\phi_k$&$res^{(k-1)}$&
$iter^{(k-1)}$&$iters$\\\hline
0 (RQI)&1&0.0139&0.038& & & \\
& 2&$6.1e-6$&$5.0e-5$& & & \\
& 3&$1.4e-14$&$3.6e-13$& & &\\\hline
$\frac{\|r_{k-1}\|}{\|A\|_1}$&1 &0.0196&0.0110&0.0387&12&606\\
&2&$6.7e-7$&$1.5e-5$&$3.5e-4$&184&\\
&3&$8.7e-14$&$4.7e-13$&-&410&\\\hline
0.1&1&0.0231&0.0155&0.0943&6&394\\
&2&$2.8e-7$&$8.4e-7$&0.0816&178&\\
&3&$7.6e-14$&$4.7e-13$&-&210&\\\hline
0.5&1&0.0646&0.0253&0.3715&3&408\\
&2&$6.0e-4$&$0.0071$&0.4757&37&\\
&3&$1.2e-7$&$1.6e-7$&0.4636&165&\\
&4&$4.5e-14$&$4.8e-13$&-&203&\\\hline
$1-\frac{c_1\|r_{k-1}\|}{\|A\|_1}$
&1&0.2331&0.0541&0.8325&1&536\\
$tol=10^{-12}$&2&0.0202&0.0161&0.90551&4&\\
&3&$2.5e-4$&0.0044&0.9469&64&\\
&4&$2.1e-6$&$3.2e-5$&0.9913&149&\\
&5&$3.1e-9$&$1.8e-8$&0.9999&155&\\
&6&$2.5e-12$&$2.8e-11$&$1-1.6\times 10^{-7}$&163&\\\hline
\end{tabular}
\begin{tabular}{|c|c|c|}\hline
$\xi_{k-1}$& $k \ (iter^{(k-1)})$& $iters$\\\hline
$1-\left(\frac{c_2\|r_{k-1}\|}{\|A\|_1}\right)^2$& &\\
$tol=10^{-10}$& 1 (1); 2 (3); 3 (16); 4 (172); 5 (143) &335
\\\hline
\end{tabular}
\caption{CAN1054, $\beta=88.28,\ \sin\phi_0=0.1137$, $c_1=c_2=1000$.
$k \ (iter^{(k-1)})$ denotes the number of inner iteration steps
used by MINRES when computing
$(\theta_k,u_k)$.}\label{can1054minres}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
$\xi_{k-1}\leq\xi$&$k$&$\|r_k\|$&$\sin\phi_k$&$res^{(k-1)}$
&$iter^{(k-1)}$ &$iters$\\\hline
0 (RQI)&1&0.0084&0.1493& & & \\
& 2&$1.6e-4$&0.0047& & & \\
& 3&$3.2e-9$&$1.1e-7$ & & &\\
&4&$2.0e-15$&$1.1e-13$ & & &\\\hline
$\frac{\|r_{k-1}\|}{\|A\|_1}$&1 &0.0048&0.0409&0.0755&13&595\\
&2&$3.1e-6$&$1.1e-4$&$6.0e-4$&115&\\
&3&$7.6e-14$&$1.3e-12$&-&192&\\
&4&$1.1e-14$&$2.1e-13$&-&275&\\\hline
0.1&1&0.0054&0.0492&0.0966&10&313\\
&2&$1.3e-5$&$2.9e-4$&0.0959&50&\\
&3&$2.7e-10$&$6.8e-9$&0.0967&114&\\
&4&$7.0e-13$&$2.2e-13$&-&139&\\ \hline
0.5&1&0.0228&0.0705&0.4717&3&297\\
&2&$3.9e-4$&$0.0136$&0.4918&28&\\
&3&$2.6e-6$&$1.3e-4$&0.4681&42&\\
&4&$1.2e-10$&$2.9e-9$&0.4579&109&\\
&5&$6.2e-14$&$3.3e-13$&-&115&\\\hline
$1-\frac{c_1\|r_{k-1}\|}{\|A\|_1}$
&1&0.1031&0.0800&0.9309&1&244\\
&2&0.0081&0.0617&0.9369&5&\\
&3&$8.0e-4$&$0.0025$&0.9470&17&\\
&4&$3.7e-6$&$5.1e-4$&0.9878&31&\\
&5&$3.6e-8$&$7.5e-7$&$1-5.6\times 10^{-4}$&77&\\
&6&$1.4e-12$&$4.5e-11$&$1-5.9\times 10^{-5}$&113&\\\hline
\end{tabular}
\begin{tabular}{|c|c|c|}\hline
$\xi_{k-1}$& $k \ (iter^{(k-1)})$& $iters$\\\hline
$1-\left(\frac{c_2\|r_{k-1}\|}{\|A\|_1}\right)^2$&1 (1); 2 (3); 3
(9);
4 (18);5 (22) &\\
$tol=10^{-9}$& 6 (31); 7 (34); 8 (25); 9 (26); 10 (26) &195
\\\hline
\end{tabular}
\caption{DWT2680, $tol=10^{-12}$, $\beta=2295.6,\
\sin\phi_0=0.1133$, $c_1=10000,\ c_2=1000$. $k \ (iter^{(k-1)})$
denotes the number of inner iteration steps used by MINRES when
computing $(\theta_k,u_k)$.}\label{dwtminres}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
$\xi_{k-1}\leq\xi$&$k$&$\|r_k\|$&$\sin\phi_k$&$res^{(k-1)}$&
$iter^{(k-1)}$&$iters$\\\hline
0 (RQI)&1&0.0111&0.1096& & & \\
& 2&$1.0e-4$&0.0016& & & \\
& 3&$2.3e-10$&$9.0e-8$ & & &\\
&4&$2.0e-15$&$6.5e-13$ & & &\\\hline
$\frac{\|r_{k-1}\|}{\|A\|_1}$&1 &0.0100&0.0123&0.0998&5&2692\\
&2&$8.2e-7$&$8.9e-5$&$1.4e-3$&697&\\
&3&$1.3e-13$&$9.1e-13$&-&1110&\\
&4&$4.2e-14$&$6.5e-13$&-&880&\\\hline
0.1&1&0.0100&0.0077&0.09898&5&1902\\
&2&$6.3e-6$&0.0018&0.0996&223&\\
&3&$3.6e-10$&$2.9e-8$&0.0975&790&\\
&4&$1.9e-13$&$6.5e-13$&-&862&\\ \hline
0.5&1&0.0353&0.0270&0.4302&2&1710\\
&2&$3.8e-4$&$0.0064$&0.4838&14&\\
&3&$3.6e-7$&$2.5e-4$&0.4938&119&\\
&4&$3.1e-11$&$1.8e-9$&0.4794&767&\\
&5&$1.5e-13$&$6.5e-13$&-&808&\\\hline
$1-\frac{c_1\|r_{k-1}\|}{\|A\|_1}$
&1&0.0795&0.0369&0.7382&1&1967\\
$tol=10^{-12}$&2&0.0045&0.0117&0.9037&5&\\
&3&$9.2e-5$&$0.0039$&0.9454&35&\\
&4&$3.3e-7$&$0.0002$&0.9863&611&\\
&5&$4.9e-9$&$2.1e-6$&$1-4.9\times 10^{-5}$&627&\\
&6&$5.6e-12$&$1.0e-9$&$1-7.8\times 10^{-7}$&688&\\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|}\hline
$\xi_{k-1}$& $k \ (iter^{(k-1)})$& $iters$\\\hline
$1-\left(\frac{c_2\|r_{k-1}\|}{\|A\|_1}\right)^2$&1 (1); 2 (3); 3
(8);
4 (84);5 (631) &\\
$tol=10^{-10}$& 6 (655); 7 (203); 8 (496) &2081
\\\hline
\end{tabular}
\caption{LSHP3466, $tol=10^{-13}$, $\beta=2613.1,\
\sin\phi_0=0.1011$, $c_1=c_2=1000$. $k \ (iter^{(k-1)})$ denotes the
number of inner iteration steps used by MINRES when computing
$(\theta_k,u_k)$.}\label{lshpminres}
\end{center}
\end{table}
Before explaining our experiments, we should recall that in finite
precision arithmetic $\|r_k\|/\|A\|_1$ cannot decrease further
whenever it reaches a moderate multiple of $\epsilon_{\rm
mach}=2.2\times 10^{-16}$. Therefore, assuming that the algorithm
stops at outer iteration $k$, if $\sin\phi_{k-1}$ or $\|r_{k-1}\|$
is at the level of $10^{-6}$ or $10^{-9}$, then the algorithm may
not continue converging cubically or quadratically at the final
outer iteration $k$.
To judge cubic convergence, we again stress that we should rely on
(\ref{rate1}) and (\ref{rate2}) for RQI and the inexact RQI with
MINRES, respectively. We observe from the tables that the inexact RQI
with MINRES for $\xi_k\leq\xi<1$ with $\xi$ fixed not near one converges
cubically and behaves like RQI and the inexact RQI with MINRES with
decreasing tolerance $\xi_k=(\|r_k\|)$; it uses (almost) the same
outer iterations as the latter two do. The results clearly indicate
that cubic convergence is generally insensitive to $\xi$ provided $\xi$
is not near one. Furthermore, we see that the algorithm with a fixed $\xi$
not near one is much more efficient than the algorithm with
$\xi_k=O(\|r_k\|)$ and is generally about one and a half to twice as
fast as the latter. Since $(A-\theta_k I)w=u_k$ becomes increasingly
ill conditioned as $k$ increases, we need more inner iteration steps
to solve the inner linear system with the same accuracy $\xi$,
though the right-hand side $u_k$ is richer in the direction of $x$
as $k$ increases. For $\xi_k=O(\|r_k\|)$, the number of inner iteration steps
needed can be much larger than that for a fixed $\xi$ not near one at
each outer iteration as $k$ increases. We refer to \cite{mgs06} for
a descriptive analysis.
For the above numerical tests, we pay special attention to the ill
conditioned DWT2680 and LSHP3466. Intuitively, RQI and the inexact
RQI with MINRES seem to exhibit quadratic convergence. However, they
indeed converge cubically in the sense of (\ref{rate1}) and
(\ref{rate2}). With $\xi=0.5$, $\sin\phi_k$ and $\|r_k\|$ decrease
more slowly than those obtained with $\xi=0.1$ and the exact RQI as
well as the inexact RQI with decreasing tolerance, and the algorithm
uses one more outer iteration. This is because the convergence
factors in both (\ref{rate1}) and (\ref{rate2}) are big and the
factor with $\xi=0.5$ is considerably bigger than those with the
others. However, the method with $\xi=0.5$ uses comparable $iters$.
Our experiments show that the inexact RQI with MINRES is not
sensitive to $\xi<1$ not near one. So it is advantageous to
implement the inexact RQI with MINRES with a fixed $\xi$ not near
one so as to achieve cubic convergence. We can benefit much from
such an implementation and possibly use far fewer total inner
iteration steps $iters$, compared with the method with $\xi_k=O(\|r_k\|)$.
Next we confirm Theorems~\ref{minrescubic}--\ref{minres2} and verify
quadratic convergence and linear convergence when conditions
(\ref{quadracond}) and (\ref{linrescond}) are satisfied,
respectively. Note that $\beta$ and $|\cos\varphi|$ in the upper
bounds for $\xi_k$ are uncomputable a priori during the process.
However, by their forms we can take
\begin{equation}
\xi_k=1-\frac{c_1\|r_k\|}{\|A\|_1} \label{critera}
\end{equation}
and
\begin{equation}
\xi_k=1-\left(\frac{c_2\|r_k\|}{\|A\|_1}\right)^2 \label{criterb}
\end{equation}
for reasonable $c_1$ and $c_2$, respectively, and use them to test
if the inexact RQI with MINRES converges quadratically and linearly.
It is seen from (\ref{quadracond}) and (\ref{linrescond}) that we
should take $c_1$ and $c_2$ bigger than one as $\beta\geq 1$,
$|\cos\varphi|\leq 1$ and $\zeta<1$. The bigger $\beta$ is, the
bigger $c_1$ and $c_2$ should be. Note that $\xi_k$ defined in this way may
be negative during the first few outer iterations if $u_0$ is
not good enough. In our implementations, we take
\begin{equation}
\xi_k=\max\{0.95,1-\frac{c_1\|r_k\|}{\|A\|_1}\} \label{criteria1}
\end{equation}
and
\begin{equation}
\xi_k=\max\{0.95,1-\left(\frac{c_2\|r_k\|}{\|A\|_1}\right)^2\}
\label{criteria2}
\end{equation}
with $100\leq c_1,c_2\leq 10000$ for quadratic and linear
convergence, respectively. As remarked previously, the inexact RQI
with MINRES for $\xi=0.8$ generally converges cubically though it
may not reduce $\|r_k\|$ and $\sin\phi_k$ as much as a smaller $\xi$
does at each outer iteration. We take it as a reference for cubic
convergence. We implement the method using (\ref{critera}) and
(\ref{criterb}), respectively, after very few outer iterations, as
soon as the algorithm starts converging. The resulting $\xi_k$ must approach one as
the outer iterations proceed. Again, we test the above four matrices. In
the experiments, we have taken several $c_1,c_2$'s ranging from 100
to 10000. The bigger $c_1$ and $c_2$ are, the safer the bounds
(\ref{criteria1}) and (\ref{criteria2}) are for quadratic and linear
convergence, and the faster the algorithm converges. We report the
numerical results for $c_1=c_2=1000$ in
Tables~\ref{BCSPWR08minres}--\ref{lshpminres} except $c_1=10000$ for
DWT2680. Figure~\ref{fig1} draws the convergence curves of the
inexact RQI with MINRES for the four matrices for the fixed
$\xi_k=0.8$ and $c_1=c_2=1000$ except $c_1=10000$ for DWT2680.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm]{bcspwr08.eps}
\includegraphics[width=7cm]{can1054.eps}
\includegraphics[width=7cm]{dwt2680.eps}
\includegraphics[width=7cm]{lshp3466.eps}
\end{center}
\caption{Quadratic and linear convergence of the inexact RQI with
MINRES for BCSPWR08, CAN1054, DWT2680 and LSHP3466 in order, in
which the solid line denotes the convergence curve of
$\xi_k=\xi=0.8$, the dotted dash line the quadratic convergence
curve and the dashed line the linear convergence curve.}\label{fig1}
\end{figure}
Figure~\ref{fig1} clearly exhibits the typical behavior of
quadratic and linear convergence of the inexact RQI with MINRES.
Precise data details can be found in
Tables~\ref{BCSPWR08minres}--\ref{lshpminres}. As outer iterations
proceed, $\xi_k$ is increasingly closer to one but the algorithm
steadily converges quadratically and linearly; see the tables for
quadratic convergence. The tables and the figure show that, with
our conditions (\ref{criteria1}) and (\ref{criteria2}), the
inexact RQI works very well for the chosen $c_1$ and $c_2$. For
quadratic convergence, $\xi_k$ becomes increasingly closer to one,
but on the one hand $iter^{(k-1)}$ still increases as outer
iterations proceed and on the other hand it is considerably smaller
than that with a fixed $\xi$. In contrast, for linear convergence,
$iter^{(k-1)}$ does not vary much with increasing $k$ except for the
first two outer iterations, where $iter^{(k-1)}$ is no more than
five.
For other $c_1$ and $c_2$, we have made experiments in the same way.
We have observed similar phenomena for quadratic convergence and found
that the algorithm is not sensitive to $c_1$ in general, but this is not the
case for $c_2$. For different $c_2$, the method still converges
linearly but the number of outer iterations may vary quite a lot.
This should be expected as $c_2$ critically affects the linear
convergence factor $\zeta$ that uniquely determines convergence
speed, while $c_1$ does not affect quadratic convergence rate and
only changes the factor $\eta$ in the quadratic convergence bounds
(\ref{quadminres}) and (\ref{quadres}). Also, we should be careful
when using (\ref{criteria2}) in finite precision arithmetic. If
$$
\left(\frac{c_2\|r_k\|}{\|A\|_1}\right)^2
$$
is at the level of $\epsilon_{\rm mach}$ or smaller for some $k$, then
(\ref{criteria2}) gives $\xi_k=1$ in finite precision arithmetic.
The inexact RQI with MINRES will break down and cannot continue the
$(k+1)$-th outer iteration. An adaptive strategy is to fix $\xi_k$ to
be a constant smaller than one once $\|r_k\|$ is so small that
$\xi_k=1$ in finite precision arithmetic. We found that
$\xi_k=1-10^{-8}$ is a reasonable choice. We have tested this
strategy for the four matrices and found that it works well.
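A minimal sketch collecting the practical rules
(\ref{criteria1})--(\ref{criteria2}) together with the finite
precision safeguard just described is given below; the constants
shown are only the defaults used in our tests.
\begin{verbatim}
# Hedged sketch of the safeguarded inner tolerances.
def xi_quadratic(rnorm, normA1, c1=1000.0):     # rule (criteria1)
    return max(0.95, 1.0 - c1 * rnorm / normA1)

def xi_linear(rnorm, normA1, c2=1000.0):        # rule (criteria2)
    xi = max(0.95, 1.0 - (c2 * rnorm / normA1) ** 2)
    return min(xi, 1.0 - 1e-8)  # safeguard: keep xi_k < 1 in floating point
\end{verbatim}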
\section{Concluding remarks}\label{conc}
We have considered the convergence of the inexact RQI without and
with MINRES in detail and have established a number of results on
cubic, quadratic and linear convergence. These results clearly show
how inner tolerance affects the convergence of outer iterations and
provide practical criteria on how to best control inner tolerance to
achieve a desired convergence rate. Surprisingly, it is shown for
the first time that the inexact RQI with MINRES generally converges
cubically for $\xi_k\leq\xi<1$ with $\xi$ a
constant not near one and quadratically
for $\xi_k$ increasingly near one, respectively.
They are fundamentally different from the existing
results and have a strong impact on effectively
implementing the algorithm so as to reduce the total
computational cost very considerably.
Using the same analysis approach as in this paper, we have considered
the convergence of the inexact RQI with the unpreconditioned and
preconditioned Lanczos methods for solving the inner linear systems
\cite{jia09}, where, remarkably, quadratic and linear convergence
allow $\xi_k$ to considerably exceed one, that is, the approximate
solutions of the inner linear systems need have no accuracy at all in the
sense of solving linear systems. By comparison, we find that the
inexact RQI with MINRES is preferable in robustness and efficiency.
Although we have restricted ourselves to the Hermitian case, the analysis
approach could be used to study the convergence of the inexact
RQI with Arnoldi and GMRES for the non-Hermitian eigenvalue problem.
We have only considered the standard Hermitian eigenvalue problem
$Ax=\lambda x$ in this paper. For the Hermitian definite generalized
eigenvalue problem $Ax=\lambda Mx$ with $A$ Hermitian and $M$
Hermitian positive definite, if the $M$-inner product, the $M$-norm,
the $M^{-1}$-norm and the angle induced by the $M$-inner product
are properly used in place of the usual Euclidean inner
product, the Euclidean norm and the usual angle, then, based on the
underlying $M$-orthogonality of eigenvectors of the matrix pair
$(A,M)$, we should be able to extend our theory developed in the paper to
the inexact RQI with the unpreconditioned and tuned preconditioned
MINRES for the generalized eigenproblem. This work is in progress.
\documentclass{article}
\usepackage{graphics,graphicx,psfrag,color,float, hyperref}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{bm}
\oddsidemargin 0pt
\evensidemargin 0pt
\marginparwidth 40pt
\marginparsep 10pt
\topmargin 0pt
\headsep 10pt
\textheight 8.4in
\textwidth 6.6in
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{times}
\newtheorem{theor}{Theorem}
\newtheorem{conj}[theor]{Conjecture}
\newcommand{\tab}{\hspace*{2em}}
\newcommand{\beq}{\begin{equation}}
\newcommand{\enq}{\end{equation}}
\newcommand{\bel}{\begin{lemma}}
\newcommand{\enl}{\end{lemma}}
\newcommand{\bet}{\begin{theorem}}
\newcommand{\ent}{\end{theorem}}
\newcommand{\ten}{\textnormal}
\newcommand{\tbf}{\textbf}
\newcommand{\tr}{\mathrm{Tr}}
\newcommand{\myexp}{{\mathrm{e}}}
\newcommand{\tS}{\tilde{S}}
\newcommand{\ts}{\tilde{s}}
\newcommand{\eps}{\varepsilon}
\newcommand*{\cC}{\mathcal{C}}
\newcommand*{\cA}{\mathcal{A}}
\newcommand*{\cH}{\mathcal{H}}
\newcommand*{\cF}{\mathcal{F}}
\newcommand*{\cB}{\mathcal{B}}
\newcommand*{\cD}{\mathcal{D}}
\newcommand*{\cG}{\mathcal{G}}
\newcommand*{\cK}{\mathcal{K}}
\newcommand*{\cN}{\mathcal{N}}
\newcommand*{\cS}{\mathcal{S}}
\newcommand*{\chS}{\hat{\mathcal{S}}}
\newcommand*{\cT}{\mathcal{T}}
\newcommand*{\cX}{\mathcal{X}}
\newcommand*{\cW}{\mathcal{W}}
\newcommand*{\cZ}{\mathcal{Z}}
\newcommand*{\cE}{\mathcal{E}}
\newcommand*{\cU}{\mathcal{U}}
\newcommand*{\cP}{\mathcal{P}}
\newcommand*{\cV}{\mathcal{V}}
\newcommand*{\cY}{\mathcal{Y}}
\newcommand*{\cR}{\mathcal{R}}
\newcommand*{\bbN}{\mathbb{N}}
\newcommand*{\bX}{\mathbf{X}}
\newcommand*{\bY}{\mathbf{Y}}
\newcommand*{\bU}{\mathbf{U}}
\newcommand*{\cl}{\mathcal{l}}
\newcommand*{\Xb}{\bar{X}}
\newcommand*{\Yb}{\bar{Y}}
\newcommand*{\mximax}[1]{\Xi^\delta_{\max}(P_{#1})}
\newcommand*{\mximin}[1]{\Xi^\delta_{\min}(P_{#1})}
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\ket}[1]{|#1 \rangle}
\newcommand{\braket}[2]{\langle #1|#2\rangle}
\newcommand*{\renyi}{R\'{e}nyi }
\newcommand*{\enc}{\mathrm{enc}}
\newcommand*{\dec}{\mathrm{dec}}
\newcommand*{\denc}{\mathrm{d-enc}}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\begin {document}
\title{Coding for classical-quantum channels with rate limited side information at the encoder: An information-spectrum approach}
\author{Naqueeb Ahmad Warsi$^*$
\and
Justin Coon
\thanks{
Department of Engineering Science, University of Oxford,
Email:
{\sf
naqueeb.ahmedwarsi@eng.ox.ac.uk, justin.coon@eng.ox.ac.uk
}
}
}
\date{}
\maketitle
\begin{abstract}
We study the hybrid classical-quantum version of the channel coding problem for the famous Gel'fand-Pinsker channel. In the classical setting for this channel, the conditional distribution of the channel output given the channel input is a function of a random parameter called the channel state. We study this problem when a rate-limited version of the channel state is available at the encoder of the classical-quantum Gel'fand-Pinsker channel. We establish the capacity region for this problem in the information-spectrum setting. The capacity region is quantified in terms of the spectral-sup classical mutual information rate and the spectral-inf quantum mutual information rate.
\end{abstract}
\section{Introduction}
In traditional information theory literature it is common to study the
underlying problems assuming that the
channel characteristics do not change over multiple uses. The proofs
appeal to {\em typicality} of sequences or typical subspaces in the quantum setting \cite{wilde-book}: the empirical
distribution of symbols in a long sequence of trials will with high
probability be close to the true
distribution~\cite{covertom}. However, information
theoretic arguments based on typicality or the related Asymptotic
Equipartition Property (AEP) assume that both the source and channel
are stationary and/or ergodic (memoryless), assumptions that are not
always valid; for example, in \cite{gray-book} Gray analyzes the details of asymptotically mean stationary sources, which are neither stationary nor ergodic. To overcome such assumptions, Verd\'{u} and Han pioneered the technique of information-spectrum methods in their seminal work \cite{han-verdu-spectrum-94}. In this work Verd\'{u} and Han define the notions of limit inferior and limit superior in probability. They then use these definitions to establish the capacity of general channels (channels that are not necessarily stationary and/or memoryless). Since this work of Verd\'{u} and Han there has been considerable interest in generalizing the results of information theory to the information-spectrum setting; see, for example, \cite{miyakaye-kanaya-1995, muramatsu, hayashi-classical-non-iid, arbitrary-wiretap} and references therein.
This general technique of information-spectrum methods, wherein no assumptions are made on the channels and/or sources, was extended to the quantum case by Hayashi, Nagaoka and Ogawa. Using this method they studied the problem of quantum hypothesis testing \cite{hayshi-nagaoka-2002, ogawa-nagaoka-2000-strong}, derived the classical capacity formula of general quantum channels \cite{Hayashi-noniid} and established a general formula for the optimal rate of entanglement concentration \cite{hayashi-entanglement-2006}. Since the work of Hayashi, Nagaoka and Ogawa, the study of various quantum information theoretic protocols in the information-spectrum setting has been one of the most interesting areas of research in theoretical quantum information science. In \cite{datta-byeondiid-2006} Bowen and Datta further carried this approach forward to study various other quantum information theoretic problems. In \cite{datta-renner-2009} Datta and Renner showed that there is a close relationship between the information theoretic quantities that arise in the information-spectrum scenario and the smooth \renyi entropies which play a crucial role in one-shot information theory. In \cite{radhakrishnan-sen-warsi-archive} Radhakrishnan, Sen and Warsi proved a one-shot version of the Marton inner bound for classical-quantum broadcast channels. They then showed that their one-shot bounds yield the quantum information-spectrum generalization of the Marton inner bound in the asymptotic setting.
In this paper, we carry forward the subject of studying quantum information theoretic protocols in the information-spectrum setting. We study the problem of communication over the sequence $\left\{\cX^n,\cS^n, \cN^{X^nS^n \to B^n}(x^n,s^n) = \rho^{B^n}_{x^n,s^n}\right\}_{n=1}^\infty$ (also called the classical-quantum Gel'fand-Pinsker channel), where $\cX^n,\cS^n$ are the input and state alphabets and $\rho^{B^n}_{x^n,s^n}$ is a positive operator with trace one acting on the Hilbert space $\cH_B^{\otimes n}.$ We establish the capacity region of this channel when a rate-limited version of the state sequence $S^n$ is available at the encoder. Figure \ref{Fig} below illustrates this communication scheme.
\vspace{5mm}
\begin{figure}[H]
\centering
\includegraphics[scale=0.7] {Picture11.png}
\caption{Gel'fand-Pinsker channel communication scheme when coded side information is available at the encoder.}
\label{Fig}
\end{figure}
The classical version of this problem was studied by Heegard and El Gamal (achievability) in \cite{heegard-gamal-1983} in the asymptotic i.i.d. setting. They proved the following:
\begin{theorem}
\label{classical}
Fix a discrete memoryless channel with state characterized by $p(y \mid x,s).$ Let $(R, R_S)$ be such that
\begin{align*}
R &< I[U;Y] - I[U;\tS]\\
R_S &> I[S;\tS],
\end{align*}
for some distribution $p(s,\ts, u,x,y) = p(s)p(\ts \mid s)p(u \mid \ts) p(x \mid u, \ts)p(y \mid x,s).$ Then, the rate pair $(R,R_S)$ is achievable.
\end{theorem}
Furthermore, in \cite{heegard-gamal-1983} Heegard and El Gamal argued that Theorem \ref{classical} implies the result of Gel'fand and Pinsker \cite{gelfand-pinsker} who showed the following:
\begin{theorem}
\label{gelf}
Fix a discrete memoryless channel with state characterized by $p(y \mid x,s).$ The capacity of this channel when the state information is directly available non-causally at the encoder is
\beq
C = \max_{p_{u \mid s}, g : \cU \times \cS \to \cX} \left(I[U;Y] -I[U;S]\right), \nonumber
\enq
where $|\cU| \leq \min \left\{|\cX|.|\cS|, |\cY| + |\cS| -1\right\}.$
\end{theorem}
The above formula for the capacity is quite intuitive. If we set $S =\emptyset$ and $U = X$ in Theorem \ref{gelf} then we rederive Shannon's famous channel capacity formula \cite{shannon1948}. However, when $S \neq \emptyset$, Theorem \ref{gelf} implies that there is a loss in the maximum transmission rate per channel use at which Alice can communicate to Bob. This loss in the transmission rate is reflected by the term $I[U;S]$. Thus, $I[U;S]$ can be thought of as the minimum number of bits Alice needs to send to Bob per channel use to help him get some information about the channel state sequence $S^n$. Bob can then use this information about $S^n$ to recover the intended message.
\subsection*{Our Result}
We establish the capacity region of the classical-quantum Gel'fand-Pinsker channel in the information-spectrum setting when a rate-limited version of the channel state is available at the encoder. In the information-spectrum setting the channel output $\rho^{B^n}_{X^n,S^n}$ need not be a tensor product state. Furthermore, the channel state $S^n \sim p_{S^n}$ is a sequence of arbitrarily distributed random variables. This extremely general setting is the hallmark of the information-spectrum approach. We prove the following:
\begin{theorem}
\label{ourresult}
Let $\left\{\cX^n,\cS^n, \cN^{X^nS^n \to B^n}(x^n,s^n) = \rho^{B^n}_{x^n,s^n}\right\}_{n=1}^\infty$ be a sequence of classical-quantum Gel'fand-Pinsker channels. The capacity region for this sequence of channels with a rate-limited version of the channel state available only at the encoder is the set of rate pairs satisfying the following:
\begin{align*}
R &\leq \underline{\mathbf{I}}[{\bf{U}};{\bf{B}}] - \overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}];\\
R_S & \geq \overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}].
\end{align*}
The information theoretic quantities are calculated with respect to the sequence of states $\bm{\Theta^{S\tilde{S}UXB}} =\left\{\Theta^{S^n\tilde{S}^nU^nX^nB^n}\right\}_{n=1}^\infty,$ where for every $n$,
\begin{align*}
&\Theta^{S^n\tilde{S}^nU^nX^nB^n}:= \sum_{s^n,\tilde{s}^n,u^n,x^n}p_{{S^n}}({s}^n)p_{\tilde{S}^n \mid S^n}(\tilde{s}^n \mid s^n) p_{U^n \mid {\tilde{S}}^n}(u^n \mid \tilde{s}^n) p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n)\ket{s^n}\bra{s^n}^{S^n}\otimes\ket{\tilde{s}^n}\bra{\tilde{s}^n}^{\tS^n}\\
&\hspace{45mm}\otimes\ket{u^n}\bra{u^n}^{U^n}\otimes\ket{x^n}\bra{x^n}^{X^n} \otimes \rho^{B^n}_{x^n,s^n}. \nonumber
\end{align*}
\end{theorem}
An immediate consequence of Theorem \ref{ourresult} is the following corollary:
\begin{corollary}
\begin{description}
\item[$(a)$] (Hayashi and Nagaoka, \cite{Hayashi-noniid}) The capacity of a sequence of classical-quantum channels $\{\cX^n, \cN^{X^n \to B^n}(x^n) = \rho^{B^n}_{x^n}\}_{n=1}^{\infty}$ is the following:
$$C = \sup_{\{X^n\}_{n=1} ^ \infty} \underline{\mathbf{I}} [\bf{X};\bf{B}].$$
\item [$(b)$] The capacity of a sequence of classical-quantum Gel'fand-Pinsker channels $\left\{\cX^n,\cS^n, \cN^{X^nS^n \to B^n}(x^n,s^n) = \rho^{B^n}_{x^n,s^n}\right\}_{n=1}^\infty$ with channel state directly available at the encoder is the following:
$$ C = \sup_{\{\Theta_n\}_{n=1}^\infty} \underline{\mathbf{I}} [\bf{U};\bf{B}] - \overline{\mathbf{I}} [\bf{U};\bf{S}],$$
where for every $n$, $$\Theta_n = \sum_{s^n,u^n,x^n}p_{{S^n}}({s}^n)p_{U^n \mid {{S}}^n}(u^n \mid {s}^n) p_{X^n \mid U^nS^n}(x^n \mid u^n,s^n)\ket{s^n}\bra{s^n}^{S^n}\otimes\ket{u^n}\bra{u^n}^{U^n}\otimes\ket{x^n}\bra{x^n}^{X^n} \otimes \rho^{B^n}_{x^n,s^n}. $$
\end{description}
\end{corollary}
\begin{proof}
\begin{description}
\item[$a)$] The proof follows by setting $\tS^n=S^n = \emptyset$ and $U^n =X^n$ in Theorem \ref{ourresult}.
\item[$b)$] The proof follows by setting $\tS^n=\emptyset$ in Theorem \ref{ourresult}.
\end{description}
\end{proof}
\section{Definitions}
\begin{definition}
\label{limsup}
Let $(\bU,\mathbf{{\tS}}):=\left\{U^n,\tS^n\right\}_{n=1}^{\infty}$ be a sequence of pairs of random variables, where for every $n$, $(U^n,\tS^n) \sim p_{U^n\tS^n}$ takes values in the set $(\cU^n \times \tilde{\mathcal{S}}^n)$. The spectral-sup mutual information rate $\overline{\mathbf{I}}[\bU;\mathbf{{\tS}}]$ between $\bU$ and $\mathbf{{\tS}}$ is defined as follows:
\beq
\overline{\mathbf{I}}[\bU;\mathbf{{\tS}}]:= \inf\left\{a: \lim_{n \to \infty }\Pr \left\{\frac{1}{n}\log\frac{p_{U^n\tS^n}}{p_{U^n} p_{\tS^n}} > a+\gamma\right\} = 0\right\},
\enq
where $\gamma >0$ is arbitrary and the probability above is calculated with respect to $p_{U^n\tS^n}$.
\end{definition}
\begin{definition}
Let ${\bm{\rho}}:= \left\{\rho_n\right\}_{n=1}^{\infty}$ and $\bm{\sigma}:=\left\{\sigma_n\right\}_{n=1}^\infty$ be sequences of quantum states where for every $n,$ $\rho_n$ and $\sigma_n$ are density matrices acting on the Hilbert space $\cH_n:=\cH^{\otimes n}.$ The spectral-inf mutual information rate $\underline{\mathbf{I}}[\bm{\rho};\bm{\sigma}]$ between $\bm{\rho}$ and $\bm{\sigma}$ is defined as follows:
\beq
\underline{\mathbf{I}}[\bm{\rho};\bm{\sigma}]:=\sup \left\{a: \lim_{n\to \infty} \tr \left[\left\{\rho_n \succeq 2^{n(a-\gamma)} \sigma_n \right\} \rho_n\right] =1 \right\},
\enq
where $\gamma >0$ is arbitrary and $\left\{\rho_n \succeq 2^{n(a-\gamma)} \sigma_n \right\}$ represents the projection operator onto the non-negative eigenspace of the operator $\left(\rho_n - 2^{n(a-\gamma)} \sigma_n \right).$
\end{definition}
\begin{definition}
\label{code}
An $(n,M_n,M_{e,n},\eps_n)$ code for the Gel'fand-Pinsker channel $\left\{\cX^n,\cS^n, \cN^{X^nS^n \to B^n}(x^n,s^n) = \rho^{B^n}_{x^n,s^n}\right\}$ with coded side information available at the encoder consists of
\begin{itemize}
\item a state encoding $f_{e,n} : \cS^n \to [1:M_{e,n}]$
\item an encoding function $f_n: [1:M_n] \times [1:M_{e,n}] \to \cX^n$ (possibly randomized)
\item A decoding POVM $\left\{\beta(m): m \in [1:M_n]\right\}$ such that
\beq
\frac{1}{M_n}\sum_{m=1}^{M_n} \sum_{s^n}p_{S^n}(s^n) \tr\left[\left(\mathbb{I} - \beta(m)\right)\cN(f_n(m, f_{e,n}(s^n)),s^n)\right] \leq \eps_n. \nonumber
\enq
\end{itemize}
\end{definition}
\begin{definition}
\label{ach}
A rate pair $(R,R_S)$ is achievable if there exists a sequence of $(n,M_n,M_{e,n}, \eps_n)$ codes such that
\begin{align*}
\liminf _{n \to \infty} \frac{1}{n} \log M_n &>R\\
\limsup_{n \to \infty} \eps_n &< \eps \\
\limsup_{n \to \infty} \frac{1}{n} \log M_{e,n}&<R_S.
\end{align*}
The set of all achievable rate pairs is known as the capacity region.
\end{definition}
\section{Proof of Theorem \ref{ourresult}}
\subsection{Achievability}
Let,
\begin{align}
\label{rhofus}
\rho^{B^n}_{u^n,\ts^n}& = \sum_{s^n,x^n} p_{S^n \mid \tS^n}(s^n \mid \ts^n)p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n)\rho^{B^n}_{x^n,s^n}\\
\label{theta}
\Theta^{U^nB^n}&= \tr_{S^n\tS^nX^n}\left[\Theta^{S^n\tS^nU^nX^nB^n}\right]\\
\Theta^{U^n} & = \tr_{B^n} \left[\Theta^{U^nB^n}\right]\\
\Theta^{B^n} & = \tr_{U^n} \left[\Theta^{U^nB^n}\right].
\end{align}
Let,
\beq
\label{pi}
\Pi^{U^nB^n} := \left\{\Theta^{U^nB^n} \succeq 2^{n(\underline{\mathbf{I}}[{\bf{U}};{\bf{B}}] -\gamma)} \Theta^{U^n} \otimes \Theta^{B^n} \right\},
\enq
where $\underline{\mathbf{I}}[{\bf{U}};{\bf{B}}]$ is calculated with respect to the sequence of states $\left\{\Theta^{U^nB^n}\right\}_{n=1}^\infty$ and $\left\{\Theta^{U^n} \otimes \Theta^{B^n}\right\}_{n=1}^\infty.$ Further, for every $u^n \in \cU^n,$ let
\beq
\label{lambda}
\Lambda_{u^n} := \tr_{U^n}\left[\Pi^{U^nB^n}\left(\ket{u^n}\bra{u^n}\otimes \mathbb{I}\right)\right].
\enq
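Although $\Pi^{U^nB^n}$ and $\Lambda_{u^n}$ are analytical objects used
only in the proof, for small dimensions they can be computed
explicitly. The following minimal numpy sketch, which we add purely
for concreteness, computes the projector
$\left\{\rho \succeq c\,\sigma\right\}$ onto the non-negative
eigenspace of $\rho-c\sigma$ for Hermitian matrices:
\begin{verbatim}
# Hedged sketch: projector onto the non-negative eigenspace of
# (rho - c * sigma), i.e. {rho >= c * sigma}.
import numpy as np

def pos_projector(rho, sigma, c):
    vals, vecs = np.linalg.eigh(rho - c * sigma)
    P = vecs[:, vals >= 0]
    return P @ P.conj().T
\end{verbatim}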
Fix $\gamma > 0.$ Define the following sets:
\begin{align*}
\cT_n(p_{S^n\tS^n})&:= \left\{(s^n,\ts^n) : \frac{1}{n}\log\frac{p_{S^n\tS^n}(s^n,\ts^n)}{p_{S^n}(s^n)p_{\tS^n}(\ts^n)} \leq \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}]+\gamma \right\};\\
\cT_n(p_{U^n\tS^n})&:= \left\{(u^n,\ts^n) : \frac{1}{n}\log\frac{p_{U^n\tS^n}(u^n,\ts^n)}{p_{U^n}(u^n)p_{\tS^n}(\ts^n)} \leq \overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}]+\gamma\right\}.
\end{align*}
Furthermore, let $g_1: \tilde{\mathcal{S}}^n \to [0,1]$ and $g_2 : \tilde{\mathcal{S}}^n \to [0,1]$ be defined as follows:
\begin{align}
\label{g}
g_1(\ts^n) = \sum_{u^n:(u^n,\ts^n) \notin \cT_n(p_{U^n\tS^n})}p_{U^n \mid \tS^n}(u^n \mid \ts^n);\\
\label{gg}
g_2(\ts^n) = \sum_{u^n:\tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \ts^n}\right] \leq 1-\sqrt{\eps} } p_{U^n \mid \tS^n}(u^n \mid \ts^n).
\end{align}
In what follows we will use the notation $[1:2^{nR}]$ to represent the set $\left\{1, \cdots, 2^{nR}\right\}.$
{\bf{The codebook:}} Let $\Theta^{S^n\tilde{S}^nU^nX^nB^n}$ be as in the statement of the theorem. Let $R = \underline{\mathbf{I}}[{\bf{U}};{\bf{B}}] - \overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}]-6\gamma$, $r = \overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}]+2\gamma$ so that $R+r = \underline{\mathbf{I}}[{\bf{U}};{\bf{B}}] -4\gamma$. Let $u^n[1],u^n[2],\cdots, u^n[2^{n(R+r)}]$ be drawn independently according to the distribution $p_{U^n}$. We associate these samples with a row vector $\cC_n^{(A)}$ having $2^{n(R+r)}$ entries. We then partition this row vector into $2^{nR}$ classes each containing $2^{nr}$ elements. Every message $m \in [1:2^{nR}]$ is uniquely assigned a class. We will denote the class corresponding to the message $m$ by $\cC_n^{(A)}(m)$.
Fix $R_S = \overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}] + 2\gamma.$ Further, let $\tilde{s}^n[1],\tilde{s}^n[2],\cdots,\tilde{s}^n[2^{nR_S}]$ be drawn independently according to the distribution $p_{\tS^n}.$ We will denote this collection of sequences by $\cC_n^{(C)}.$ This collection of sequences is made known to Alice as well.
{\bf{Charlie's encoding strategy:}} For each $k \in [1:2^{nR_S}],$ let $Z(k)$ be independently and uniformly distributed over $[0,1].$ For a given realisation of the state sequence $s^n$, let $\zeta(k)$ be an indicator random variable defined as follows:
\beq
\label{jtyp11}
\mathbf{{\zeta}}(k) =
\begin{cases}
1 & \mbox{if }~ Z(k) \leq
\frac{p_{\tS^nS^n}(\ts^n[k],s^n)}{2^{n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}{ p_{\tS^n}(\ts^n[k]) p_{S^n}(s^n)}}; \\
0 & \mbox{otherwise.}
\end{cases}
\enq
Further, for a given realisation of the state sequence $s^n,$ let $\mathbf{I}(k)$ be an indicator random variable defined as follows:
\beq
\label{jtyp}
\mathbf{I}(k) =
\begin{cases}
1 & \mbox{if }~
\zeta(k) =1 , g_1(\tS^n[k]) < \sqrt{\eps}~ \mbox{and}~ g_{2}(\tS^n[k]) < \eps^{\frac{1}{4}}; \\
0 & \mbox{otherwise,}
\end{cases}
\enq
where $g_1(\ts^n)$ and $g_2(\ts^n)$ are defined in \eqref{g} and \eqref{gg}. Charlie, on observing the state sequence $s^n$, finds an index $k$ such that $\mathbf{I}(k)=1$. If there is more than one such index, then $k$ is set to the smallest among them. If there is no such index, then $k =1.$ Charlie then sends this index $k$ to Alice.
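For intuition, the test \eqref{jtyp11} behind $\zeta(k)$ is simply an
acceptance-sampling step: the index $k$ is accepted with probability
equal to the (capped) likelihood ratio. In the minimal sketch below,
{\tt p\_joint}, {\tt p\_ts} and {\tt p\_s} are assumed lookup
functions for $p_{\tS^nS^n}$, $p_{\tS^n}$ and $p_{S^n}$; they are
illustrative placeholders rather than objects defined in this paper.
\begin{verbatim}
# Hedged sketch of the acceptance test behind zeta(k); rng can be
# numpy.random.default_rng().
def accept(ts, s, n, I_bar, gamma, p_joint, p_ts, p_s, rng):
    ratio = p_joint(ts, s) / (2.0 ** (n * (I_bar + gamma))
                              * p_ts(ts) * p_s(s))
    return rng.uniform() <= ratio
\end{verbatim}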
{\bf{Alice's encoding strategy:}} For each pair $(k,\ell) \in [1:2^{nR_S}] \times [1:2^{n(R+r)}]$, let $\eta(k,\ell)$ be independently and uniformly distributed over $[0,1]$ and let $g(k,\ell)$ be defined as follows:
\beq
g(k,\ell):=\tr\left[\Lambda_{u^n[\ell]}\rho^{B^n}_{u^n[\ell],\ts^n[k]}\right],
\enq
where $\rho^{B^n}_{u^n,\ts^n}$ is defined in \eqref{rhofus} and $\Lambda_{u^n}$ is defined in \eqref{lambda}. Let $\mathbf{I}(k,\ell)$ be an indicator random variable such that
\beq
\mathbf{I}(k,\ell) =
\begin{cases}
1 & \mbox{if }~ \eta(k,\ell) \leq
\frac{p_{\tS^nU^n}(\ts^n[k],u^n[\ell])}{2^{n(\overline{\mathbf{I}} [\bf{U};\bf{\tilde{S}}]+\gamma)}{ p_{\tS^n}(\ts^n[k]) p_{U^n}(u^n[\ell])}}; \\
0 & \mbox{otherwise.}
\end{cases}
\enq
Further, let $\mathbf{J}(k,\ell)$ be an indicator random variable defined as follows:
\beq
\label{jtyp2}
\mathbf{J}(k,\ell) =
\begin{cases}
1 & \mbox{if }~ \mathbf{I}(k,\ell) =1~ \mbox{and} ~g(k,\ell) >1-\sqrt{\eps}; \\
0 & \mbox{otherwise.}
\end{cases}
\enq
To send a message $m \in [1:2^{nR}]$ and on receiving the index $k$ from Charlie, Alice finds an index $\ell\in \cC_n^{(A)}(m)$ such that $\mathbf{J}(k,\ell) =1.$ If there is more than one such index, then $\ell$ is set to the smallest among them. If there is no such index, then
$\ell =1.$ Alice then randomly generates $x^n \sim p_{X^n \mid U^n[\ell] \tS^n[k]}$ and transmits it over the classical-quantum channel over $n$ channel uses. In the discussions below we will use the notation $x^n(u^n[\ell], \ts^n[k])$ to highlight the dependence of $x^n$ on $(u^n[\ell], \ts^n[k])$. A similar encoding technique was also used by Radhakrishnan, Sen and Warsi in \cite{radhakrishnan-sen-warsi-archive}.
{\bf{Bob's decoding strategy:}} For each $\ell \in [1:2^{n(R+r)}],$ we have the operators $\Lambda_{u^n[\ell]}$ as defined in \eqref{lambda}. Bob will normalize these operators to obtain a POVM. The POVM element corresponding to $\ell$ will be
\beq
\beta_n(\ell) := \left(\sum_{ \ell^\prime \in [1:2^{n(R+r)}]} \Lambda_{u^n[\ell^\prime]}\right)^{-\frac{1}{2}}\Lambda_{u^n[\ell]}\left(\sum_{ \ell^\prime \in [1:2^{n(R+r)}]} \Lambda_{u^n[\ell^\prime]}\right)^{-\frac{1}{2}}.
\enq
Bob on receiving the channel output measures it using these operators. If the measurement outcome is $\tilde{\ell}$ then he outputs $\tilde{m}$ if $\tilde{\ell} \in \cC_n^{(A)}(\tilde{m}).$ Similar decoding POVM elements were also used by Wang and Renner in \cite{wang-renner-prl}.
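We remark that, by construction, $\sum_{\ell \in [1:2^{n(R+r)}]}\beta_n(\ell) = \mathbb{I}$ on the support of $\sum_{\ell^\prime} \Lambda_{u^n(\ell^\prime)}$, since the two inverse square-root factors cancel the middle sum; adjoining, if necessary, the projection onto the orthogonal complement of this support as an extra (failure) outcome makes $\left\{\beta_n(\ell)\right\}$ a bona fide POVM. This is the standard square-root measurement construction.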
{\bf{Probability of error analysis:}} Let a message $m \in [1:2^{nR}]$ be transmitted by Alice by using the protocol discussed above and suppose it is decoded as $\tilde{m}$ by Bob. We will now show that the probability that $\tilde{m} \neq m,$ averaged over the random choice of codebook, the state sequence $S^n$ and $X^n$, is arbitrarily close to zero. By the symmetry of the code construction it is enough to prove the claim for $m=1.$ There are the following sources of error:
\begin{enumerate}
\item Charlie on observing the state sequence $S^n$ does not find a suitable $k \in \cC_n^{(C)}$ such that $\mathbf{I}(k)=1$.
\item Alice on receiving the index $k$ from Charlie is not able to find a suitable $\ell \in \cC_n^{(A)}(1)$ such that $\mathbf{J}(k,\ell)=1.$
\item Charlie finds a suitable $k$ and Alice finds a suitable $\ell$, but Bob's measurement is not able to determine the index $\ell$ correctly.
\end{enumerate}
Let $k^\star$ and $\ell^\star$ be the indices chosen by Charlie and Alice. Let us now upper bound the probability of error while decoding the transmitted message. Towards this we first define the following events:
\begin{align*}
\cE_1&:= \mbox{for all}~k \in [1:2^{nR_S}] : \mathbf{I}(k)=0;\\
\cE_2& := \mbox{for all}~\ell \in \cC_n^{(A)}(1) : \mathbf{J}(k^\star, \ell) = 0.\\
\end{align*}
We now have the following bound on the error probability:
\begin{align}
\Pr\left\{\tilde{m} \neq 1 \right\} &\leq \Pr \left\{ \tilde{\ell} \neq \ell^\star\right\}\nonumber\\
& \leq \Pr\left\{\cE_1 \cup \cE_2 \right\} + \Pr \left\{ \left(\cE_1\cup \cE_2\right)^c, \tilde{\ell} \neq \ell^\star\right\}\nonumber\\
& \leq \Pr\left\{\cE_1\right\} + \Pr\left\{\cE_2\right\} + \Pr \left\{\cE_1^c \cap \cE^c_2, \tilde{\ell} \neq \ell^\star\right\} \nonumber\\
\label{errorsideinf}
& \leq 2\Pr\left\{\cE_1\right\} + \Pr\left\{\cE_1^c \cap \cE_2\right\} +\Pr \left\{\cE_1^c \cap \cE^c_2, \tilde{\ell} \neq \ell^\star\right\},
\end{align}
where the first inequality follows from the setting of the protocol discussed above and all of the remaining inequalities up to \eqref{errorsideinf} follow from the union bound. In what follows we will now show that for $n$ large enough we have
$$
2\Pr\left\{\cE_1\right\} + \Pr\left\{\cE_1^c \cap \cE_2\right\} +\Pr \left\{\cE_1^c \cap \cE^c_2, \tilde{\ell} \neq \ell^\star\right\} \leq 6\eps+3\sqrt{\eps} + 3\eps^{\frac{1}{4}}+\frac{2\sqrt{\eps}}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)}+3\exp(-2^{n\gamma}), \nonumber
$$
where $\eps>0$ is arbitrarily close to zero such that $\left(\eps + \sqrt{\eps} + \eps^{\frac{1}{4}}\right) <1 $.
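For later use, we also record the two elementary estimates that will be invoked repeatedly below: for $0 \leq x \leq 1$ and $y \geq 0$ we have $(1-x)^y \leq e^{-xy}$, since $\ln(1-x) \leq -x$, and $e^{-xy} \leq 1-x+e^{-y}$, since the map $t \mapsto e^{-ty}$ is convex on $[0,1]$ and therefore lies below its chord, giving $e^{-xy} \leq (1-x)+xe^{-y} \leq 1-x+e^{-y}$.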
{\bf{Consider $\Pr \left\{\cE_1\right\}:$}}
\begin{align}
\Pr\left\{\cE_1\right\} & \overset{a} = \sum_{s^n \in \cS^n}p_{S^n}(s^n) \left(1- \Pr\left\{\mathbf{I}(k)=1 \mid S^n =s^n\right\}\right)^{2^{nR_S}}\nonumber\\
&\leq \sum_{s^n \in \cS^n}p_{S^n}(s^n) \left(1- \sum_{\substack{\ts^n:(\ts^n,s^n) \in \cT_n(p_{S^n\tS^n})\\ g_1(\ts^n) < \sqrt{\eps} \\ g_2(\ts^n) <\eps^{\frac{1}{4}} }}p_{\tS^n}(\ts^n)\Pr\left\{\mathbf{I}(k)=1 \mid S^n =s^n,\tS^n[k] =\ts^n\right\}\right)^{2^{nR_S}} \nonumber\\
& \overset{b} = \sum_{s^n \in \cS^n}p_{S^n}(s^n)\left(1- 2^{-n(\overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}]+\gamma)}\sum_{\substack{ \ts^n:(\ts^n,s^n) \in \cT_n(p_{S^n\tS^n})\\g_1(\ts^n) < \sqrt{\eps} \\ g_2(\ts^n) < \eps^{\frac{1}{4}}}}p_{\tS^n}(\ts^n)\frac{p_{S^n\tS^n}(s^n, \ts^n)}{p_{S^n}(s^n)p_{\tS^n}(\ts^n)}\right)^{2^{nR_S}} \nonumber\\
&\overset{c} \leq \sum_{s^n \in \cS^n}p_{S^n}(s^n)\exp\left(-2^{n(R_S-(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}] + \gamma))}\sum_{\substack{\ts^n: (\ts^n,s^n) \in \cT_n(p_{S^n\tS^n})\\ g_1(\ts^n) < \sqrt{\eps} \\ g_2(\ts^n) < \eps^{\frac{1}{4}}}}p_{\tS^n \mid S^n} (\ts^n \mid s^n) \right) \nonumber\\
&\overset{d} = \sum_{s^n \in \cS^n}p_{S^n}(s^n)\exp\left(-2^{n\gamma}\sum_{\substack{\ts^n:(\ts^n,s^n) \in \cT_n(p_{S^n\tS^n})\\g_1(\ts^n) < \sqrt{\eps} \\ g_2(\ts^n) < \eps^{\frac{1}{4}}}} p_{\tS^n \mid S^n} (\ts^n \mid s^n) \right) \nonumber\\
\label{markov}
& \overset{e} \leq \Pr\left\{\cT^c_n(p_{S^n\tS^n})\right\} + \Pr\left\{g_1(\tS^n) \geq \sqrt{\eps}\right\} + \Pr\left\{g_2(\tS^n) \geq \eps^{\frac{1}{4}}\right\} + \exp(-2^{n\gamma}) \nonumber\\
& \overset{f} \leq \eps + \exp(-2^{n\gamma}) + \Pr\left\{g_1(\tS^n) \geq \sqrt{\eps}\right\} + \Pr\left\{g_2(\tS^n) \geq \eps^{\frac{1}{4}}\right\},
\end{align}
where $a$ follows because $\mathbf{I}(1),\cdots,\mathbf{I}(2^{nR_S})$ are independent and identically distributed and $\tS^n[1], \cdots, \tS^n[2^{nR_S}]$ are independent and identically distributed according to the distribution $p_{\tS^n},$ $b$ follows from the definition of $\mathbf{I}(k),$ $c$ follows from the inequality $(1-x)^y \leq e^{-xy}~(0\leq x\leq1, y\geq 0),$ $d$ follows because $R_S=\overline{{\mathbf{I}}}[\bf{S};\bf{\tilde{S}}]+2\gamma,$ $e$ follows because $e^{-xy} \leq 1-x+e^{-y}~(0\leq x\leq1, y\geq 0)$ and the union bound and $f$ follows because $n$ is large enough such that $\Pr\left\{\cT^c_n(p_{S^n\tS^n})\right\} \leq \eps.$ Let us now bound each of the last two terms on the R.H.S. of \eqref{markov} as follows:
{\bf{Consider $\Pr\left\{g_1(\tS^n) \geq \sqrt{\eps}\right\}$}:}
\begin{align}
\Pr\left\{g_1(\tS^n) \geq \sqrt{\eps} \right\} & \overset{a} \leq \frac{\mathbb{E}[g_1(\tS^n)]}{\sqrt{\eps}} \nonumber\\
& \overset{b}= \frac{\sum_{\ts^n}\sum_{u^n:(u^n,\ts^n) \notin \cT_n(p_{U^n\tS^n})}p_{U^n \tS^n}(u^n,\ts^n)}{\sqrt{\eps}}\nonumber\\
& = \frac{\Pr\left\{(U^n,\tS^n) \notin \cT_n(p_{U^n\tS^n})\right\}}{\sqrt{\eps}}\nonumber\\
& \overset{c} \leq \frac{\eps}{\sqrt{\eps}} \nonumber\\
\label{g1m}
& =\sqrt{\eps},
\end{align}
where $a$ follows from the Markov inequality; $b$ follows from the definition of $g_{1}(\ts^n)$ and by taking the expectation over the random variable $\tS^n$ and $c$ follows under the assumption that $n$ is large enough such that $\Pr\left\{(U^n,\tS^n) \notin \cT_n(p_{U^n\tS^n})\right\} \leq \eps.$
{\bf{Consider $\Pr\left\{g_2(\tS^n) \geq {\eps}^{\frac{1}{4}}\right\}$}:}
\begin{align}
\Pr\left\{g_2(\tS^n) \geq {\eps}^{\frac{1}{4}} \right\} & \overset{a} \leq \frac{\mathbb{E}[g_2(\tS^n)]}{{\eps}^{\frac{1}{4}}} \nonumber\\
& \overset{b} =\frac{\sum_{\ts^n}\sum_{u^n:\tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \ts^n}\right] \leq 1-\sqrt{\eps} } p_{U^n\tS^n}(u^n,\ts^n)}{\eps^{\frac{1}{4}}} \nonumber\\
& = \frac{\Pr\left\{\tr \left[\Lambda_{U^n}\rho^{B^n}_{U^n, \tS^n}\right] \leq 1-\sqrt{\eps}\right\}}{\eps^{\frac{1}{4}}} \nonumber\\
&\overset{c} \leq \frac{\sqrt{\eps}}{\eps^{\frac{1}{4}}} \nonumber\\
\label{g2m}
& = \eps^{\frac{1}{4}},
\end{align}
where $a$ follows from the Markov inequality; $b$ follows from the definition of $g_{2}(\ts^n)$ and by taking the expectation over the random variable $\tS^n$ and $c$ follows because of the following set of inequalities:
\begin{align}
&\Pr\left\{\tr \left[\Lambda_{U^n}\rho^{B^n}_{U^n, \tS^n}\right] \leq1-\sqrt{\eps}\right\} \nonumber\\
&\overset{a}
\leq \frac{1- \mathbb{E} \tr\left[\Lambda_{U^n}\rho^{B^n}_{U^n,\tS^n}\right]}{\sqrt{\eps}}\nonumber\\
& \overset{b}=\frac{1- \mathbb{E} \tr\left[\tr_{U^n}\left[\Pi^{U^nB^n}\left(\ket{U^n}\bra{U^n}\otimes \mathbb{I}\right)\right]\rho^{B^n}_{U^n,\tS^n}\right]}{\sqrt{\eps}}\nonumber \\
& = \frac{1- \tr\left[\tr_{U^n}\left[\Pi^{U^nB^n}\sum_{u^n}p_{U^n}(u^n)\left(\ket{u^n}\bra{u^n}\otimes \sum_{\ts^n}p_{\tS^n \mid U^n}(\ts^n \mid u^n)\rho^{B^n}_{u^n,\ts^n}\right)\right]\right]}{\sqrt{\eps}}\nonumber \\
& \overset{c}= \frac{1- \tr \left[\Pi^{U^nB^n}\Theta^{U^nB^n}\right]}{\sqrt{\eps}}\nonumber\\
& \overset{d} \leq \sqrt{\eps},\nonumber
\end{align}
where $a$ follows from the Markov inequality, $b$ follows from the definition of $\Lambda_{U^n}$ mentioned in \eqref{lambda}, $c$ follows from the definition of $\Theta^{U^nB^n}$ mentioned in \eqref{theta} and $d$ follows under the assumption that $n$ is large enough such that $\tr \left[\Pi^{U^nB^n}\Theta^{U^nB^n}\right] \geq 1-\eps.$ Thus, it now follows from \eqref{markov}, \eqref{g1m} and \eqref{g2m} that
\beq
\label{e1}
\Pr\left\{\cE_1\right\} \leq \eps+ \sqrt{\eps}+ \eps^{\frac{1}{4}} + \exp(-2^{n\gamma}).
\enq
{\bf{Consider $\Pr\left\{\cE_1^c \cap \cE_2\right\}:$}}
\begin{align}
&\Pr\left\{\cE_1^c \cap \cE_2\right\} \nonumber \\
& = \mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1) \Pr \left\{\forall \ell \in \cC_n^{(A)}(1) : \mathbf{J}(k^\star,\ell)=0\right\} \nonumber\\
&\overset{a} = \mathbb{E}_{\cC_n^{(C)}S^n}\mathbf{I}(\cE^c_1) \left(1- \Pr\left\{\mathbf{J}(k^\star,\ell)=1\right\}\right)^{2^{nr}} \nonumber\\
&\overset{b} = \mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1) \left(1- \Pr\left\{\mathbf{I}(k^\star,\ell) =1, g(k^\star,\ell) > 1- \sqrt{\eps}\right\}\right)^{2^{nr}} \nonumber\\
& \leq \mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1) \left(1- \sum_{\substack{u^n: (\tS^n[k^\star], u^n) \in \cT_{n}(p_{U^n\tS^n})\\ \tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \tS^n[k^\star]}\right] >1-\sqrt{\eps}}}2^{-n(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}]+\gamma)}p_{U^n}(u^n)\frac{p_{\tS^nU^n}(\tS^n[k^\star],u^n)}{p_{\tS^n}(\tS^n[k^\star])p_{U^n}(u^n)} \right)^{2^{nr}}\nonumber\\
&\overset{c} \leq \mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1)\exp\left(-2^{n(r-(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}]+\gamma))} \sum_{\substack{u^n: (\tS^n[k^\star], u^n) \in \cT_{n}(p_{U^n\tS^n})\\ \tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \tS^n[k^\star]}\right] >1-\sqrt{\eps}}}p_{U^n \mid \tS^n}(u^n\mid \tS^n[k^\star])\right)\nonumber\\
& \overset{d}\leq \mathbb{E}_{\cC_n^{(C)} S^n} \mathbf{I}(\cE^c_1)\exp\left(-2^{n\gamma} \sum_{\substack{u^n: (\tS^n[k^\star], u^n) \in \cT_{n}(p_{U^n\tS^n})\\ \tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \tS^n[k^\star]}\right] >1-\sqrt{\eps}}}p_{U^n \mid \tS^n}(u^n\mid \tS^n[k^\star])\right)\nonumber\\
&\overset{e}\leq \mathbb{E}_{\cC_n^{(C)}S^n}\mathbf{I}(\cE^c_1)\left(1- \sum_{\substack{u^n: (\tS^n[k^\star], u^n) \in \cT_{n}(p_{U^n\tS^n})\\ \tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \tS^n[k^\star]}\right] >1-\sqrt{\eps}}}p_{U^n \mid \tS^n}(u^n\mid \tS^n[k^\star])\right)+
\mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1)\exp(-2^{n\gamma}) \nonumber\\
&\overset{f}\leq \mathbb{E}_{\cC_n^{(C)}S^n}\mathbf{I}(\cE^c_1) \sum_{u^n: (\tS^n[k^\star], u^n) \notin\cT_{n}(p_{U^n\tS^n})}p_{U^n \mid \tS^n}\left(u^n\mid \tS^n[k^\star]\right) + \mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1)\exp(-2^{n\gamma})\nonumber\\
&\hspace{5mm}+ \mathbb{E}_{\cC_n^{(C)} S^n}\mathbf{I}(\cE^c_1) \sum_{u^n: \tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \tS^n[k^\star]}\right] \leq1-\sqrt{\eps}}p_{U^n \mid \tS^n}\left(u^n\mid \tS^n[k^\star]\right)\nonumber\\
\label{n1}
&\overset{g}\leq \sqrt{\eps} + \eps^{\frac{1}{4}} + \exp(-2^{n\gamma}),
\end{align}
where $a$ follows because $\eta(k^\star,1), \cdots, \eta(k^\star,2^{nr})$ are independent and identically distributed and the fact that $U^n[1], \cdots,U^n[2^{nr}]$ are independent and identically distributed according to the distribution $p_{U^n}$, $b$ follows from the definition of $\mathbf{J}(k^\star,\ell)$; $c$ follows from the fact that $(1-x)^y \leq e^{-xy} (0\leq x \leq 1, y \geq 0)$, $d$ follows because $r = \overline{\mathbf{I}} [\bf{U};\bf{\tilde{S}}]+2\gamma,$ $e$ follows because $ e^{-xy} \leq 1- x + e^{-y} (0\leq x \leq 1, y \geq 0)$; $f$ follows because of the union bound and $g$ follows because if the event $\cE^c_1$ happens then $\sum_{u^n: (\tS^n[k^\star], u^n) \notin\cT_{n}(p_{U^n\tS^n})}p_{U^n \mid \tS^n}\left(u^n\mid \tS^n[k^\star]\right) < \sqrt{\eps}$ and $\sum_{u^n: \tr \left[\Lambda_{u^n}\rho^{B^n}_{u^n, \tS^n[k^\star]}\right] \leq1-\sqrt{\eps}}p_{U^n \mid \tS^n}\left(u^n\mid \tS^n[k^\star]\right) < \eps^{\frac{1}{4}}.$
{\bf{Consider {$\Pr \left\{\cE_1^c \cap \cE^c_2, \tilde{\ell} \neq \ell^\star\right\}$:}}}
\begin{align}
\Pr \left\{\cE_1^c \cap \cE^c_2, \tilde{\ell} \neq \ell^\star\right\} &=\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n} \left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\beta_n(\ell^\star)\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right]\right]\nonumber\\
& \leq 2 \mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right] \nonumber\\
\label{firster}
&\hspace{4mm}+ 4 \sum_{\ell^\prime \neq \ell^\star}\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\Lambda_{U^n(\ell^\prime)} \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right],
\end{align}
where the inequality above follows from the Hayashi-Nagaoka operator inequality \cite{Hayashi-noniid}.
In what follows we show that for $n$ large enough,
\begin{align}
\label{nontyp}
2\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right] \leq \frac{2\sqrt{\eps}}{\left(1- \eps - \sqrt{\eps} -\eps^{\frac{1}{4}}\right)},
\end{align}
and
\begin{align}
\label{othertyp}
4 \sum_{\ell^\prime \neq \ell^\star}\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\Lambda_{U^n(\ell^\prime)} \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right] \leq 4\eps.
\end{align}
We would like to highlight here that the proof pertaining to the derivation of \eqref{nontyp} is nontrivial and requires careful analysis of the probabilistic terms involved. In fact, the proof for the derivation of \eqref{othertyp} is nontrivial as well. However, we borrow the idea of \emph{over-counting} from \cite{radhakrishnan-sen-warsi-archive} to bound \eqref{othertyp}. The reason for all this nontriviality is that $k^\star$ and $\ell^\star$ are random.
{{$2\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right]$ is bounded as follows:}}
\begin{align}
&2\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\left[\mathbf{I}\{\cE_1^c\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right] \nonumber\\
& \hspace{-1mm}= 2\sum_{u^n,\tilde{s}^n,x^n,s^n} \Pr\bigg\{\mathbf{I}(k^\star)=1,\mathbf{J}(k^\star,\ell^\star)=1, U^n[\ell^\star]=u^n, \tilde{S}^n[k^\star] = \tilde{s}^n, \nonumber\\
&\hspace{30mm} X^n(U^n[\ell^\star] =u^n, \tS^n [k^\star] =\ts^n) = x^n, S^n=s^n\bigg\} \nonumber\\
&\hspace{30mm} \tr\left[\left(\mathbb{I}-\Lambda_{(U^n[\ell^\star] = u^n)} \right) \rho^{B^n}_{X^n(U^n[\ell^\star] = u^n,\tS^n[k^\star]=\tilde{s}^n),S^n = s^n}\right]\nonumber\\
&\hspace{-1mm} \leq2\sum_{\ts^n,s^n,u^n,x^n} \Pr\left\{\tS^n[k^\star]=\ts^n, S^n=s^n\mid \mathbf{I}(k^\star)=1\right\} \Pr\bigg\{ U^n[\ell^\star]=u^n\mid S^n = s^n, \tS^n[k^\star]= \ts^n, \nonumber\\
&\hspace{30mm}\mathbf{I}(k^\star) =1, \mathbf{J}(k^\star,\ell^\star)=1\bigg\}
\Pr\left\{X^n = x^n \mid U^n[\ell^\star]=u^n, \tS^n[k^\star]=\ts^n, S^n= s^n\right\} \nonumber\\
\label{ne}
&\hspace{30mm}\tr\left[\left(\mathbb{I}-\Lambda_{(U^n[\ell^\star]=u^n)}\right) \rho^{B^n}_{X^n(U^n[\ell^\star]=u^n,\tS^n[k^\star] = \tilde{s}^n) =x^n, S^n = s^n}\right],
\end{align}
where the above inequality follows because $X^n$ given $(U^n[\ell^\star]=u^n, \tS^n[k^\star]=\ts^n, S^n= s^n)$ is conditionally independent of $(\mathbf{I}(k^\star), \mathbf{J}(k^\star,\ell^\star)).$ To get the required bound on $2\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\left[\mathbf{I}\{\cE_1^c\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right],$ we will now first show that $\Pr\left\{\tS^n[k^\star]=\ts^n, S^n=s^n\mid \mathbf{I}(k^\star)=1\right\} \leq \frac{p_{\tS^n S^n}(\ts^n ,s^n)}{\left(1- \eps- \sqrt{\eps} -\eps^{\frac{1}{4}}\right)}.$ Towards this notice the following set of inequalities:
\begin{align}
&\Pr\left\{\tS^n[k^\star]=\ts^n, S^n=s^n \mid \mathbf{I}(k^\star)=1\right\}\nonumber\\
& = \sum_{k}\Pr\left\{k^\star= k \mid \mathbf{I}(k^\star)=1 \right\}\Pr\left\{\tS^n[k^\star]=\ts^n, S^n=s^n \mid \mathbf{I}(k^\star)=1, k^\star = k \right\} \nonumber\\
& = \sum_{k}\Pr\left\{k^\star= k \mid \mathbf{I}(k^\star)=1 \right\}\Pr\left\{\tS^n[k]=\ts^n, S^n=s^n \mid \mathbf{I}(k)=1, k^\star = k \right\} \nonumber\\
&\overset{a} = \sum_{k}\Pr\left\{k^\star= k \mid\mathbf{I}(k^\star)=1 \right\}\Pr\left\{\tS^n[k]=\ts^n, S^n=s^n \mid \mathbf{I}(k)=1 \right\} \nonumber\\
&= \sum_{k}\Pr\left\{k^\star= k \mid \mathbf{I}(k^\star)=1 \right\} \frac{\Pr \left\{\mathbf{I}(k)=1 \mid \tS^n[k] =\ts^n, S^n =s^n\right\}\Pr \left\{\tS^n[k] =\ts^n, S^n =s^n\right\}}{\Pr\left\{ \mathbf{I}(k)=1\right\}} \nonumber\\
& \overset{b} \leq \sum_{k}\Pr\left\{k^\star= k \mid \mathbf{I}(k^\star)=1\right\} \frac{\frac{2^{-n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}p_{\tS^nS^n}(\ts^n,s^n)}{p_{\tS^n}(\ts^n)p_{S^n}(s^n)}p_{\tS^n}(\ts^n) p_{S^n}(s^n)}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)2^{-n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}}\nonumber\\
\label{crit}
& = \frac{p_{\tS^n S^n}(\ts^n, s^n)}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)},
\end{align}
where $a$ follows because $(\tS^n[k], S^n)$ is conditionally independent of $k^\star$ given the indicator random variable $\mathbf{I}(k)$ and $b$ follows from the definition of $\mathbf{I}(k)$ and from the fact that $\tS^n[k]$ is independent of $S^n$ and because of the following set of inequalities:
\begin{align*}
\Pr\left\{\mathbf{I}(k) =1\right\} &\geq \Pr\left\{\zeta(k) =1\right\} - \Pr \left\{\zeta(k) = 1, g_{1}(\tS^n[k]) \geq \sqrt{\eps} \right\} - \Pr \left\{\zeta(k) = 1, g_{2}(\tS^n[k])\geq {\eps^{\frac{1}{4}}} \right\} \nonumber\\
&\overset{a} \geq \sum_{(\ts^n,s^n) \in \cT_n(p_{\tS^nS^n})} \Pr\left\{\tS^n[k]=\ts^n\right\}\Pr \left\{S^n = s^n\right\}\frac{p_{S^n\tS^n}(s^n,\ts^n)}{2^{n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}p_{\tS^n}(\ts^n)p_{S^n}(s^n)}\\
& \hspace{8mm}- \sum_{s^n, \ts^n : g_{1}(\ts^n) \geq \sqrt{\eps}}\Pr\left\{\tS^n[k]=\ts^n\right\}\Pr \left\{S^n = s^n\right\}\frac{p_{S^n\tS^n}(s^n,\ts^n)}{2^{n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}p_{\tS^n}(\ts^n)p_{S^n}(s^n)}\\
& \hspace{8mm}- \sum_{s^n,\ts^n : g_{2}(\ts^n) \geq \eps^{\frac{1}{4}}}\Pr\left\{\tS^n[k]=\ts^n\right\}\Pr \left\{S^n = s^n\right\}\frac{p_{S^n\tS^n}(s^n,\ts^n)}{2^{n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}p_{\tS^n}(\ts^n)p_{S^n}(s^n)}\\
&= 2^{-n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}\left(\Pr \left\{ \cT_n(p_{\tS^nS^n})\right\} - \Pr \left\{g_{1}(\tS^n) \geq \sqrt{\eps}\right\} -\Pr \left\{g_{2}(\tS^n) \geq {\eps}^{\frac{1}{4}}\right\} \right)\\
& \overset{b} \geq 2^{-n(\overline{\mathbf{I}} [\bf{S};\bf{\tilde{S}}]+\gamma)}\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right),
\end{align*}
where $a$ follows from the definition of $\mathbf{I}(k)$ and $b$ follows from the definition of the set $\cT_n(p_{\tS^nS^n})$ and under the assumption that $n$ is large enough such that $\Pr\left\{\cT_n(p_{\tS^nS^n})\right\} \geq 1- \eps$ and from \eqref{g1m} and \eqref{g2m}. Combining \eqref{ne} and \eqref{crit} we now bound $2\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\left[\mathbf{I}\{\cE_1^c\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right]$ as follows:
\begin{align}
&2\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\left[\mathbf{I}\{\cE_1^c\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\left(\mathbb{I}-\Lambda_{U^n[\ell^\star]}\right) \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right] \nonumber\\
&\hspace{-1mm} \overset{a}\leq2\sum_{\ts^n,s^n,u^n,x^n} \Pr\left\{\tS^n[k^\star]=\ts^n, S^n=s^n\mid \mathbf{I}(k^\star)=1\right\}\Pr \bigg\{ U^n[\ell^\star]=u^n\mid S^n =s^n, \tS^n[k^\star]= \ts^n, \nonumber\\
&\hspace{30mm}\mathbf{I}(k^\star) =1, \mathbf{J}(k^\star,\ell^\star)=1\bigg\}
\Pr\left\{X^n = x^n \mid U^n[\ell^\star]=u^n, \tS^n[k^\star]=\ts^n, S^n= s^n\right\} \nonumber\\
&\hspace{22mm} \tr\left[\left(\mathbb{I}-\Lambda_{(U^n[\ell^\star]=u^n)}\right) \rho^{B^n}_{X^n(U^n[\ell^\star]=u^n,\tS^n[k^\star] = \tilde{s}^n) =x^n, S^n = s^n}\right] \nonumber \\
& \hspace{-1mm}\overset{b} \leq \frac{2}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)}\sum_{\ts^n,u^n} p_{\tS^n}(\ts^n)\Pr \bigg\{ U^n[\ell^\star]=u^n\mid \tS^n[k^\star]= \ts^n, \mathbf{J}(k^\star,\ell^\star)=1\bigg\}\nonumber\\
&\hspace{10mm} \tr\left[\left(\mathbb{I}-\Lambda_{(U^n[\ell^\star]=u^n)}\right) \sum_{x^n, s^n}p_{S^n \mid \tS^n} (s^n \mid \ts^n)p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n)\rho^{B^n}_{X^n(U^n[\ell^\star]=u^n,\tS^n[k^\star] = \tilde{s}^n) =x^n, S^n = s^n}\right] \nonumber\\
&\hspace{-1mm} \overset{c}= \frac{2}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)}\sum_{\ts^n,u^n} p_{\tS^n}(\ts^n)\Pr \bigg\{ U^n[\ell^\star]=u^n\mid \tS^n[k^\star]= \ts^n, \mathbf{J}(k^\star,\ell^\star)=1\bigg\} \nonumber\\
& \hspace{45mm} \tr\left[\left(\mathbb{I}-\Lambda_{(U^n[\ell^\star]=u^n)}\right) \rho^{B^n}_{(U^n[\ell^\star]=u^n,\tS^n[k^\star] = \tilde{s}^n)}\right] \nonumber\\
& \hspace{-1mm} \overset{d} \leq \frac{2\sqrt{\eps}}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)} \sum_{\ts^n,u^n} p_{\tS^n}(\ts^n)\Pr \bigg\{ U^n[\ell^\star]=u^n\mid \tS^n[k^\star]= \ts^n, \mathbf{J}(k^\star,\ell^\star)=1\bigg\}\nonumber\\
\label {333}
&\hspace{-1mm} = \frac{2\sqrt{\eps}}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)},
\end{align}
where $a$ follows from \eqref{ne}; $b$ follows from \eqref{crit} and from the fact that given $(U^n[\ell^\star]=u^n, \tS^n[k^\star]=\ts^n)$, $(k^\star,\ell^\star)$ is deterministic, thus, $X^n \mid \left\{U^n[\ell^\star]=u^n, \tS^n[k^\star]=\ts^n, S^n= s^n\right\} \sim p_{X^n \mid U^n\tS^nS^n}(x^n \mid u^n, \ts^n,s^n) = p_{X^n \mid U^n\tS^n}(x^n \mid u^n, \ts^n)$ and $U^n[\ell^\star]$ is conditionally independent of $(S^n, \mathbf{I}(k^\star))$ given $(\tS^n[k^\star]=\ts^n, \mathbf{J}(k^\star,\ell^\star)=1 )$; $c$ follows from the definition of $\rho^{B^n}_{u^n,\ts^n}$ mentioned in \eqref{rhofus} and $d$ follows because for $\mathbf{J}(k^\star,\ell^\star) =1$, we have $ \tr\left[\left(\mathbb{I}-\Lambda_{(U^n[\ell^\star]=u^n)}\right) \rho^{B^n}_{(U^n[\ell^\star]=u^n,\tS^n[k^\star] = \tilde{s}^n)}\right]< \sqrt{\eps}.$
{{$4 \sum_{\ell^\prime \neq \ell^\star}\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)}X^nS^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\Lambda_{U^n(\ell^\prime)} \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right]$ is bounded as follows:}}
\setlength{\belowdisplayskip}{0pt}
\begin{alignat}{3}
&4 \sum_{\ell^\prime \neq \ell^\star}\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\left[\mathbf{I}\left\{\cE_1^c\right\}\mathbf{I}\left\{\cE^c_2\right\} \tr\left[\Lambda_{U^n[\ell^\prime]} \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \right]\nonumber\\
&\hspace{-1mm}\overset{a} =4 \sum_{\ell^\prime \neq \ell^\star}\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\sum_{k,\ell}\mathbf{I}\left\{k^\star = k\right\}\mathbf{I}\left\{k^\star=k, \ell^\star = \ell\right\}\tr\left[\Lambda_{U^n[\ell^\prime]} \rho^{B^n}_{X^n(U^n[\ell^\star],\tilde{S}^n[k^{\star}]),S^n}\right] \nonumber\\
&\hspace{-1mm}\overset{b}\leq4 \sum_{k, \ell, \ell^\prime \neq \ell}\mathbb{E}_{\cC_n^{(A)}\cC_n^{(C)} X^n S^n}\mathbf{I}(k)\mathbf{J}(k,\ell)\tr\left[\Lambda_{U^n[\ell^\prime]} \rho^{B^n}_{X^n(U^n[\ell],\tilde{S}^n[k]),S^n}\right] \nonumber\\
&\hspace{-1mm} =4 \sum_{k,\ell^\prime \neq \ell}\sum_{u^{\prime n}, u^n,\ts^n,x^n,s^n} \Pr\bigg\{U^n[\ell^\prime] = u^{\prime n}, U^n[\ell] = u^n, S^n=s^n, \tS^n[k] =\ts^n, X^n(U^n[\ell],\tS^n[k]) = x^n, \nonumber\\
&\hspace{45mm} \mathbf{I}(k)=1, \mathbf{J}(k,\ell)=1\bigg\} \tr\left[\Lambda_{u^{\prime n}} \rho^{B^n}_{X^n(U^n[\ell]=u^n,\tS^n[k] = \tilde{s}^n) =x^n, S^n = s^n}\right] \nonumber\\
&\hspace{-1mm} =4 \sum_{k, \ell^\prime \neq \ell}\sum_{u^{\prime n}, u^n,\ts^n, x^n, s^n}\hspace{-6mm} p_{U^n}(u^{\prime n})p_{U^n}(u^n)p_{\tilde{S}^n}(\tilde{s}^n)p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n)p_{S^n}(s^n)\Pr\left\{\mathbf{I}(k)=1 \mid S^n =s^n, \tilde{S}^n[k] = \tilde{s}^n\right\} \nonumber\\
&\hspace{30mm}\Pr\left\{\mathbf{J}(k, \ell)=1 \mid \mathbf{I}(k)=1, S^n =s^n, U^n[\ell] = u^n,\tilde{S}^n[k] = \tilde{s}^n\right\} \nonumber\\
&\hspace{30mm}\tr\left[\Lambda_{u^{\prime n}} \rho^{B^n}_{X^n(U^n[\ell]=u^n,\tS^n[k] = \tilde{s}^n) =x^n, S^n = s^n}\right] \nonumber\\
&\hspace{-1mm}\overset{c}\leq4 \sum_{k, \ell, \ell^\prime \neq \ell} \sum_{u^{\prime n}, u^n,\ts^n, x^n, s^n} p_{U^n}(u^{\prime n})p_{U^n}(u^n)p_{\tilde{S}^n}(\tilde{s}^n)p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n)p_{S^n}(s^n)2^{-n(\overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}]+\gamma)}\frac{p_{S^n\tilde{S}^n}(s^n,\tilde{s}^n)}{p_{S^n}(s^n)p_{\tilde{S}^n}(\tilde{s}^n)} \nonumber\\
&\hspace{35mm} 2^{-n(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}]+\gamma)}\frac{p_{U^n\tilde{S}^n}(u^n,\tilde{s}^n)}{p_{U^n}(u^n)p_{\tilde{S}^n}(\tilde{s}^n)}
\tr\left[\Lambda_{u^{\prime n}} \rho^{B^n}_{X^n(U^n[\ell]=u^n,\tS^n[k] = \tilde{s}^n) =x^n, S^n = s^n}\right] \nonumber\\%\tr
&\hspace{-1mm} =4 \times 2^{-n\left(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}] + \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}] +2\gamma\right)}\sum_{k, \ell, \ell^\prime \neq \ell} \sum_{u^n,\ts^n}p_{U^n\tS^n}(u^n,\ts^n)\sum_{u^{\prime n}}p_{U^n}(u^{\prime n}) \nonumber\\
& \hspace{45mm} \tr\left[\Lambda_{u^{\prime n}}\sum_{s^n,x^n}p_{S^n \mid \tS^n}(s^n \mid \ts^n)p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n) \rho^{B^n}_{X^n(U^n[\ell]=u^n,\tS^n[k] = \tilde{s}^n) =x^n, S^n = s^n}\right] \nonumber\\
&\hspace{-1mm}\overset{d}= 4 \times 2^{-n\left(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}] + \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}] + 2\gamma \right)}\sum_{k, \ell, \ell^\prime \neq \ell} \sum_{u^n,\ts^n}p_{U^n\tS^n}(u^n,\ts^n)
\sum_{u^{\prime n}}p_{U^n}(u^{\prime n})\tr\left[\Lambda_{u^{\prime n}} \rho^{B^n}_{u^n, \ts^n}\right] \nonumber\\
&\hspace{-1mm}\overset{e}= 4 \times 2^{-n\left(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}] + \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}] + 2 \gamma \right)}\sum_{k, \ell, \ell^\prime \neq \ell} \sum_{u^n,\ts^n}p_{U^n\tS^n}(u^n,\ts^n)\sum_{u^{\prime n}}p_{U^n}(u^{\prime n})\tr\left[\tr_{U^n}\left[\Pi^{U^nB^n}\left(\ket{u^{\prime n}}\bra{u^{\prime n}}\otimes \mathbb{I}\right)\right] \rho^{B^n}_{u^n, \ts^n}\right] \nonumber\\
&\hspace{-1mm}= 4 \times 2^{-n\left(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}] + \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}] + 2\gamma \right)}\sum_{k, \ell, \ell^\prime \neq \ell} \sum_{u^n,\ts^n}p_{U^n\tS^n}(u^n,\ts^n)\sum_{u^{\prime n}}p_{U^n}(u^{\prime n})\tr\left[\tr_{U^n}\left[\Pi^{U^nB^n}\left(\ket{u^{\prime n}}\bra{u^{\prime n}}\otimes \rho^{B^n}_{u^n, \ts^n}\right)\right] \right] \nonumber\\
&\hspace{-1mm}= 4 \times 2^{-n\left(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}] + \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}]+2\gamma\right)}\nonumber\\
&\hspace{3mm}\sum_{k, \ell, \ell^\prime \neq \ell} \tr\bigg[\tr_{U^n}\bigg[\Pi^{U^nB^n}\bigg(\sum_{u^{\prime n}}p_{U^n}(u^{\prime n})\ket{u^{\prime n}}\bra{u^{\prime n}}\otimes \sum_{(u^n,\ts^n)}p_{U^n}(u^n)p_{\tS^n \mid U^n}(\ts^n \mid u^n)\rho^{B^n}_{u^n, \ts^n}\bigg)\bigg] \bigg] \nonumber\\
&\hspace{-1mm} = 4 \times 2^{-n\left(\overline{\mathbf{I}}[\bf{U};\bf{\tilde{S}}] + \overline{\mathbf{I}}[\bf{S};\bf{\tilde{S}}] +2\gamma\right)}\sum_{k, \ell, \ell^\prime \neq \ell} \tr\left[\Pi^{U^nB^n}\left(\Theta^{U^n }\otimes \Theta^{B^n}\right)\right] \nonumber\\
&\hspace{-1mm}\overset{f}\leq 4 \times 2^{-n\left(\overline{{\mathbf{I}}}[\bf{U};\bf{\tilde{S}}] + \overline{{\mathbf{I}}}[\bf{S};\bf{\tilde{S}}] + \underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}] + \gamma\right) }2^{n(R+2r+R_S)} \nonumber\\
&\hspace{-1mm} \overset{g} = 4 \times 2^{-n\gamma}, \nonumber\\
\label{444}
& \hspace{-1mm} \overset{h} \leq 4 \eps,
\end{alignat}
where $a$ follows from the union bound; $b$ follows because $\mathbf{I}\left\{k^\star = k\right\} =\mathbf{I}\left\{k^\star = k\right\}\mathbf{I}(k) \leq \mathbf{I}(k)$ and $\mathbf{I}\left\{k^\star = k, \ell^\star = \ell \right\} =\mathbf{I}\left\{k^\star = k, \ell^\star= \ell \right\}\mathbf{J}(k,\ell) \leq \mathbf{J}(k,\ell)$, $c$ follows from the definition of $\mathbf{I}(k)$ and $\mathbf{J}(k,\ell)$, $d$ follows from the definition of $\rho^{B^n}_{u^n,\ts^n}$ mentioned in \eqref{rhofus}, $e$ follows from the definition of $\Lambda_{u^{\prime n}}$
mentioned in \eqref{lambda}, $f$ follows from the definition of $\Pi^{U^nB^n}$ mentioned in \eqref{pi} and because $k$ ranges over $[1:2^{nR_S}],$ $\ell$ ranges over $[1:2^{nr}]$ and $\ell^\prime$ ranges over $[1:2^{n(R+r)}]$ and $g$ follows because of our choice of $R,r$ and $R_S$ and $h$ follows under the assumption that $n$ is large enough such that $2^{-n\gamma} \leq \eps.$ Thus, it now follows from \eqref{errorsideinf}, \eqref{e1}, \eqref{n1}, \eqref{firster}, \eqref{333} and \eqref{444} that
\beq
\Pr\left\{\tilde{m} \neq 1\right\} \leq 6\eps+3\sqrt{\eps} + 3\eps^{\frac{1}{4}}+\frac{2\sqrt{\eps}}{\left(1- \eps -\sqrt{\eps} -\eps^{\frac{1}{4}}\right)}+3\exp(-2^{n\gamma}). \nonumber
\enq
This completes the proof for achievability.
\subsection{Converse}
Suppose $(R,R_S)$ is an achievable rate pair. It then follows from Definition \ref{ach} that there exists an $(n,M_n,M_{e,n}, \eps_n)$ code such that $R \leq \liminf_{n\to \infty} \frac{\log M_n}{n}$ and $R_S \geq \limsup_{n\to \infty} \frac{\log M_{e,n}}{n}.$ Let
\beq \label {conts}\tS^n = f_{e,n}(S^n),\enq where $f_{e,n}$ is defined in Definition \ref{code}. Also, let $U^n$ represent an arbitrary random variable denoting the uniform choice of a message in $[1:M_n]$.
Notice that the message random variable is independent of $S^n.$ Hence, from the definition of $U^n$ and $\tS^n$ it now follows that $U^n$ and $\tS^n$ are independent of each other. Thus, \beq\label{ens}\overline{\mathbf{I}}[{\bf{U}};{\bf{\tS}}] = 0.\enq
Further, notice that in the setting of the problem and for the choice of $U^n$ and $\tS^n$ fixed above, the following classical-quantum state is induced:
\begin{align}
&\sigma^{S^n\tS^nU^nX^nB^n}= \sum_{(s^n,\ts^n,u^n,x^n)}p_{S^n}(s^n)p_{\tS^n \mid S^n}(\ts^n \mid s^n)p_{U^n} (u^n)p_{X^n \mid \tS^n U^n}(x^n \mid \ts^n,u^n)\ket{s^n}\bra{s^n}^{S^n} \nonumber \\
&\hspace{45mm}\otimes \ket{\ts^n}\bra{\ts^n}^{\tS^n} \otimes \ket{u^n}\bra{u^n}^{U^n}\otimes \ket{x^n}\bra{x^n}^{X^n} \otimes \rho^{B^n}_{x^n,s^n} .
\end{align}
We will now first prove the lower bound on $R_S$. Towards this notice that from \eqref{conts} and from the definition of $f_{e,n}$ it follows that the cardinality of the set over which the random variable $\tS^n$ takes values cannot be larger than $M_{e,n}$. Thus, it now follows from \cite[Lemma 2.6.2]{han-book} that
\beq
\label{con1}
\Pr\left\{\frac{1}{n}\log \frac{1}{p_{\tS^n}} \geq \frac{1}{n} \log M_{e,n}+ \gamma \right\} \leq 2^{-n\gamma},
\enq
where $\gamma >0$ is an arbitrary constant. Furthermore, since $\frac{1}{n}\log \frac{p_{\tS^n \mid S^n}}{p_{\tS^n}} \leq \frac{1}{n}\log \frac{1}{p_{\tS^n}}$ (because $p_{\tS^n \mid S^n} \leq 1$), it now follows from \eqref{con1} that
\beq
\label{conv2}
\Pr\left\{\frac{1}{n}\log \frac{p_{\tS^n \mid S^n}}{p_{\tS^n}} \geq \frac{1}{n} \log M_{e,n}+ \gamma \right\} \leq 2^{-n\gamma}.
\enq
Thus, from Definition \ref{limsup} and \eqref{conv2} it now follows that there exists an $n_0$ such that for every $n >n_0,$ we have
\begin{align*}
\overline{\mathbf{I}} [\mathbf{S};\mathbf{\tS}] &\leq \frac{1}{n} \log M_{e,n} + \gamma \\
& \leq \limsup _{n \to \infty}\frac{1}{n} \log M_{e,n} + 2\gamma\\
& \leq R_S + 2\gamma,
\end{align*}
where the last inequality follows from the definition of $R_S.$
We now prove the upper bound on $R.$ Let $\rho^{B^n}_{u^n}= \sum_{(s^n,\ts^n,x^n)}p_{S^n }(s^n)p_{\tS^n \mid S^n}(\ts^n \mid s^n)p_{X^n \mid U^n\tS^n}(x^n \mid u^n,\ts^n) \rho^{B^n}_{x^n,s^n}$ and $\rho^{B^n} = \mathbb{E}\left[\rho^{B^n}_{U^n}\right],$ where $U^n$ as mentioned above is uniformly distributed over the set $[1:M_n]$ and $\tS^n$ is as defined in \eqref{conts}. Fix $\gamma>0$. It now follows from Definition \ref{code} and \cite[Lemma $4$]{hayshi-nagaoka-2002} that $\eps_n$ satisfies the following bound,
\beq
\label{conerr}
\eps_n \geq \sum_{u^n\in \cU^n} p_{U^n}(u^n)\tr \left[\rho^{B^n}_{u^n} \left\{\rho^{B^n}_{u^n} \preceq 2^{n\left(\frac{1}{n}\log M_n -\gamma\right)} \rho^{B^n}\right\}\right] - 2^{-n\gamma}.
\enq
Letting $\frac{1}{n}\log M_n = \underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}]+ 2\gamma$ in \eqref{conerr} it holds that
\beq
\eps_n \geq \sum_{u^n\in \cU^n} p_{U^n}(u^n)\tr \left[\rho^{B^n}_{u^n} \left\{\rho^{B^n}_{u^n} \preceq 2^{n\left(\underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}]+ \gamma\right)} \rho^{B^n}\right\}\right] - 2^{-n\gamma}.
\enq
However, while $2^{-n\gamma} \to 0$ as $n \to \infty$, the definition of $\underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}]$ implies the existence of $\eps_0 > 0$ and infinitely many $n$'s satisfying
$\sum_{u^n\in \cU^n} p_{U^n}(u^n)\tr \left[\rho^{B^n}_{u^n} \left\{\rho^{B^n}_{u^n} \prec 2^{n\left(\frac{1}{n}\log M_n -\gamma\right)} \rho^{B^n}\right\}\right]> \eps_0.$ This further implies that $R \leq \liminf_{n\to \infty} \frac{\log M_n}{n}$ must satisfy the following for arbitrarily small probability of error:
\begin{align}
R & \leq \underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}] + 2 \gamma \nonumber\\
& \leq \underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}] - \overline{{\mathbf{I}}}[{\bf{U}};{\bf{\tS}}] + 2 \gamma \nonumber\\
& \leq \sup_{\{\omega_n\}_{n=1}^{\infty}}\left(\underline{{\mathbf{I}}}[{\bf{U}};{\bf{B}}] - \overline{{\mathbf{I}}}[{\bf{U}};{\bf{\tS}}]\right) + 2\gamma,\nonumber
\end{align}
where the second inequality follows from \eqref{ens} and in the last inequality the supremum is taken over all sequences of classical-quantum states having the following form for every $n$,
\begin{align*}
\omega_n& := \sum_{(s^n,\ts^n,u^n,x^n)}p_{S^n}(s^n)p_{\tS^n \mid S^n}(\ts^n \mid s^n)p_{U^n \mid \tS^n} (u^n \mid \ts^n)p_{X^n \mid \tS^n U^n}(x^n \mid \ts^n,u^n)\ket{s^n}\bra{s^n}^{S^n} \otimes \ket{\ts^n}\bra{\ts^n}^{\tS^n} \nonumber \\
&\hspace{25mm}\otimes \ket{u^n}\bra{u^n}^{U^n}\otimes \ket{x^n}\bra{x^n}^{X^n} \otimes \rho^{B^n}_{x^n,s^n} .
\end{align*}
This completes the proof for the converse.
\section{Conclusion and Acknowledgement}
We extended the result of Heegard and El Gamal \cite{heegard-gamal-1983} to the quantum case in the information-spectrum setting. The proof in \cite{heegard-gamal-1983} is based on the covering lemma \cite[Lemma 3.3]{Gamal-Kim-book}, conditional typicality lemma \cite{Gamal-Kim-book} and Markov lemma \cite[Lemma 12.1]{Gamal-Kim-book}. We have shown in this paper that a quantum information-spectrum generalization of the result of Heegard and El Gamal \cite{heegard-gamal-1983} can be derived even in the absence of these powerful lemmas. A natural open question which arises from the problem studied in this manuscript is to study the problem of communication when side information is available at the decoder end. This study, along with the techniques presented in this paper, would then lead to establishing the capacity region when the side information is available both at the encoder and decoder end.
This work was supported by the Engineering and Physical Sciences Research Council (Grant No. EP/M013243/1).
\bibliographystyle{ieeetr}
\bibliography{master}
\end{document} | {"config": "arxiv", "file": "1512.03878/Arxiv-Submission.tex"} |
TITLE: Meaning of the notation $\sigma_{ji,j}$
QUESTION [0 upvotes]: On page 28 of the book Introduction to Linear Elasticity, 4ed by Phillip L. Gould · Yuan Feng, it says
$$
\int_V{\left( f_i+\sigma _{ji,j} \right) \text{d}V=0}
$$
What is meant by writing $\sigma _{ji,j}$?
Also in equation $(2.32)$,
$$
\int_V{G_{i,i}\text{d}V=\int_A{n_iG_i\text{d}A}}
$$
What does $G_{i,i}$ mean?
REPLY [1 votes]: As already mentioned, Einstein's summation convention is a notation shortcut in which repeated indices are summed over. The comma in the subscripts of the physical quantities you have mentioned just denotes partial differentiation with respect to the $i^{\text{th}}$ coordinate of whatever coordinate system you are working in. The only resources in which I have seen this comma notation used are GR books and articles, but it is obvious that this is also the case here.
Thus, in your case, $G_{i,i}= \dfrac{\partial G_i}{\partial x^i} \equiv \sum_i \dfrac{\partial G_i}{\partial x^i} \equiv \nabla \cdot G,$
the divergence of your vector field $G$. Likewise, $\sigma_{ji,j} \equiv \sum_j \dfrac{\partial \sigma_{ji}}{\partial x^j}$ is the divergence of the stress tensor, so your first equation is just the integral form of the equilibrium condition $f_i + \sigma_{ji,j} = 0$. | {"set_name": "stack_exchange", "score": 0, "question_id": 599549}
TITLE: Maximum number of elements in the range of the function $ f_{2012} $
QUESTION [0 upvotes]: Consider a family of functions $ f_m : N \cup \{0\} \to N \cup \{0\} $ that satisfy the relations:
$ f_m(a+b) = f_m(f_m(a)) + f_m(b) $,
$ f_m(km) = 0 $
Here $m$ is any positive integer apart from $1$. How can I find the maximum number of elements in the range of the function $ f_{2012} $?
REPLY [1 votes]: On the face of it this is trivial, so you may have an error in the question.
If the sum of two non-negative numbers is zero then both must be zero. But
$$0 = f_{2012}(k\,2012) = f_{2012}(k\,2011 +k) = f_{2012}(f_{2012}(k\,2011))+f_{2012}(k)$$ so both $f_{2012}(f_{2012}(k\,2011))$ and $f_{2012}(k)$ must be zero for all $k$. In particular $f_{2012}$ is identically zero, so its range is $\{0\}$ and the maximum number of elements in the range is $1$. | {"set_name": "stack_exchange", "score": 0, "question_id": 563601}
TITLE: Distribution of a continuous random variable
QUESTION [0 upvotes]: I was reading through my"Random Variability in Business Situations" notes and wanted to enquire about some difficulty I've encountered.
On the fourth and final column "probability calculated from model X~N (53, 2^2)" how are the values determined? What steps are taken? Because from "X is greater then/equal to 47 and less than 48" there is a 0.0003 difference between the last two columns (0.0048 and 0.0045). I am very interested to understand why this is happening.
The mean weight is 53g and the standard deviation is 2g. Please refer to the image below for reference.
REPLY [3 votes]: In the third column is simply the number f/5000, where f is the number given in the second column.
In the fourth column, we assume X is normally distributed with mean 53 and standard deviation 2. If you want to use a standard normal table to look up the probability of, say, $47\leq X < 48$, you would "normalize" the variable, by subtracting off the mean and dividing by the standard deviation. That is,
$$P(47\leq X < 48) = P\left(\frac{47-53}{2}\leq \frac{X-53}{2} < \frac{48-53}{2}\right) = P\left(-3\leq \frac{X-53}{2} < -\frac{5}{2}\right)$$
Now, the variable $Z = \frac{X-53}{2}$ is a standard normal random variable, so you can use the standard normal table to look up this probability, or you can use Wolfram alpha.
Go to Wolfram alpha and type in the phrase
"probability standard normal between -3 and -5/2"
You will see that it is equal to 0.004859. However, this doesn't explain why they have put 0.0045 in the last column. I suspect they used normal tables to get an approximation, and the number given by Wolfram alpha is more accurate than theirs. As long as you understand what I've described above, I wouldn't worry about it.
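If you want to double-check numerically without tables, here is a quick Python computation (this assumes you have scipy available; any package with a normal CDF would do the same job):
from scipy.stats import norm

mean, sd = 53, 2  # X ~ N(53, 2^2)
p = norm.cdf(48, mean, sd) - norm.cdf(47, mean, sd)  # P(47 <= X < 48)
print(f"{p:.6f}")  # 0.004860, agreeing with the Wolfram alpha value above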
REPLY [1 votes]: I'm French, so that may explain my bad English.
The probability density function is determined by the mean and the variance:
$p(x)\ =\ \tfrac{1}{\sigma \sqrt{2\pi}}\ \mathrm{e}^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
The probability that $X$ is greater than or equal to $47$ and less than $48$ is the integral of the density function between these bounds. | {"set_name": "stack_exchange", "score": 0, "question_id": 116645}
\begin{document}
\title[On Leinster groups of order $pqrs$]{On Leinster groups of order $pqrs$}
\author[S. J. Baishya ]{Sekhar Jyoti Baishya}
\address{S. J. Baishya, Department of Mathematics, Pandit Deendayal Upadhyaya Adarsha Mahavidyalaya, Behali, Biswanath-784184, Assam, India.}
\email{sekharnehu@yahoo.com}
\begin{abstract}
A finite group is said to be a Leinster group if the sum of the orders of its normal subgroups equals twice the order of the group itself. Let $p<q<r<s$ be primes. We prove that if $G$ is a Leinster group of order $p^2qr$, then $G \cong Q_{20}\times C_{19}$ or $Q_{28} \times C_{13}$. We also prove that no group of order $pqrs$ is Leinster.
\end{abstract}
\subjclass[2010]{11A25, 20D60, 20E99}
\keywords{finite groups, perfect numbers, Leinster groups}
\maketitle
\section{Introduction and Preliminary results} \label{S:intro}
A number is perfect if the sum of its divisors equals twice the number itself. In 2001, T. Leinster \cite{leinster} developed and studied a group theoretic analogue of perfect numbers. A finite group is said to be a perfect group (not to be confused with the one which is equal to its commutator subgroup) or an immaculate group or a Leinster group if the sum of the orders of its normal subgroups equals twice the order of the group itself. The Leinster groups have a beautiful connection with the perfect numbers: obviously in the case of cyclic groups, and less obviously in the case of dihedral groups. Clearly, a finite cyclic group $C_n$ is Leinster if and only if its order $n$ is a perfect number. In fact, the nilpotent Leinster groups are precisely the finite cyclic groups whose orders are perfect numbers. On the other hand, the Leinster dihedral groups are in one to one correspondence with odd perfect numbers. It may be mentioned here that it is still not known whether there are infinitely many Leinster groups or not. Another interesting fact is that up to now, only one odd order Leinster group is known, namely $(C_{127} \rtimes C_7) \times C_{3^4.{11}^2.{19}^2.113}$. It was discovered by F. Brunault, just one day after the question on the existence of odd order Leinster groups was asked by Tom Leinster on MathOverflow \cite{brunault}. More information on this and the related concepts can be found in the works of S. J. Baishya \cite{baishya1}, S. J. Baishya and A. K. Das \cite{baishya2}, A. K. Das \cite{das}, M. T$\check{\rm a}$rn$\check{\rm a}$uceanu \cite{t1, t2}, T. D. Medts and A. Mar$\acute{\rm o}$ti \cite{maroti}, etc.
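To make the correspondence concrete: since the normal subgroups of $C_n$ correspond to the divisors of $n$, we have, for example, $\sigma(C_6)=1+2+3+6=12=2 \times 6$, so that $C_6$ is Leinster. Similarly, for odd $n$, the normal subgroups of the dihedral group $D_{2n}$ of order $2n$ are precisely the subgroups of its rotation subgroup $C_n$ together with $D_{2n}$ itself, whence the sum of the orders of the normal subgroups of $D_{2n}$ equals $2n$ plus the sum of the divisors of $n$, and $D_{2n}$ is Leinster if and only if $n$ is an odd perfect number.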
Given a finite group $G$, $\sigma(G), \tau(G), G'$ and $Z(G)$ denote the sum of the orders of the normal subgroups, the number of normal subgroups, the commutator subgroup, and the center of $G$ respectively. For any prime $l$, we have used the symbol $T_l$ to denote an $l$-Sylow subgroup of the group $G$.
The following theorems will be used repeatedly to obtain our results:
\begin{thm}{(Theorem 4.1 \cite{leinster})} \label{aqt}
If $G$ is a group with $\sigma(G) \leq 2\mid G \mid$, then any abelian quotient of $G$ is cyclic.
\end{thm}
\begin{thm}{(Corollary 4.2 \cite{leinster})} \label{alg}
The abelian Leinster groups are precisely the cyclic groups $C_n$ of order $n$ with $n$ perfect.
\end{thm}
\begin{thm}{(Proposition 3.1, Corollary 3.2 \cite{leinster})} \label{mul}
If $G_1$ and $G_2$ be two groups with $(\mid G_1 \mid , \mid G_2 \mid)=1$, then $\tau(G_1 \times G_2)=\tau(G_1)\tau(G_2)$ and $\sigma(G_1 \times G_2)=\sigma(G_1)\sigma(G_2)$.
\end{thm}
\begin{thm} {(Proposition 3.1, Theorems 3.4, 3.5, 3.7 \cite{baishya1})} \label{sjb}
If $G (\neq C_7 \rtimes C_8)$ is a Leinster group of order $pqrs$, $p, q, r, s$ being primes (not necessarily distinct), then $\tau(G)>7$.
\end{thm}
\begin{thm} {(Lemma 4, p. 303 \cite{zumud}}) \label{zumud1}
If a finite group $G$ has an abelian normal subgroup of prime index $p$, then $\mid G \mid=p \mid G' \mid \mid Z(G) \mid$.
\end{thm}
\begin{thm} {(Theorem15 \cite{sqf}}) \label{sqfree}
Let $G$ be a finite group, and let $p$ be the smallest prime divisor of $\mid G \mid$. Let $Q$ be a $p$-Sylow subgroup of $G$. If $Q$ is cyclic, then $Q$ has a normal complement in $G$.
\end{thm}
\section{ Leinster groups of order $p^2qr$}
It is easy to verify that $Q_{20}\times C_{19}$ and $Q_{28} \times C_{13}$ are Leinster groups. In this connection, a natural question arises: is there any other Leinster group whose order is of the form $p^2qr$, $p<q<r$ being primes? We answer this question and show that $Q_{20}\times C_{19}$ and $Q_{28} \times C_{13}$ are the only Leinster groups whose orders are of the form $p^2qr$.
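Indeed, the normal subgroups of $Q_{20}$ have orders $1, 2, 5, 10$ and $20$, so that $\sigma(Q_{20})=38$ and hence, by Theorem \ref{mul}, $\sigma(Q_{20}\times C_{19})=38 \times 20=760=2\mid Q_{20}\times C_{19}\mid$; similarly, $\sigma(Q_{28})=1+2+7+14+28=52$ and $\sigma(Q_{28}\times C_{13})=52 \times 14=728=2\mid Q_{28}\times C_{13}\mid$.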
We begin with the following elementary remark:
\begin{rem}\label{rem125}
If $G$ is a Leinster group, then the number of odd order normal subgroups of $G$ must be even. This is because $\sigma(G)=2\mid G \mid$ is even, while every even order normal subgroup contributes an even summand to $\sigma(G)$. Hence, for any odd order Leinster group $G$, $\tau(G)$ is even.
\end{rem}
The following Lemmas will be used to get the main result of this section.
\begin{lem}\label{506}
If $G$ is a Leinster group of order $p^2qr$, where $p<q<r$ are primes, then $T_p \ntriangleleft G$.
\end{lem}
\begin{proof}
In view of Theorem \ref{alg}, $G$ is non-abelian. Now, if $T_p \lhd G$, then by Correspondence theorem, $G$ has an abelian normal centralizer, say $N$, of index $q$ and consequently, by Theorem \ref{zumud1}, we have $p^2qr=q \mid G' \mid \mid Z(G) \mid$.
Now, if $q>3$, then $G$ has an abelian centralizer, say $K$, of index $r$. Clearly, $N \cap K =Z(G)$ and $G=NK$. Therefore $\mid Z(G) \mid =p^2$ and hence $\mid G' \mid =r$. But then, $G$ has a normal subgroup of order $qr$, which implies $G \cong T_p \times (C_r \rtimes C_q)$. In the present scenario, if $T_p=C_{p^2}$, then by Theorem \ref{mul}, $\tau(G)=9$ and consequently, by Remark \ref{rem125}, $\mid G \mid$ is even, that is, $\mid G \mid =4qr$. Now, using Theorem \ref{mul} again, we have $8qr=\sigma(G)=\sigma(T_2) \times \sigma (C_r \rtimes C_q)=7(1+r+qr)$, which is impossible. Again, if $T_p= C_p \times C_p$, then $\sigma (G)> 2 \mid G \mid$.
Next, suppose $q=3$. Then we have $\mid G' \mid \mid Z(G) \mid=4r$. Now, if $\mid G' \mid=4$, then $G$ has a normal subgroup of order $12$ and consequently, $G \cong A_4 \times C_r, Q_{12} \times C_r$ or $D_{12} \times C_r$, which are not Leinster. Again, if $\mid G' \mid=4r$, then $\sigma(G) < 2 \mid G \mid$. Finally, if $\mid G' \mid=2, r$ or $2r$, then by Correspondence theorem, $G$ has a normal subgroup of index $2$, which is a contradiction. Hence $T_p \ntriangleleft G$.
\end{proof}
\begin{lem}\label{6}
If $G$ is a Leinster group of order $p^2qr$, $p<q<r$ being primes, then $\mid G' \mid \neq r$.
\end{lem}
\begin{proof}
If $\mid G' \mid = r$, then in view of Theorem \ref{aqt}, Lemma \ref{506} and Correspondence theorem, $G$ has a unique normal subgroup for each of the orders $r, pr, qr, p^2r, pqr$ and $G$ cannot have any normal subgroup of order $p^2, p^2q$. Note that $G$ can have at the most one normal subgroup of order $pq$. Suppose $G$ has a normal subgroup $N$ of order $pq$. Let $K$ be the normal subgroup of order $pr$. Now, if $\mid N \cap K \mid=1$, then exactly one of $N$ or $K$ is cyclic, noting that $\mid G' \mid = r$. Consequently, $\tau(G)=10$. Again, if $\mid N \cap K \mid=p$, then also we have $\tau(G)=10$. Therefore from the definition of Leinster groups, we have $p^2qr=1+p+q+pq+r+pr+qr+p^2r+pqr$. But then $(p-1)pqr=(1+p)(1+q)+(1+p+q+p^2)r$, which is a contradiction. Therefore $G$ cannot have a normal subgroup of order $pq$. Consequently, in view of Theorem \ref{sjb}, using the definition of Leinster groups, we have $p^2qr=1+p+r+pr+qr+p^2r+pqr$ or $1+q+r+pr+qr+p^2r+pqr$, which is again a contradiction. Hence $\mid G' \mid \neq r$.
\end{proof}
\begin{lem}\label{60}
If $G$ is a Leinster group of order $p^2qr$, $p<q<r$ being primes, then $\mid G' \mid \neq qr$.
\end{lem}
\begin{proof}
If $\mid G' \mid = qr$, then in view of Theorem \ref{aqt}, Lemma \ref{506} and Correspondence theorem, $G$ has a unique normal subgroup for each of the orders $qr, pqr$ and $G$ cannot have a normal subgroup of order $p^2, p^2q, p^2r$. Note that $G$ can have at the most one normal subgroup for each of the orders $pq, pr$. If $G$ has no normal subgroup of order $pq$ or $pr$, then $\tau(G) \leq 7$, which is a contradiction to Theorem \ref{sjb}. Therefore $G$ must have a normal subgroup $N$ of order $pq$ and a normal subgroup $K$ of order $pr$. Now, if $\mid N \cap K \mid=1$, then $G=(C_q \rtimes C_p) \times (C_r \rtimes C_p)$. Therefore using the definition of Leinster groups, we have $p^2qr=1+q+r+pq+pr+qr+pqr$, which implies $r=\frac{1+(p+1)q}{q(p^2-p-1)-(p+1)}$. In the present situation, one can easily verify that $p>7$, which is a contradiction. Therefore $\mid N \cap K \mid=p$ and consequently, from the definition of Leinster groups, we have
$p^2qr=1+p+q+r+pq+qr+pr+pqr=(1+p)(1+q)(1+r)$, which is again impossible.
Hence $\mid G' \mid \neq qr$.
\end{proof}
\begin{lem}\label{61}
If $G$ is a Leinster group of order $p^2qr$, $p<q<r$ being primes, then $\mid G' \mid \neq pq$.
\end{lem}
\begin{proof}
If $\mid G' \mid = pq$, then in view of Theorem \ref{aqt} and Correspondence theorem, $G$ has unique normal subgroups, say, $H$ and $K$ of order $p^2q$ and $pqr$ respectively. Moreover, $G$ cannot have normal subgroups of order $qr$ and $p^2r$. Note that $G$ has a unique normal subgroup of order $pq$ since $K$ is unique and can have at the most one normal subgroup of order $pr$. Now, suppose $G$ has a normal subgroup $N$ of order $pr$. Then $ \mid G' \cap N \mid=p$, otherwise $G=G' \times N$, which is a contradiction since $\mid G' \mid=pq$. Therefore $G$ has a normal subgroup of order $p$. Consequently, $G'$ is cyclic and hence $G$ has a cyclic normal subgroup of index $p$. It now follows from Theorem \ref{zumud1} that $ \mid Z(G) \mid=r$ and consequently, $T_q \ntriangleleft G$. In the present scenario, in view of Lemma \ref{506}, using the definition of Leinster groups, we have $p^2qr=1+p+r+pq+pr+p^2q+pqr$ and hence $r=\frac{p^2q+pq+p+1}{p^2-pq-p-1}$. In this case, one can verify that $p>7$, which is impossible. Therefore $G$ cannot have a normal subgroup of order $pr$. But then $\tau (G)\leq 7$, which is again impossible by Theorem \ref{sjb}. Hence $\mid G' \mid \neq pq$.
\end{proof}
\begin{lem}\label{62}
If $G$ is a Leinster group of order $p^2qr$, $p<q<r$ being primes, then $\mid G' \mid \neq pr$.
\end{lem}
\begin{proof}
If $\mid G' \mid = pr$, then in view of Theorem \ref{aqt} and Correspondence theorem, $G$ has a unique normal subgroup for each of the orders $p^2r$ and $pqr$. Moreover, $G$ cannot have normal subgroups of order $qr$ and $p^2q$. Let $K$ be the normal subgroup of order $pqr$. It is easy to see that $G$ can have at the most one normal subgroup of order $pq$ and $pr$, noting that $K$ is unique.
Now, if $N$ is a normal subgroup of $G$ of order $pq$, then $\mid G' \cap N \mid=p$, otherwise $G=G' \times N$, which is a contradiction since $\mid G' \mid=pr$. It now follows that $G$ has a normal subgroup of order $p$ and hence $N$ is cyclic. Now, if $T_r \lhd G$, then $K$ is a cyclic normal subgroup of $G$ of index $p$. Therefore by Theorem \ref{zumud1}, we have $\mid Z(G) \mid=q$, which is impossible. Consequently, in view of Theorem \ref{sjb} and Lemma \ref{506}, using the definition of Leinster groups, we have $p^2qr=1+p+q+pq+pr+p^2r+pqr=(1+p)(1+q)+r(p+p^2+pq)$, which is impossible. Therefore $G$ cannot have a normal subgroup of order $pq$. But then, again we have $\tau (G)\leq 7$, which is a contradiction to Theorem \ref{sjb}. Hence $\mid G' \mid \neq pr$.
\end{proof}
Now, we are ready to state the following theorem:
\begin{thm}\label{56}
If $G$ is a Leinster group of order $p^2qr$, where $p<q<r$ are primes, then $G \cong Q_{20}\times C_{19}$ or $Q_{28} \times C_{13}$.
\end{thm}
\begin{proof}
In view of Theorem \ref{alg}, $G$ is non-abelian. Clearly, $G$ cannot be simple and consequently, $G$ is solvable, which implies $G' \neq G$.
Now, if $\mid G' \mid=pqr$ or $p^2r$, then $\tau(G) \leq 7$, which is impossible by Theorem \ref{sjb}.
Next, suppose $\mid G' \mid = p^2q$. In this situation, if $G$ has more than one normal subgroup of order $pq$, then $T_q \lhd G$ and hence $T_r \ntriangleleft G$. Now, from the definition of Leinster group, we have $\mid G \mid=12r$ and consequently, $\mid G \mid=60$ or $132$, which is impossible by GAP \cite{gap}. Therefore $G$ can have at the most one normal subgroup of order $pq$. But then $\tau(G) \leq 7$, which is again impossible by Theorem \ref{sjb}.
Again, if $\mid G' \mid=p$, then by Theorem \ref{aqt} and Correspondence theorem, $T_p \vartriangleleft G$, which is a contradiction to Lemma \ref{506}. Therefore, in view of Lemma \ref{506}, Lemma \ref{6}, Lemma \ref{60}, Lemma \ref{61}, Lemma \ref{62}, we have $\mid G' \mid=q$. In the present scenario, by Theorem \ref{aqt} and Correspondence theorem, $G$ has a unique normal subgroup for each of the orders $q, pq, qr, p^2q, pqr$. Also, note that $G$ cannot have a normal subgroup of order $p^2r$.
Let $N$ be the normal subgroup of $G$ of order $p^2q$. Now, if $T_r \ntriangleleft G$, then $G$ cannot have a normal subgroup of order $pr$. In this situation, from the definition of Leinster groups, we have $p^2qr=1+p+q+pq+qr+p^2q+pqr=(1+p)(1+q)+q(r+p^2+pr)$, noting that by Theorem \ref{sjb}, we have $\tau(G)>7$, which is impossible. Therefore $T_r \lhd G$. In the present scenario, if $G$ does not have a normal subgroup of order $p$, then from the definition of Leinster groups, we have $p^2qr=1+q+r+pr+pq+qr+p^2q+pqr$, i.e., $r=\frac{p^2q+pq+q+1}{p^2q-pq-p-q-1}$ or $p^2qr=1+q+r+pq+qr+p^2q+pqr$, i.e., $r=\frac{p^2q+pq+q+1}{p^2q-pq-q-1}$. But in both the cases, one can verify that $p>7$, which is impossible. Therefore $G$ must have a normal subgroup of order $p$ and consequently, using the definition of Leinster groups, we have $p^2qr=1+p+q+r+pq+qr+pr+p^2q+pqr$. In the present situation, one can verify that $p=2$ and consequently, $qr=3+7q+3r$, which implies $r=(3+7q)/(q-3)=7+24/(q-3)$. Hence $q=5, r=19$ or $q=7, r=13$. Now, using GAP \cite{gap}, we have $G \cong Q_{20}\times C_{19}$ or $Q_{28} \times C_{13}$.
\end{proof}
\section{ Leinster groups of order $pqrs$}
Given any primes $p<q<r<s$, by \cite[Corollary 3.2, Theorem 3.6]{baishya1}, the only Leinster group of order $pq$ is $C_6$ and the only Leinster group of order $pqr$ is $S_3 \times C_5$. In this section, we consider the groups of order $pqrs$ and prove that no group of order $pqrs$ is Leinster. The following lemmas will be used to establish our result.
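For instance, the normal subgroups of $S_3$ have orders $1, 3$ and $6$, so that, by Theorem \ref{mul}, $\sigma(S_3 \times C_5)=\sigma(S_3)\sigma(C_5)=10 \times 6=60=2\mid S_3 \times C_5\mid$.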
\begin{lem}\label{51}
Let $H$ be a group of squarefree order and $p$ be any odd prime, $p \nmid \mid H \mid $. If $G=H \times C_{2p}$, then $G$ is not Leinster.
\end{lem}
\begin{proof}
In view of Theorem \ref{alg}, $G$ is non-abelian. Now, if $ \mid H \mid$ is even, then by Theorem \ref{sqfree}, $H$ has a normal subgroup $N$ of index $2$. In the present scenario, $N \times C_{2p}$ and $H \times C_p$ are two distinct normal subgroups of $G$ of index $2$, so that $G$ has a non-cyclic abelian quotient and hence, by Theorem \ref{aqt}, $G$ is not Leinster. Next, suppose $ \mid H \mid$ is odd. Then using Theorem \ref{mul}, we have $3 \mid \sigma (G)$ (since $\sigma(C_{2p})=3(1+p)$) and so if $G$ is Leinster, then $3 \mid \mid G \mid$. Therefore $G$ will have a normal subgroup of index $2$ and a normal subgroup of index $3$. Hence $G$ is not Leinster.
\end{proof}
\begin{lem}\label{52}
If $G$ is a Leinster group of order $pqrs$, $p<q<r<s$ being primes, then $8 \leq \tau (G) \leq 10 $.
\end{lem}
\begin{proof}
In view of Theorem \ref{alg}, $G$ is non-abelian. Consequently, $\tau(G) \leq 12$, noting that every normal subgroup of $G$ is uniquely determined by its order.
Now, suppose $\tau (G)=11$. In view of Theorem \ref{mul}, $G$ can have at the most $3$ normal subgroups of prime index and at the most $3$ normal subgroups whose orders are products of two primes. Consequently, $G$ is not Leinster.
Next, suppose $\tau (G)=12$. Then we have $G= N_{ab} \times C_{cd}$, where $N_{ab}$ is the unique normal subgroup of order $ab$ and $a, b, c, d \in \lbrace p, q, r, s \rbrace$. Now, if $\mid G \mid$ is odd, then using Theorem \ref{mul}, we have $4 \mid \sigma (G)$ and hence $G$ is not Leinster. Next, suppose $\mid G \mid$ is even. In the present situation, if $\mid C_{cd} \mid $ is odd, then $\mid N_{ab} \mid$ must be even and by Theorem \ref{mul}, we have $8 \mid \sigma (G)$, and hence $G$ is not Leinster. On the other hand if $\mid C_{cd} \mid $ is even, then by Lemma \ref{51}, $G$ is not Leinster. Now, the result follows from Theorem \ref{sjb}.
\end{proof}
We begin with the case when $\tau(G)=8$.
\begin{rem}\label{rem133}
Let $G$ be a group of order $pqrs$, $p<q<r<s$ being primes and $\tau (G) = 8$. If $G$ is Leinster, then we have:
\begin{enumerate}
\item $\mid G' \mid = qr, qs$ or $rs$.
\item $p=2$.
\item $G$ has at least one normal subgroup of prime order and at least $2$ normal subgroups whose orders are products of two primes.
\item $3 \nmid \mid G \mid$.
\end{enumerate}
\end{rem}
\begin{proof}
a) If $\mid G' \mid$ is prime, then $\tau(G) \geq 9$. Next, suppose $G'$ is of prime index. Then in view of Theorem \ref{sqfree}, we have $\mid G' \mid=qrs$. Now, if $p=2$, then by Theorem \ref{zumud1}, $\mid Z(G) \mid=1$, and hence $G$ is a dihedral group, which is impossible by \cite[Example 2.4]{leinster}. Hence $\mid G \mid$ is odd. In the present scenario, from the definition of Leinster groups, we have $\mid G \mid =1+n_1+n_2+n_3+n_4+n_5+n_6$, where $n_1>n_2>n_3>n_4>n_5>n_6$ are the orders of the normal subgroups of $G$.
Consequently, we must have $n_2 > \frac{\mid G \mid}{9}$, which is impossible. Therefore $ \mid G' \mid=qr, qs$ or $rs$, noting that we cannot have $G =G'$.
b) If $p\neq 2$, then by (a), we have $\sigma(G)-\mid G \mid \leq \frac{\mid G \mid}{3}+ \frac{\mid G \mid}{5}+ \frac{\mid G \mid}{15}+ \frac{4\mid G \mid}{21} < \mid G \mid$.
c) In view of (a), $G$ has exactly $2$ normal subgroups of prime index. Now, the result follows from the fact that $\tau(G)=8$.
d) If $\mid G \mid = 2.3rs$, then in view of (a), we have $\mid G' \mid = 3r$ or $3s$.
Now, suppose $\mid G' \mid = 3s$. Then in view of (c), by the definition of Leinster groups, we have
$\mid G \mid =1+3rs+2.3s+3s+x+y+z$, where $x, y, z$ are the orders of the remaining normal subgroups such that $x$ is a product of two primes. In the present situation, if $x=2s$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 2, s \rbrace$, which is impossible.
Similarly, if $x=2.3$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 2, 3 \rbrace$, which is impossible.
Finally, if $x=rs$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 3, s \rbrace$ or $\lbrace r, s \rbrace$, which is also impossible.
Therefore, we must have $x=3r$, noting that in the present scenario, we cannot have a normal subgroup of order $2r$.
But then, by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 3, r \rbrace$ or $\lbrace 3, s \rbrace$, which is again impossible.
Next, suppose $\mid G' \mid = 3r$. Then in view of (c), by the definition of Leinster groups, we have
$\mid G \mid =1+3rs+2.3r+3r+x+y+z$, where $x, y, z$ are the orders of the remaining normal subgroups such that $x$ is a product of two primes. In the present situation, if $x=2r$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 2, r \rbrace$, which is impossible.
Similarly, if $x=2.3$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 2, 3 \rbrace$, which is impossible.
Again, if $x=rs$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 3, r \rbrace$ or $\lbrace 3, s \rbrace$, which is also impossible.
Finally, if $x=3s$, then by Remark \ref{rem125}, we have $\lbrace y, z \rbrace = \lbrace 3, r \rbrace$ or $\lbrace 3, s \rbrace$, which is again impossible. Hence $3 \nmid \mid G \mid$, noting that in the present scenario, we cannot have a normal subgroup of order $2s$.
\end{proof}
\begin{lem}\label{511}
If $G$ is a group of order $pqrs$, $p< q<r<s$ being primes, and $\tau (G) = 8$, then $G$ is not Leinster.
\end{lem}
\begin{proof}
If $qr > 2s$, then in view of Remark \ref{rem133}, we have
\begin{align*}
\sigma(G)- \mid G \mid \leq 1+ \frac{\mid G \mid}{2}+\frac{\mid G \mid}{5}+\frac{\mid G \mid}{10}+\frac{\mid G \mid}{14}+\frac{2\mid G \mid}{22} < \mid G \mid.
\end{align*}
Again, if $qr < 2s$, then in view of Remark \ref{rem133}, we have
\begin{align*}
\sigma(G)- \mid G \mid \leq 1+ \frac{\mid G \mid}{2}+\frac{\mid G \mid}{5}+\frac{\mid G \mid}{10}+\frac{\mid G \mid}{14}+\frac{2\mid G \mid}{35} < \mid G \mid.
\end{align*}
\end{proof}
Now, we consider the case where $\tau (G) = 9$.
\begin{rem}\label{rem123}
Let $G$ be a group of order $pqrs$, $p<q<r<s$ being primes and $\tau (G) = 9$. If $G$ is Leinster, then we have:
\begin{enumerate}
\item $p=2$.
\item $T_2 \ntriangleleft G$.
\item $\mid G' \mid = qr, qs$ or $rs$.
\item $G$ has exactly $2$ normal subgroups of prime index and has exactly $2$ normal subgroups of prime order.
\item $3 \nmid \mid G \mid$.
\end{enumerate}
\end{rem}
\begin{proof}
a) It follows from Remark \ref{rem125}.
b) It follows from Theorem \ref{mul}, noting that by Theorem \ref{sqfree}, $G$ has a normal subgroup of index $2$.
c) Since $\tau(G)=9$, therefore $\mid G' \mid$ cannot be prime. Again, if $G'$ is of prime index, then in view of Theorem \ref{sqfree}, we have $\mid G' \mid =qrs$. In the present scenario, by Theorem \ref{zumud1}, $\mid Z(G) \mid=1$, and hence $G$ is a dihedral group, which is impossible by \cite[Example 2.4]{leinster}. Therefore we must have $\mid G' \mid=qr, qs$ or $rs$, noting that we cannot have $G =G'$.
d) In view of (c), $G$ has exactly two normal subgroups of prime index. In the present scenario, if $G$ has fewer than two normal subgroups of prime order, then using Theorem \ref{mul}, we have $\tau(G)>9$. Hence, $G$ has exactly $2$ normal subgroups of prime order, noting that if $G$ has more than $2$ normal subgroups of prime order, then we also have $\tau(G)>9$.
e) If $\mid G \mid=2.3rs$, then in view of (c), we have $\mid G' \mid=3r$ or $3s$.
Now, suppose $\mid G' \mid=3r$. In the present situation, if $G$ has a normal subgroup of order $rs$, then in view of (d) and Remark \ref{rem125}, from the definition of Leinster groups, we have
\begin{align*}
6rs=1+3rs+2.3r+3r+rs+r+2.3+3,
\end{align*}
which is impossible.
Next, suppose $G$ has a normal subgroup of order $3s$. Then in view of (d) and Remark \ref{rem125}, from the definition of Leinster groups, we have
\begin{align*}
6rs=1+3rs+2.3r+3r+3s+3+x+y,
\end{align*}
where $x, y$ are the orders of the remaining normal subgroups such that $x$ is a product of two primes. But then, clearly, $x$ cannot be odd, otherwise $y$ will also be odd. Again, $x \neq 2r, 2s$, noting that $\mid G' \mid=3r$. Therefore $x=2.3$, which is again impossible.
Therefore, we must have $\mid G' \mid=3s$. In the present situation, if $G$ has a normal subgroup of order $rs$, then in view of (d) and Remark \ref{rem125}, from the definition of Leinster groups, we have
\begin{align*}
6rs=1+3rs+2.3s+3s+rs+s+2.3+3,
\end{align*}
which is impossible.
Next, suppose $G$ has a normal subgroup of order $3r$. Then in view of (d) and Remark \ref{rem125}, from the definition of Leinster groups, we have
\begin{align*}
6rs=1+3rs+2.3s+3s+3r+3+x+y,
\end{align*}
where $x, y$ are the orders of the remaining normal subgroups such that $x$ is a product of two primes. But then, clearly, $x$ cannot be odd, otherwise $y$ will also be odd. Again, $x \neq 2r, 2s$, noting that $\mid G' \mid=3s$. Therefore $x=2.3$, which is again impossible. Therefore we have $3 \nmid \mid G \mid$.
\end{proof}
\begin{lem}\label{257}
If $G$ is a group of order $pqrs$, $p< q<r<s$ being primes, and $\tau (G) = 9$, then $G$ is not Leinster.
\end{lem}
\begin{proof}
If $qr > 2s$, then in view of Remark \ref{rem123}, we have
\begin{align*}
\sigma(G)- \mid G \mid \leq \frac{\mid G \mid}{2}+\frac{\mid G \mid}{5}+\frac{\mid G \mid}{10}+\frac{\mid G \mid}{14}+\frac{\mid G \mid}{22}+\frac{3\mid G \mid}{70} < \mid G \mid.
\end{align*}
Again, if $qr < 2s$, then in view of (a) and (e) of Remark \ref{rem123}, we have
\begin{align*}
\sigma(G)- \mid G \mid \leq \frac{\mid G \mid}{2}+\frac{\mid G \mid}{5}+\frac{\mid G \mid}{10}+\frac{\mid G \mid}{14}+\frac{\mid G \mid}{35}+\frac{3\mid G \mid}{70} < \mid G \mid.\end{align*}
\end{proof}
Finally, we consider the case where $\tau(G)=10$.
\begin{rem}\label{rem923}
Let $G$ be a group of order $pqrs$, $p<q<r<s$ being primes and $\tau (G) = 10$. If $G$ is Leinster, then we have:
\begin{enumerate}
\item $p=2$.
\item $3 \nmid \mid G \mid$.
\item $T_2 \ntriangleleft G$.
\item $\mid G' \mid = qr, qs$ or $rs$.
\end{enumerate}
\end{rem}
\begin{proof}
a) Suppose $p>2$. It is easy to see that $G'$ cannot be of prime index, noting that in view of Theorem \ref{mul}, $G$ can have at most three normal subgroups whose orders are products of two primes.
Now, suppose $\mid G' \mid$ is prime. Note that, in view of Theorem \ref{sqfree}, $\mid G' \mid \neq p$. Next, suppose $\mid G' \mid = q$. In this situation, if $T_p \lhd G$, then $q \mid p+1$, which is a contradiction. Again, if $T_r \lhd G$ or $T_s \lhd G$, then by the Correspondence theorem, we have a normal subgroup of order $rs$, which is impossible. Therefore $\mid G' \mid \neq q$. On the other hand, if $\mid G' \mid = s$, then we have $s \leq r+1$, which is impossible. Therefore $\mid G' \mid = r$ and consequently, in view of Theorem \ref{mul}, from the definition of Leinster groups, we have
\begin{align*}
\sigma(G)-\mid G \mid=1+r+s+pr+qr+rs+pqr+prs+qrs.
\end{align*}
Now, suppose $pqr > rs$. Then $pr<qr<rs<pqr<prs<qrs$. In the present situation, clearly we have $qrs \leq \frac{\mid G \mid}{3}, prs \leq \frac{\mid G \mid}{5}, pqr \leq \frac{\mid G \mid}{11}, rs \leq \frac{\mid G \mid}{15}, qr \leq \frac{\mid G \mid}{33}$ and $pr \leq \frac{\mid G \mid}{55}$. But then, $\sigma(G)-\mid G \mid < \mid G \mid$.
Next, suppose $pqr< rs$. Then $pr<qr<pqr<rs<prs<qrs$. In the present situation, clearly we have $qrs \leq \frac{\mid G \mid}{3}, prs \leq \frac{\mid G \mid}{5}, rs \leq \frac{\mid G \mid}{15}, pqr \leq \frac{\mid G \mid}{17}, qr \leq \frac{\mid G \mid}{51}$ and $pr \leq \frac{\mid G \mid}{85}$. But then also, we have $\sigma(G)-\mid G \mid < \mid G \mid$.
Therefore $\mid G' \mid$ has to be a product of two primes. In the present scenario, $G$ can have exactly two normal subgroups of prime index and, in view of Theorem \ref{mul}, $G$ can have at most three normal subgroups whose orders are products of two primes.
Now, suppose $qr >ps$. Then we have
\begin{align*}
\sigma(G)-\mid G \mid \leq 1+q+r+s+qr+qs+rs+prs+qrs.
\end{align*}
In the present situation, clearly we have $qrs \leq \frac{\mid G \mid}{3}, prs \leq \frac{\mid G \mid}{5}, rs \leq \frac{\mid G \mid}{15}, qs \leq \frac{\mid G \mid}{21}, qr \leq \frac{\mid G \mid}{33}$ and $s \leq \frac{\mid G \mid}{105}$. But then, $\sigma(G)-\mid G \mid < \mid G \mid$.
Therefore we must have $qr< ps$, and consequently,
\begin{align*}
\sigma(G)-\mid G \mid \leq 1+q+r+s+ps+qs+rs+prs+qrs.
\end{align*}
In the present situation, clearly we have $qrs \leq \frac{\mid G \mid}{3}, prs \leq \frac{\mid G \mid}{5}, rs \leq \frac{\mid G \mid}{15}, qs \leq \frac{\mid G \mid}{21}, ps \leq \frac{\mid G \mid}{35}$ and $s \leq \frac{\mid G \mid}{105}$. But then also, we have $\sigma(G)-\mid G \mid < \mid G \mid$. Therefore $p=2$.
b) In view of (a), suppose $ \mid G \mid=2.3rs$. Note that, by Correspondence theorem, $T_2 \ntriangleleft G$. In the present scenario, in view of Theorem \ref{sqfree}, one can verify that $\mid G' \mid=3r$ or $3s$, and consequently, $T_3, T_r, T_s \lhd G$, noting that $\mid G' \mid \neq 3, r, s$.
Now, suppose $\mid G' \mid=3r$. Then we have $G=T_s \times (C_{3r} \rtimes C_2)$. But then, from the definition of Leinster groups, using Theorem \ref{mul}, we have $12rs=(s+1)(1+r+3+3r+6r)$, which is a contradiction.
Next, suppose $\mid G' \mid=3s$. Then we have $G=T_r \times (C_{3s} \rtimes C_2)$. But then, again from the definition of Leinster groups, using Theorem \ref{mul}, we have $12rs=(r+1)(1+s+3+3s+6s)$, which is again a contradiction. Therefore $3 \nmid \mid G \mid$.
c) It follows from (b), using Theorem \ref{mul}.
d) Clearly, $G'$ cannot be of prime index. Now, suppose $\mid G' \mid$ is a prime. In the present situation, one can verify that $\mid G' \mid =q$ or $r$ and consequently, $s=\frac{1+r+3q+3qr}{qr-3q}$ or $s=\frac{1+3r+3qr}{qr-3r-1}$. But in both cases, one can verify that $q>13$, which is impossible. Hence the result follows, noting that we cannot have $G=G'$.
\end{proof}
\begin{lem}\label{757}
Let $G$ be a group of order $pqrs$, $p<q<r<s$ being primes. If $\tau (G)=10$, then $G$ is not Leinster.
\end{lem}
\begin{proof}
If $qr > 2s$, then in view of Remark \ref{rem923}, we have
\begin{align*}
\sigma(G)- \mid G \mid \leq \frac{\mid G \mid}{2}+\frac{\mid G \mid}{5}+\frac{\mid G \mid}{10}+\frac{\mid G \mid}{14}+\frac{\mid G \mid}{22}+\frac{4\mid G \mid}{70} < \mid G \mid.
\end{align*}
Again, if $qr < 2s$, then in view of (a) and (b) of Remark \ref{rem923}, we have
\begin{align*}
\sigma(G)- \mid G \mid \leq \frac{\mid G \mid}{2}+\frac{\mid G \mid}{5}+\frac{\mid G \mid}{10}+\frac{\mid G \mid}{14}+\frac{\mid G \mid}{35}+\frac{4\mid G \mid}{70} < \mid G \mid.\end{align*}
\end{proof}
Combining Lemma \ref{52}, Lemma \ref{511}, Lemma \ref{257} and Lemma \ref{757}, we now have the following theorem:
\begin{thm}\label{711}
If $G$ is a group of order $pqrs$, $p<q<r<s$ being primes, then $G$ is not Leinster.
\end{thm}
We have the following result on Leinster groups of order $p^3q$ for any primes $p, q$, which also follows from \cite[Proposition 3.8]{maroti}.
\begin{prop}\label{911}
If $G$ is a Leinster group of order $p^3q$, $p, q$ being primes, then $G \cong C_7 \rtimes C_8$.
\end{prop}
\begin{proof}
In view of Theorem \ref{alg}, $G$ is non-abelian. In the present scenario, one can easily verify that $p<q$ and $T_q \lhd G$. Now, by \cite[Corollary 3.3]{maroti}, we have that $T_p$ is cyclic and consequently, from \cite[Theorem 3.4, Theorem 3.5]{baishya1}, we have $\tau(G)=7$. But then, using Remark \ref{rem125}, we have $p=2$, forcing $q=7$. Now, the result follows using GAP \cite{gap}.
\end{proof}
\begin{concl}\label{1711}
The objective of this research was to investigate the structure of Leinster groups of order $pqrs$ for any primes $p, q, r, s$, not necessarily distinct. It is well known that no group of order $p^2q^2$ is Leinster \cite[Proposition 2.4]{baishya1}. On the other hand, in view of Proposition \ref{911}, $C_7 \rtimes C_8$ is the only Leinster group of order $p^3q$. In this paper, for any primes $p<q<r<s$, we have studied the cases where the order of $G$ is $p^2qr$ or $pqrs$. However, we were unable to give any information for Leinster groups of order $pq^2r$ and $pqr^2$. As such, we leave these as open questions for our readers.
\end{concl}
TITLE: Why is dx or dy width during rotation of a function around an axis?
QUESTION [0 upvotes]: For example
Why is dx the width? Isn't dx infinitely small?
Thank you
REPLY [1 votes]: In problems like this, $dx$ is the infinitesimal width. Recall the definition of an integral:
$$\int_a^bf(x)\,dx= \lim_{n\rightarrow\infty}\sum_{i=1}^{n}f(x_i)\,\Delta x,$$ where $\Delta x=\frac{b-a}{n}$ and $x_i$ is a sample point in the $i$th subinterval.
Without the limit, this is the sum of the areas of a bunch of rectangles; with the limit, we get the sum of infinitely many rectangles of infinitesimal width and height equal to $f(x_i)$. The argument is similar in this case: you are taking some function and rotating it about some axis. You can calculate the volume by stacking a bunch of cylinders of radius $f(x)$ and infinitesimal width $dx$.
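To see this concretely (a numerical illustration of my own, not from the original post): take $f(x)=x$ on $[0,1]$ rotated about the $x$-axis, so the exact volume is $\pi\int_0^1 x^2\,dx=\pi/3$. Stacking cylinders of finite width and letting that width shrink reproduces the value:

import math

def disk_volume(f, a, b, n):
    dx = (b - a) / n                          # finite width of each cylinder
    return sum(math.pi * f(a + i * dx) ** 2 * dx for i in range(n))

f = lambda x: x                               # rotate y = x about the x-axis
for n in (10, 100, 10000):
    print(n, disk_volume(f, 0.0, 1.0, n))     # tends to pi/3 = 1.04719...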
\begin{document}
\title{Distributed Sequential Detection for Gaussian Shift-in-Mean Hypothesis Testing}
\author{Anit~Kumar~Sahu,~\IEEEmembership{Student Member,~IEEE} and Soummya~Kar,~\IEEEmembership{Member,~IEEE}
\thanks{The authors are with the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA (email: anits@andrew.cmu.edu, soummyak@andrew.cmu.edu). This work was partially supported by NSF Grant ECCS-1306128.}}
\maketitle
\begin{abstract}
This paper studies the problem of sequential Gaussian shift-in-mean hypothesis testing in a distributed multi-agent network. A sequential probability ratio test (SPRT) type algorithm in a distributed framework of the \emph{consensus}+\emph{innovations} form is proposed, in which the agents update their decision statistics by simultaneously processing latest observations (innovations) sensed sequentially over time and information obtained from neighboring agents (consensus). For each pre-specified set of type I and type II error probabilities, local decision parameters are derived which ensure that the algorithm achieves the desired error performance and terminates in finite time almost surely (a.s.) at each network agent. Large deviation exponents for the tail probabilities of the agent stopping time distributions are obtained and it is shown that asymptotically (in the number of agents or in the high signal-to-noise-ratio regime) these exponents associated with the distributed algorithm approach that of the optimal centralized detector. The expected stopping time for the proposed algorithm at each network agent is evaluated and is benchmarked with respect to the optimal centralized algorithm. The efficiency of the proposed algorithm in the sense of the expected stopping times is characterized in terms of network connectivity. Finally, simulation studies are presented which illustrate and verify the analytical findings.
\end{abstract}
\begin{IEEEkeywords}
Distributed Detection, Multi-agent Networks, Consensus, Sequential Probability Ratio Tests, Large Deviations
\end{IEEEkeywords}
\section{Introduction}
\subsection{Background and Motivation}
The focus of this paper is on sequential simple hypothesis testing in multi-agent networks in which the goal is to detect the (binary) state of the environment based on observations at the agents. By sequential we mean, instead of considering fixed sample size hypothesis tests in which the objective is to minimize the probabilities of decision error (the false alarm and the miss) based on a given deterministic number of samples or observation data collected by the network agents, we are interested in the design of testing procedures that in the \emph{quickest} time or using the \emph{minimal} amount of sensed data samples at the agents can distinguish between the two hypotheses with guaranteed accuracy given in terms of pre-specified tolerances on false alarm and miss probabilities. The motivation behind studying sequential as opposed to fixed sample size testing is that in most practical agent networking scenarios, especially in applications that are time-sensitive and/or resource constrained, the priority is to achieve inference as quickly as possible by expending the minimal amount of resources (data samples, sensing energy and communication). Furthermore, we focus on distributed application environments which are devoid of fusion centers\footnote{By fusion center or center, we mean a hypothetical decision-making architecture in which a (central) entity has access to all agent observations at all times and/or is responsible for decision-making on behalf of the agents.} and in which inter-agent collaboration or information exchange is limited to a pre-assigned, possibly sparse, communication structure.
Under rather generic assumptions on the agent observation models, it is well-known that in a (hypothetical) centralized scenario or one in which inter-agent communication is all-to-all corresponding to a complete communication graph, the sequential probability ratio test (SPRT) (\hspace{-0.5pt}\cite{wald1945sequential}) turns out to be the optimal procedure for sequential testing of binary hypotheses; specifically, the SPRT minimizes the expected detection time (and hence the number of agent observation samples that need to be processed) while achieving requisite error performance in terms of specified probability of false alarm ($\alpha$) and probability of miss ($\beta$) tolerances. The SPRT and its variants have been applied in various contexts, see, for example, spectrum sensing in cognitive radio networks (\hspace{-0.5pt}\cite{choi2009sequential,jayaprakasam2009sequential,chaudhari2009autocorrelation}), target tracking~\cite{blostein1994sequential}, to name a few. However, the SPRT, in the current multi-agent context, would require computing a (centralized) decision statistic at all times, which, in turn, would either require all-to-all communication among the agents or access to the entire network data at all times at a fusion center. In contrast, restricted by a pre-assigned possibly sparse collaboration structure among the agents, in this paper we present and characterize a distributed sequential detection algorithm, the $\cisprt$, based on the \emph{consensus}+\emph{innovations} approach (see, for example \cite{kar-asilomar-linest-2008,KarMouraRamanan-Est-2008}). Specifically, focusing on a setting in which the agent observations over time are conditionally Gaussian and independent and identically distributed (i.i.d.), we study the $\cisprt$ sequential detection procedure in which each network agent maintains a local (scalar) test statistic which is updated over time by simultaneously assimilating the test statistics of neighboring agents at the previous time instant (a consensus potential) and the most recent observations (innovations) obtained by the agent and its neighbors. Also, similar in spirit to the (centralized) SPRT, each agent chooses two (local) threshold parameters (design choices) and the test termination at an agent (and subsequent agent decision on the hypotheses) is determined by whether the local test statistic at the agent lies in the interval defined by the thresholds or not. This justifies the nomenclature that the $\cisprt$ is a distributed SPRT type algorithm of the consensus+innovations form.
The main contributions of this paper are as follows:
\noindent\textbf{Main Contribution 1: Finite Stopping Property.} We show that, given any value of probability of false alarm $\alpha$ and probability of miss $\beta$, the $\cisprt$ algorithm can be designed such that each agent achieves the specified error performance metrics and the test procedure terminates in finite time almost surely (a.s.) at each agent. We derive closed form expressions for the local threshold parameters at the agents as functions of $\alpha$ and $\beta$ which ensures that the $\cisprt$ achieves the above property.\\
\noindent\textbf{Main Contribution 2: Asymptotic Characterization.} By characterizing the stopping time distribution of the $\cisprt$ at each network agent, we compute large deviations decay exponents of the stopping time tail probabilities at each agent, and show that the large deviations exponent of the $\cisprt$ approaches that of the optimal centralized in the asymptotics of $N$, where $N$ denotes the number of agents in the network. In the asymptotics of vanishing error metrics (i.e., as $\alpha,\beta\rightarrow 0$), we quantify the ratio of the expected stopping time $T_{d,i}(\alpha,\beta)$ for reaching a decision at an agent $i$ through the $\cisprt$ algorithm and the expected stopping time $T_{c}(\alpha,\beta)$ for reaching a decision by the optimal centralized (SPRT) procedure, i.e., the quantity $\frac{\mathbb{E}[T_{d,i}(\alpha,\beta)]}{\mathbb{E}[T_{c}(\alpha,\beta)]}$, which in turn is a metric of efficiency of the proposed algorithm as a function of the network connectivity. In particular, we show that the efficiency of the proposed $\cisprt$ algorithm in terms of the ratio $\frac{\mathbb{E}[T_{d,i}(\alpha,\beta)]}{\mathbb{E}[T_{c}(\alpha,\beta)]}$ is upper bounded by a constant which is a function of the network connectivity and can be made close to one by choosing the network connectivity appropriately, thus establishing the benefits of inter-agent collaboration in the current context.\\
\textbf{Related Work.}
Detection schemes in multi-agent networks which involve fusion centers, where all agents in the network transmit their local measurements, local decisions or local likelihood ratios to a fusion agent which subsequently makes the final decision (see, for example, \cite{chamberland2003decentralized, tsitsiklis1993decentralized, blum1997distributed, veeravalli1993decentralized}) have been well studied.
Consensus-based approaches for fully distributed but single snapshot processing, i.e., in which the agents first collect their observations possibly over a long time horizon and then deploy a consensus-type protocol~\cite{jadbabailinmorse03,olfatisaberfaxmurray07,dimakis2010gossip} to obtain distributed information fusion and decision-making have also been explored, see, for instance,~\cite{kar2008topology, kar2007consensus}. Generalizations and variants of this framework have been developed, see for instance~\cite{zhang2014asymptotically} which proposes truncated versions of optimal testing procedures to facilitate efficient distributed computation using consensus; scenarios involving distributed information processing where some of the agents might be faulty or there is imperfect model information (see, for example, \cite{zhou2011robust, zhou2012distributed}) have also been studied. More relevant to the current context are distributed detection techniques that like the $\cisprt$ procedure perform simultaneous assimilation of neighborhood decision-statistics and local agent observations in the same time step, see, in particular, the running consensus approach~\cite{braca2008enforcing,braca2010asymptotic}, the diffusion approach~\cite{cattivelli2009diffusion,cattivelli2009distributed,cattivelli2011distributed} and the consensus+innovations approach~\cite{bajovic2011distributed ,jakovetic2012distributed,kar2011distributed}. These works address important questions in fixed (but possibly large) sample size distributed hypothesis testing, including asymptotic characterization of detection errors~\cite{braca2010asymptotic,cattivelli2011distributed}, fundamental performance limits as characterized by large deviations decay of detection error probabilities in generic nonlinear observation models and random networks~\cite{bajovic2011distributed ,jakovetic2012distributed}, and detection with noisy communication links~\cite{kar2011distributed}. A continuous time version of the running consensus approach~\cite{braca2010asymptotic} was studied in~\cite{srivastava2014collective} recently with implications on sequential distributed detection; specifically, asymptotic properties of the continuous time decision statistics were obtained and in the regime of large thresholds bounds on expected decision time and error probability rates were derived. However, there is a fundamental difference between the mostly fixed or large sample size procedures discussed above and the proposed $\cisprt$ sequential detection procedure -- technically speaking, the former focuses on analyzing the probability distributions of the detection errors as a function of the sample size and/or specified thresholds, whereas, in this paper, we design thresholds, stopping criteria and characterize the probability distributions of the (random) stopping times of sequential distributed procedures that aim to achieve quickest detection given specified tolerances on the detection errors. Addressing the latter requires novel technical machinery in the design and analysis of dynamic distributed inference procedures which we develop in this paper.
We also contrast our work with sequential detection approaches based in other types of multi-agent networking scenarios. In the context of decentralized sequential testing in multi-agent networks, fundamental methodological advances have been reported, see, for instance,~\cite{hashemi1989decentralized,mei2008asymptotic,nayyar2011sequential,veeravalli1994decentralized,fellouris2011decentralized,poor2009quickest,tartakovsky2004change,tartakovsky2000sequential}, which address very general models and setups. These works involve fusion center based processing where all agents in the network either transmit their local decisions, measurements or their quantized versions to a fusion center. In contrast, in this paper we restrict attention to Gaussian binary testing models only, but focus on a fully distributed paradigm in which there is no fusion center and inter-agent collaboration is limited to a pre-assigned, possibly sparse, agent-to-agent local interaction graph.\\
\textbf{Paper Organization :}
We briefly summarize the organization of the rest of the paper. Section~\ref{subsec:not} presents notation to be used throughout the paper. The sensing models and the abstract problem formulation are stated and discussed in Section~\ref{subsec:sys_model}. Section~\ref{subsec:prel_seq_test} presents preliminaries on centralized sequential detection and motivates the distributed setup pursued in this paper. Section~\ref{dsd} presents the $\cisprt$ algorithm. The main results of the paper are stated in Section~\ref{sec:main_res} which includes the derivation of the thresholds for the $\cisprt$ algorithm, the stopping time distribution for the $\cisprt$ algorithm and the key technical ingredients concerning the asymptotic properties and large deviation analysis for the stopping time distributions of the centralized and distributed setups. It also includes the characterization of the expected stopping times of the $\cisprt$ algorithm and its centralized counterpart in asymptotics of vanishing error metrics. Section~\ref{sec:sim} presents simulation studies. The proofs of the main results appear in Section~\ref{sec:proof_res}, whereas, Section~\ref{sec:conc} concludes the paper.
\subsection{Notation}
\label{subsec:not}
We denote by~$\mathbb{R}$ the set of reals, $\mathbb{R}_{+}$ the set of non-negative reals, and by~$\mathbb{R}^{k}$ the $k$-dimensional Euclidean space.
The set of $k\times k$ real matrices is denoted by $\mathbb{R}^{k\times k}$. The set of integers is denoted by $\mathbb{Z}$, whereas $\mathbb{Z}_{+}$ denotes the subset of non-negative integers and $\overline{\mathbb{Z}}_{+}=\mathbb{Z}_{+}\cup\{\infty\}$. We denote vectors and matrices by bold faced characters. We denote by $A_{ij}$ or $[A]_{ij}$ the $(i,j)$th entry of a matrix $\mathbf{A}$; $a_{i}$ or $[a]_{i}$ the $i$th entry of a vector $\mathbf{a}$. The symbols $\mathbf{I}$ and $\mathbf{0}$ are used to denote the $k\times k$ identity matrix and the $k\times p$ zero matrix respectively, the dimensions being clear from the context. We denote by $\mathbf{e}_{i}$ the $i$th column of $\mathbf{I}$. The symbol $\top$ denotes matrix transpose. The $k\times k$ matrix $\mathbf{J}=\frac{1}{k}\mathbf{1}\mathbf{1}^{\top}$, where $\mathbf{1}$ denotes the $k\times 1$ vector of ones. The operator $\|\cdot\|$ applied to a vector denotes the standard Euclidean $\mathcal{L}_{2}$ norm, while applied to matrices it denotes the induced $\mathcal{L}_{2}$ norm, which is equivalent to the spectral radius for symmetric matrices. All the logarithms in the paper are with respect to base $e$ and represented as $\log(\cdot)$. Expectation is denoted by $\mathbb{E}[\cdot]$ and $\mathbb{E}_{\theta}[\cdot]$ denotes expectation conditioned on hypothesis $H_{\theta}$ for $\theta \in \{0,1\}$. $\mathbb{P}(\cdot)$ denotes the probability of an event and $\mathbb{P}_{\theta}(\cdot)$ denotes the probability of the event conditioned on hypothesis $H_{\theta}$ for $\theta \in \{0,1\}$. $\mathbb{Q}(\cdot)$ denotes the $\mathbb{Q}$-function which calculates the right tail probability of a normal distribution and is given by $\mathbb{Q}(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-\frac{u^{2}}{2}}du$, $x \in \mathbb{R}$. We will use the following property of the $\mathbb{Q}(\cdot)$ function, namely for any $x >0$, $\mathbb{Q}(x) \le \frac{1}{2}e^{-\frac{x^{2}}{2}}$. For deterministic $\mathbb{R}_{+}$-valued sequences $\{a_{t}\}$ and $\{b_{t}\}$, the notation $a_{t}=O(b_{t})$ denotes the existence of a constant $c>0$ such that $a_{t}\leq cb_{t}$ for all $t$ sufficiently large, whereas $a_{t}=o(b_{t})$ indicates $a_{t}/b_{t}\rightarrow 0$ as $t\rightarrow\infty$.\\
\noindent{\bf Spectral Graph Theory.} For an undirected graph $G=(V, E)$, $V$ denotes the set of agents or vertices with cardinality $|V|=N$, and $E$ the set of edges with $|E|=M$. The unordered pair $(i,j) \in E$ if there exists an edge between agents $i$ and $j$. We only consider simple graphs, i.e. graphs devoid of self loops and multiple edges. A path between agents $i$ and $j$ of length $m$ is a sequence ($i=p_{0},p_{1},\hdots,p_{m}=j)$ of vertices, such that $(p_{n}, p_{n+1})\in E$, $0\le n \le m-1$. A graph is connected if there exists a path between all the possible agent pairings.
The neighborhood of an agent $i$ is given by $\Omega_{i}=\{j \in V~|~(i,j) \in E\}$. The degree of agent $i$ is given by the cardinality $d_{i}=|\Omega_{i}|$. The structure of the graph may be equivalently represented by the symmetric $N\times N$ adjacency matrix $\mathbf{A}=[A_{ij}]$, where $A_{ij}=1$ if $(i,j) \in E$, and $0$ otherwise. The degree matrix is represented by the diagonal matrix $\mathbf{D}=diag(d_{1}\hdots d_{N})$. The graph Laplacian matrix is represented by
\begin{align}
\label{eq:0.2}
\mathbf{L}=\mathbf{D}-\mathbf{A}.
\end{align}
The Laplacian is a positive semidefinite matrix, hence its eigenvalues can be sorted and represented in the following manner
\begin{align}
\label{eq:0.3}
0=\lambda_{1}(\mathbf{L})\le\lambda_{2}(\mathbf{L})\le \hdots \lambda_{N}(\mathbf{L}).
\end{align}
Furthermore, a graph is connected if and only if $\lambda_{2}(\mathbf{L})>0$ (see~\cite{chung1997spectral}~for instance). We stress that under our notation, $\lambda_{2}$, also known as the Fiedler value (see~\cite{chung1997spectral}), plays an important role because it acts as an indicator of whether the graph is connected or not.
\section{Problem Formulation}
\label{sec:prob_form}
\subsection{System Model}
\label{subsec:sys_model}
\noindent The $N$ agents deployed in the network decide on either of the two hypotheses $H_{0}$ and $H_{1}$. Each agent $i$ at (discrete) time $t$ makes a scalar observation $y_{i}(t)$ of the form
\begin{align}
\label{eq:0.6}
\text{Under}~~H_{\theta} : y_{i}(t)= \mu_{\theta}+n_{i}(t),~~\theta=0,1.
\end{align}
\noindent For the rest of the paper we consider $\mu_{1}=\mu$ and $\mu_{0}=-\mu$, and assume that, the agent observation noise processes are independent and identically distributed (i.i.d.) Gaussian processes under both hypotheses formalized as follows:
\begin{myassump}{A1}
\label{as:1}
\emph{For each agent $i$ the noise sequence $\{n_{i}(t)\}$ is i.i.d. Gaussian with mean zero and variance $\sigma^{2}$ under both $H_{0}$ and $H_{1}$. The noise sequences are also spatially uncorrelated, i.e., $\mathbb{E}_{\theta}[n_{i}(t)n_{j}(t)]=\mathbf{0}$ for all $i\neq j$ and $\theta\in\{0,1\}$.}
\end{myassump}
\noindent Collect the $y_{i}(t)$'s, $i=1,2,\hdots N$ into the $N \times 1$ vector $\mathbf{y}(t)=(y_{1}(t),\hdots,y_{N}(t))^{\top}$ and the $n_{i}(t)$'s, $i=1,2,\hdots N$ into the $N \times 1$ vector $\mathbf{n}(t)=(n_{1}(t),\hdots,n_{N}(t))^{\top}$.
\noindent The log-likelihood ratio at the $i$-th agent at time index $t$ is calculated as follows:
\begin{align}
\label{eq:-0.5}
\eta_{i}(t)=\log\frac{f_{1}(y_{i}(t))}{f_{0}(y_{i}(t))}=\frac{2\mu y_{i}(t)}{\sigma^{2}},
\end{align}
where $f_{0}(\cdot)$ and $f_{1}(\cdot)$ denote the probability distribution functions (p.d.f.s) of $y_{i}(t)$ under $H_{0}$ and $H_{1}$ respectively.
\noindent We note that,
\begin{align}
\label{eq:f8}
\eta_{i}(t)\sim
\begin{cases}
\mathcal{N}(m,2m),& H=H_{1}\\
\mathcal{N}(-m,2m),& H=H_{0},
\end{cases}
\end{align}
where $\mathcal{N}(\cdot)$ denotes the Gaussian p.d.f. and $m=\frac{2\mu^{2}}{\sigma^{2}}$.
The Kullback-Leibler divergence at each agent is given by
\begin{align}
\label{eq:snr}
KL=m.
\end{align}
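\noindent For completeness, we note that \eqref{eq:snr} follows directly from \eqref{eq:-0.5} and \eqref{eq:f8} (this short derivation is ours): since the Kullback-Leibler divergence between $f_{1}$ and $f_{0}$ is the expectation of the log-likelihood ratio under $H_{1}$,
\begin{align*}
KL=\mathbb{E}_{1}\left[\eta_{i}(t)\right]=\mathbb{E}_{1}\left[\frac{2\mu y_{i}(t)}{\sigma^{2}}\right]=\frac{2\mu^{2}}{\sigma^{2}}=m.
\end{align*}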
\subsection{Sequential Hypothesis Testing -- Centralized or All-To-All Communication Scenario}
\label{subsec:prel_seq_test} We start by reviewing concepts and results from (centralized) sequential hypothesis testing theory, see~\cite{chernoff1972sequential} for example, to motivate our distributed hypothesis testing setup. Broadly speaking, the goal of sequential simple hypothesis testing is as follows: given pre-specified constraints on the error metrics, i.e., upper bounds $\alpha$ and $\beta$ on the probability of false alarm $\mathbb{P}_{FA}$ and probability of miss $\mathbb{P}_{M}$, the decision-maker keeps on collecting observations sequentially over time to decide on the hypotheses $H_{1}$ or $H_{0}$, i.e., which one is true; the decision-maker also has a \emph{stopping criterion} or \emph{stopping rule} based on which it decides at each time (sampling) instant whether to continue sampling or terminate the testing procedure. Finally, after termination, a (binary) decision is computed as to which hypothesis is in force based on all the obtained data. A sequential testing procedure is said to be \emph{admissible} if the stopping criterion, i.e., the decision whether to continue observation collection or not, at each instant is determined solely on the basis of observations collected thus far. Naturally, from a resource optimization viewpoint, the decision-maker seeks to design the sequential procedure (or equivalently the stopping criterion) that minimizes the expected number of observation samples (or equivalently time) required to achieve a decision with probabilities of false alarm and miss upper bounded by $\alpha$ and $\beta$ respectively. To formalize in the current context, first consider a setup in which inter-agent communication is all-to-all, i.e., at each time instant each agent has access to all the sensed data of all other agents. In this complete network scenario, each agent behaves like a (hypothetical) center and the information available at any agent $n$ at time $t$ is the sum-total of network observations till $t$, formalized by the $\sigma$-algebra~\cite{jacod1987limit}
\begin{align}
\label{eq:0.01a}
\mathcal{G}_{c}(t)=\sigma\left\{y_{i}(s),~\forall i=1,2,\hdots N~\mbox{and}~\forall 1\le s \le t\right\}.
\end{align}
An admissible test $D_{c}$ consists of a stopping criterion, where at each time $t$ the agents' (or the center in this case) decision to stop or continue taking observations is adapted to (or measurable with respect to) the $\sigma$-algebra $\mathcal{G}_{c}(t)$. Denote by $T_{D_{c}}$ the termination time of $D_{c}$, a random time taking values in $\mathbb{Z}_{+}\cup\{\infty\}$. Formally, by the above notion of admissibility, the random time $T_{D_{c}}$ is necessarily a stopping time with respect to (w.r.t.) the filtration $\{\mathcal{G}_{c}(t)\}$, see~\cite{jacod1987limit}, and, in this paper, we restrict attention to tests $D_{c}$ that terminate in finite time a.s., i.e., $T_{D_{c}}$ takes values in $\mathbb{Z}_{+}$ a.s. Denote by $\mathbb{E}_{\theta}[T_{D_{c}}]$ the expectation of $T_{D_{c}}$ under $H_{\theta}$, $\theta=0,1$, and $\widehat{H}_{D_{c}}\in\{0,1\}$ the decision obtained after termination. (Note that, assuming $T_{D_{c}}$ is finite a.s., the random variable $\widehat{H}_{D_{c}}$ is measurable w.r.t. the stopped $\sigma$-algebra $\mathcal{G}_{T_{D_{c}}}$.) Let $\mathbb{P}_{FA}^{D_{c}}$ and $\mathbb{P}_{M}^{D_{c}}$ denote the associated probabilities of false alarm and miss respectively, i.e.,
\begin{equation}
\label{admiss:FAM} \mathbb{P}_{FA}^{D_{c}}=\mathbb{P}_{0}\left(\widehat{H}_{D_{c}}=1\right)~~\mbox{and}~~\mathbb{P}_{M}^{D_{c}}=\mathbb{P}_{1}\left(\widehat{H}_{D_{c}}=0\right).
\end{equation}
Now, denoting by $\mathcal{D}_{c}$ the class of all such (centralized) admissible tests, the goal in sequential hypothesis testing is to obtain a test in $\mathcal{D}_{c}$ that minimizes the expected stopping time subject to attaining specified error constraints. Formally, we aim to solve\footnote{Note, in~\eqref{eq:0.01} the objective is to minimize the expected stopping time under hypothesis $H_{1}$. Alternatively, we might be interested in minimizing $\mathbb{E}_{0}[T_{D_{c}}]$ over all admissible tests; similarly, in a Bayesian setup with prior probabilities $p_{0}$ and $p_{1}$ on $H_{0}$ and $H_{1}$ respectively, the objective would consist of minimizing the overall expected stopping time $p_{0}\mathbb{E}_{0}[T_{D_{c}}]+p_{1}\mathbb{E}_{1}[T_{D_{c}}]$. However, it turns out that, in the current context, the Wald's SPRT~\cite{wald1948optimum} (to be discussed soon) can be designed to minimize each of the above criteria. Hence, without loss of generality, we adopt $\mathbb{E}_{1}[T_{D_{c}}]$ as our test design objective and use it as a metric to determine the relative performance of tests.}
\begin{align}
\label{eq:0.01}
&\min\limits_{ D_{c}\in \mathcal{D}_{c}}\mathbb{E}_{1}[T_{D_{c}}], \nonumber\\
& \mbox{s.t.}~\mathbb{P}_{FA}^{D_{c}}\le\alpha, \mathbb{P}_{M}^{D_{c}}\le\beta,
\end{align}
for specified $\alpha$ and $\beta$. Before proceeding further, we make the following assumption:
\begin{myassump}{A2}
\label{as:2}
\emph{The pre-specified error metrics, i.e., $\alpha$ and $\beta$, satisfy $\alpha,\beta\in (0,1/2)$.}
\end{myassump}
Noting that the (centralized) Kullback-Leibler divergence, i.e., the divergence between the probability distributions induced on the joint observation space $\mathbf{y}(t)$ by the hypotheses $H_{1}$ and $H_{0}$, is $Nm$ where $m$ is defined in~\eqref{eq:f8}, we obtain (see~\cite{wald1948optimum}) for each $D_{c}\in\mathcal{D}_{c}$ that attains $\mathbb{P}_{FA}^{D_{c}}\le\alpha$ and $\mathbb{P}_{M}^{D_{c}}\le\beta$,
\begin{align}
\label{eq:edc300}
\mathbb{E}_{1}[T_{D_{c}}]\ge \mathcal{M}(\alpha,\beta),
\end{align}
where the universal lower bound $\mathcal{M}(\alpha,\beta)$ is given by
\begin{align}
\label{eq:edc3100}
\mathcal{M}(\alpha,\beta)=\frac{(1-\beta)\log(\frac{1-\beta}{\alpha})+\beta\log(\frac{\beta}{1-\alpha})}{Nm}.
\end{align}
\noindent{\bf Optimal (centralized) tests: Wald's SPRT.} We briefly review Wald's sequential probability ratio test (SPRT), see~\cite{wald1948optimum}, that is known to achieve optimality in~\eqref{eq:0.01}. To this end, denote by $S_{c}(t)$ (the centralized) test statistic
\begin{align}
\label{eq:cent_stat}
S_{c}(t)=\sum_{s=1}^{t}\frac{\mathbf{1}^{\top}}{N}\mathbf{\eta}(s),
\end{align}
where $\mathbf{\eta}(s)$ denotes the vector of log-likelihood ratios $\eta_{i}(s)$'s at the agents \footnote{Both the sum and average (over $N$) can be taken as the test statistics for the centralized detector. We divide by $N$ for notational simplicity, so that the centralized decision statistic update becomes a special case of the $\mathcal{CI}SPRT$ decision statistic update studied in Section~\ref{dsd}.}. The SPRT consists of a pair of thresholds (design parameters) $\gamma_{c}^{l}$ and $\gamma_{c}^{h}$, such that, at each time $t$, the decision to continue or terminate is determined on the basis of whether $S_{c}(t) \in [\gamma_{c}^{l},\gamma_{c}^{h}]$ or not. Formally, the stopping time of the SPRT is defined as follows:
\begin{align}
\label{eq:f5b}
T_{c}=\inf \{t~|~S_{c}(t) \notin [\gamma_{c}^{l},\gamma_{c}^{h}]\}.
\end{align}
At $T_{c}$ the following decision rule is followed:
\begin{align}
\label{eq:f5c}
H=
\begin{cases}
H_{0}, & S_{c}(T_{c})\le \gamma_{c}^{l}\\
H_{1}, & S_{c}(T_{c})\ge \gamma_{c}^{h}.
\end{cases}
\end{align}
The optimality of the SPRT w.r.t. the formulation~\eqref{eq:0.01} is well-studied; in particular, in~\cite{wald1948optimum} it was shown that, for any specified $\alpha$ and $\beta$, there exist choices of thresholds $(\gamma_{c}^{l},\gamma_{c}^{h})$ such that the SPRT~\eqref{eq:f5b}-\eqref{eq:f5c} achieves the minimum in~\eqref{eq:0.01} among all possible admissible tests $D_{c}$ in $\mathcal{D}_{c}$.
For given $\alpha$ and $\beta$, exact analytical expressions of the optimal thresholds are intractable in general. A commonly used choice of thresholds, see~\cite{wald1945sequential}, is given by
\begin{align}
\label{eq:c1}
&\gamma_{c}^{h} = \log\big(\frac{1-\beta}{\alpha}\big)\nonumber\\
&\gamma_{c}^{l}= \log\big(\frac{\beta}{1-\alpha}\big),
\end{align}
which, although not strictly optimal in general, ensures that $\mathbb{P}_{FA}^{c}\le \alpha$ and $\mathbb{P}_{M}^{c}\le \beta$. (For SPRT procedures we denote by $\mathbb{P}_{FA}^{c}$ and $\mathbb{P}_{M}^{c}$ the associated probabilities of false alarm and miss respectively, which depend on the choice of thresholds used.) Nonetheless, the above choice~\eqref{eq:c1} yields \emph{close} to optimal behavior, and is, in fact, \emph{asymptotically} optimal; formally, supposing that $\alpha=\beta=\epsilon$, the SPRT with thresholds given by~\eqref{eq:c1} guarantees that (see~\cite{chernoff1972sequential})
\begin{equation}
\label{SPRT:cent:opt} \lim_{\epsilon\rightarrow 0}\frac{\mathbb{E}_{1}[T_{c}]}{\mathcal{M}(\epsilon,\epsilon)}=1,
\end{equation}
where $\mathcal{M}(\cdot)$ is defined in~\eqref{eq:edc3100}.
In the sequel, given a testing procedure $D_{c}\in\mathcal{D}_{c}$ and assuming $\alpha=\beta=\epsilon$, we will study the quantity $\limsup_{\epsilon\rightarrow 0}\left(\mathbb{E}_{1}[T_{D_{c}}]/\mathcal{M}(\epsilon,\epsilon)\right)$ as a measure of its efficiency. Also, by abusing notation, when $\alpha=\beta=\epsilon$, we will denote $\mathcal{M}(\epsilon)\doteq\mathcal{M}(\epsilon,\epsilon)$.
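\noindent As a concrete illustration of \eqref{eq:f5b}-\eqref{eq:f5c} with the threshold choice \eqref{eq:c1}, the following Python sketch (ours; the parameter values are arbitrary, and we track the total network log-likelihood ratio $NS_{c}(t)$, to which the Wald thresholds apply) simulates one run of the centralized SPRT under $H_{1}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, mu, sigma = 10, 1.0, 2.0              # network size, Gaussian parameters
alpha = beta = 1e-3                      # error probability requirements
g_lo = np.log(beta / (1 - alpha))        # lower threshold
g_hi = np.log((1 - beta) / alpha)        # upper threshold

Lam, t = 0.0, 0                          # total log-likelihood ratio, time
while g_lo <= Lam <= g_hi:
    t += 1
    y = mu + sigma * rng.standard_normal(N)   # one network snapshot under H_1
    Lam += np.sum(2 * mu * y / sigma**2)      # accumulate all N local LLRs
print(t, 1 if Lam >= g_hi else 0)        # stopping time and decision
\end{verbatim}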
\subsection{Subclass of Distributed Tests}
\label{subsec:dist_tests} The SPRT~\eqref{eq:f5b}-\eqref{eq:f5c} requires computation of the statistic $S_{c}(t)$ (see~\eqref{eq:cent_stat}) at all times, which, in turn, requires access to all agent observations at all times. Hence, the SPRT may not be implementable beyond the fully centralized or all-to-all agent communication scenario as discussed in Section~\ref{subsec:prel_seq_test}. Motivated by practicable agent networking applications, in this paper we are interested in distributed scenarios, in which inter-agent communication is restricted to a preassigned (possibly sparse) communication graph. In particular, given a graph $G=(V,E)$, possibly sparse, modeling inter-agent communication, we consider scenarios in which inter-agent cooperation is limited to a single round of message exchanges among neighboring agents per observation sampling epoch. To formalize the distributed setup and the corresponding subclass $\mathcal{D}_{d}$ of distributed tests, denote by $\mathcal{G}_{d,i}(t)$ the information available at an agent $i$ at time $t$. The information set includes the observations sampled by $i$ and the messages received from its neighbors till time $t$, and is formally given by the $\sigma$-algebra
\begin{align}
\label{eq:0.02a}
\mathcal{G}_{d,i}(t)=\sigma\left\{y_{i}(s), m_{i,j}(s),~~\forall 1\le s \le t, \forall j\in \Omega_{i}\right\}.
\end{align}
The quantity $m_{i,j}(s)$ denotes the message received by $i$ from its neighbor $j\in\Omega_{i}$ at time $s$, assumed to be a vector of constant (time-invariant) dimension; the exact message generation rule is determined by the (distributed) testing procedure $D_{d}$ in place and, necessarily, $m_{i,j}(s)$ is measurable w.r.t. the $\sigma$-algebra $\mathcal{G}_{d,j}(s)$. Based on the information content $\mathcal{G}_{d,i}(t)$ at time $t$, an agent decides on whether to continue taking observations or to stop, in which case it decides on one of the hypotheses $H_{0}$ or $H_{1}$. A distributed testing procedure $D_{d}$ then consists of message generation rules and local stopping and decision criteria at the agents. Intuitively, and formally by~\eqref{eq:0.02a} and the fact that $m_{i,j}(s)$ is measurable w.r.t $\mathcal{G}_{d,j}(s)$ for all $(i,j)$ and $s$, we have
\begin{equation}
\label{subsigma}
\mathcal{G}_{d,i}(t)\subset\mathcal{G}_{c}(t)~~~\forall i,t,
\end{equation}
i.e., the information available at an agent $i$ in the distributed setting is a subset of the information that would be available to a hypothetical center in a centralized setting as given in Section~\ref{subsec:prel_seq_test}. Formally, this implies that the class of distributed tests $\mathcal{D}_{d}$ is a subset of the class of centralized or all-possible tests $\mathcal{D}_{c}$ as given in Section~\ref{subsec:prel_seq_test}, i.e., $\mathcal{D}_{d}\subset\mathcal{D}_{c}$. (Intuitively, it means any distributed test can be implemented in a centralized setup or by assuming all-to-all communication.) In this paper, we are interested in characterizing the distributed test that conforms to the communication restrictions above and is optimal in the following sense:
\begin{align}
\label{eq:0.02}
&\min\limits_{D_{d}\in \mathcal{D}_{d}}\max\limits_{i=1,2,\hdots,N}\mathbb{E}_{1}[T_{D_{d},i}], \nonumber\\
&\mbox{s.t.}~\mathbb{P}_{FA}^{D_{d},i}\le\alpha, \mathbb{P}_{M}^{D_{d},i}\le\beta, \forall i=1,2,\hdots,N.
\end{align}
In the above, $T_{D_{d},i}$ denotes the termination (stopping) time at an agent $i$ and $\mathbb{P}_{FA}^{D_{d},i}$, $\mathbb{P}_{M}^{D_{d},i}$, the respective false alarm and miss probabilities at $i$. Note that, since $\mathcal{D}_{d}\subset\mathcal{D}_{c}$, for any distributed test $D_{d}$ we have $\mathbb{E}_{1}[T_{D_{d},i}]\geq \mathbb{E}_{1}[T_{c}]$ for all $i$ at any specified $\alpha$ and $\beta$, i.e., a distributed procedure cannot outperform the optimal centralized procedure, the SPRT given by~\eqref{eq:f5b}-\eqref{eq:f5c}. Rather than solving~\eqref{eq:0.02}, in this paper, we propose a distributed testing procedure of the consensus+innovations type (see Section~\ref{dsd}), which is efficiently implementable and analyze its performance w.r.t. the optimal centralized testing procedure. In particular, we study its performance as a function of the inter-agent communication graph and show that as long as the network is \emph{reasonably} well-connected, but possibly much sparser than the complete or all-to-all network, the suboptimality (in terms of the expected stopping times at the agents) of the proposed distributed procedure w.r.t. the optimal centralized SPRT procedure is upper bounded by a constant factor much smaller than $N$. Our results clearly demonstrate the benefits of collaboration (even over a sparse communication network) as, in contrast, in the non-collaboration case (i.e., each agent relies on its own observations only) each agent would require $N$ times the expected number of observations to achieve prescribed $\alpha$ and $\beta$ as compared to the optimal centralized scenario.\footnote{In the non-collaboration setup, the optimal procedure at an agent is to perform an SPRT using its local observation sequence only; w.r.t. the centralized, this implies that the effective SNR at an agent reduces by a factor $1/N$ and hence (see~\cite{wald1948optimum}) the agent would require $N$ times more observations (in expectation) to achieve the same level of false alarm and miss.}
\section{A Distributed Sequential Detector}
\label{dsd}
\noindent To mitigate the high communication and synchronization overheads in centralized processing, we propose a distributed sequential detection scheme where network communication is restricted to a more \emph{localized} agent-to-agent interaction scenario. More specifically, in contrast to the fully centralized setup described in Section \ref{subsec:prel_seq_test}, we now consider sequential detection in a distributed information setup in which inter-agent information exchange or cooperation is restricted to a preassigned (arbitrary, possibly sparse) communication graph, whereby an agent exchanges its (scalar) test statistic and a scalar function of its latest sensed information with its (one-hop) neighbors.
In order to achieve reasonable detection performance with such localized interaction, we propose a distributed sequential detector of the \emph{consensus}+\emph{innovations} form. Before discussing the details of our algorithm, we state an assumption on the inter-agent communication graph.
\begin{myassump}{A3}
\label{as:3}
\emph{The inter-agent communication graph is connected, i.e. $\lambda_{2}(\mathbf{L}) > 0$, where $\mathbf{L}$ denotes the associated graph Laplacian matrix}.
\end{myassump}
\noindent\textbf{Decision Statistic Update}.
In the proposed distributed algorithm, each agent $i$ maintains a test statistic $P_{d,i}(t)$, which is updated recursively in a distributed fashion as follows:
\begin{align}
\label{eq:2}
&P_{d,i}(t+1)=\frac{t}{t+1}\left(w_{ii}P_{d,i}(t)+\sum_{j\in \Omega_{i}}w_{ij}P_{d,j}(t)\right)\nonumber\\&+\frac{1}{t+1}\left(w_{ii}\eta_{i}(t+1)+\sum_{j\in \Omega_{i}}w_{ij}\eta_{j}(t+1)\right),
\end{align}
\noindent where $\Omega_{i}$ denotes the communication neighborhood of agent $i$ and the $w_{ij}$'s denote appropriately chosen combination weights (to be specified later).
\noindent We collect the weights $w_{ij}$ in an $N\times N$ matrix $\mathbf{W}$, where we assign $w_{ij}=0$ if $(i,j)\notin E$. Denoting by $\mathbf{P}_{d}(t)$ and $\mathbf{\eta}(t)$ the vectors $[P_{d,1}(t),P_{d,2}(t),\hdots,P_{d,N}(t)]^{\top}$ and $[\eta_{1}(t),\eta_{2}(t),\hdots,\eta_{N}(t)]^{\top}$ respectively, \eqref{eq:2} can be compactly written as follows:
\begin{align}
\label{eq:3}
\mathbf{P_{d}}(t+1)=\mathbf{W}\left(\frac{t}{t+1}\mathbf{P_{d}}(t)+\frac{1}{t+1}\mathbf{\eta}(t+1)\right).
\end{align}
\noindent Now we state some design assumptions on the weight matrix $\mathbf{W}$.
\begin{myassump}{A4}
\label{as:4}
\emph{We design the weights $w_{ij}$'s in \eqref{eq:2} such that the matrix $\mathbf{W}$ is non-negative, symmetric, irreducible and stochastic, i.e., each row of $\mathbf{W}$ sums to one}.
\end{myassump}
\noindent We remark that, if Assumption \ref{as:4} is satisfied, then the second largest eigenvalue in magnitude of $\mathbf{W}$, denoted by $r$, turns out to be strictly less than one, see for example \cite{dimakis2010gossip}. Note that, by the stochasticity of $\mathbf{W}$, the quantity $r$ satisfies
\begin{align}
\label{eq:3.2}
r=||\mathbf{W}-\mathbf{J}||.
\end{align}
For connected graphs, a simple way to design $\mathbf{W}$ is to assign equal combination weights, in which case we have,
\begin{align}
\label{eq:3.1}
\mathbf{W}=\mathbf{I}-\delta\mathbf{L},
\end{align}
where $\delta$ is a suitably chosen constant. As shown in \cite{xiao2004fast,kar2008sensor}, Assumption \ref{as:4} can be enforced by taking $\delta$ to be in $(0, 2/\lambda_{N}(\mathbf{L}))$. The smallest value of $r$ is obtained by setting $\delta$ to be equal to $2/(\lambda_{2}(\mathbf{L})+\lambda_{N}(\mathbf{L}))$, in which case we have,
\begin{align}
\label{eq:q2}
r=||\mathbf{W}-\mathbf{J}||=\frac{(\lambda_{N}(\mathbf{L})-\lambda_{2}(\mathbf{L}))}{(\lambda_{2}(\mathbf{L})+\lambda_{N}(\mathbf{L}))}.
\end{align}
\begin{Remark}
\label{rm:0}
It is to be noted that Assumption \ref{as:4} can be enforced by appropriately designing the combination weights since the inter-agent communication graph is connected (see Assumption \ref{as:3}). Several weight design techniques satisfying Assumption \ref{as:4} exist in the literature (see, for example, \cite{xiao2004fast}). The quantity $r$ quantifies the rate of information flow in the network, and in general, the smaller the $r$ the faster is the convergence of information dissemination algorithms (such as the consensus or gossip protocol on the graph, see for example \cite{dimakis2010gossip,kar2008sensor,kar2009distributed}). The optimal design of symmetric weight matrices $\mathbf{W}$ for a given network topology that minimizes the value $r$ can be cast as a semi-definite optimization problem \cite{xiao2004fast}.
\end{Remark}
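\noindent As an illustration of the designs \eqref{eq:3.1}-\eqref{eq:q2} (our sketch; the ring topology and its size are arbitrary examples), the weight matrix and the quantity $r$ can be computed directly from the graph Laplacian:
\begin{verbatim}
import numpy as np

N = 6                                     # example: a ring of 6 agents
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

lam = np.sort(np.linalg.eigvalsh(L))      # 0 = lam_1 <= lam_2 <= ... <= lam_N
delta = 2.0 / (lam[1] + lam[-1])          # optimal delta in W = I - delta*L
W = np.eye(N) - delta * L
r = np.linalg.norm(W - np.ones((N, N)) / N, 2)
print(delta, r)                           # r = (lam_N - lam_2)/(lam_2 + lam_N)
\end{verbatim}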
\noindent\textbf{Stopping Criterion for the Decision Update}.
We now provide a stopping criterion for the proposed distributed scheme. To this end, let $S_{d,i}(t)$ denote the quantity $tP_{d,i}(t)$, and let $\gamma_{d,i}^{h}$ and $\gamma_{d,i}^{l}$ be thresholds at an agent $i$ (to be determined later) such that agent $i$ stops and makes a decision only when,
\begin{align}
\label{eq:g1}
S_{d,i}(t) \notin [\gamma_{d,i}^{l},\gamma_{d,i}^{h}]
\end{align}
for the first time.
The stopping time for reaching a decision at an agent $i$ is then defined as,
\begin{align}
\label{eq:f3}
T_{d,i}=\inf \{t ~~| S_{d,i}(t) \notin [\gamma_{d,i}^{l},\gamma_{d,i}^{h}]\},
\end{align}
and the following decision rule is adopted at $T_{d,i}$ :
\begin{align}
\label{eq:f5}
H=
\begin{cases}
H_{0} & S_{d,i}(T_{d,i}) \le \gamma_{d,i}^{l}\\
H_{1} & S_{d,i}(T_{d,i}) \ge \gamma_{d,i}^{h}.
\end{cases}
\end{align}
\noindent We refer to this distributed scheme \eqref{eq:2}, \eqref{eq:f3} and \eqref{eq:f5} as the \emph{consensus+innovations} SPRT ($\cisprt$) henceforth.
\begin{Remark}
\label{rm:1}
It is to be noted that the decision statistic update rule is distributed and recursive, in that, to realize \eqref{eq:2} each agent needs to communicate its current statistic and a scalar function of its latest sensed observation to its neighbors only; furthermore, the local update rule \eqref{eq:2} is a combination of a consensus term reflecting the weighted combination of neighbors' statistics and a local innovation term reflecting the new sensed information of itself and its neighbors. Note that the stopping times $T_{d,i}$'s are random and generally take different values for different agents. It is to be noted that the $T_{d,i}$'s are in fact stopping times with respect to the respective agent information filtrations $\mathcal{G}_{d,i}(t)$'s as defined in \eqref{eq:0.02a}. For subsequent analysis we refer to the stopping time of an agent as the stopping time for reaching a decision at an agent.
\end{Remark}
\noindent We end this section by providing some elementary properties of the distributed test statistics.
\begin{proposition}
\label{prop:SdGauss} Let the Assumptions~\ref{as:1}, \ref{as:3} and \ref{as:4} hold. For each $t$ and $i$, the statistic $S_{d,i}(t)$ defined above is Gaussian under both $H_{0}$ and $H_{1}$. In particular, we have
\begin{align}
\label{prop:SdGauss1}
\mathbb{E}_{0}[S_{d,i}(t)] = -mt~~~\mbox{and}~~~\mathbb{E}_{1}[S_{d,i}(t)]=mt,
\end{align}
and
\begin{align}
\label{prop:SdGauss2}
\mathbb{E}_{0}\left[(S_{d,i}(t)+mt)^{2}\right]=\mathbb{E}_{1}\left[(S_{d,i}(t)-mt)^{2}\right]\leq \frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}.
\end{align}
\end{proposition}
\begin{IEEEproof}
Recall from \eqref{eq:f8} that $\eta_{i}(t)$ is distributed as $\mathcal{N}(m,2m)$ for all $i=1,2,\cdots,N$, when conditioned on hypothesis $H_{1}$, where $m$ is the Kullback-Leibler divergence as defined in \eqref{eq:snr}. Hence,
\begin{align}
\label{eq:13.1}
&\mathbb{E}_{1}[S_{d,i}(t)]=\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}\mathbf{W}^{t+1-j}\mathbb{E}_{1}[\mathbf{\eta}(j)]\nonumber\\
&=m\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}\mathbf{W}^{t+1-j}\mathbf{1}\nonumber\\
&\Rightarrow \mathbb{E}_{1}[S_{d,i}(t)]=mt.
\end{align}
We note that $\mathbf{S_{\eta}}=Cov(\mathbf{\eta}(t))=2m\mathbf{I}$.
By standard algebraic manipulations we have,
\begin{align}
\label{eq:13.2}
&Var(S_{d,i}(t))=\mathbb{E}_{1}\left[(S_{d,i}(t)-mt)^{2}\right]=\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}\mathbf{W}^{t+1-j}\mathbf{S}_{\eta}\mathbf{W}^{t+1-j}\mathbf{e}_{i}\nonumber\\
&=\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}(\mathbf{W}^{t+1-j}-\mathbf{J})\mathbf{S}_{\eta}(\mathbf{W}^{t+1-j}-\mathbf{J})\mathbf{e}_{i}+\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}\mathbf{J}\mathbf{S}_{\eta}\mathbf{J}\mathbf{e}_{i}\nonumber\\
&=2m\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}(\mathbf{W}^{2(t+1-j)}-\mathbf{J})\mathbf{e}_{i}+2m\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}\mathbf{J}\mathbf{e}_{i}\nonumber\\
&\le 2m\sum_{j=1}^{t}||\mathbf{W}^{2(t+1-j)}-\mathbf{J}||\,||\mathbf{e}_{i}||^{2}+\frac{2mt}{N}\nonumber\\
&=2m\sum_{j=1}^{t}r^{2(t+1-j)}+\frac{2mt}{N}\nonumber\\
&=\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}},
\end{align}
where we use the facts that the cross terms vanish (since $(\mathbf{W}^{t+1-j}-\mathbf{J})\mathbf{J}=\mathbf{0}$), that $(\mathbf{W}^{t+1-j}-\mathbf{J})^{2}=\mathbf{W}^{2(t+1-j)}-\mathbf{J}$, and that $||\mathbf{W}^{s}-\mathbf{J}||=r^{s}$ for all $s\geq 1$.
The assertion for hypothesis $H_{0}$ follows in a similar way.
\end{IEEEproof}
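\noindent The moment bounds of Proposition~\ref{prop:SdGauss} are straightforward to check numerically. The following Monte Carlo sketch (ours; it assumes \texttt{numpy} and uses a $4$-agent ring with optimal constant weights) simulates the recursion under $H_{1}$ and compares the empirical variance of $S_{d,1}(t)$ with the bound:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

N, m, t, runs = 4, 0.5, 25, 20000
A = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)   # ring graph
L = np.diag(A.sum(1)) - A
lam = np.linalg.eigvalsh(L)
W = np.eye(N) - (2.0 / (lam[1] + lam[-1])) * L
r = np.linalg.norm(W - np.ones((N, N)) / N, 2)

S = np.zeros((runs, N))
for _ in range(t):                 # S <- W (S + eta), with eta_i ~ N(m, 2m)
    S = (S + rng.normal(m, np.sqrt(2 * m), size=(runs, N))) @ W.T
emp_var = S[:, 0].var()
bound = 2*m*t/N + 2*m*r**2 * (1 - r**(2*t)) / (1 - r**2)
print(emp_var, bound)              # empirically, emp_var stays below bound
\end{verbatim}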
\section{Main Results}
\label{sec:main_res}
\noindent We formally state the main results in this section, the proofs being provided in Section~\ref{sec:proof_res}.
\subsection{Thresholds for the $\cisprt$}
\noindent In this section we derive thresholds for the $\cisprt$, see \eqref{eq:g1}-\eqref{eq:f5}, in order to ensure that the procedure terminates in finite time a.s. at each agent and the agents achieve specified error probability requirements.
We emphasize that, in the proposed approach, a particular agent has access to the test statistics and latest sensed information of its one-hop neighborhood only. Moreover, the latest sensed information is accessed through a scalar function of the latest observation of the agents in the neighborhood. Recall that, by \eqref{eq:2} and \eqref{eq:g1}, the (distributed) test statistic at agent $i$ is given by
\begin{align}
\label{eq:dist_stat}
S_{d,i}(t)=\sum_{j=1}^{t}\mathbf{e}_{i}^{\top}\mathbf{W}^{t+1-j}\mathbf{\eta}(j).
\end{align}
\noindent For the proposed $\cisprt$, we intend to derive thresholds which guarantee the error performance in terms of the error probability requirements $\alpha$ and $\beta$, i.e., such that $\mathbb{P}_{FA}^{d,i}\le \alpha$ and $\mathbb{P}_{M}^{d,i}\le \beta, ~~\forall i=1, 2, \hdots, N$, where $\mathbb{P}_{FA}^{d,i}$ and $\mathbb{P}_{M}^{d,i}$ represent the probability of false alarm and the probability of miss for the $i$th agent defined as
\begin{align}
\label{eq:dth1}
&\mathbb{P}_{FA}^{d,i}=\mathbb{P}_{0}(S_{d,i}(T_{d,i})\ge\gamma_{d,i}^{h})\nonumber\\
&\mathbb{P}_{M}^{d,i}=\mathbb{P}_{1}(S_{d,i}(T_{d,i})\le\gamma_{d,i}^{l}),
\end{align}
with $T_{d,i}$ as defined in \eqref{eq:f3}.
\begin{Theorem}
\label{dist_th}
Let the Assumptions~\ref{as:1}-\ref{as:4} hold.\\
1) Then, for each $\alpha$ and $\beta$ there exist $\gamma_{d,i}^{h}$ and $\gamma_{d,i}^{l}$,~$\forall i=1,2,\hdots,N$, such that $\mathbb{P}_{FA}^{d,i} \le \alpha$, $\mathbb{P}_{M}^{d,i} \le \beta$, and the test concludes in finite time a.s., i.e.,\\
\begin{align}
\label{eq:th7.1}
\mathbb{P}_{1}(T_{d,i}<\infty)= 1, \forall i=1,2,\hdots,N ,
\end{align}
where $T_{d,i}$ is the stopping time for reaching a decision at agent $i$.\\
2) In particular, for given $\alpha$ and $\beta$, any choice of thresholds $\gamma_{d,i}^{h}$ and $\gamma_{d,i}^{l}$ satisfying
\begin{align}
\label{eq:d1}
\gamma_{d,i}^{h}\ge\frac{8(k+1)}{7N}\left(\log\left(\frac{2}{\alpha}\right)-\log(1-e^{\frac{-Nm}{4(k+1)}})\right)=\gamma_{d}^{h,0}
\end{align}
\begin{align}
\label{eq:d2}
\gamma_{d,i}^{l}\le\frac{8(k+1)}{7N}\left(\log\left(\frac{\beta}{2}\right)+\log(1-e^{\frac{-Nm}{4(k+1)}})\right)=\gamma_{d}^{l,0},
\end{align}
where $m$ is defined in \eqref{eq:snr} and $k$ is defined by
\begin{align}
\label{eq:d3}
Nr^{2}= k,
\end{align}
with $r$ as in \eqref{eq:3.2}, achieves a.s. finite stopping at an agent $i$ while ensuring that $\mathbb{P}_{FA}^{d,i} \le \alpha$ and $\mathbb{P}_{M}^{d,i} \le \beta$.
\end{Theorem}
\noindent The first assertion ensures that for any set of pre-specified error metrics $\alpha$ and $\beta$ (satisfying Assumption \ref{as:2}), the $\cisprt$ can be designed to achieve the error requirements while ensuring finite stopping a.s.
It is to be noted that the ranges associated with the thresholds in \eqref{eq:d1}-\eqref{eq:d2} provide sufficient threshold design conditions for achieving pre-specified performance, but may not be necessary. The thresholds chosen according to \eqref{eq:d1}-\eqref{eq:d2} are not guaranteed to be optimal in the sense of the expected stopping time of the $\cisprt$ algorithm and there might exist better thresholds (in the sense of expected stopping time) that achieve the pre-specified error requirements.
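\noindent In practice, the threshold expressions \eqref{eq:d1}-\eqref{eq:d2} are trivial to evaluate; a minimal sketch (ours, assuming \texttt{numpy}; the numerical inputs are placeholders):
\begin{verbatim}
import numpy as np

def cisprt_thresholds(alpha, beta, N, m, r):
    # closed-form sufficient thresholds of Theorem (dist_th)
    k = N * r**2
    c = 8 * (k + 1) / (7 * N)
    g = np.log(1 - np.exp(-N * m / (4 * (k + 1))))
    gam_h = c * (np.log(2 / alpha) - g)
    gam_l = c * (np.log(beta / 2) + g)
    return gam_l, gam_h

print(cisprt_thresholds(alpha=1e-4, beta=1e-4, N=30, m=0.5, r=0.5))
\end{verbatim}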
\begin{Remark}
\label{rm:31}
We remark the following: 1) We have shown that the $\cisprt$ algorithm can be designed so as to achieve the pre-specified error metrics at every agent $i$. This, in turn, implies that the probability of not reaching decision consensus among the agents can be upper bounded by $N\beta$ when conditioned on $H_{1}$ and by $N\alpha$ when conditioned on $H_{0}$. It is to be noted that, with $\alpha \to 0$ and $\beta \to 0$, the probability of not reaching decision consensus conditioned on either of the hypotheses goes to $0$ as well;
2) The factor $k$ in the closed-form expressions of the thresholds in \eqref{eq:d1} and \eqref{eq:d2} relates the value of the thresholds to the rate of information flow $r$ and, hence, in turn, can be related to the degree of connectivity of the inter-agent communication graph under consideration, see \eqref{eq:3.2}-\eqref{eq:3.1} and the accompanying discussion. From Assumption \ref{as:4}, we have that $r < 1$. As $r$ becomes smaller, which intuitively corresponds to a faster rate of information flow in the inter-agent network, the thresholds needed to achieve the pre-specified error metrics become smaller, i.e., the interval $[\gamma_{d,i}^{l}, \gamma_{d,i}^{h}]$ shrinks for all $i=1, 2, \hdots, N$.
\end{Remark}
\subsection{Probability Distribution of $T_{d,i}$ and $T_{c}$}
\noindent We first characterize the stopping time distributions for the centralized SPRT detector (see Section \ref{subsec:prel_seq_test}) and those of the distributed $\cisprt$. Subsequently, we compare the centralized and distributed stopping times by studying their respective large deviation tail probability decay rates.
\begin{Theorem}(\hspace{-0.5pt}\cite{darling1953first,hieber2012note})
\label{1.1}
Let the Assumptions \ref{as:1} and \ref{as:2} hold. Then, for the SPRT in the centralized setup \eqref{eq:cent_stat}-\eqref{eq:f5c}, we have
\begin{align}
\label{eq:1.01}
\mathbb{P}_{1}(T_{c}> t)\geq\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)K_{t}^{\infty}\left(\gamma_{c}^{h}\right)-\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)K_{t}^{\infty}\left(\gamma_{c}^{l}\right),
\end{align}
where
\begin{align}
\label{eq:1.011}
K_{t}^{S}\left(a\right)=\frac{\sigma^{2}\pi}{N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}\sum_{s=1}^{S}\frac{s(-1)^{s+1}}{\frac{Nm}{4}+\frac{\sigma^{2}s^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}}\exp\left(-\left(\frac{Nm}{4}+\frac{\sigma^{2}s^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}\right)t\right)\sin\left(\frac{s\pi a}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right),
\end{align}
Here, $T_{c}$ is defined in \eqref{eq:f5b} and $\gamma_{c}^{h}$ and $\gamma_{c}^{l}$ are the associated SPRT thresholds, chosen to achieve the specified error requirements $\alpha$ and $\beta$.
\end{Theorem}
\noindent The above characterization of the stopping distribution of Wald's SPRT was obtained in \cite{darling1953first,hieber2012note}. In particular, it was derived by studying the first passage time distribution of an associated continuous time Wiener process with a constant drift; intuitively, the continuous time approximation of the discrete time SPRT consists of replacing the discrete time likelihood increments by a Wiener process with a constant drift that reflects the mean under the hypothesis in place. This way, the sequence obtained by sampling the continuous time process at integer time instants is equivalent in distribution to the (discrete time) Wald's SPRT considered in this paper. The term on the R.H.S. of~\eqref{eq:1.01} is exactly equal to the probability that the first passage time of the continuous time Wiener process with left and right boundaries $\gamma_{c}^{l}$ and $\gamma_{c}^{h}$, respectively, is greater than $t$, whereas it is, in general, only a lower bound for the discrete time SPRT (as stated in Theorem~\ref{1.1}), since increments in the latter occur at discrete (integer) time instants only.
\noindent We now provide a characterization of the stopping time distributions of the $\cisprt$ algorithm.
\begin{Lemma}
\label{1.2}
Let the Assumptions \ref{as:1}-\ref{as:4} hold. Consider the $\cisprt$ algorithm given in \eqref{eq:3}, \eqref{eq:f3} and \eqref{eq:f5} and suppose that, for specified $\alpha$ and $\beta$, the thresholds $\gamma_{d,i}^{h}$ and $\gamma_{d,i}^{l}$, $i=1,\cdots,N$, are chosen to satisfy the conditions derived in \eqref{eq:d1} and \eqref{eq:d2}. We then have,
\begin{align}
\label{eq:1.02}
&\mathbb{P}_{1}(T_{d,i}>t) \le\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big),~~\forall i=1, 2, \hdots, N,
\end{align}
where $T_{d,i}$ is the stopping time of the $i$-th agent to reach a decision as defined in~\eqref{eq:f3}.
\end{Lemma}
\subsection{Comparison of stopping times of the distributed and centralized detectors}
\noindent In this section we compare the stopping times $T_{c}$ and $T_{d,i}$ by studying their respective large deviation tail probability decay rates. We utilize the bounds derived in Theorem \ref{1.1} and Lemma \ref{1.2} to this end.
\begin{corollary}
\label{7.2}
Let the hypotheses of Theorem \ref{1.1} hold. Then we have the following large deviation characterization for the tail probabilities of $T_{c}$:
\begin{align}
\label{eq:12.52}
\liminf_{t\to\infty}\frac{1}{t}\log(\mathbb{P}_{1}(T_{c}> t))\geq -\frac{Nm}{4}-\frac{\sigma^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}.
\end{align}
\end{corollary}
It is to be noted that the exponent is a function of the thresholds $\gamma_{c}^{h}$ and $\gamma_{c}^{l}$; as the error constraints $\alpha$ and $\beta$ decrease, the gap $\gamma_{c}^{h}-\gamma_{c}^{l}$ grows, so that $\frac{Nm}{4}+\frac{\sigma^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}} \approx \frac{Nm}{4}$.
\begin{Theorem}
\label{7.3}
Let the hypotheses of Lemma \ref{1.2} hold. Then we have the following large deviation characterization for the tail probabilities of the $T_{d,i}$'s:
\begin{align}
\label{eq:12.69}
\limsup_{t\to\infty}\frac{1}{t}\log(\mathbb{P}_{1}(T_{d,i}> t))\le -\frac{Nm}{4},~~\forall i=1,2,\hdots,N.
\end{align}
\end{Theorem}
\noindent
Importantly, the upper bound for the large deviation exponent of the $\cisprt$ in Theorem \ref{7.3} is independent of the inter-agent communication topology as long as the connectivity conditions, Assumptions \ref{as:3}-\ref{as:4}, hold. Finally, since $\frac{\sigma^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}=o(Nm)$, the performance of the distributed $\cisprt$ approaches that of the centralized SPRT, in the sense of stopping time tail exponents, as $N$ tends to $\infty$.
\subsection{Comparison of the expected stopping times of the centralized and distributed detectors}
\noindent In this section we compare the expected stopping times of the centralized SPRT detector and the proposed $\cisprt$ detector. Recall that $\mathbb{E}_{j}[T_{d,i}]$ and $\mathbb{E}_{j}[T_{c}]$ represent the expected stopping times for reaching a decision for the $\cisprt$ (at an agent $i$) and its centralized counterpart respectively, where $j\in\{0,1\}$ denotes the hypothesis on which the expectations are conditioned. Without loss of generality we compare the expectations conditioned on Hypothesis $H_{1}$; similar conclusions (with obvious modifications) hold when the expectations are conditioned on $H_{0}$ (see also Section \ref{subsec:prel_seq_test}).
\noindent Also, for the sake of mathematical brevity and clarity, we take $\alpha=\beta=\epsilon$ in this subsection.
\noindent Recall Section \ref{subsec:prel_seq_test} and note that, at any instant of time $t$, the information $\sigma$-algebra $\mathcal{G}_{d,i}(t)$ at any agent $i$ is a subset of $\mathcal{G}_{c}(t)$, the information $\sigma$-algebra of a (hypothetical) center, which has access to the data of all agents at all times. This implies that any distributed procedure (in particular the $\cisprt$) can be implemented in the centralized setting, and, since $\mathcal{M}(\epsilon)$ (see~\eqref{eq:edc3100}) constitutes a lower bound on the expected stopping time of any sequential test achieving error probabilities $\alpha=\beta=\epsilon$, we have that
\begin{align}
\label{eq:edc1}
\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\ge 1,~~\forall~~i=1,2,\hdots,N,
\end{align}
for all $\epsilon\in (0,1/2)$.
\noindent In order to provide an upper bound on the ratio $\mathbb{E}_{1}[T_{d,i}]/\mathcal{M}(\epsilon)$ and, hence, compare the performance of the proposed $\cisprt$ detector with the optimal centralized detector, we first obtain a characterization of $\mathbb{E}_{1}[T_{d,i}]$ in terms of the algorithm thresholds as follows.
\begin{Theorem}
\label{edct_1}
Let the Assumptions \ref{as:1}-\ref{as:4} hold and let $\alpha=\beta=\epsilon$. Suppose that the thresholds of the $\cisprt$ are chosen as $\gamma_{d,i}^{h}=\gamma_{d}^{h,0}$ and $\gamma_{d,i}^{l}=\gamma_{d}^{l,0}$ for all $i=1,\cdots,N$, where $\gamma_{d}^{h,0}$ and $\gamma_{d}^{l,0}$ are defined in \eqref{eq:d1}-\eqref{eq:d2}. Then, the stopping time $T_{d,i}$ of the $\cisprt$ at an agent $i$ satisfies
\begin{align}
\label{eq:edct_11}
\frac{(1-2\epsilon)\gamma_{d,i}^{h}}{m}-\frac{c}{m}\le\mathbb{E}_{1}[T_{d,i}] \le \frac{5\gamma_{d,i}^{h}}{4m}+\frac{1}{1-e^{\frac{-Nm}{4(k+1)}}},
\end{align}
where $k=Nr^{2}$, $r$ is as defined in \eqref{eq:3.2}, and $c>0$ is a constant that may be chosen to be independent of the thresholds and of $\epsilon$.
\end{Theorem}
\noindent It is to be noted that, when $\alpha=\beta=\epsilon$, we have $\gamma_{d,i}^{h}=-\gamma_{d,i}^{l}$ from \eqref{eq:d1} and \eqref{eq:d2}. The upper bound derived in the above assertion might be loose, owing to the approximations related to the non-elementary $\mathbb{Q}$-function. We use the derived upper bound for comparing the performance of the $\cisprt$ algorithm with that of its centralized counterpart. The constant $c>0$ in the lower bound is independent of the thresholds $\gamma_{d,i}^{l}$ and $\gamma_{d,i}^{h}$ (and hence, also independent of the error tolerance $\epsilon$) and is a function of the network topology and the Gaussian model statistics only. Explicit expressions and bounds on $c$ may be obtained by refining the various estimates in the proofs of Lemma~\ref{lm:div_est} and Theorem~\ref{edct_1}, see Section~\ref{sec:proof_res}. However, for the current purposes, it is important to note that $c=o(\gamma_{d}^{h,0})$, i.e., as $\epsilon$ goes to zero, or equivalently in the limit of large thresholds, $c/\gamma_{d}^{h,0}\rightarrow 0$. Hence, as $\epsilon\rightarrow 0$, the more readily computable quantity $\frac{(1-2\epsilon)\gamma_{d,i}^{h}}{m}$ may be viewed as a reasonably good approximation to the lower bound in Theorem~\ref{edct_1}.
\begin{Theorem}
\label{edct}
Let the hypotheses of Theorem \ref{edct_1} hold. Then, we have the following characterization of the ratio of the expected stopping times of the $\cisprt$ and the centralized detector in asymptotics of the $\epsilon$,
\begin{align}
\label{eq:edc4}
1\le\limsup_{\epsilon\to0}\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\le\frac{10(k+1)}{7},~~\forall i=1,2,\hdots,N,
\end{align}
where $k=Nr^{2}$ and $r$ is as defined in \eqref{eq:3.2}.
\end{Theorem}
\noindent Theorem \ref{edct} shows that the $\cisprt$ algorithm can be designed in such a way that, as the pre-specified error metrics $\alpha$ and $\beta$ go to $0$, the ratio of the expected stopping times of the $\cisprt$ algorithm and its centralized counterpart is bounded above by $\frac{10(k+1)}{7}$, where the quantity $k$ depends on $r$ and thereby quantifies the dependence of the $\cisprt$ algorithm on the network connectivity.
\begin{Remark}
\label{rm:2}
It is to be noted that the derived upper bound for the ratio of the expected stopping times of the $\cisprt$ algorithm and its centralized counterpart may not be tight. The looseness is due to the fact that the chosen thresholds are sufficient conditions only, not necessary ones. As pointed out in Remark \ref{rm:31}, there might exist a better choice of thresholds for which the pre-specified error metrics are still satisfied. Hence, given a set of pre-specified error metrics and a network topology, the upper bound of the derived assertion above can be minimized by choosing the optimal weights for $\mathbf{W}$ as shown in \cite{xiao2004fast}. It can be seen that the ratio of the expected stopping times of the isolated SPRT based detector, i.e., the non-collaboration case, and the centralized SPRT based detector is $N$ (see Section~\ref{subsec:prel_seq_test}). Hence, in order for the $\cisprt$ to yield savings in stopping time with respect to the isolated SPRT based detector, it suffices that $\frac{10(k+1)}{7}\le N$, which is equivalent to $r\le \sqrt{\frac{7N-10}{10N}}$.
\end{Remark}
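\noindent For illustration, with $N=30$ the sufficient condition above reads $r\le\sqrt{(7\cdot 30-10)/(10\cdot 30)}=\sqrt{2/3}\approx 0.8165$; as it turns out, all but one of the $r$ values reported in Table~\ref{tab:1} of Section~\ref{sec:sim} below satisfy the corresponding condition, the single exception being the sparsest $N=30$ network ($g=0.3$, $r=0.8241$).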
\section{Dependence of the $\cisprt$ on Network Connectivity: Illustration}
\label{illust}
\noindent In this section, we illustrate the dependence of the $\cisprt$ algorithm on the network connectivity by considering a class of graphs. Recall from Section \ref{dsd} that the quantity $r$ quantifies the \emph{rate of information flow} in the network: in general, the smaller the $r$, the faster the convergence of information dissemination algorithms (such as consensus or gossip protocols \cite{dimakis2010gossip,kar2008sensor,kar2009distributed}) on the graph. Moreover, the optimal design of symmetric weight matrices $\mathbf{W}$ for a given network topology that minimizes the value of $r$ can be cast as a semi-definite optimization problem \cite{xiao2004fast}.
To quantify the dependence of the $\cisprt$ algorithm on the graph topology, we note that the limit derived in \eqref{eq:edc4} is a function of $\mathbf{W}$ and can be rewritten as follows:
\begin{align}
\label{eq:i1}
\limsup_{\epsilon\to0}\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\le\frac{10(Nr^{2}+1)}{7}\doteq\mathcal{R}(\mathbf{W}),
\end{align}
i.e., the derived upper bound $\mathcal{R}(\mathbf{W})$ is a function of the chosen weight matrix $\mathbf{W}$. Based on~\eqref{eq:i1}, naturally, a weight design guideline would be to design $\mathbf{W}$ (under the network topological constraints) so as to minimize $\mathcal{R}(\mathbf{W})$, which, by~\eqref{eq:i1} and as discussed earlier, corresponds to minimizing $r=\|\mathbf{W}-\mathbf{J}\|$. This leads to the following upper bound on the achievable performance of the $\cisprt$:
\begin{equation}
\label{23456}
\limsup_{\epsilon\to0}\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\le\min_{\mathbf{W}}\mathcal{R}(\mathbf{W}).
\end{equation}
By restricting attention to constant link weights, i.e., $\mathbf{W}$'s of the form $(\mathbf{I}-\delta\mathbf{L})$ and noting that
\begin{equation}
\label{234567}
\min_{\delta}\|\mathbf{I}-\delta\mathbf{L}-\mathbf{J}\|=\frac{(\lambda_{N}(\mathbf{L})-\lambda_{2}(\mathbf{L}))}{(\lambda_{2}(\mathbf{L})+\lambda_{N}(\mathbf{L}))},
\end{equation}
(see~\eqref{eq:q2}), we further obtain
\begin{align}
\label{eq:i3}
\limsup_{\epsilon\to0}\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\le \min_{\mathbf{W}}\mathcal{R}(\mathbf{W})\le \min_{\delta}\mathcal{R}(\mathbf{I}-\delta\mathbf{L}) =\frac{10}{7}+\frac{10N(\lambda_{N}(\mathbf{L})-\lambda_{2}(\mathbf{L}))^{2}}{7(\lambda_{2}(\mathbf{L})+\lambda_{N}(\mathbf{L}))^{2}}.
\end{align}
The final bound obtained in~\eqref{eq:i3} might not be tight, being an upper bound (there may exist $\mathbf{W}$ matrices not of the form $\mathbf{I}-\delta\mathbf{L}$ with smaller $r$) to a possibly loose upper bound derived in \eqref{eq:edc4}, but, nonetheless, directly relates the performance of the $\cisprt$ to the spectra of the graph Laplacian and hence the graph topology. From~\eqref{eq:i3} we may further conclude that networks with smaller value of the ratio $\lambda_{2}(\mathbf{L})/\lambda_{N}(\mathbf{L})$ tend to achieve better performance. This leads to an interesting graph design question: given resource constraints, specifically, say a restriction on the number of edges of the graph, how to design inter-agent communication networks that tend to minimize the eigen-ratio $\lambda_{2}(\mathbf{L})/\lambda_{N}(\mathbf{L})$ so as to achieve improved $\cisprt$ performance. To an extent, such graph design questions have been studied in prior work, see~\cite{kar2008topology}, which, for instance, shows that expander graphs tend to achieve smaller $\lambda_{2}(\mathbf{L})/\lambda_{N}(\mathbf{L})$ ratios given a constraint on the total number of network edges.
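\noindent Two extreme cases illustrate the bound in \eqref{eq:i3}. For the complete graph on $N$ vertices, $\lambda_{2}(\mathbf{L})=\cdots=\lambda_{N}(\mathbf{L})=N$, so the right-hand side of \eqref{eq:i3} collapses to $10/7$, its smallest possible value. Conversely, for poorly connected topologies with $\lambda_{2}(\mathbf{L})/\lambda_{N}(\mathbf{L})\rightarrow 0$, the right-hand side approaches $10(N+1)/7$, and the guarantee degenerates accordingly.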
\section{Simulations}
\label{sec:sim}
We generate planar random geometric networks of $30$, $300$ and $1000$ agents. The $x$ and $y$ coordinates of the agents are sampled independently from the uniform distribution on the open interval $(0,1)$. Two vertices are linked by an edge if the distance between them is at most $g$. We repeat this procedure until the resulting graph is connected. We construct such a geometric network for each of $N=30$, $300$ and $1000$, with three different values of $g$, namely $g=0.3$, $0.6$ and $0.9$. The values of $r$ obtained in each case are reported in Table \ref{tab:1}.
\begin{table}[h]
\centering
\begin{tabular}{| l | l | l | l |}
\hline
$r$ & $g=0.3$ & $g=0.6$ & $g=0.9$ \\ \hline
$N=30$ & $0.8241$ & $0.5580$ & $0.2891$ \\ \hline
$N=300$ & $0.7989$ & $0.6014$ & $0.2166$ \\ \hline
$N=1000$ & $0.7689$ & $0.5940$ & $0.2297$ \\
\hline
\end{tabular}
\caption{Values of $r$}\label{tab:1}
\end{table}
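The graph generation and the computation of $r$ can be sketched as follows (ours; it assumes \texttt{numpy}, and the reported values depend on the random draw):
\begin{verbatim}
import numpy as np

def rgg_r(N, g, seed=0):
    # sample random geometric graphs on (0,1)^2 until connected,
    # then return r = ||W - J|| for the optimal constant-weight W
    rng = np.random.default_rng(seed)
    while True:
        xy = rng.uniform(0, 1, size=(N, 2))
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        A = ((d <= g) & ~np.eye(N, dtype=bool)).astype(float)
        L = np.diag(A.sum(1)) - A
        lam = np.linalg.eigvalsh(L)
        if lam[1] > 1e-9:          # lambda_2 > 0 iff the graph is connected
            return (lam[-1] - lam[1]) / (lam[1] + lam[-1])

print(rgg_r(30, 0.3))
\end{verbatim}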
We consider two cases: the $\cisprt$ and the non-collaborative (isolated) case. We consider $\alpha=\beta=\epsilon$, with $\epsilon$ ranging from $10^{-8}$ to $10^{-4}$ in steps of $10^{-6}$. For each such $\epsilon$, we conduct $2000$ simulation runs to empirically estimate the stopping time distribution $\mathbb{P}_{1}( T > t)$ of a randomly chosen agent (with uniform selection probability) for each of the cases. From these empirical probability distributions of the stopping times, we estimate the corresponding expected stopping times. Figure \ref{fig:4} shows the instantaneous behavior of the test statistics in the case of $N=300$ with $\epsilon=10^{-10}$.
\begin{figure}
\centering
\captionsetup{justification=centering}
\includegraphics[width=90mm]{expN30}
\caption{Comparison of Stopping Time Distributions for N=30}\label{fig:1}
\end{figure}
\begin{figure}
\centering
\captionsetup{justification=centering}
\includegraphics[width=90mm]{expN300}
\caption{Comparison of Stopping Time Distributions for N=300}\label{fig:2}
\end{figure}
\begin{figure}
\centering
\captionsetup{justification=centering}
\includegraphics[width=90mm]{expN1000}
\caption{Comparison of Stopping Time Distributions for N=1000}\label{fig:3}
\end{figure}
\begin{figure}
\centering
\captionsetup{justification=centering}
\includegraphics[width=90mm]{expsim}
\caption{Instantaneous behavior of $S_{d,i}(t)$}\label{fig:4}
\end{figure}
In Figures \ref{fig:1}, \ref{fig:2} and \ref{fig:3} it is demonstrated that the ratio of the expected stopping time of the $\cisprt$ algorithm and the universal lower bound $\mathcal{M}(\epsilon)$ is less than the corresponding ratio for the isolated (non-collaborative) case. The ratio of the theoretical lower bound of the expected stopping time of the $\cisprt$ derived in Theorem \ref{edct_1} and $\mathcal{M}(\epsilon)$ was also studied. More precisely, we compared the experimental ratio of the expected stopping times of the $\cisprt$ and $\mathcal{M}(\epsilon)$ with the ratio of the quantity $\frac{(1-2\epsilon)\gamma_{d,i}^{h}}{m}$ (the small-$\epsilon$ approximation of the theoretical lower bound given in Theorem~\ref{edct_1}; see also the discussion provided in Section~\ref{sec:main_res} after the statement of Theorem~\ref{edct_1}) and $\mathcal{M}(\epsilon)$. The experimental ratio of the expected stopping times of the $\cisprt$ and $\mathcal{M}(\epsilon)$ is very close to the ratio based on the (approximate) theoretical lower bound, which shows that the lower bound derived in Theorem \ref{edct_1} is reasonably sharp. Figure \ref{fig:4} shows a single run of the algorithm, namely the instantaneous behavior of the distributed test statistic $S_{d,i}(t)$ for $N=300$ and three randomly chosen agents, $i=1$, $i=10$ and $i=50$.
\section{Proofs of Main Results}
\label{sec:proof_res}
\noindent
\begin{IEEEproof}[Proof of Theorem \ref{dist_th}]
Let $\hat{A}=e^{\gamma_{d,i}^{l}}$ and $\hat{B}=e^{\gamma_{d,i}^{h}}$ where $\gamma_{d,i}^{h}$ and $\gamma_{d,i}^{l}$ $ \in \mathbb{R}$ are thresholds (to be designed) for the $\cisprt$.
In the following derivation, for a given random variable $z$ and an event $A$, we use the notation $\mathbb{E}[z;A]$ to denote the expectation $\mathbb{E}[z\mathbb{I}_{A}]$.
Let $T$ denote the random time which can take values in $\overline{\mathbb{Z}}_{+}$ given by
\begin{align}
\label{def:T}
T = \inf\left\{t | S_{d,i}(t) \notin \left[\gamma_{d,i}^{l}, \gamma_{d,i}^{h}\right]\right\}.
\end{align}
First, we show that for any $\gamma_{d,i}^{h}$ and $\gamma_{d,i}^{l}$ $\in \mathbb{R}$,
\begin{align}
\label{def:T1}
\mathbb{P}_{0}\left(T<\infty\right)=\mathbb{P}_{1}\left(T<\infty\right)=1,
\end{align}
i.e., the random time $T$ defined in~\eqref{def:T} is a.s. finite under both the hypotheses. Indeed, we have,
\begin{align}
\label{eq:stop}
&\mathbb{P}_{1}\left(T > t\right)\le\mathbb{Q}\left(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\right)\nonumber\\
&\Rightarrow\lim_{t\to\infty}\mathbb{P}_{1}\left(T > t\right)=0\nonumber\\
&\Rightarrow\mathbb{P}_{1}\left(T <\infty\right)=1.
\end{align}
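(For completeness: the displayed bound is the one established in Lemma~\ref{1.2} below, whose proof does not rely on the present theorem. The limit then holds because the term $\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}$ remains bounded in $t$, so that the argument of the $\mathbb{Q}$-function grows like $\sqrt{Nmt/2}\rightarrow\infty$.)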
The proof for $H_{0}$ follows in a similar way.
Since~\eqref{def:T1} holds, the quantity $S_{d,i}(T)$ is well-defined a.s. under $H_{0}$. Noting that, under $H_{0}$, for any $t$, the quantity $S_{d,i}(t)$ is Gaussian with mean $-mt$ and variance upper bounded by $\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}$ (see Proposition~\ref{prop:SdGauss}), we have,
\begin{align}
\label{eq:3.59}
&\mathbb{P}_{FA}^{d,i}=\mathbb{P}_{0}(S_{d,i}(T)\ge \log\hat{B})=\sum_{t=1}^{\infty}\mathbb{P}_{0}(T=t,S_{d,i}(t)\ge \log\hat{B})\nonumber\\
&\le \sum_{t=1}^{\infty}\mathbb{P}_{0}(S_{d,i}(t)\ge \log\hat{B})\nonumber\\
&\leq\sum_{t=1}^{\infty}\mathbb{Q}\Big(\frac{\log\hat{B}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big).
\end{align}
To obtain a condition for $\gamma_{d,i}^{h}$ in the $\cisprt$ such that $\mathbb{P}_{FA}^{d,i}\leq\alpha$, let us define $k > 0$ by $k=Nr^{2}$. Now, note that $k$ thus defined satisfies
\begin{align}
\label{eq:3.6a}
&\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}\le \frac{2mkt}{N}, ~~\forall t.
\end{align}
Then we have, by~\eqref{eq:3.59}-\eqref{eq:3.6a},
\allowdisplaybreaks[1]
\begin{align}
\label{eq:3.6}
&\mathbb{P}_{FA}^{d,i}\le \sum_{t=1}^{\infty}\mathbb{Q}\Big(\frac{\log\hat{B}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)
\le \sum_{t=1}^{\infty}\mathbb{Q}\Big(\frac{\log\hat{B}+mt}{\sqrt{\frac{2mt(k+1)}{N}}}\Big)
\le \frac{1}{2}\sum_{t=1}^{\infty}e^{\frac{-(\gamma_{d,i}^{h})^{2}-m^{2}t^{2}-2\gamma_{d,i}^{h}mt}{\frac{4mt(k+1)}{N}}}\nonumber\\
&=\frac{e^{-\frac{N\gamma_{d,i}^{h}}{2(k+1)}}}{2}\Big(\sum_{t=1}^{\lfloor\frac{\gamma_{d,i}^{h}}{2m}\rfloor}e^{\frac{-N(\gamma_{d,i}^{h})^{2}-Nm^{2}t^{2}}{4mt(k+1)}}+\sum_{t=\lfloor\frac{\gamma_{d,i}^{h}}{2m}\rfloor+1}^{\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor}e^{\frac{-N(\gamma_{d,i}^{h})^{2}-Nm^{2}t^{2}}{4mt(k+1)}}\nonumber\\
&+\sum_{t=\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor+1}^{\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor}e^{\frac{-N(\gamma_{d,i}^{h})^{2}-Nm^{2}t^{2}}{4mt(k+1)}}+\sum_{t=\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor+1}^{\infty}e^{\frac{-N(\gamma_{d,i}^{h})^{2}-Nm^{2}t^{2}}{4mt(k+1)}}\Big)\nonumber\\
&\le \frac{e^{-\frac{N\gamma_{d,i}^{h}}{2(k+1)}}}{2}\Big( e^{-\frac{N\gamma_{d,i}^{h}}{2(k+1)}}\underbrace{\sum_{t=1}^{\lfloor\frac{\gamma_{d,i}^{h}}{2m}\rfloor}e^{\frac{-Nmt}{4(k+1)}}}_\text{(1)}+e^{-\frac{N\gamma_{d,i}^{h}}{4(k+1)}}\underbrace{\sum_{t=\lfloor\frac{\gamma_{d,i}^{h}}{2m}\rfloor+1}^{\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor}e^{\frac{-Nmt}{4(k+1)}}}_\text{(2)}\nonumber\\
&+e^{-\frac{N\gamma_{d,i}^{h}}{8(k+1)}}\underbrace{\sum_{t=\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor+1}^{\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor}e^{\frac{-Nmt}{4(k+1)}}}_\text{(3)}+\underbrace{\sum_{t=\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor+1}^{\infty}e^{\frac{-Nmt}{4(k+1)}}}_\text{(4)}\Big)\nonumber\\
&\le \frac{e^{-\frac{N\gamma_{d,i}^{h}}{2(k+1)}}}{2(1-e^{-\frac{Nm}{4(k+1)}})}\Big( e^{-\frac{N\gamma_{d,i}^{h}}{2(k+1)}}+e^{-\frac{N\gamma_{d,i}^{h}}{4(k+1)}}e^{-\frac{N\gamma_{d,i}^{h}}{8(k+1)}}+e^{-\frac{N\gamma_{d,i}^{h}}{8(k+1)}}e^{-\frac{N\gamma_{d,i}^{h}}{4(k+1)}}+e^{-\frac{N\gamma_{d,i}^{h}}{2(k+1)}}\Big)\nonumber\\
&\le \frac{2e^{-\frac{7N\gamma_{d,i}^{h}}{8(k+1)}}}{1-e^{-\frac{Nm}{4(k+1)}}}.
\end{align}
\noindent In the above set of equations we use the fact that $\mathbb{Q}(x)$ is a non-increasing function, the inequality $\mathbb{Q}(x)\le\frac{1}{2}e^{\frac{-x^{2}}{2}}$, and we upper bound $(1)-(4)$ by their infinite geometric sums.
\noindent We now note that, a sufficient condition for $\mathbb{P}_{FA}^{d,i}\le\alpha$ to hold is the following:
\begin{align}
\label{eq:3.61}
\frac{2e^{-\frac{7N\gamma_{d,i}^{h}}{8(k+1)}}}{1-e^{-\frac{Nm}{4(k+1)}}}\le\alpha.
\end{align}
Solving \eqref{eq:3.61}, we have that, any $\gamma_{d,i}^{h}$ that satisfies
\begin{align}
\label{eq:3.62}
\gamma_{d,i}^{h}\ge \gamma_{d}^{h,0}=\frac{8(k+1)}{7N}\left(\log\left(\frac{2}{\alpha}\right)-\log(1-e^{-\frac{Nm}{4(k+1)}})\right),
\end{align}
achieves $\mathbb{P}_{FA}^{d,i}\le\alpha$ in the $\cisprt$.
Proceeding as in \eqref{eq:3.59} and \eqref{eq:3.6} we have that, any $\gamma_{d,i}^{l}$ that satisfies
\begin{align}
\label{eq:3.62a}
\gamma_{d,i}^{l}\le \gamma_{d}^{l,0}\doteq\frac{8(k+1)}{7N}\left(\log\left(\frac{\beta}{2}\right)+\log(1-e^{-\frac{Nm}{4(k+1)}})\right),
\end{align}
achieves $\mathbb{P}_{M}^{d,i}\le\beta$ in the $\cisprt$.
Clearly, by the above, any pair $(\gamma_{d,i}^{h},\gamma_{d,i}^{l})$ satisfying $\gamma_{d,i}^{h}\in [\gamma_{d}^{h,0},\infty)$ and $\gamma_{d,i}^{l}\in (-\infty,\gamma_{d}^{l,0}]$ (see~\eqref{eq:3.62} and~\eqref{eq:3.62a}) ensures that $\mathbb{P}_{FA}^{d,i}\le\alpha$ and $\mathbb{P}_{M}^{d,i}\leq\beta$. The a.s. finiteness of the corresponding stopping time $T_{d,i}$ (see~\eqref{eq:f3}) under both $H_{0}$ and $H_{1}$ follows readily by arguments as in~\eqref{def:T1}.
\end{IEEEproof}
\begin{Remark}
\label{rem:tighter}
It is to be noted that the derived thresholds are sufficient conditions only. The approximations made while deriving the threshold expressions (see $(1)$-$(4)$ in \eqref{eq:3.6}) were introduced so as to obtain a tractable expression for the admissible range. By solving the following set of equations
\begin{align}
\label{eq:3.12111}
&\frac{1}{2}\sum_{t=1}^{\infty}e^{\frac{-N(\gamma_{d,i}^{l})^{2}-Nm^{2}t^{2}+2N\gamma_{d,i}^{l}mt}{4mt(k+1)}}\le \beta \nonumber\\
&\frac{1}{2}\sum_{t=1}^{\infty}e^{\frac{-N(\gamma_{d,i}^{h})^{2}-Nm^{2}t^{2}-2N\gamma_{d,i}^{h}mt}{4mt(k+1)}}\le \alpha
\end{align}
numerically, tighter thresholds can be obtained.
\end{Remark}
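\noindent A minimal numerical sketch of this refinement (ours, assuming \texttt{numpy}; the series truncation point is a practical choice):
\begin{verbatim}
import numpy as np

def fa_bound(gam_h, N, m, k, t_max=100000):
    # truncated left-hand side of the second inequality in (eq:3.12111)
    t = np.arange(1.0, t_max + 1)
    e = (-N*gam_h**2 - N*m**2*t**2 - 2*N*gam_h*m*t) / (4*m*t*(k + 1))
    return 0.5 * np.exp(e).sum()

def tight_gam_h(alpha, N, m, k, lo=0.0, hi=100.0):
    # the bound is decreasing in gam_h, so bisection applies
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fa_bound(mid, N, m, k) > alpha else (lo, mid)
    return hi
\end{verbatim}
The analogous computation with the first inequality yields a tighter $\gamma_{d,i}^{l}$ by symmetry.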
\noindent\begin{IEEEproof}[Proof of Lemma \ref{1.2}]
Let us define the event $A_{s}^{i}$ as $\{\gamma_{d,i}^{l}\le S_{d,i}(s) \le \gamma_{d,i}^{h}\}$.
Now, note that
\begin{align}
\label{eq:12}
\mathbb{P}_{1}(T_{d,i}>t)=\mathbb{P}_{1}(\cap_{s=1}^{t}A_{s}^{i}),
\end{align}
and
\begin{align}
\label{eq:13}
\mathbb{P}_{1}(\cap_{s=1}^{t}A_{s}^{i}) \le \mathbb{P}_{1}(A_{t}^{i}).
\end{align}
By Proposition~\ref{prop:SdGauss}, under $H_{1}$, for any $t$, the quantity $S_{d,i}(t)$ is Gaussian with mean $mt$ and variance upper bounded by $\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}$. Hence we have, for all $i=1,2,\hdots,N$,
\begin{align}
\label{eq:14}
&\mathbb{P}_{1}(T_{d,i}>t) \le\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big).
\end{align}
\end{IEEEproof}
\noindent\begin{IEEEproof}[Proof of Corollary \ref{7.2}]
For simplicity of notation, let $a=\frac{Nm}{4}$ and $b=\frac{\sigma^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}$.
From \eqref{eq:1.01}, we have,
\begin{align}
\label{eq:12.6}
&\frac{1}{t}\log(\mathbb{P}_{1}(T_{c}> t))\geq\frac{1}{t}\log\left(\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)K_{t}^{\infty}\left(\gamma_{c}^{h}\right)-\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)K_{t}^{\infty}\left(\gamma_{c}^{l}\right)\right)\nonumber\\
&=\frac{1}{t}\log\left(\exp\left(-\left(a+b\right)t\right)\right)\nonumber\\&+\frac{1}{t}\log\left(b\sum_{s=1}^{\infty}\frac{s(-1)^{s+1}}{a+s^{2}b}\exp\left(-b(s^{2}-1)t\right)\right.\nonumber\\&\left.\times\left(\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)\sin\left(\frac{s\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)-\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)\sin\left(\frac{s\pi\gamma_{c}^{l}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\right)\right).
\end{align}
For all $t,S\geq 1$, let
\begin{align}
U(t,S)=\frac{1}{t}\log\left(b\sum_{s=1}^{S}\frac{s(-1)^{s+1}}{a+s^{2}b}\exp\left(-b(s^{2}-1)t\right)\right.\nonumber\\\left.\times\left(\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)\sin\left(\frac{s\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)-\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)\sin\left(\frac{s\pi\gamma_{c}^{l}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\right)\right)
\end{align}
and let $g=\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)+\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)$.
Note that for all $t\geq 1$, the limit
\begin{equation}
\label{eq:12345}
\lim_{S\rightarrow\infty}U(t,S)
\end{equation}
exists and is finite (by Theorem~\ref{1.1}), and similarly for all $S\geq 1$,
\begin{align}
\label{eq:1234}
\lim_{t\rightarrow\infty}U(t,S)=\lim_{t\to\infty}\frac{1}{t}\log\left(b\sum_{s=1}^{S}\frac{s(-1)^{s+1}}{a+s^{2}b}\exp\left(-b(s^{2}-1)t\right)\right.\nonumber\\\left.\times\sin\left(\frac{s\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\left(\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)+(-1)^{s+1}\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)\right)\right)\nonumber\\
=\lim_{t\to\infty}\frac{1}{t}\log\left(\frac{bg}{a+b}\sin\left(\frac{\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\right)\nonumber\\ = 0,
\end{align}
where we use the fact that only the largest exponent in a finite summation of exponential terms contributes to its log-normalised limit as $t\rightarrow\infty$ and
\begin{equation}
\sin\left(\frac{s\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)=(-1)^{s}\sin\left(\frac{s\pi\gamma_{c}^{l}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right).
\end{equation}
Finally, using the fact that there exists a constant $c_{5}>0$ (independent of $t$ and $S$) such that for all $t,S\geq 1$,
\begin{align}
\label{eq:12.61}
U(t,S)\le \frac{1}{t}\log\left(bg\sum_{s=1}^{S}\frac{s}{a+s^{2}b}\exp\left(-b(s^{2}-1)t\right)\right)\leq c_{5},
\end{align}
we may conclude that the convergence in~\eqref{eq:12345} is uniform in $t$ and the convergence in~\eqref{eq:1234} is uniform in $S$. This in turn implies that the order of the limits may be interchanged and we have that
\begin{equation}
\label{eq:123456}
\lim_{t\rightarrow\infty}\lim_{S\rightarrow\infty}U(t,S)=\lim_{S\rightarrow\infty}\lim_{t\rightarrow\infty}U(t,S)=0.
\end{equation}
Hence, we have from \eqref{eq:12.6} and~\eqref{eq:123456},
\begin{align}
\label{eq:12.62}
&\liminf_{t\to\infty}\frac{1}{t}\log(\mathbb{P}_{1}(T_{c}> t))\geq -(a+b)\nonumber\\&+\lim_{t\to\infty}\lim_{S\to\infty}\frac{1}{t}\log\left(b\sum_{s=1}^{S}\frac{s(-1)^{s+1}}{a+s^{2}b}\exp\left(-b(s^{2}-1)t\right)\right.\nonumber\\&\left.\times\sin\left(\frac{s\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\left(\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)-(-1)^{s}\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)\right)\right)\nonumber\\
&=-(a+b)+\lim_{S\to\infty}\lim_{t\to\infty}\frac{1}{t}\log\left(b\sum_{s=1}^{S}\frac{s(-1)^{s+1}}{a+s^{2}b}\exp\left(-b(s^{2}-1)t\right)\right.\nonumber\\&\left.\times\sin\left(\frac{s\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\left(\exp\left(\frac{N\mu\gamma_{c}^{l}}{\sigma^{2}}\right)+(-1)^{s+1}\exp\left(\frac{N\mu\gamma_{c}^{h}}{\sigma^{2}}\right)\right)\right)\nonumber\\
&=-(a+b)+\lim_{S\to\infty}\lim_{t\to\infty}\frac{1}{t}\log\left(\frac{bg}{a+b}\sin\left(\frac{\pi\gamma_{c}^{h}}{\gamma_{c}^{h}-\gamma_{c}^{l}}\right)\right)\nonumber\\
&=-(a+b)=-\left(\frac{Nm}{4}+\frac{\sigma^{2}\pi^{2}}{2N(\gamma_{c}^{h}-\gamma_{c}^{l})^{2}}\right).
\end{align}
\end{IEEEproof}
\noindent\begin{IEEEproof}[Proof of Theorem \ref{7.3}]
We use the following upper bound on the $\mathbb{Q}$-function in the proof below:
\begin{align}
\label{eq:12.79}
\mathbb{Q}(x)\le \frac{1}{x\sqrt{2\pi}}e^{-x^{2}/2},~~x>0.
\end{align}
From \eqref{eq:1.02}, \eqref{eq:12.79} and \eqref{eq:12.61}, we have,
{\small
\begin{align}
\label{eq:12.8}
&\limsup_{t\to\infty}\frac{1}{t}\log(\mathbb{P}_{1}(T_{d,i}> t))\nonumber\\& \le \limsup_{t\to\infty}\frac{1}{t}\log\Big(\frac{1}{\sqrt{2\pi}}\frac{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}{(mt-\gamma_{d,i}^{h})}e^{\frac{-N(mt-\gamma_{d,i}^{h})^{2}}{4mt+\frac{4mNr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)\nonumber\\
&= \limsup_{t\to\infty}\frac{1}{t}\left(\log\Big(\frac{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}{\sqrt{2\pi}(mt-\gamma_{d,i}^{h})}\Big)-\frac{N(\gamma_{d,i}^{h})^{2}}{4mt+\frac{4mNr^{2}(1-r^{2t})}{1-r^{2}}}\right.\nonumber\\& \left.-\frac{Nmt}{4+\frac{4Nr^{2}(1-r^{2t})}{t(1-r^{2})}}+\frac{N\gamma_{d,i}^{h}}{2+\frac{2Nr^{2}(1-r^{2t})}{t(1-r^{2})}}\right)\nonumber\\
&= -\frac{Nm}{4}.
\end{align}
}
\end{IEEEproof}
\noindent The proof of Theorem~\ref{edct_1} requires an intermediate result that estimates the divergence between the agent statistics over time.
\noindent\begin{Lemma}
\label{lm:div_est}
Let the Assumptions~\ref{as:1}, \ref{as:3} and \ref{as:4} hold. Then, there exists a constant $c_{1}$, depending on the network topology and the Gaussian model statistics only, such that
\begin{equation}
\label{lm:div_est1}
\mathbb{E}_{1}\left[\sup_{t\geq 0}\|S_{d,i}(t)-S_{d,j}(t)\|\right]\leq c_{1}
\end{equation}
for all agent pairs $(i,j)$.
\end{Lemma}
\noindent\begin{IEEEproof}
Denoting by $\mathbf{S}_{d}(t) = t\mathbf{P}_{d}(t)$ the vector of the agent test statistics $S_{d,i}(t)$'s, we have by~\eqref{eq:3},
\begin{equation}
\label{lm:div_est2}
\mathbf{S}_{d}(t+1) = \mathbf{W}\left(\mathbf{S}_{d}(t) + \mathbf{\eta}(t+1)\right).
\end{equation}
Let $\overline{S}_{d}(t)$ denote the average of the $S_{d,i}(t)$'s, i.e.,
\begin{equation}
\label{lm:div_est3}
\overline{S}_{d}(t)=\left(1/N\right)\left(S_{d,1}(t)+\cdots+ S_{d,N}(t)\right).
\end{equation}
Noting that $\mathbf{J}\mathbf{S}_{d}(t)=\overline{S}_{d}(t)\mathbf{1}$ and $\mathbf{W}\mathbf{J}=\mathbf{J}\mathbf{W}=\mathbf{J}$, we have from~\eqref{lm:div_est2}
\begin{equation}
\label{lm:div_est4}
\mathbf{v}_{t+1} = \left(\mathbf{W}-\mathbf{J}\right)\mathbf{v}_{t}+\mathbf{u}_{t+1},
\end{equation}
where $\mathbf{v}_{t}$ and $\mathbf{u}_{t}$, for all $t\geq 0$, are given by
\begin{equation}
\label{lm:div_est5}
\mathbf{v}_{t} = \mathbf{S}_{d}(t) - \overline{S}_{d}(t)\mathbf{1}
\end{equation}
and
\begin{equation}
\label{lm:div_est6}
\mathbf{u}_{t+1} = \left(\mathbf{W}-\mathbf{J}\right)\mathbf{\eta}(t+1).
\end{equation}
It is important to note that the sequence $\{\mathbf{u}_{t}\}$ is i.i.d. Gaussian and, in particular, there exists a constant $c_{2}$ such that $\mathbb{E}_{1}[\|\mathbf{u}_{t}\|^{2}]\leq c_{2}$ for all $t$.
Now, by~\eqref{lm:div_est4} we obtain
\begin{equation}
\label{lm:div_est7}
\|\mathbf{v}_{t+1}\|\leq r\|\mathbf{v}_{t}\|+\|\mathbf{u}_{t+1}\|,
\end{equation}
where recall $r=\|\mathbf{W}-\mathbf{J}\|<1$. Since the sequence $\{\mathbf{u}_{t}\}$ is i.i.d. and $\mathcal{L}_{2}$-bounded, an application of the Robbins--Siegmund lemma (see~\cite{baldi2002martingales}) yields
\begin{equation}
\label{lm:div_est8}
\mathbb{E}_{1}\left[\sup_{t\geq 0}\|\mathbf{v}_{t}\|\right]\leq c_{3}<\infty,
\end{equation}
where $c_{3}$ is a constant that may be chosen as a function of $r$, $c_{2}$ and $\mathbb{E}_{1}[\|\mathbf{v}_{0}\|]$. Now, noting that, for any pair $(i,j)$,
\begin{align}
\label{lm:div_est9}
\mathbb{E}_{1}\left[\sup_{t\geq 0}\|S_{d,i}(t)-S_{d,j}(t)\|\right]\leq \mathbb{E}_{1}\left[\sup_{t\geq 0}\|S_{d,i}(t)-\overline{S}_{d}(t)\|\right]+\mathbb{E}_{1}\left[\sup_{t\geq 0}\|S_{d,j}(t)-\overline{S}_{d}(t)\|\right]\leq 2c_{3},
\end{align}
the desired assertion follows.
\end{IEEEproof}
\noindent\begin{IEEEproof}[Proof of Theorem \ref{edct_1}] We prove the upper bound in Theorem~\ref{edct_1} first. Since $\mathbb{P}_{1}(T_{d,i}<\infty)=1$, for the upper bound we have,
\allowdisplaybreaks[1]
\begin{align}
\label{eq:edcp211}
&\mathbb{E}_{1}[T_{d,i}]=\sum_{t=0}^{\infty}\mathbb{P}_{1}(T_{d,i} > t)\nonumber\\&\overset{(a)}{\le}\sum_{t=0}^{\infty}\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)\nonumber\\
&=\underbrace{\sum_{t=0}^{\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor}\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)}_\text{(1)}+\underbrace{\sum_{t=\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor+1}^{\lfloor\frac{3\gamma_{d,i}^{h}}{2m}\rfloor}\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)}_\text{(2)}\nonumber\\&+\underbrace{\sum_{t=\lfloor\frac{3\gamma_{d,i}^{h}}{2m}\rfloor+1}^{\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor}\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)}_\text{(3)}+\underbrace{\sum_{t=\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor+1}^{\infty}\mathbb{Q}\Big(\frac{-\gamma_{d,i}^{h}+mt}{\sqrt{\frac{2mt}{N}+\frac{2mr^{2}(1-r^{2t})}{1-r^{2}}}}\Big)}_\text{(4)}\nonumber\\
&\overset{(b)}{\le} \frac{\gamma_{d,i}^{h}}{m}+\frac{\gamma_{d,i}^{h}}{4m}+\frac{1}{2}e^{\frac{N\gamma_{d,i}^{h}}{2(k+1)}}\sum_{t=\lfloor\frac{3\gamma_{d,i}^{h}}{2m}\rfloor+1}^{\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor}e^{\frac{-N(\gamma_{d,i}^{h})^{2}-Nm^{2}t^{2}}{4m(k+1)t}}+\frac{1}{2(1-e^{\frac{-Nm}{4(k+1)}})}\nonumber\\
&\le \frac{5\gamma_{d,i}^{h}}{4m}+\frac{1}{2(1-e^{\frac{-Nm}{4(k+1)}})}+\frac{1}{2}e^{\frac{3N\gamma_{d,i}^{h}}{8(k+1)}}\sum_{t=\lfloor\frac{3\gamma_{d,i}^{h}}{2m}\rfloor+1}^{\lfloor\frac{2\gamma_{d,i}^{h}}{m}\rfloor}e^{\frac{-Nmt}{4(k+1)}}\nonumber\\
&\le \frac{5\gamma_{d,i}^{h}}{4m}+\frac{1}{1-e^{\frac{-Nm}{4(k+1)}}},
\end{align}
where $(a)$ is due to the upper bound derived in Lemma \ref{1.2} and $(b)$ is due to the following: 1) $\forall t \in [0,\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor]$ in $(1)$, $-\gamma_{d,i}^{h}+mt$ is nonpositive and hence every term in the summation can be upper bounded by $1$; 2) $\forall t \in [\lfloor\frac{\gamma_{d,i}^{h}}{m}\rfloor+1, \lfloor\frac{3\gamma_{d,i}^{h}}{2m}\rfloor ]$ in $(2)$, $-\gamma_{d,i}^{h}+mt$ is positive and hence every term in the summation can be upper bounded by $\frac{1}{2}$; and 3) for the terms $(3)$ and $(4)$, the inequality $\mathbb{Q}(x) \le \frac{1}{2}e^{-x^{2}/2}$ is used and the sums are upper bounded by summing the resulting geometric series.
In order to obtain the lower bound, we first note that conditioned on hypothesis $H_{1}$, at the stopping time $T_{d,i}$, an agent exceeds the threshold $\gamma_{d,i}^{h}$ with probability at least $1-\epsilon$ and is lower than the threshold $\gamma_{d,i}^{l}$ with probability at most $\epsilon$. Moreover, with $\alpha=\beta=\epsilon$, $\gamma_{d,i}^{h}=-\gamma_{d,i}^{l}$.
Now, denote by $E^{h}_{i}$ the event $E^{h}_{i} = \{S_{d,i}(T_{d,i}) \geq \gamma_{d,i}^{h}\}$ and by $E^{l}_{i}$ the event $E^{l}_{i} = \{S_{d,i}(T_{d,i}) \leq \gamma_{d,i}^{l}\}$. Since $\mathbb{P}_{1}(T_{d,i} < \infty) =1$, we have that
\begin{equation}
\label{ext1}
\mathbb{E}_{1}\left[S_{d,i}(T_{d,i})\right]=\mathbb{E}_{1}\left[S_{d,i}(T_{d,i})\mathbb{I}_{E^{h}_{i}}\right]+ \mathbb{E}_{1}\left[S_{d,i}(T_{d,i})\mathbb{I}_{E^{l}_{i}}\right],
\end{equation}
where $\mathbb{I}_{\{\cdot\}}$ denotes the indicator function. We now lower bound the quantities on the R.H.S. of~\eqref{ext1}. Note that $\gamma_{d,i}^{h}\geq 0$ and $S_{d,i}(T_{d,i})\geq\gamma_{d,i}^{h}$ on $E^{h}_{i}$. Hence
\begin{equation}
\label{ext2}
\mathbb{E}_{1}\left[S_{d,i}(T_{d,i})\mathbb{I}_{E^{h}_{i}}\right]\geq \gamma_{d,i}^{h}\mathbb{P}_{1}\left(E^{h}_{i}\right)\geq (1-\epsilon)\gamma_{d,i}^{h}.
\end{equation}
Now recall the construction in the proof of Lemma~\ref{lm:div_est} and note that by~\eqref{lm:div_est2} we have
\begin{equation}
\label{ext3}
S_{d,i}(t) = S_{d,i}(t-1)-\sum_{j\in\Omega_{i}}w_{ij}\left(S_{d,i}(t-1)-S_{d,j}(t-1)\right)+\bar{\eta}_{i}(t),~~\mbox{where}~~\bar{\eta}_{i}(t):=\sum_{j\in\Omega_{i}\cup\{i\}}w_{ij}\eta_{j}(t).
\end{equation}
Hence, we have that
\begin{align}
\label{ext40}
S_{d,i}(T_{d,i})\mathbb{I}_{E^{l}_{i}} \\ \geq S_{d,i}(T_{d,i}-1)\mathbb{I}_{E^{l}_{i}} - \sum_{j\in\Omega_{i}}w_{ij}\|S_{d,i}(T_{d,i}-1)-S_{d,j}(T_{d,i}-1)\|-\|\bar{\eta}_{i}(T_{d,i})\|\\
\geq S_{d,i}(T_{d,i}-1)\mathbb{I}_{E^{l}_{i}} - \sum_{j\in\Omega_{i}}w_{ij}\sup_{t\geq 0}\|S_{d,i}(t)-S_{d,j}(t)\|-\|\bar{\eta}_{i}(T_{d,i})\|.
\end{align}
Now, observe that on the event $E^{l}_{i}$, $S_{d,i}(T_{d,i}-1)> \gamma_{d,i}^{l}$ a.s. Since $\gamma_{d,i}^{l}<0$ and $\mathbb{P}_{1}(E^{l}_{i})\leq\epsilon$ (by hypothesis), we have that
\begin{align}
\label{ext4}
\gamma_{d,i}^{l}\epsilon\leq\gamma_{d,i}^{l}\mathbb{P}_{1}(E^{l}_{i}) \\ = \mathbb{E}_{1}\left[\gamma_{d,i}^{l}\mathbb{I}_{E^{l}_{i}}\right]\leq \mathbb{E}_{1}\left[S_{d,i}(T_{d,i}-1)\mathbb{I}_{E^{l}_{i}}\right].
\end{align}
Note that, by Lemma~\ref{lm:div_est}, we have
\begin{align}
\label{ext5}
\mathbb{E}_{1}\left[\sum_{j\in\Omega_{i}}w_{ij}\sup_{t\geq 0}\|S_{d,i}(t)-S_{d,j}(t)\|\right]\\
\leq \sum_{j\in\Omega_{i}}w_{ij}\mathbb{E}_{1}\left[\sup_{t\geq 0}\|S_{d,i}(t)-S_{d,j}(t)\|\right]\\
\leq |\Omega_{i}|c_{1}.
\end{align}
Finally, by arguments similar to~\cite{wald1973sequential,lorden1970excess} for characterizing expected overshoots in stopped random sums (see, in particular, Theorem 1 in~\cite{lorden1970excess}) it follows that there exists a constant $c_{4}$ (depending on the Gaussian model statistics and the network topology only) such that
\begin{equation}
\label{ext6}
\mathbb{E}_{1}\left[\|\bar{\eta}_{i}(T_{d,i})\|\right]\leq c_{4}.
\end{equation}
In particular, note that, the constant $c_{4}$ in~\eqref{ext6} may be chosen to be independent of the thresholds and, hence, the error tolerance parameter $\epsilon$.
Substituting~\eqref{ext4}-\eqref{ext6} in~\eqref{ext40} we obtain
\begin{align}
\label{ext7}
\mathbb{E}_{1}\left[S_{d,i}(T_{d,i})\mathbb{I}_{E^{l}_{i}}\right]\geq \gamma_{d,i}^{l}\epsilon - |\Omega_{i}|c_{1} - c_{4}.
\end{align}
This together with~\eqref{ext1}-\eqref{ext2} yield
\begin{align}
\label{ext8}
\mathbb{E}_{1}\left[S_{d,i}(T_{d,i})\right]\geq\left(1-\epsilon\right)\gamma_{d,i}^{h}+\gamma_{d,i}^{l}\epsilon - |\Omega_{i}|c_{1} - c_{4}\\
=\left(1-2\epsilon\right)\gamma_{d,i}^{h} - c,
\end{align}
where the last equality follows by noting that $\gamma_{d,i}^{h}=-\gamma_{d,i}^{l}$ and setting $c=|\Omega_{i}|c_{1} + c_{4}$.
We note that the event $\{T_{d,i}=t\}$ is independent of $\{\mathbf{\eta}(j)\}_{j > t}$. We also have from Theorem \ref{dist_th} that $\mathbb{P}_{1}(T_{d,i} < \infty) =1$. Hence, we have,
\begin{align}
\label{eq:edcp11a}
&\mathbb{E}_{1}[S_{d,i}(T_{d,i})]=\mathbb{E}_{1}[\sum_{j=1}^{T_{d,i}}\mathbf{e_{i}}^{\top}\mathbf{W}^{T_{d,i}+1-j}\mathbf{\eta}(j)]\nonumber\\
&=\mathbb{E}_{1}\left[\sum_{j=1}^{\infty}\mathbb{I}_{\left\{T_{d,i}\ge j\right\}}\mathbf{e_{i}}^{\top}\mathbf{W}^{T_{d,i}+1-j}\mathbf{\eta}(j)\right]\nonumber\\
&=\sum_{j=1}^{\infty}\mathbb{E}_{1}\left[\mathbb{I}_{\left\{T_{d,i}\ge j\right\}}\mathbf{e_{i}}^{\top}\mathbf{W}^{T_{d,i}+1-j}\right]\mathbb{E}_{1}\left[\mathbf{\eta}(j)\right]\nonumber\\
&=m\sum_{j=1}^{\infty}\mathbb{E}_{1}\left[\mathbb{I}_{\left\{T_{d,i}\ge j\right\}}\mathbf{e_{i}}^{\top}\mathbf{W}^{T_{d,i}+1-j}\right]\mathbf{1}\nonumber\\
&=m\sum_{j=1}^{\infty}\mathbb{E}_{1}\left[\mathbb{I}_{\left\{T_{d,i}\ge j\right\}}\mathbf{e_{i}}^{\top}\mathbf{W}^{T_{d,i}+1-j}\mathbf{1}\right]\nonumber\\
&=m\mathbb{E}_{1}\left[T_{d,i}\right].
\end{align}
Combining \eqref{eq:edcp11a} and \eqref{ext8} we have,
\begin{align}
\label{eq:edcp11b}
\frac{(1-2\epsilon)\gamma_{d,i}^{h}}{m} - \frac{c}{m}\le \mathbb{E}_{1}[T_{d,i}]
\end{align}
and the desired assertion follows.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem \ref{edct}]
From \eqref{eq:edc1}, we first note that
\begin{align}
\label{eq:edcp1}
\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\ge 1,~~\forall i=1,2,\hdots,N.
\end{align}
From Theorem \ref{edct_1} (see \eqref{eq:edct_11}), we have the following upper bound on $\mathbb{E}_{1}[T_{d,i}]$:
\begin{align}
\label{eq:edcp2}
\mathbb{E}_{1}[T_{d,i}]\le \frac{5\gamma_{d,i}^{h}}{4m}+\frac{1}{1-e^{\frac{-Nm}{4(k+1)}}}.
\end{align}
We choose the threshold $\gamma_{d,i}^{h}$ to be
\begin{align}
\label{eq:edcp2b}
\gamma_{d,i}^{h}=\gamma_{d}^{h,0}=\frac{8(k+1)}{7N}(\log(\frac{2}{\epsilon})-\log(1-e^{\frac{-Nm}{4(k+1)}})).
\end{align}
Using \eqref{eq:edcp2} and \eqref{eq:edcp2b}, we have
\begin{align}
\label{eq:edcp3}
\limsup_{\epsilon\to0}\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\le\lim_{\epsilon\to0}\frac{\frac{10}{7}(k+1) \log(\frac{2}{\epsilon})+O(1)}{(1-2\epsilon)\log(\frac{1-\epsilon}{\epsilon})}.
\end{align}
Noting that,
\begin{align}
\label{eq:edcp4}
\limsup_{\epsilon\to0}\frac{O(1)}{(1-2\epsilon)\log(\frac{1-\epsilon}{\epsilon})}= 0,
\end{align}
we obtain
\begin{align}
\label{eq:edcp5}
\limsup_{\epsilon\to0}\frac{\mathbb{E}_{1}[T_{d,i}]}{\mathcal{M}(\epsilon)}\le \frac{10(k+1)}{7}.
\end{align}
Combining \eqref{eq:edcp5} and \eqref{eq:edcp1}, the result follows.
\end{IEEEproof}
\section{Conclusion}
\label{sec:conc}
In this paper we have considered the sequential detection of a Gaussian binary hypothesis by a sparsely interconnected network of agents. The $\cisprt$ algorithm we proposed combines two terms: a \emph{consensus} term that updates each agent's test statistic with the test statistics provided by the agents in its one-hop neighborhood, and an \emph{innovation} term that updates the current agent test statistic with the newly sensed local information. We have shown that the $\cisprt$ can be designed to achieve a.s. finite stopping at each network agent with guaranteed error performance. We have provided an explicit characterization of the large deviation decay exponents of the tail probabilities of the $\cisprt$ stopping times and of its expected stopping time as a function of the network connectivity. The performance of the $\cisprt$ was further benchmarked w.r.t. the optimal centralized sequential detector, the SPRT. The techniques developed in this paper are of independent interest and we envision their applicability to other distributed sequential procedures. An interesting future direction would be to consider networks with random time-varying topology. We also intend to develop extensions of the $\cisprt$ to setups with correlated and non-linear non-Gaussian observation models.
\section{Acknowledgment}
\label{sec:ack}
The authors would like to thank the associate editor and the reviewers for their comments and detailed feedback that helped to improve the clarity and content of the paper and, in particular, an anonymous reviewer for pointing out a technical gap in a previous version of Corollary 4.5.
\bibliographystyle{IEEEtran}
\bibliography{dsprt,CentralBib}
\end{document} | {"config": "arxiv", "file": "1411.7716/p21_paper_1column.tex"} |
\begin{document}
\title{About tests of the ``simplifying'' assumption for conditional copulas}
\author{Alexis Derumigny\thanks{ENSAE, 3 avenue Pierre-Larousse, 92245 Malakoff cedex, France. alexis.derumigny@ensae.fr}, Jean-David Fermanian\thanks{ENSAE, J120, 3 avenue Pierre-Larousse, 92245 Malakoff cedex, France. jean-david.fermanian@ensae.fr. This research has been supported by the Labex Ecodec.}}
\date{\today}
\maketitle
\abstract{We discuss the so-called ``simplifying assumption'' of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of point-wise events are proposed. The limiting distribution of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap scheme.
Some simulations illustrate the relevance of our results.
}
\mds
{\bf Keywords:} conditional copula, simplifying assumption, bootstrap.
\mds
{\bf MCS:} 62G05, 62G08, 62G09.
\section{Introduction}
In statistical modelling, and in applied science more generally, it is very common to distinguish two subsets of variables: a random vector of interest (also called explained/endogenous variables) and a vector of covariates (explanatory/exogenous variables). The objective is to predict the law of the former vector given that the latter vector belongs to some subset, possibly a singleton. This basic idea constitutes the first step towards forecasting some important statistical sub-products such as conditional means, quantiles, volatilities, etc.
Formally, consider a $d$-dimensional random vector $\X$. We are faced with two random sub-vectors $\X_I$ and $\X_J$, s.t. $\X=(\X_I,\X_J)$, $I\cup J =\{1,\ldots,d\}$, $I\cap J=\emptyset$, and our models of interest specify the conditional law of $\X_I$ knowing $\X_J=\x_J$ or knowing $\X_J\in A_J$ for some subset $A_J\subset \RR^{|J|}$.
We use the standard notations for vectors: for any set of indices $I$,
$\x_I$ denotes the $|I|$-dimensional vector whose components are the $x_k$, $k\in I$. For convenience and without loss of generality, we will set $I=\{1,\ldots,p\}$ and $J=\{p+1,\ldots,d\}$.
\mds
Besides, the problem of dependence among the components of $d$-dimensional random vectors has been extensively studied in the academic literature and among practitioners in a lot of different fields. The raise of copulas for more than twenty years illustrates the need of flexible and realistic multivariate models and tools.
When covariates are present and with our notations, the challenge is to study the dependence among the components of $\X_I$ given $\X_J$.
Logically, the concept of conditional copulas has emerged.
{\color{black} First introduced for pointwise (atomic) conditioning events by Patton (2006a, 2006b), the definition has been generalized in Fermanian and Wegkamp (2012) for arbitrary measurable conditioning subsets.
In this paper, we rely on the following definition:} for any borel subset $A_J \subset \RR^{d-p}$, a conditional copula of $\X_I$ given $(\X_J \in A_J)$ is denoted by $C_{I|J}^{A_J}(\cdot | \X_J \in A_J)$.
This is the cdf of the random vector $(F_{1|J}(X_{1}|\X_J\in A_J),\ldots, F_{p|J}(X_{p}|\X_J\in A_J))$ given $(\X_J \in A_J)$.
Here, $F_{k|J}(\cdot | \X_J\in A_J)$ denotes the conditional law of $X_k$ knowing $\X_J\in A_J$, $k=1,\ldots,p$.
The latter conditional distributions will be assumed continuous in this paper, implying
the existence and uniqueness of $C_{I|J}^{A_J}$ (Sklar's theorem).
In other words, for any $\x_I\in \RR^p$,
\begin{equation*}
\PP \left(\X_I \leq \x_I | \X_J \in A_J \right) = C_{I|J}^{A_J}\Big( F_{1|J}(x_1 | \X_J\in A_J),\ldots, F_{p|J}(x_p | \X_J \in A_J) \, \Big| \, \X_J \in A_J \Big).
\label{cond_cop_def_0}
\end{equation*}
{\color{black} Note that the influence of $A_J$ on $C_{I|J}^{A_J}$ is twofold: when $A_J$ changes, the conditioning event $(\X_J \in A_J)$ changes, but the conditioned random vector $(F_{1|J}(X_{1}|\X_J\in A_J),\ldots, F_{p|J}(X_{p}|\X_J\in A_J))$ changes too.}
\mds
In particular, when the conditioning events are reduced to singletons, we get that the conditional copula of $\X_I$ knowing $\X_J=\x_J$ is a cdf $C_{I|J}(\cdot | \X_J=\x_J)$ on $[0,1]^{p}$ s.t., for every $\x_I\in \RR^{p}$,
\begin{equation*}
\PP \left(\X_I \leq \x_I | \X_J=\x_J \right)
= C_{I|J} \left( F_{1|J}(x_1 | \X_J=\x_J), \ldots, F_{p|J}(x_p | \X_J=\x_J) \, | \, \X_J=\x_J \right).
\label{cond_cop_def}
\end{equation*}
\mds
With generalized inverse functions, an equivalent definition of a conditional copula is as follows:
\begin{equation*}
\label{cond_cop_def_geninverse}
C_{I|J}\left( \u_I \, | \, \X_J=\x_J \right) =
F_{I|J} \big( F^{-}_{1|J}(u_1 | \X_J=\x_J),\ldots,F^{-}_{p|J}(u_p | \X_J=\x_J) | \X_J = \x_J \big),
\end{equation*}
for every $\u_I$ and $\x_J$, setting $F_{I|J}(\x_I | \X_J=\x_J):=\PP \left(\X_I \leq \x_I | \X_J = \x_J \right)$.
\mds
Most often, the dependence of $C_{I|J}(\cdot | \X_J=\x_J)$ w.r.t. $\x_J$ is a source of significant complexities, in terms of model specification and inference.
Therefore, most authors assume that the following ``simplifying assumption'' is fulfilled.
\mds
{\it Simplifying assumption} $(\Hc_0)$: the conditional copula $C_{I|J}(\cdot | \X_J=\x_J)$ does not depend on $\x_J$, i.e., for every $\u_I\in [0,1]^p$, the function
$ \x_J\in \RR^{d-p}\mapsto C_{I|J}(\u_I | \X_J=\x_J)$ is a constant function (that depends on $\u_I$).
\mds
Under the simplifying assumption, we will set
$C_{I|J}(\u_I | \X_J=\x_J) =: C_{s,I|J}(\u_I)$.
The latter identity means that the dependence on $\X_J$ across the components of $\X_I$ is passing only through their conditional margins. Note that $C_{s,I|J}$ is different from the usual copula of $\X_I$:
{\color{black} $C_I(\cdot)$ is always the cdf of the vector $(F_1(X_1), \dots, F_p(X_p))$ whereas, under $\Hc_0$, $C_{s,I|J}$ is the cdf of the vector $\Z_{I|J}:=(F_{1|J}(X_1|X_J), \dots, F_{p|J}(X_p|X_J))$ (see Proposition~\ref{prop_indep_H0} below).
Note that the latter copula is identical to the partial copula introduced by Bergsma (2011), and recently studied by Gijbels et al. (2015b), Spanhel and Kurz (2015) in particular.
Such a partial copula $C_{I|J}^P$ can always be defined (whether $\Hc_0$ is satisfied or not) as the cdf of $\Z_{I|J}$, and it satisfies an interesting ``averaging'' property: $C_{I|J}^P(\u_I) :=
\int_{\RR^{d-p}} C_{I|J}(\u_I|\X_J=\x_J) dP_J(\x_J)$.}
\mds
\begin{rem}
\label{ex_simple}
The simplifying assumption $\Hc_0$ {\it does not imply} that
$C_{s,I|J}(\cdot)$ is $C_I(\cdot)$, the usual copula of $\X_{I}$.
This can be checked with a simple example: let $\X=(X_1,X_2,X_3)$ be a trivariate random vector s.t., given $X_3$, $X_1\sim \Nc(X_3,1)$ and $X_2\sim \Nc(X_3,1)$. Moreover, $X_1$ and
$X_2$ are independent given $X_3$. The latter variable may be $\Nc(0,1)$, to fix the ideas. Obviously, with our notations, $I=\{1,2\}$, $J=\{3\}$, $d=3$ and $p=2$. Therefore, for any couple $(u_1,u_2)\in [0,1]^2$ and any real number $x_3$, $C_{1,2|3}(u_1,u_2 | x_3)=u_1 u_2$ and does not depend on $x_3$. Assumption $\Hc_0$ is then satisfied. But the copula of $(X_1,X_2)$ is not the independence copula, simply because $X_1$ and $X_2$ are not independent.
\end{rem}
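To illustrate this remark numerically, here is a minimal Python sketch (ours, for illustration only) that simulates exactly the model of the remark and checks that the unconditional correlation of $(X_1,X_2)$ is far from zero:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# X1 = X3 + e1 and X2 = X3 + e2, with X3, e1, e2 i.i.d. N(0,1):
# X1 and X2 are independent *given* X3 (conditional copula = independence),
# yet they are correlated unconditionally.
x3 = rng.standard_normal(n)
x1 = x3 + rng.standard_normal(n)
x2 = x3 + rng.standard_normal(n)

print(np.corrcoef(x1, x2)[0, 1])  # ~ 0.5: C_I is not the independence copula
\end{verbatim}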
\mds
Basically, it is far from obvious to specify and estimate relevant conditional copula models in practice, especially when the conditioning and/or conditioned variables are numerous.
The simplifying assumption is particularly relevant with vine models (Aas et al. 2009, among others).
Indeed, to build vines from a $d$-dimensional random vector $\X$, it is necessary to consider sequences of conditional bivariate copulas $C_{I|J}$, where $I=\{i_1,i_2\}$ is a couple of indices in $\{1,\ldots,d\}$, $J\subset \{1,\ldots,d\}$, $I \cap J = \emptyset$, and $(i_1,i_2|J)$ is a node of the vine.
In other words, a bivariate conditional copula is needed at every node of any vine, and the sizes of the conditioning subsets of variables are increasing along the vine.
Without additional assumptions, the modelling task rapidly becomes very cumbersome (inference and estimation by maximum likelihood).
Therefore, most authors adopt the simplifying assumption $\Hc_0$ at every node of the vine.
Note that the curse of dimensionality apparently remains because the conditional marginal cdfs $F_{k|J}(\cdot |\X_J)$ are invoked with different subsets $J$ of increasing sizes. But this curse can be avoided by recursively calling the non-parametric copulas that have been estimated before (see Nagler and Czado, 2015).
\mds
Nonetheless, the simplifying assumption has appeared to be rather restrictive, even if it may be
seen as acceptable for practical reasons and in particular situations. The debate between the pros and cons of the simplifying assumption is still largely open, particularly
when it is invoked in some vine models. On one side, Hob\ae k-Haff et al. (2010) affirm that this simplifying assumption is not only required for
fast, flexible, and robust inference, but that it provides ‘‘a rather good approximation, even when the simplifying assumption
is far from being fulfilled by the actual model’’. On the other side, Acar et al. (2012) maintain that ``this view is too optimistic''.
They propose a visual test of $\Hc_0$ when $d=3$ and in a parametric framework.
Their technique was based on local linear approximations and sequential likelihood maximizations. They illustrate the limitations of $\Hc_0$ by simulation and through real datasets. They note that ``an uncritical use of the simplifying assumption may be misleading''. Nonetheless, they do not provide formal test procedures.
Besides, Acar et al. (2013) have proposed a formal likelihood test of the simplifying assumption, but only when the conditional marginal distributions are known, a rather restrictive situation.
Some authors have exhibited classes of parametric distributions for which $\Hc_0$ is satisfied: see Hob\ae k-Haff et al. (2010), significantly extended by St\"{o}ber et al. (2013). Nonetheless, such families are rather strongly constrained. Therefore, these two papers propose to approximate some conditional copula models by others for which the simplifying assumption is true.
This idea has been developed in Spanhel and Kurz (2015) in a vine framework, because
they recognize that ``it is very unlikely that the unknown data generating process satisfies the simplifying assumption in a strict mathematical sense.''
\mds
Therefore, there is a need for formal universal tests of the simplifying assumption. It is likely that the latter assumption is acceptable in some circumstances, whereas it is too rough in others.
This means, for given subsets of indices $I$ and $J$,
we would like to test
\begin{equation*}
\Hc_0: C_{I|J}(\cdot| \X_J=\x_J) \;\text{does not depend on } \x_J,
\label{atester}
\end{equation*}
against that opposite assumption.
Hereafter, we will propose several test statistics of $\Hc_0$, possibly assuming that the conditional copula belongs to some parametric family.
\mds
Note that several papers have already proposed estimators of conditional copula. Veraverbeke et al. (2011), Gijbels et al. (2011) and
Fermanian and Wegkamp (2012) have studied some nonparametric kernel based estimators.
Craiu and Sabeti (2012) and Sabeti, Wei and Craiu (2014) studied Bayesian additive models of conditional copulas.
Recently, Schellhase and Spanhel (2016) invoke B-splines to manage vectors of conditioning variables.
In a semiparametric framework, i.e. assuming an underlying parametric family of conditional copulas, numerous models and estimators have been proposed, notably
Acar et al. (2011), Abegaz et al. (2012), Fermanian and Lopez (2015) (single-index type models), Vatter and Chavez-Demoulin (2015) (additive models), among others.
But only a few of these papers focus on testing the simplifying assumption $\Hc_0$ specifically, although the convergence of the proposed estimators
is necessary in theory to carry out such a task. Actually, some tests of $\Hc_0$ are invoked ``in passing'' in these papers as potential applications,
but without a general approach and/or without guidelines to evaluate p-values in practice. As an exception, in a very recent paper, Gijbels et al. (2016) have tackled the simplifying assumption directly through comparisons between conditional and unconditional Kendall's tau.
\mds
\begin{example}
\label{example:Cases_of_SA}
To illustrate the problem, let us consider a simple example of $\Hc_0$ in dimension $3$.
Assume that $p=2$ and $d=3$. For simplicity, let us assume that $(X_1, X_2)$ follows a Gaussian distribution conditionally on $X_3$, that is:
\begin{equation}
\left( \begin{matrix} X_1 \\ X_2 \end{matrix} \right)
\Big| X_3 = x_3 \sim \Nc
\left( \left( \begin{matrix}
\mu_1 (x_3) \\ \mu_2 (x_3) \end{matrix} \right) \, , \,
\left( \begin{matrix}
\sigma_1^2 (x_3) &
\rho (x_3) \sigma_1(x_3) \sigma_2 (x_3) \\
\rho (x_3) \sigma_1(x_3) \sigma_2 (x_3)
& \sigma_2^2 (x_3)
\end{matrix} \right)
\right).
\label{model_Gaussian:Cases_of_SA}
\end{equation}
Obviously, $\alpha(\cdot) := (\mu_1 , \mu_2 , \sigma_1 , \sigma_2)(\cdot)$ is a parameter that only affects the conditional margins.
Moreover, the conditional copula of $(X_1,X_2)$ given $X_3=x_3$ is Gaussian with parameter $\rho(x_3)$.
Six possible cases can then be distinguished:
\begin{enumerate}[a.]
\item All variables are mutually independent.
\item $(X_1,X_2)$ is independent of $X_3$, but $X_1$ and $X_2$ are not independent.
\item $X_1$ and $X_2$ are both marginally independent of $X_3$, but the conditional copula of $X_1$ and $X_2$ depends on $X_3$.
\item $X_1$ (or $X_2$) and $X_3$ are not independent but $X_1$ and $X_2$ are independent conditionally given $X_3$.
\item $X_1$ (or $X_2$) and $X_3$ are not independent but the conditional copula of $X_1$ and $X_2$ is independent of $X_3$.
\item $X_1$ (or $X_2$) and $X_3$ are not independent and the conditional copula of $X_1$ and $X_2$ depends on $X_3$.
\end{enumerate}
These six cases are summarized in the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& $\rho(\cdot) = 0$ & $\rho(\cdot) = \rho_0$ & $\rho(\cdot)$ is not constant \\
\hline
$\alpha(\cdot) = \alpha_0$
& a & b & c \\
\hline
$\alpha(\cdot)$ is not constant
& d & e & f \\
\hline
\end{tabular}
\end{center}
{\color{black} In the conditional Gaussian model (\ref{model_Gaussian:Cases_of_SA}), the simplifying assumption $\Hc_0$ consists in assuming that we live in one of the cases $\{a,b,d,e\}$, whereas the alternative cases are $c$ and $f$. In this model, the conditional copula is entirely determined by the conditional correlation. Note that, in other models, the conditional correlation can vary only because of the conditional margins, while the conditional copula stays constant: see Property 8 of Spanhel and Kurz (2015).}
\end{example}
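To make case $c$ concrete, the following Python sketch (ours; the function $\rho(x_3)=0.9\tanh(x_3)$ is an arbitrary illustrative choice) simulates model (\ref{model_Gaussian:Cases_of_SA}) with constant standard Gaussian margins but a non-constant conditional correlation, so that $\Hc_0$ fails:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

x3 = rng.standard_normal(n)
rho = 0.9 * np.tanh(x3)        # non-constant conditional correlation: case c
w1 = rng.standard_normal(n)
w2 = rng.standard_normal(n)
# constant N(0,1) margins, Gaussian conditional copula with parameter rho(x3)
x1 = w1
x2 = rho * w1 + np.sqrt(1.0 - rho**2) * w2

# the empirical correlation of (X1, X2) clearly moves across slices of X3:
for lo, hi in [(-np.inf, -0.5), (-0.5, 0.5), (0.5, np.inf)]:
    mask = (x3 > lo) & (x3 <= hi)
    print(np.corrcoef(x1[mask], x2[mask])[0, 1])
\end{verbatim}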
Note that, in general, there is no reason why the conditional margins would be constant in the conditioning variable (and in most applications, they are not).
Nevertheless, if we knew the marginal cdfs' were constant with respect to the conditioning variable, then the test of $\Hc_0$ (i.e. b against c) would become a classical test of independence between $\X_I$ and $\X_J$.
\mds
Testing $\Hc_0$ is closely linked to the $m$-sample copula problem, for which
we have $m$ different and independent samples of a $p$-dimensional variable $\X_I = (X_1, \dots, X_p)$. In each sample $k$, the observations are i.i.d., with
their own marginal laws and their own copula $C_{I,k}$.
The $m$-sample copula problem consists in testing whether the $m$ latter copulas $C_{I,k}$ are equal.
Note that we could merge all samples into a single one, and create discrete variables $Y_i$ that are equal to $k$ when $i$ lies in the sample $k$.
Therefore, the $m$-sample copula problem is formally equivalent to testing $\Hc_0$ with the conditioning variable $\X_J:=Y$.
\mds
Conversely, assume we have defined a partition $\{A_{1,J} , \dots , A_{m,J}\}$ of $\RR^{d-p}$ composed of borelian subsets such that $\PP(\X_{J} \in A_{k,J} ) > 0$ for all $k=1, \dots, m$,
and we want to test
$$\bar \Hc_0: k \in \{1, \dots, m\} \mapsto
C_{I|J}^{A_{k,J}} (\, \cdot \, | \X_J \in A_{k,J})
\, \text{does not depend on} \; k.$$
Then, divide the sample in $m$ different sub-samples, where any sub-sample $k$ contains the observations for which the conditioning variable belongs to $A_{k,J}$.
Then, $\bar \Hc_0$ is equivalent to a $m$-sample copula problem.
Note that $\bar\Hc_0$ looks like a ``consequence'' of $\Hc_0$, although this is not the case in general for continuous $\X_J$ variables (see Section~\ref{Link_SA}).
\mds
Nonetheless, $\bar\Hc_0$ conveys the same intuition as $\Hc_0$. Since it can be tested more easily in practice (no smoothing is required), some researchers could prefer the former assumption to the latter. That is why it will be discussed hereafter.
Note that the 2-sample copula problem has already been addressed by Rémillard and Scaillet (2009), and the $m$-sample one by Bouzebda et al. (2011). However, both papers are written in a purely nonparametric framework, and these authors did not notice the connection with the simplifying assumption.
\mds
The goal of the paper is threefold: first, to write a ``state-of-the art'' of the simplifying assumption problem; second to propose some ``reasonable''
test statistics of the simplifying assumption in different contexts; third, to introduce a new approach of the latter problem, through ``box-related'' zero assumptions
and some associated test statistics. Since it is impossible to state the theoretical properties of all these test statistics, we will rely on ``ad-hoc arguments'' to
convince the reader they are relevant, without trying
to establish specific results.
Globally, this paper can be considered also as a work program around the simplifying assumption $\Hc_0$ for the next years.
\mds
In Section \ref{Tests_SA}, we introduce different ways of testing $\Hc_0$. We propose different test statistics under a fully nonparametric perspective,
i.e. when $C_{I|J}$ is not supposed to belong to a particular parametric copula family, through some comparisons between empirical cdfs' in Subsection~\ref{Bruteforce_SA},
or by invoking a particular independence property in Subsection~\ref{IndepProp_SA}. In Subsection~\ref{ParApproach_SA}, new tools are needed if we assume underlying parametric copulas.
To evaluate the limiting distributions of such tests, we propose several bootstrap techniques (Subsection \ref{Boot_SA}).
Section~\ref{Boxes} is related to testing $\bar \Hc_0$.
In Subsection \ref{Link_SA}, we detail the relations between $\Hc_0$ and $\bar \Hc_0$.
Then, we provide tests statistics of $\bar \Hc_0$ for both the nonparametric (Subsection \ref{NPApproach_m_SA}) and the parametric framework (Subsection \ref{ParApproach_m_SA}), as well as bootstrap methods (Subsection \ref{Boot_m_SA}). In particular, we prove the validity of the so-called ``parametric independent'' bootstrap when testing $\bar\Hc_0$.
The performances of the latter tests are assessed and compared by simulation in Section \ref{NumericalApplications}.
{\color{black}{{A table of notations is available in Appendix \ref{Section_notations} and} some of the proofs are collected in Appendix \ref{Section_Proofs}.}}
\mds
\section{Tests of the simplifying assumption}
\label{Tests_SA}
\subsection{``Brute-force'' tests of the simplifying assumption}
\label{Bruteforce_SA}
A first natural idea is to build a test of $\Hc_0$ based on a comparison between some estimates of the conditional copula $C_{I|J}$ with and without the simplifying assumption, for different conditioning events.
Such estimates will be called $\hat C_{I|J}$ and $\hat C_{s,I|J}$ respectively.
Then, introducing some distance $\Dc$ between conditional distributions, a test can be based on the statistics
$\Dc(\hat C_{I|J} , \hat C_{s,I|J})$.
Following most authors, we immediately think of Kolmogorov-Smirnov-type statistics
\begin{equation}
\label{Tc0KS}
\Tc^0_{KS,n}:=\|\hat C_{I|J} -\hat C_{s,I|J}\|_{\infty} =
\sup_{\u_I \in [0,1]^p} \sup_{\x_J \in \RR^{d-p}} |\hat C_{I|J}(\u_I | \x_J) -\hat C_{s,I|J}(\u_I) |,
\end{equation}
or Cramer von-Mises-type test statistics
\begin{equation}
\label{TcOCvM}
\Tc^0_{CvM,n}:=\int \left( \hat C_{I|J}(\u_I | \x_J) -\hat C_{s,I|J}(\u_I) \right)^2\, w(d\u_I,d\x_J),
\end{equation}
for some weight function of bounded variation $w$, that could be chosen as random (see below).
\mds
To evaluate $\hat C_{I|J}$, we propose to invoke the nonparametric estimator of conditional copulas proposed by Fermanian and Wegkamp (2012).
Alternative kernel-based estimators of conditional copulas can be found in Gijbels et al. (2011), for instance.
\medskip
Let us start with an iid $d$-dimensional sample $(\X_i)_{i=1,\ldots,n}$.
Let $\hat F_k$ be the marginal empirical distribution function of $X_k$,
based on the sample $(X_{1,k},\ldots,X_{n,k})$, for any $k=1,\ldots,d$.
Our estimator of $C_{I|J}$ will be defined as
\begin{equation*}
\hat C_{I|J} (\u_I| \X_J=\x_J) := \hat F_{I|J}
\left( \hat F_{1|J}^{-}(u_1 | \X_J=\x_J)
, \dots, \hat F_{p|J}^{-} (u_p | \X_J=\x_J) | \X_J=\x_J \right),
\label{hatC_IJ}
\end{equation*}
\begin{equation}
\hat F_{I|J} (\x_I | \X_J=\x_J) := \frac{1}{n}
\sum_{i=1}^n K_n(\X_{i,J}, \x_J)
\1 (\X_{i,I} \leq \x_I),
\label{hatF_I_J}
\end{equation}
where
$$ { \color{black} K_n(\X_{i,J}, \x_J) }
:= K_h \left(
\hat F_{p+1}(X_{i,p+1}) - \hat F_{p+1}(x_{p+1}) ,
\dots, \hat F_{d}(X_{i,d}) - \hat F_{d}(x_{d}) \right),$$
$$ K_h(\x_J) :=
h^{-(d-p)} K\left(x_{p+1}/h,\ldots,x_{d}/h\right),$$
and $K$ is a $(d-p)$-dimensional kernel.
Obviously, for $k\in I$, we have introduced some estimates of the marginal conditional cdfs' similarly:
\begin{equation}
\hat F_{k|J} (x | \X_J=\x_J) :=
{\color{black}
\frac{\sum_{i=1}^n K_n(\X_{i,J}, \x_J)
\1 (X_{i,k}\leq x)}{\sum_{j=1}^n K_n(\X_{j,J}, \x_J)}
}\cdot
\label{hatF_k_J}
\end{equation}
\medskip
Obviously, $h=h(n)$ denotes a usual bandwidth sequence, with $h(n)\rightarrow 0$ when $n$ tends to infinity.
Since $\hat F_{I|J}$ is a nearest-neighbors estimator, it does not necessitate a fine-tuning of local bandwidths (except for those values $\x_J$ s.t. $F_J(\x_J)$ is close to one or zero), contrary to more usual Nadaraya-Watson techniques.
In other words, a single convenient choice of $h$ would provide ``satisfying'' estimates of $\hat C_{I|J}(\x_I | \X_J=\x_J)$ for most values of $\x$.
{\color{black} For practical reasons, it is important that $\hat F_{k|J}(x_k|\x_{J})$ belongs to $[0,1]$ and that $\hat F_{k|J}(\cdot|\x_{J})$ is a true distribution. This is the reason why we use a normalized version for the estimator of the conditional marginal cdfs.}
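\medskip
As an illustration, here is a minimal Python sketch (ours) of the normalized estimator (\ref{hatF_k_J}) for a univariate conditioning variable, assuming a Gaussian kernel; any normalizing constant of the kernel cancels in the ratio:
\begin{verbatim}
import numpy as np

def pseudo_obs(x):
    """Empirical-cdf values hat F(x_i) = rank_i / n of a univariate sample."""
    n = len(x)
    return (np.argsort(np.argsort(x)) + 1) / n

def cond_cdf_margin(x_eval, xk, xJ, xJ0, h):
    """Kernel estimate of F_{k|J}(x_eval | X_J = xJ0), with |J| = 1.

    The weights are computed on the uniformized scale hat F_J, as in the
    nearest-neighbors estimator of the text."""
    uJ = pseudo_obs(xJ)
    u0 = np.mean(xJ <= xJ0)                  # hat F_J(xJ0)
    w = np.exp(-0.5 * ((uJ - u0) / h) ** 2)  # Gaussian kernel weights
    return np.sum(w * (xk <= x_eval)) / np.sum(w)

# toy check: if X1 | X3 = x3 is N(x3, 1), then F_{1|3}(0 | X_3 = 1) = Phi(-1)
rng = np.random.default_rng(2)
x3 = rng.standard_normal(500)
x1 = x3 + rng.standard_normal(500)
print(cond_cdf_margin(0.0, x1, x3, xJ0=1.0, h=0.1))  # close to 0.16
\end{verbatim}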
\medskip
To calculate the latter statistics~(\ref{Tc0KS}) and~(\ref{TcOCvM}), it is necessary to provide an estimate of the underlying conditional copula under $\Hc_0$.
This could be done naively by particularizing a point $\x_J^*\in \RR^{d-p}$ and by setting $\hat C_{s,I|J}^{(1)} (\cdot) := \hat C_{I|J}(\cdot | \X_J=\x_J^*)$.
Since the choice of $\x_J^*$ is too arbitrary, an alternative could be to set
\begin{equation*}
\hat C_{s,I|J}^{(2)} (\cdot) :=
\int \hat C_{I|J}(\cdot | \X_J=\x_J)\, w(d\x_J),
\end{equation*}
for some function $w$ that is of bounded variation, with $\int w(d\x_J)=1$. Unfortunately, the latter choice induces $(d-p)$-dimensional integration procedures, which rapidly become
numerically problematic when $d-p$ is larger than three.
\mds
Therefore, let us randomize the ``weight'' functions $w$, to avoid multiple integrations. For instance, choose the empirical distribution of $\X_J$ as $w$, providing
\begin{equation}
\hat C_{s,I|J}^{(3)} (\cdot):=\int \hat C_{I|J}(\cdot | \X_J=\x_J)\, \hat F_J(d\x_J) = \frac{1}{n}\sum_{i=1}^n \hat C_{I|J}(\cdot | \X_J=\X_{i,J}).
\label{estimator_meanCI_J}
\end{equation}
\mds
An even simpler estimate of $C_{s,I|J}$, the conditional copula of $\X_I$ given $\X_J$ under the simplifying assumption, can be obtained by noting that, under $\Hc_0$, $C_{s,I|J}$ is the joint law of $\Z_{I|J}:=(F_1(X_1|\X_J),\ldots,F_p(X_p|\X_J))$ (see Proposition~\ref{prop_indep_H0} below). Therefore, it is tempting to estimate $C_{s,I|J}(\u_I)$ by
\begin{equation}
\hat C_{s,I|J}^{(4)} (\u_I) :=
\frac{1}{n} \sum_{i=1}^n
\1 \left( \hat F_{1|J}(X_{i,1}|\X_{i,J}) \leq u_1 , \dots, \hat F_{p|J}(X_{i,p}|\X_{i,J}) \leq u_p \right),
\label{estimator_cond_cdf}
\end{equation}
when $\u_I\in [0,1]^p$,
for some consistent estimates $\hat F_{k|J}(x_k|\x_{J})$ of $F_{k|J}(x_k |\x_J)$.
A similar estimator has been promoted and studied in Gijbels et al. (2015a) and in Portier and Segers (2015), but they have considered the empirical copula associated with the
pseudo-sample $((\hat F_1(X_{i1}|\X_{iJ}),\ldots,\hat F_p(X_{ip}|\X_{iJ})))_{i=1,\ldots,n}$ instead of its empirical cdf. It will be called $\hat C_{s,I|J}^{(5)}$.
Hereafter, we will denote by $\hat C_{s,I|J}$ any one of the ``averaged'' estimators $\hat C_{s,I|J}^{(k)}$, $k>1$, and we can forget the naive pointwise estimator $\hat C_{s,I|J}^{(1)}$.
Therefore, under some conditions of regularity, we expect our estimators $\hat C_{s,I|J}(\u_I)$ of the conditional copula under $\Hc_0$ to be $\sqrt{n}$-consistent and asymptotically normal. This has been proved for $C^{(5)}_{s,I|J}$ in Gijbels et al. (2015a) and in Portier and Segers (2015), as a byproduct of the weak convergence of the associated process.
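\mds
For concreteness, here is a short Python sketch (ours) of the pseudo-observations $\hat \Z_{i,I|J}$ and of the estimator $\hat C_{s,I|J}^{(4)}$ in (\ref{estimator_cond_cdf}); it reuses the illustrative helper \texttt{cond\_cdf\_margin} from the previous sketch:
\begin{verbatim}
import numpy as np

def z_hat(xI, xJ, h, cond_cdf_margin):
    """Pseudo-observations hat Z_{i,I|J} = (hat F_{k|J}(X_{i,k}|X_{i,J}))_k."""
    n, p = xI.shape
    Z = np.empty((n, p))
    for i in range(n):
        for k in range(p):
            Z[i, k] = cond_cdf_margin(xI[i, k], xI[:, k], xJ, xJ[i], h)
    return Z

def C_s_hat4(uI, Z):
    """hat C^{(4)}_{s,I|J}(u_I): empirical cdf of the pseudo-observations."""
    return np.mean(np.all(Z <= np.asarray(uI), axis=1))
\end{verbatim}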
\mds
Under $\Hc_0$, we would like that the previous test statistics $\Tc^0_{KS,n}$ or $\Tc^0_{CvM,n}$ are convergent.
Typically, such a property is given as a sub-product by the weak convergence of a relevant empirical process, here
$(\u_I,\x_J)\in [0,1]^{p} \times \RR^{d-p} \mapsto \sqrt{nh_n^{d-p}}(\hat C_{I|J} - C_{I|J})(\u_I | \x_J)$.
Unfortunately, this will not be the case in general when the previous process is seen as a function indexed by $\x_J$, at least for wide ranges of bandwidths.
Due to the difficulty of checking the tightness of the process indexed by $\x_J$, some alternative techniques may be required
as Gaussian approximations (see Chernozhukov et al. 2014, e.g.). Nonetheless, they would lead us far beyond the scope of this paper.
Therefore, we simply propose to slightly modify the latter test statistics, to manage only a {\it fixed} set of arguments $\x_J$. For instance, in the case of
the Kolmogorov-Smirnov-type test, consider a simple grid $\chi_J:=\{\x_{1,J},\ldots, \x_{m,J}\}$, and the modified test statistics
\begin{equation*}
\label{Tc0KSm}
\Tc^{0,m}_{KS,n}:=
\sup_{\u_I \in [0,1]^p} \sup_{\x_J \in \chi_J} |\hat C_{I|J}(\u_I | \x_J) -\hat C_{s,I|J}(\u_I) |.
\end{equation*}
In the case of the Cramer von-Mises-type test, we can approximate any integral by finite sums, possibly after a change of variable to manage a compactly supported integrand.
Actually, this is how they are calculated in practice!
For instance, invoking Gaussian quadratures, the modified statistics would be
\begin{equation}
\label{TcOCvMm}
\Tc^{0,m}_{CvM,n}:=\sum_{j=1}^m \omega_j \left( \hat C_{I|J}(\u_{j,I} | \x_{j,J}) -\hat C_{s,I|J}(\u_{j,I}) \right)^2,
\end{equation}
for some conveniently chosen constants $\omega_j$, $j=1,\ldots,m$. Note that the numerical evaluation of $\hat C_{I|J}$ is relatively costly. Since
quadrature techniques require far fewer points $m$ than ``brute-force'' equally spaced grids (in dimension $d$, here), they should most often be preferred.
\mds
Therefore, at least for such modified test statistics, we can ensure that the tests are convergent.
Indeed, under some conditions of regularity, it can be proved that $\hat C_{I|J}(\u_I| \X_J=\x_J) $ is consistent and asymptotically normal,
for every choice of $\u_I$ and $\x_J$ (see Fermanian and Wegkamp, 2012).
And a relatively straightforward extension of their Corollary 1 would provide that,
under ${\mathcal H}_0$ and for all $\Uc := (\u_{I,1},\ldots,\u_{I,q+r})\in [0,1]^{p(q+r)}$
and $\Xc := (\x_{J,1},\ldots,\x_{J,q})\in \RR^{(d-p)q}$,
\begin{eqnarray*}
\lefteqn{
\left\{
\sqrt{nh_n^{d-p}}(\hat C_{I|J} - C_ {s,I|J}) (\u_{I,1}| \X_J=\x_{J,1}),\ldots,\sqrt{nh_n^{d-p}}(\hat C_{I|J} - C_{s,I|J}) (\u_{I,q}| \X_J=\x_{J,q}), \right. \nonumber }\\
&& \left. \sqrt{n}(\hat C_{s,I|J} - C_ {s,I|J}) (\u_{I,q+1}),\ldots,\sqrt{n}(\hat C_{s,I|J} - C_ {s,I|J}) (\u_{I,q+r}) \right\},
\hspace{5cm}
\end{eqnarray*}
converges in law towards a Gaussian random vector.
As a consequence, $\sqrt{nh_n^{d-p}}\Tc^{0,m}_{KS,n}$ and $nh_n^{d-p}\Tc^{0,m}_{CvM,n}$ tend to complex but non-degenerate laws under $\Hc_0$.
\mds
\begin{rem}
Other test statistics of $\Hc_0$ can be obtained by comparing directly the functions $\hat C_{I|J}(\cdot | \X_J=\x_J)$, for different values of $\x_J$.
For instance, let us define
\begin{eqnarray}
\label{Tc0KS_bis}
\lefteqn{\tilde\Tc^0_{KS,n}:=\sup_{\x_J,\x_J'\in \RR^{d-p}}\|\hat C_{I|J}(\cdot | \x_J) -\hat C_{I|J}(\cdot | \x'_J)\|_{\infty} \nonumber }\\
&=&
\sup_{\x_J,\x_J'\in \RR^{d-p}} \sup_{\u_I \in [0,1]^p}|\hat C_{I|J}(\u_I | \x_J) -\hat C_{I|J}(\u_I | \x'_J) |,
\end{eqnarray}
or
\begin{equation}
\label{TcOCvM_bis}
\tilde\Tc^0_{CvM,n}:=\int \left( \hat C_{I|J}(\u_I | \x_J) -\hat C_{I|J}(\u_I | \x'_J) \right)^2\, w(d\u_I,d\x_J,d\x'_J),
\end{equation}
for some function of bounded variation $w$. As above, modified versions of these statistics can be obtained considering fixed $\x_J$-grids.
Since these statistics involve higher dimensional integrals/sums than previously, they will not be studied more in depth.
\end{rem}
The $L^2$-type statistics $\Tc^0_{CvM,n}$ and $\tilde\Tc^0_{CvM,n}$ involve at least $d$ summations or integrals, which can become numerically expensive when the dimension of $\X$ is ``large''.
Nonetheless, we are free to set convenient weight functions. To reduce the computational cost, several versions of $\Tc^0_{CvM,n}$ are particularly well-suited,
by choosing conveniently the functions $w$. For instance, consider
\begin{equation*}
\Tc^{(1)}_{CvM,n} := \int \left(
\hat C_{I|J}(\u_I | \x_J) - \hat{C}_{s,I|J}(\u_I) \right)^2
\, \hat C_I (d\u_I) \, \hat F_J(d\x_J),
\label{CvM_Alexis}
\end{equation*}
where $\hat F_J$ and $\hat C_I$ denote the empirical cdf of $(\X_{i,J})$ and the empirical copula of $(\X_{i,I})$ respectively. Therefore, $\Tc^{(1)}_{CvM,n}$ simply becomes
\begin{equation}
\Tc^{(1)}_{CvM,n} =
\frac{1}{n^2} \sum_{j=1}^n \sum_{i=1}^n \left(
\hat C_{I|J} (\hat U_{i,I} | \X_J=\X_{j,J}) -
\hat C_{s,I|J} (\hat U_{i,I}) \right)^2, \label{T_CvM_1}
\end{equation}
where $\hat U_{i,I}=(\hat F_1 (X_{i,1}), \dots, \hat F_p (X_{i,p}))$, $i=1,\ldots,n$.
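The double sum (\ref{T_CvM_1}) can be coded directly. The sketch below (ours) assumes two callables \texttt{hatC\_cond} and \texttt{hatC\_s} implementing $\hat C_{I|J}$ and $\hat C_{s,I|J}$, for instance built from the estimators above:
\begin{verbatim}
import numpy as np

def T1_CvM(U_hat, xJ, hatC_cond, hatC_s):
    """Double-sum form of the statistic T^(1)_{CvM,n}.

    U_hat:     (n, p) array of rank-based pseudo-observations hat U_{i,I}
    xJ:        (n,) or (n, d-p) array of conditioning observations
    hatC_cond: callable (u_I, x_J) -> hat C_{I|J}(u_I | X_J = x_J)
    hatC_s:    callable u_I -> hat C_{s,I|J}(u_I)
    """
    n = len(U_hat)
    total = 0.0
    for i in range(n):
        cs = hatC_s(U_hat[i])
        for j in range(n):
            total += (hatC_cond(U_hat[i], xJ[j]) - cs) ** 2
    return total / n**2
\end{verbatim}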
Similarly, we can choose
\begin{eqnarray*}
\lefteqn{ \tilde\Tc^{(1)}_{CvM,n} := \int \left(
\hat C_{I|J}(\u_I | \x_J) - \hat C_{I|J}(\u_I | \x'_J)
\right)^2 \, \hat C_I (d\u_I) \, \hat F_J(d\x_J) \, \hat F_J(d\x'_J) }\\
&=&
\frac{1}{n^3}\sum_{j=1}^n \sum_{j'=1}^n \sum_{i=1}^n \left( \hat C_{I|J}(\hat U_{i,I} | \X_J=\X_{j,J})
- \hat C_{I|J}(\hat U_{i,I} |\X_J=\X_{j',J}) \right)^2.
\end{eqnarray*}
To deal with a single summation only, one can even set
\begin{align*}
\Tc^{(2)}_{CvM,n} := \int &\left(
\hat C_{I|J} (\hat F_{1|J} (x_1 | \x_J), \dots,
\hat F_{p|J} (x_p|\x_J) |\x_J) \right. \\
& \qquad - \left.
\hat{C}_{s,I|J}( \hat F_{1|J} (x_1 | \x_J), \dots,
\hat F_{p|J}(x_p|\x_J)) \right)^2 \, \hat F(d\x_I, d\x_J) ,
\end{align*}
where $\hat F$ denotes the empirical cdf of $\X$.
This means
\begin{align*}
\Tc^{(2)}_{CvM,n} = \frac{1}{n} \sum_{i=1}^n
\Big(& \hat C_{I|J} \big(
\hat F_{1|J}(X_{i,1} | \X_{i,J}), \dots,
\hat F_{p|J}(X_{i,p} | \X_{i,J}) | \X_J=\X_{i,J} \big) \nonumber\\
&- \hat{C}_{s,I|J} \big(
\hat F_{1|J}(X_{i,1} | \X_{i,J}), \dots,
\hat F_{p|J}(X_{i,p} | \X_{i,J}) \big) \Big)^2.
\label{CvM_JDF}
\end{align*}
\mds
We have introduced some tests based on comparisons between empirical cdfs'. Obviously, the same idea could be applied to associated densities, as in Fermanian (2005) for instance, or even
to other functions of the underlying distributions.
\mds
Since the previous test statistics are complicated functionals of
some ``semi-smoothed'' empirical process, it is very challenging to evaluate their asymptotic laws under $\Hc_0$ analytically.
In every case, these limiting laws will not be distribution free, and their calculation would be very tedious.
Therefore, as usual with copulas, it is necessary to evaluate the limiting distributions of such tests
statistics by a convenient bootstrap procedure (parametric or nonparametric).
{\color{black} These bootstrap techniques will be presented in Section \ref{Boot_SA}}.
\subsection{Tests based on the independence property}
\label{IndepProp_SA}
Actually, testing $\Hc_0$ is equivalent to a test of the independence between the random vectors $\X_J$ and
$\Z_{I|J}:=(F_{1}(X_{1}|\X_J),\ldots,F_{p}(X_{p}|\X_J))$ strictly speaking, as proved in the following proposition.
\begin{prop}
\label{prop_indep_H0}
The vectors $\Z_{I|J}$ and $\X_J$ are independent iff
$C_{I|J}(\u_I|\X_J=\x_J)$ does not depend on $\x_J$ for every
vectors $\u_I$ and $\x_J$. In this case, the cdf of $\Z_{I|J}$ is $C_{s,I|J}$.
\end{prop}
\mds
{\it Proof:} For any vectors $\u_I \in [0,1]^p$ and any subset
$A_J\subset \RR^{d-p}$,
\begin{eqnarray*}
\lefteqn{ \PP(\Z_{I|J} \leq \u_I , \X_J\in A_J) = \EE\left[ \1 (\X_J\in A_J ) \PP(\Z_{I|J} \leq \u_I |\X_J) \right] }\\
&=& \int \1 (\x_J\in A_J ) \PP(\Z_{I|J} \leq \u_I | \X_J=\x_J) \, d\PP_{\X_J}(\x_J) \\
&=& \int_{A_J} \PP(F_{k}(X_{k} | \X_J=\x_J)\leq u_{k},\forall k\in I | \X_J=\x_J) \, d\PP_{\X_J}(\x_J) \\
&=& \int_{A_J} C_{I|J}( \u_I | \X_J=\x_J) \, d\PP_{\X_J}(\x_J).
\end{eqnarray*}
If $\Z_{I|J}$ and $\X_J$ are independent, then
$$\PP(\Z_{I|J} \leq \u_I) \PP(\X_J\in A_J) = \int \1 (\x_J\in A_J ) C_{I|J}( \u_I | \X_J=\x_J) \, d\PP_{\X_J}(\x_J),$$
for every $\u_I$ and $A_J$. This implies $\PP(\Z_{I|J} \leq \u_I) =
C_{I|J}( \u_I | \X_J=\x_J)$ for every $\u_I\in [0,1]^{p}$ and every
$\x_J$ in the support of $\X_J$. This means that
$C_{I|J}(\u_I|\X_J=\x_J)$ does not depend on $\x_J$, because
$\Z_{I|J}$ does not depend on any $\x_J$ by definition.
\medskip
Reciprocally, under $\Hc_0$, $C_{s,I|J}$ is the cdf of $\Z_{I|J}$.
Indeed,
\begin{eqnarray*}
\lefteqn{\PP( \Z_{I|J} \leq \u_I ) =
\PP \left( F_{k}(X_{k}|\X_J) \leq u_{k},\forall k\in I \right) }\\
&=& \int \PP \left( F_{k}(X_{k}|\X_J=\x_J) \leq u_{k},\forall k \in I |\, \X_J=\x_J \right)\, d\PP_{\X_J}(\x_J) \\
&=& \int C_{I|J}(\u_I | \X_J=\x_J)\, d\PP_{\X_J}(\x_J)= \int C_{s,I|J}(\u_I)\, d\PP_{\X_J}(\x_J) =C_{s,I|J}(\u_I).
\end{eqnarray*}
Moreover, due to Sklar's Theorem, we have
\begin{eqnarray*}
\lefteqn{ \PP(\Z_{I|J} \leq \u_I , \X_J\in A_J) = \int \1 (\x_J\in A_J ) C_{I|J}( \u_I | \X_J=\x_J) \, d\PP_{\X_J}(\x_J) }\\
&= \int \1 (\x_J\in A_J ) C_{s,I|J}( \u_I ) \, d\PP_{\X_J}(\x_J) =\PP(\Z_{I|J} \leq \u_I ) \PP( \X_J\in A_J),
\end{eqnarray*}
implying the independence between $\Z_{I|J}$ and $\X_J$. $\Box$
\medskip
Then, testing $\Hc_0$ is formally equivalent to testing
$$\Hc^*_0:
\Z_{I|J}=(F_{1}(X_{1}|\X_J),\ldots,F_{p}(X_{p}|\X_J)) \; \text{and}\; \X_J\; \text{are independent}.$$
\mds
Since the conditional marginal cdfs' are not observable, keep in mind that we have to work with
pseudo-observations in practice, i.e. vectors of observations that are not
independent. In other words, our tests of independence should be based on pseudo-samples
\begin{equation}
\left( \hat F_{1|J}(X_{i,1}|\X_{i,J}), \dots,
\hat F_{p|J}(X_{i,p}|\X_{i,J}) \right)_{i=1,\dots,n}
:=(\hat \Z_{i,I|J})_{i=1,\ldots,n},
\label{pseudoObs}
\end{equation}
for some consistent estimate $\hat F_{k|J}(\cdot|\X_J)$, $k\in I$ of the conditional cdfs',
for example as defined in Equation~(\ref{hatF_k_J}).
The chance of getting distribution-free asymptotic statistics will be very tiny, and we will have to rely on some bootstrap techniques again.
To summarize, we should be able to apply some usual tests of independence, but replacing iid observations with (dependent) pseudo-observations.
\mds
Most of the tests of $\Hc^*_0$ rely on the joint law of $(\Z_{I|J},\X_J)$, that may be evaluated empirically as
\begin{eqnarray*}
\lefteqn{G_{I,J}(\x_I,\x_J):=\PP(\Z_{I|J}\leq \x_I,\X_J\leq \x_J) }\\
&\simeq &\hat G_{I,J}(\x):= n^{-1} \sum_{i=1}^n \1 (\hat \Z_{i,I|J} \leq \x_I,\X_{i,J} \leq \x_J).
\end{eqnarray*}
Now, let us propose some classical strategies to build independence tests.
\begin{itemize}
\item Chi-square-type tests of independence (a code sketch of this statistic is given after this list):
Let $B_1,\ldots,B_N$ (resp. $A_1,\ldots,A_{m}$) some disjoint subsets in $\RR^p$ (resp. $\RR^{d-p}$).
\begin{equation}
\label{IcChi}
\Ic_{\chi,n} = n\sum_{k=1}^{N} \sum_{l=1}^{m} \frac{\left( \hat G_{I,J}(B_k\times A_l) - \hat G_{I,J}(B_k\times \RR^{d-p}) \hat G_{I,J}(\RR^{p} \times A_l) \right)^2}{\hat G_{I,J}(B_k\times \RR^{d-p}) \hat G_{I,J}(\RR^{p} \times A_l)} \cdot
\end{equation}
\item Distance between distributions:
\begin{equation}
\label{IcKS}
\Ic_{KS,n} = \sup_{\x \in \RR^d} | \hat G_{I,J}(\x) - \hat G_{I,J}(\x_I,\infty^{d-p}) \hat G_{I,J}(\infty^{p}, \x_J)|,\; \text{or}
\end{equation}
\begin{equation}
\label{Ic2n}
\Ic_{2,n} = \int \left( \hat G_{I,J}(\x) - \hat G_{I,J}(\x_I, \infty^{d-p}) \hat G_{I,J}(\infty^{p} , \x_J) \right)^2 \omega(\x)\, d\x,
\end{equation}
for some (possibly random) weight function $\omega$. Particularly, we can propose the single sum
\begin{eqnarray}
\label{IcCvM}
\lefteqn{ \Ic_{CvM,n} = \int \left( \hat G_{I,J}(\x) - \hat G_{I,J}(\x_I, \infty^{d-p}) \hat G_{I,J}(\infty^{p} , \x_J) \right)^2 \, \hat G_{I,J}(d\x) \nonumber }\\
&=& \frac{1}{n} \sum_{i=1}^n
\left( \hat G_{I,J}(\hat \Z_{i,I|J},\X_{i,J} ) - \hat G_{I,J}(\hat \Z_{i,I|J}, \infty^{d-p}) \hat G_{I,J}(\infty^{p} ,\X_{i,J}) \right)^2 .
\end{eqnarray}
\item Tests of independence based on comparisons of copulas:
let $\breve C_{I,J}$ and $\hat C_{J}$ be the empirical copulas based on the pseudo-sample
$(\hat \Z_{i,I|J}, \X_{i,J})_{i=1,\ldots,n}$, and
$( \X_{i,J})_{i=1,\ldots,n}$ respectively. Set
\begin{equation*}
\breve \Ic_{KS,n} = \sup_{\u \in [0,1]^d} | \breve C_{I,J}(\u) - \hat C_{s,I|J}^{(k)} (\u_I) \hat C_J (\u_J) | , k=1,\ldots,5,\, \text{or}
\end{equation*}
\begin{equation*}
\breve \Ic_{2,n} = \int_{\u \in [0,1]^d}
\left( \breve C_{I,J}(\u) - \hat C_{s,I|J}^{(k)}(\u_I)
\hat C_J (\u_J) \right)^2 \omega(\u)\, d\u,
\end{equation*}
and in particular
\begin{equation*}
\breve \Ic_{CvM,n} = \int_{\u \in [0,1]^d}
\left( \breve C_{I,J}(\u) - \hat C_{s,I|J}^{(k)}(\u_I)
\hat C_J (\u_J)\right)^2 \, \breve C_{I,J}(d\u).
\end{equation*}
The underlying ideas of the test statistics $\breve \Ic_{KS,n}$ and $\breve\Ic_{CvM,n}$ are similar to those that have been proposed by Deheuvels (1979,1981) in the case of unconditional copulas.
Nonetheless, in our case, we have to calculate pseudo-samples of the pseudo-observations
$(\hat \Z_{i,I|J})$ and $( \X_{i,J})$, instead of a usual pseudo-sample of $(\X_i)$.
\end{itemize}
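\mds
Here is a minimal Python sketch (ours) of the chi-square-type statistic (\ref{IcChi}) when $p=1$ and $d-p=1$, with boxes built from empirical quantiles (an arbitrary choice). Recall that the pseudo-observations are dependent, so p-values should be obtained by bootstrap rather than from a chi-square limiting law:
\begin{verbatim}
import numpy as np

def I_chi(Z, xJ, z_edges, x_edges):
    """Chi-square-type statistic between hat Z_{i,I|J} (p = 1) and X_J
    (d - p = 1), for boxes B_k x A_l given by the bin edges."""
    n = len(Z)
    joint, _, _ = np.histogram2d(Z, xJ, bins=[z_edges, x_edges])
    joint /= n                 # hat G(B_k x A_l)
    pz = joint.sum(axis=1)     # hat G(B_k x R)
    px = joint.sum(axis=0)     # hat G(R x A_l)
    expected = np.outer(pz, px)
    mask = expected > 0
    return n * np.sum((joint[mask] - expected[mask]) ** 2 / expected[mask])

# toy usage: Z and X_J are independent here, so I_chi should stay moderate
rng = np.random.default_rng(3)
Z, xJ = rng.uniform(size=1000), rng.standard_normal(1000)
z_edges = np.quantile(Z, np.linspace(0, 1, 5))
x_edges = np.quantile(xJ, np.linspace(0, 1, 5))
print(I_chi(Z, xJ, z_edges, x_edges))
\end{verbatim}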
\mds
Note that the latter techniques require the evaluation of some
conditional distributions, for instance by kernel smoothing.
Therefore, the level of numerical complexity of these test statistics of
$\Hc_0^*$ is comparable with those we have proposed before to test
$\Hc_0$ directly.
\subsection{Parametric tests of the simplifying assumption}
\label{ParApproach_SA}
In practice, modelers often assume a priori that the underlying
copulas belong to some specified parametric family $\Cc:=\{ C_\theta, \theta \in \Theta\subset \RR^m\}$.
Let us adapt our tests under this parametric assumption.
Apparently, we would like to test
$$ \check\Hc_0: C_{I|J}(\cdot| \X_J)= C_{\theta}(\cdot),\; \text{for some }
\theta\in \Theta \; \text{and almost every } \X_J.$$
Actually, $\check\Hc_0$ requires two different things: the fact that the conditional copula is a constant copula w.r.t. its conditioning events (test of $\Hc_0$) and, additionally, that the right copula belongs to $\Cc$ (classical composite Goodness-of-Fit test).
Under this point of view, we would have to adapt ``omnibus'' specification tests to manage conditional copulas and pseudo observations.
For instance, and among other alternatives, we could consider an amended version of Andrews (1997)'s specification test
$$ CK_n := \frac{1}{\sqrt{n}}\max_{j\leq n} |\sum_{i=1}^n \left[ \1(\hat\Z_{i,I|J}\leq \hat\Z_{j,I|J}) - C_{\hat\theta_0}(\hat \Z_{j,I|J}) \right]\1( \X_{i,J}\leq \X_{j,J})|,$$
recalling the notations in~(\ref{pseudoObs}).
For other ideas of the same type, see Zheng (2000) and the references therein.
\mds
The latter global approach is probably too demanding.
Here, we prefer to isolate the initial problem that was related to the simplifying assumption only.
Therefore, let us assume that, for every $\x_J$, there
exists a parameter $\theta(\x_J)$ such that $C_{I|J}(\cdot|
\x_J)=C_{\theta(\x_J)}(\cdot)$. To simplify, we assume the function $\theta(\cdot)$ is continuous.
Our problem is then reduced to testing the
constancy of $\theta$, i.e.
$$\Hc^c_0: \text{the function } \x_J \mapsto \theta(\x_J) \text{ is
a constant, called} \;\theta_0.$$
\mds
For every $\x_J$, assume we estimate $\theta(\x_J)$ consistently. For instance, this can be done by modifying the standard semiparametric Canonical Maximum Likelihood methodology (Genest et al., 1995, Tsukahara, 2005): set
$$ \hat\theta(\x_J):= \arg\max_{\theta\in\Theta} \sum_{i=1}^n \log c_\theta \left(
\hat F_{1|J}(X_{i,1}|\X_J=\X_{i,J}), \dots,
\hat F_{p|J}(X_{i,p}|\X_J=\X_{i,J}) \right) \cdot K_n(\X_{i,J}, \x_J),$$
through usual kernel smoothing in $\RR^{d-p}$ {, \color{black} where
$c_\theta(\u) := \frac{\partial^p C_\theta(\u)}
{\partial u_1 \cdots \partial u_p}$
for $\theta\in\Theta$ and $\u\in [0,1]^p$}.
Alternatively, we could consider
$$\tilde\theta(\x_J) := \arg\max_{\theta\in\Theta} \sum_{i=1}^n \log c_\theta \left(
\hat F_{1|J}(X_{i,1}|\X_J=\x_J), \dots,
\hat F_{p|J}(X_{i,p}|\X_J=\x_J) \right) \cdot K_n(\X_{i,J}, \x_J),$$ instead of $\hat\theta(\x_J).$
See Abegaz et al. (2012) concerning the theoretical properties of $\tilde\theta(\x_J)$ and some choice of conditional cdfs'.
Those of $ \hat\theta(\x_J)$ remain to be stated precisely, to the best of our knowledge.
But there is no doubt both methodologies provide consistent estimators, even jointly, under some conditions of regularity.
\mds
Under $\Hc^c_0$, the natural ``unconditional'' copula parameter $\theta_0$ of the copula of the $\Z_{I|J}$ will be estimated by
\begin{equation}
\hat\theta_0:= \arg\max_{\theta\in\Theta} \sum_{i=1}^n \log c_\theta \left(
\hat F_{1|J}(X_{i,1}|\X_{i,J}), \dots,
\hat F_{p|J}(X_{i,p}|\X_{i,J}) \right).
\label{CML}
\end{equation}
Surprisingly, the theoretical properties of the latter estimator do not seem to have been established in the literature explicitly.
Nonetheless, the latter M-estimator is a particular case of those considered in Fermanian and Lopez (2015)
in the framework of single-index models when the link function is a known function (that does not depend on the index).
Therefore, by adapting their assumption in the current framework, we easily obtain that $\hat \theta_0$ is consistent and
asymptotically normal if $c_\theta$ is sufficiently regular, for convenient choices of bandwidths and kernels.
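\mds
As an illustration of (\ref{CML}), here is a minimal Python sketch (ours) assuming a bivariate Gaussian copula family, so that $c_\theta$ has a closed form; \texttt{Z} stands for the $n\times 2$ matrix of pseudo-observations (\ref{pseudoObs}):
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def gauss_copula_loglik(rho, u, v):
    """Pseudo-log-likelihood of the bivariate Gaussian copula."""
    x, y = norm.ppf(u), norm.ppf(v)
    r2 = rho**2
    return np.sum(-0.5 * np.log(1.0 - r2)
                  + (2.0 * rho * x * y - r2 * (x**2 + y**2))
                    / (2.0 * (1.0 - r2)))

def cml_theta0(Z):
    """CML estimate of theta_0, assuming a Gaussian copula for C_{s,I|J}."""
    u = np.clip(Z[:, 0], 1e-6, 1 - 1e-6)   # guard against boundary values
    v = np.clip(Z[:, 1], 1e-6, 1 - 1e-6)
    res = minimize_scalar(lambda r: -gauss_copula_loglik(r, u, v),
                          bounds=(-0.99, 0.99), method="bounded")
    return res.x
\end{verbatim}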
\mds
Now, there are several candidate statistics for testing $\Hc_0^c$:
\begin{itemize}
\item Tests based on the comparison between $\hat\theta(\cdot)$ and $\hat\theta_0$:
\begin{equation}
\Tc_{\infty}^c := \sup_{\x_J\in \RR^{d-p}} \| \hat\theta(\x_J) - \hat\theta_0 \| ,\; \text{or}\;
\Tc_{2}^c := \int \|\hat\theta(\x_J) - \hat\theta_0\|^2 \omega(\x_J)\, d\x_J,
\label{Tc_infty_C}
\end{equation}
for some weight function $\omega$.
\item Tests based on the comparison between $C_{\hat\theta(\cdot)}$ and $C_{\hat\theta_0}$:
\begin{equation}
\Tc_{dist}^c := \int dist\left(C_{\hat\theta(\x_J)},C_{\hat\theta_0} \right) \omega(\x_J)\,
d\x_J,
\label{Tcdfcomp}
\end{equation}
for some distance $dist(\cdot,\cdot)$ between cdfs'.
\item Tests based on the comparison between copula densities (when they exist):
\begin{equation}
\Tc_{dens}^c := \int \left(c_{\hat\theta(\x_J)}(\u_I) -c_{\hat\theta_0}(\u_I)\right)^2 \omega(\u_I,\x_J)\,
d\u_I\,d\x_J.
\label{Tdensitycomp}
\end{equation}
\end{itemize}
\begin{rem}
It might be difficult to compute some of these integrals numerically, because of unbounded supports.
One solution is to make a change of variables. For example,
\begin{equation*}
\Tc_{2}^c = \int \| \hat \theta (F_J^{-} (\u_J))
- \hat \theta_0 \|^2 \omega(F_J^{-} (\u_J)) \,
\frac{d\u_J}{f_J(F_J^{-} (\u_J))} \cdot
\end{equation*}
Therefore, the choice $\omega = f_J$ allows us to simplify the latter statistics to
$\int \| \hat \theta (F_J^{-} (\u_J))
- \hat \theta_0 \|^2 d\u_J$, which is rather easy to evaluate. We used this trick in the numerical section below.
\end{rem}
\subsection{Bootstrap techniques for tests of $\Hc_0$}
\label{Boot_SA}
It is necessary to evaluate the limiting laws of the latter test statistics under the null.
As a matter of fact, we generally cannot exhibit explicit - and distribution-free a fortiori - expressions for these limiting laws.
The common technique is provided by bootstrap resampling schemes.
\mds
More precisely, let us consider a general statistic $\Tc$, built from the initial sample $\Sc:=(\X_1,\ldots,\X_n)$. The main idea of the bootstrap is to construct $N$ new samples $\Sc^*:=(\X_1^*,\ldots,\X_n^*)$ following a given resampling scheme given $\Sc$. Then, for each bootstrap sample $\Sc^*$, we evaluate a bootstrapped test statistic $\Tc^*$, and the empirical law of these $N$ statistics is used as an approximation of the limiting law of the initial statistic $\Tc$.
\subsubsection{Some resampling schemes}
\label{resampling_sch}
The first natural idea is to invoke Efron's usual ``nonparametric bootstrap'', where we draw independently with replacement $\X_i^*$ for $i=1,\ldots,n$ among the initial sample $\Sc=(\X_1,\ldots,\X_n)$.
This provides a bootstrap sample $\Sc^*:=(\X_1^*,\ldots,\X_n^*)$.
\mds
The nonparametric bootstrap is an ``omnibus'' procedure whose theoretical properties are well-known,
but it may not be particularly adapted to the problem at hand.
Therefore, we will propose alternative sampling schemes that should be of interest, even if we do not state their validity on a theoretical basis.
Such a task is left for future research.
\mds
A natural idea would be to use some properties of $\X$ under $\Hc_0$, in particular the characterization given in Proposition~\ref{prop_indep_H0}: under $\Hc_0$, we know that $\Z_{i,I|J}$ and $\X_{i,J}$ are independent.
This will only be relevant for the tests of Subsection~\ref{IndepProp_SA}, and for a few tests of Subsection~\ref{Bruteforce_SA},
where such statistics are based on the pseudo-sample $(\hat \Z_{i,I|J}, \X_{i,J})_{i=1,\ldots,n}$.
Therefore, we propose the following so-called ``pseudo-independent bootstrap'' scheme:
\mds
\noindent
Repeat, for $i=1$ to $n$,
\begin{enumerate}
\item draw $\X^*_{i,J}$ among $(\X_{j,J})_{j=1,\ldots,n}$;
\item draw $\hat \Z^*_{i,I|J}$ independently, among the observations $\hat \Z_{j,I|J}$, $j=1,\ldots,n$.
\end{enumerate}
\noindent This provides a bootstrap sample
$\Sc^*:=\big( (\hat \Z^*_{1,I|J} , \X^*_{1,J}), \ldots, (\hat \Z^*_{n,I|J} , \X^*_{n,J}) \big)$.
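\mds
A minimal sketch (ours) of one draw of this resampling scheme:
\begin{verbatim}
import numpy as np

def pseudo_independent_bootstrap(Z_hat, xJ, rng):
    """One 'pseudo-independent' bootstrap sample: the X*_J and the
    hat Z*_{I|J} are resampled separately, hence independently, which
    mimics H_0 via the independence characterization above."""
    n = len(xJ)
    idx_J = rng.integers(0, n, size=n)
    idx_Z = rng.integers(0, n, size=n)   # drawn independently of idx_J
    return Z_hat[idx_Z], xJ[idx_J]
\end{verbatim}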
\mds
Note that we could invoke the same idea, but with a usual nonparametric bootstrap perspective: draw with replacement a $n$-sample among the pseudo-observations
$(\hat \Z_{i,I|J}, \X_{i,J})_{i=1,\ldots,n}$ for each bootstrap sample.
This can be called a ``pseudo-nonparametric bootstrap'' scheme.
\mds
Moreover, note that we cannot draw $\X_{i,J}^*$ among $(\X_{j,J})_{j=1,\ldots,n}$
and, independently, $\X_{i,I}^*$ among $(\X_{j,I})_{j=1,\ldots,n}$.
Indeed, $\Hc_0$ does not imply the independence between $\X_I$ and $\X_J$.
Instead, it makes sense to build a ``conditional bootstrap'' as follows:
\mds
\noindent
Repeat, for $i=1$ to $n$,
\begin{enumerate}
\item draw $\X^*_{i,J}$ among $(\X_{j,J})_{j=1,\ldots,n}$;
\item draw $\hat\X^*_{i,I}$ independently, along the estimated conditional law of $\X_I$ given $\X_J=\X^*_{i,J}$.
This can be done by drawing a realization along the law $\hat F_{I|J} (\cdot |\X_J=\X^*_{i,J})$, for instance (see~(\ref{hatF_I_J})).
This can be done easily because the latter law is purely discrete, with unequal weights that depend on $\X^*_{i,J}$ and $\Sc$.
\end{enumerate}
\noindent This provides a bootstrap sample
$\Sc^*:=\big( (\hat \X^*_{1,I} , \X^*_{1,J}), \ldots, (\hat \X^*_{n,I} , \X^*_{n,J}) \big)$.
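\mds
A sketch (ours) of one draw of this conditional bootstrap when $|J|=1$, with the same Gaussian-kernel weights as before; \texttt{uJ} denotes the rank-based pseudo-observations $\hat F_J(X_{i,J})$:
\begin{verbatim}
import numpy as np

def conditional_bootstrap(xI, xJ, uJ, h, rng):
    """One 'conditional bootstrap' sample: draw X*_J from the data, then
    redraw X*_I from the discrete weighted law hat F_{I|J}(. | X_J = X*_J)."""
    n = len(xJ)
    idx_J = rng.integers(0, n, size=n)
    xI_star = np.empty_like(xI)
    for i, j in enumerate(idx_J):
        w = np.exp(-0.5 * ((uJ - uJ[j]) / h) ** 2)   # kernel weights
        w /= w.sum()
        xI_star[i] = xI[rng.choice(n, p=w)]
    return xI_star, xJ[idx_J]
\end{verbatim}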
\mds
\begin{rem}
Note that the latter way of resampling is not far from the usual nonparametric bootstrap. Indeed, when the bandwidths tend to zero, once $\x_J^*=\X_{i,J}$ is drawn, the procedure above will select the other components of $\X_i$ (or close values), i.e. the probability that $\x_I^* =\X_{i,I}$ is ``high''.
\end{rem}
\bigskip
In the parametric framework, we might also want to use an appropriate resampling scheme.
As a matter of fact, all the previous resampling schemes can be used, as in the nonparametric framework,
but we would not take advantage of the parametric hypothesis, i.e. the fact that all conditional copulas belong to a known family.
We also have to keep in mind that even if the conditional copula has a parametric form, the global model is not fully parametric, because we have provided a parametric model neither for the conditional marginal cdfs $F_{k|J}$, $k=1,\ldots,p$, nor for the cdf of $\X_J$.
\mds
Therefore, we can invoke the null hypothesis $\Hc^c_0$ and approximate the real copula $C_{\theta_0}$ of $\Z_{I|J}$ by $C_{\hat \theta_0}$.
This leads us to define the following ``parametric independent bootstrap'':
\mds
\noindent
Repeat, for $i=1$ to $n$,
\begin{enumerate}
\item draw $\X_{i,J}^*$ among $(\X_{j,J})_{j=1,\dots,n}$;
\item sample $\Z_{i,I|J,\hat \theta_0}^*$ from the copula with parameter $\hat \theta_0$ independently.
\end{enumerate}
\noindent This provides a bootstrap sample
$\Sc^*:=\big( ( \Z^*_{1,I|J, \hat\theta_0} , \X^*_{1,J}), \ldots, ( \Z^*_{n,I|J, \hat\theta_0} , \X^*_{n,J}) \big)$.
\begin{rem}
At first sight, this might seem like a strange mixing of parametric and nonparametric bootstrap.
If $|J|=1$, we can nonetheless do a ``full parametric bootstrap'', by observing that all estimators of our previous test statistics do not depend on $\X_J$, but on realizations of $\hat F_{J}(\X_J)$ (see Equations (\ref{hatF_I_J}) and (\ref{hatF_k_J})).
Since the law of the latter variable is close to a uniform distribution, it is tempting to sample $V_{i,J}^* \sim {\mathcal U}_{[0,1]}$ at the first stage, $i=1,\ldots,n$, and then to replace $\hat F_{J}(\X_{i,J})$ with $V_{i,J}^*$ to get an alternative bootstrap sample.
\end{rem}
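A sketch (ours) of one draw of the ``parametric independent'' bootstrap, assuming, as an example, a bivariate Gaussian copula family with estimated parameter \texttt{theta0\_hat}:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def parametric_independent_bootstrap(xJ, theta0_hat, rng):
    """One 'parametric independent' bootstrap sample: X*_J is resampled
    from the data, while Z* is drawn from the fitted copula, here a
    bivariate Gaussian copula with parameter theta0_hat."""
    n = len(xJ)
    xJ_star = xJ[rng.integers(0, n, size=n)]
    g1 = rng.standard_normal(n)
    g2 = (theta0_hat * g1
          + np.sqrt(1.0 - theta0_hat**2) * rng.standard_normal(n))
    Z_star = np.column_stack([norm.cdf(g1), norm.cdf(g2)])
    return Z_star, xJ_star
\end{verbatim}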
Without using $\Hc^c_0$, we could define the ``parametric conditional bootstrap'' as:
\mds
\noindent
Repeat, for $i=1$ to $n$,
\begin{itemize}
\item draw $\X_{i,J}^*$ among $(\X_{j,J})_{j=1,\dots,n}$;
\item sample $\Z_{i,I|J,\theta^*_i}^*$ from the copula with parameter $\hat \theta(\X^*_{i,J})$.
\end{itemize}
\noindent This provides a bootstrap sample
$\Sc^*:=\big( ( \Z^*_{1,I|J,\theta^*_i} , \X^*_{1,J}), \ldots, ( \Z^*_{n,I|J,\theta^*_i} , \X^*_{n,J}) \big)$.
\mds
Note that, in several resampling schemes, we should be able to keep the same $\X_J$ as in the original sample, and simulate only $\Z^*_{i,I|J}$ in step 2,
as in Andrews (1997), pages 10-11.
Such an idea has been proposed by Omelka et al. (2013), in a slightly different framework and univariate conditioning variables.
They proved that such a bootstrap scheme ``works'', after a fine-tuning of different smoothing parameters: see their Theorem 1.
\subsubsection{Bootstrapped test statistics}
The problem is now to evaluate the law of a given test statistic, say $\Tc$, under $\Hc_0$ by some bootstrap technique.
We recall the main technique in the case of the classical nonparametric bootstrap.
We conjecture that the idea is still theoretically sound under the other resampling schemes that have been proposed in Subsection~\ref{resampling_sch}.
\mds
The principle for the nonparametric bootstrap is based on the weak convergence of the underlying empirical process. Formally, if $\Sc:=\{\X_1,\ldots,\X_n\}$ is an iid sample in $\RR^d$, $\X\sim F$ and if $F_n$ denotes its empirical distribution, it is well-known that $\sqrt{n}\left(F_n - F\right)$ tends weakly in $\ell^{\infty}$ towards a $d$-dimensional Brownian bridge $\BB_F$.
And the nonparametric bootstrap works in the sense that
$\sqrt{n}\left(F_n^* - F_n\right)$ converges weakly towards a process $\BB_F'$, an independent version of $\BB_F$, given the initial sample $\Sc$.
\mds
Due to the Delta Method, for every Hadamard-differentiable functional $\chi$ from $\ell^\infty(\RR^d)$ to $\RR$, there exists a random variable
$H_\chi$ s.t. $ \sqrt{n}\left( \chi(F_n) - \chi(F)\right) \Rightarrow H_\chi.$
Assume a test statistic $\Tc_n$ of $\Hc_0$ can be written as a sufficiently regular functional of the underlying empirical process as
\begin{equation*}
\label{eq:decomp_Tc_1}
\Tc_n := \psi \left( \sqrt{n}\left( \chi_s(F_n) - \chi(F_n)\right) \right),
\end{equation*}
where $\chi_s(F)=\chi(F)$ under the null assumption.
Then, under $\Hc_0$, we can rewrite this expression as
\begin{equation}
\label{eq:decomp_Tc_2}
\Tc_n := \psi \left( \sqrt{n}\left( \chi_s(F_n) - \chi_s(F) + \chi(F) - \chi(F_n)\right) \right).
\end{equation}
Given any bootstrap sample $\Sc^*$ and the associated empirical distribution $F_n^*$,
the usual bootstrap equivalent of $\Tc_n$ is
\begin{equation*}
\label{eq:decomp_Tc_st}
\Tc_n^* := \psi \left( \sqrt{n}\left( \chi_s(F_n^*) - \chi_s(F_n) + \chi(F_n) - \chi(F_n^*)\right) \right),
\end{equation*}
from Equation (\ref{eq:decomp_Tc_2}).
See van der Vaart and Wellner (1996), Section 3.9, for details and mathematically sound statements.
\mds
Applying these ideas, we can guess the bootstrapped statistics corresponding to the
test statistics of $\Hc_0$, at least when the usual nonparametric bootstrap is invoked.
Let us illustrate the idea with $\Tc_{KS,n}^0$.
Note that $\hat C_{I|J}( \cdot |\X_J=\cdot) = \chi_{KS}(F_n)(\cdot)$ and $\hat C_{s,I|J} = \chi_{s,KS}(F_n)$ for some
smoothed functionals $\chi_{KS}$ and $\chi_{s,KS}$.
Under $\Hc_0$, $\chi_{KS}(F)=\chi_{s,KS}(F)$ and $\Tc^0_{KS,n}=\|\chi_{KS}(F_n) - \chi_{KS}(F) -\chi_{s,KS}(F_n) + \chi_{s,KS}(F) \|_{\infty}$.
Therefore, its bootstrapped version is
\begin{eqnarray*}
\label{Tc0KS_B}
\lefteqn{ \Tc^{0,*}_{KS,n}
:=\|\chi_{KS}(F_n^*) - \chi_{KS}(F_n) -\chi_{s,KS}(F_n^*) + \chi_{s,KS}(F_n) \|_{\infty} }\\
&=&\|\hat C_{I|J}^* - \hat C_{I|J} -\hat C^*_{s,I|J} + \hat C_{s,I|J}\|_{\infty}. \hspace{5cm}\nonumber
\end{eqnarray*}
Obviously, the functions $\hat C_{I|J}^*$ and $\hat C_{s,I|J}^*$ have been calculated as $\hat C_{I|J}$ and $\hat C_{s,I|J}$ respectively, but replacing $\Sc$ by $\Sc^*$.
Similarly, the bootstrapped versions of some Cramer von-Mises-type test statistics are
\begin{equation*}
\label{TcOCvM_B}
\Tc^{0,*}_{CvM,n}:=\int \left( \hat C^*_{I|J}(\u_I | \x_J) - \hat C_{I|J}(\u_I | \x_J) -\hat C^*_{s,I|J}(\u_I) + \hat C_{s,I|J}(\u_I) \right)^2\, w(d\u_I,d\x_J).
\end{equation*}
When playing with the weight functions $w$, it is possible to keep the same weights for the bootstrapped versions, or to replace them with some functionals of $F_n^*$.
For instance, asymptotically, it is equivalent to consider
\begin{equation*}
\Tc^{(1),*}_{CvM,n}:=\int \left( \hat C_{I|J}^*(\u_I | \x_J) - \hat C_{I|J}(\u_I | \x_J) -\hat{C}^*_{s,I|J}(\u_I) + \hat{C}_{s,I|J}(\u_I) \right)^2\, \hat{C}_{n}(d\u_I) \, \hat F_J(d\x_J),\; \text{or}
\end{equation*}
\begin{equation*}
\Tc^{(1),*}_{CvM,n}:=\int \left( \hat C_{I|J}^*(\u_I | \x_J) - \hat C_{I|J}(\u_I | \x_J) -\hat{C}^*_{s,I|J}(\u_I) + \hat{C}_{s,I|J}(\u_I) \right)^2\, \hat{C}^*_{n}(d\u_I) \, \hat F^*_J(d\x_J).
\end{equation*}
Similarly, the limiting law of
\begin{eqnarray*}
\lefteqn{ \Tc^{(2),*}_{CvM,n}:=\int \left( \hat C^*_{I|J}(\hat F^*_{n,1}(x_1 | \x_J),\ldots, \hat F^*_{n,p}(x_p|\x_J) |\x_J) \right. }\\
&-& \left. \hat C_{I|J}(\hat F^*_{n,1}(x_1 | \x_J),\ldots, \hat F^*_{n,p}(x_p|\x_J) |\x_J) -
\hat{C}_{s,I|J}^*( \hat F^*_{n,1}(x_1 | \x_J),\ldots, \hat F^*_{n,p}(x_p|\x_J))
\right. \\
&+& \left.
\hat{C}_{s,I|J}( \hat F_{n,1}^*(x_1 | \x_J),\ldots, \hat F^*_{n,p}(x_p|\x_J))\right)^2 \, H_n(d\x_I, d\x_J) ,
\end{eqnarray*}
given $F_n$, is unchanged when replacing $H_n$ by $H_n^*$.
\mds
The same ideas apply concerning the tests of Subsection~\ref{IndepProp_SA}, but they require some modifications.
Let $H$ be some cdf on $\RR^d$. Denote by $H_I$ and $H_J$ the associated cdf on the first $p$ and $d-p$ components respectively.
Denote by $\hat H$, $\hat H_I$ and $\hat H_J$ their empirical counterparts.
Under $\Hc_0$, and for any measurable subsets $B_I$ and $A_J$, $ H(B_I\times A_J) = H_I(B_I) H_J(A_J)$. Our tests will be based on the difference
\begin{eqnarray*}
\lefteqn{\hat H(B_I\times A_J) - \hat H_I(B_I) \hat H_J(A_J)= (\hat H-H)(B_I\times A_J) }\\
& -& (\hat H_I - H_I)(B_I) \hat H_J(A_J) -
(\hat H_J - H_J)(A_J) H_I(B_I) .
\end{eqnarray*}
Therefore, a bootstrapped approximation of the latter quantity will be
\begin{equation*}
(\hat H^*-\hat H)(B_I\times A_J) - (\hat H^*_I - \hat H_I)(B_I) \hat H^*_J(A_J) -
(\hat H^*_J - \hat H_J)(A_J) \hat H_I(B_I).
\end{equation*}
To be specific, the bootstrapped versions of our tests are as follows.
\begin{itemize}
\item Chi-square-type test of independence:
\begin{eqnarray*}
\lefteqn{ \Ic^*_{\chi,n} := n\sum_{k=1}^{N} \sum_{l=1}^{m} \frac{1}{\hat G^*_{I,J}(B_k\times \RR^{d-p}) \hat G^*_{I,J}(\RR^{p} \times A_l)}
\left( (\hat G^*_{I,J} - \hat G_{I,J})(B_k\times A_l) \right. }\\
&-& \left. \hat G^*_{I,J} (B_k\times \RR^{d-p}) \hat G^*_{I,J}(\RR^{p} \times A_l)+
\hat G_{I,J}(B_k\times \RR^{d-p}) \hat G_{I,J}(\RR^{p} \times A_l)
\right)^2.
\end{eqnarray*}
\item Distance between distributions:
\begin{align*}
\Ic^*_{KS,n} = \sup_{\x \in \RR^d} | (\hat G^*_{I,J}- \hat G_{I,J})(\x) - \hat G^*_{I,J}(\x_I,\infty^{d-p}) \hat G^*_{I,J}(\infty^{p}, \x_J)
+ \hat G_{I,J}(\x_I,\infty^{d-p}) \hat G_{I,J}(\infty^{p}, \x_J)|
\end{align*}
$$ \Ic^*_{2,n} = \int \bigg( (\hat G^*_{I,J}- \hat G_{I,J})(\x)
- \hat G^*_{I,J}(\x_I, \infty^{d-p}) \hat G^*_{I,J}(\infty^{p} , \x_J)
+ \hat G_{I,J}(\x_I, \infty^{d-p}) \hat G_{I,J}(\infty^{p} , \x_J) \bigg)^2 \omega(\x)\, d\x,$$
and $\Ic^*_{CvM,n}$ is obtained replacing $\omega(\x)\,d\x$ by $\hat G^*_{I,J}(d\x)$ (or even $\hat G_{I,J}(d\x)$).
\item A test of independence based on the independence copula: Let $\breve C^*_{I,J}$, $\breve C^*_{I|J}$ and $\hat C^*_{J}$ be the empirical copulas based on a bootstrapped version of the pseudo-sample
$(\hat \Z_{i,I|J}, \X_{i,J})_{i=1,\ldots,n}$, $(\hat \Z_{i,I|J})_{i=1,\ldots,n}$ and $( \X_{i,J})_{i=1,\ldots,n}$ respectively. This version can be obtained by nonparametric bootstrap, as usual, providing new vectors $\hat \Z^*_{i,I|J}$ at every draw.
The associated bootstrapped statistics are
\begin{align*}
&\breve \Ic^*_{KS,n} = \sup_{\u \in [0,1]^d} | (\breve C^*_{I,J}- \breve C_{I,J})(\u) - \breve C^*_{I|J}(\u_I)\hat C^*_{J}(\u_J)+ \breve C_{I|J}(\u_I)\hat C_{J}(\u_J) | , \\
&\breve \Ic^*_{2,n} = \int_{\u \in [0,1]^d} \left( (\breve C^*_{I,J}- \breve C_{I,J}) (\u) - \breve C^*_{I|J}(\u_I)\hat C^*_{J}(\u_J)+ \breve C_{I|J}(\u_I)\hat C_{J}(\u_J)\right)^2 \omega(\u)\, d\u, \\
&\breve \Ic^*_{CvM,n} = \int_{\u \in [0,1]^d} \left( (\breve C^*_{I,J}- \breve C_{I,J})(\u) - \breve C^*_{I|J}(\u_I)\hat C^*_{J}(\u_J) +
\breve C_{I|J}(\u_I)\hat C_{J}(\u_J) \right)^2 \, \breve C^*_{I,J}(d\u).
\end{align*}
\end{itemize}
\mds
In the case of the parametric statistics, the situation is pretty much the same, as long as we invoke the nonparametric bootstrap. For instance, the bootstrapped versions of some previous test statistics are
\begin{equation*}
\left(\Tc_{2}^c \right)^* := \int \|
\hat\theta^*(\x_J) - \hat\theta (\x_J) - \hat\theta^*_0 + \hat\theta_0\|^2 \omega(\x_J)\,d\x_J, \; \text{or}
\label{T2compBootNP}
\end{equation*}
\begin{equation*}
\left(\Tc_{dens}^c \right)^* := \int \left(
c_{\hat\theta^*(\x_J)}(\u_I) - c_{\hat\theta (\x_J)}(\u_I)
- c_{\hat\theta_0^*}(\u_I) + c_{\hat\theta_0}(\u_I)
\right)^2 \omega(\u_I,\x_J)
d\u_I\,d\x_J,
\label{TdensitycompBootNP}
\end{equation*}
in the case of the nonparametric bootstrap.
We conjecture that the previous techniques can be applied with the other resampling schemes that have been proposed in Subsection~\ref{resampling_sch}.
Nonetheless, a complete theoretical study of all these alternative schemes and the statement of the validity of their associated bootstrapped statistics is beyond the scope of this paper.
\begin{rem}
\label{tricky_schemes}
For the ``parametric independent'' bootstrap scheme, we have observed
that the test powers are a lot better when considering
\begin{equation*}
\left(\Tc_{2}^c \right)^{**} := \int \|
\hat\theta^*(\x_J) - \hat\theta^*_0 \|^2 \omega(\x_J)\,d\x_J,\; \text{or}
\label{T2compBootP}
\end{equation*}
\begin{equation*}
\left(\Tc_{dens}^c \right)^{**} := \int \left(
c_{\hat\theta^*(\x_J)}(\u_I)
- c_{\hat\theta_0^*}(\u_I)
\right)^2 \omega(\u_I,\x_J)
d\u_I\,d\x_J
\label{TdensitycompBootP},
\end{equation*}
instead. The relevance of such statistics may be theoretically justified in the slightly different context of ``box-type'' tests in the next Section (see Theorem~\ref{thm:ParIndepBoot}).
Since our present case is close to the situation of ``many small boxes'', it is not surprising that we observe similar features. Note that, contrary to the nonparametric bootstrap
or the ``parametric conditional'' bootstrap, the ``parametric independent'' bootstrap scheme uses $\Hc_0$.
{\color{black} More generally, and following the same idea, we found that using the statistic $\Tc^{**} := \psi \left( \sqrt{n}\left( \chi_s(F_n^*) - \chi(F_n^*)\right) \right)$ for the pseudo-independent bootstrap yields much better performance than $\Tc^{*}$. In our simulations, we will therefore use $\Tc^{**}$ as the bootstrap test statistic (see Figures \ref{fig:I_chi_tau_max} and \ref{fig:I2n_tau_max}).
}
\end{rem}
\mds
{\color{black}
\begin{rem}
For testing $\Hc_0$ at a node of a vine model, the realizations of the corresponding explanatory variables $\X_{i,J}$ are not observed in general. In practice, they have to be replaced with pseudo-observations in our previous test statistics. Their calculation involves the bivariate conditional copulas that are associated with the previous nodes in a recursive way. The theoretical analysis of the associated bootstrap schemes is challenging and falls beyond the scope of the current work.
\end{rem}
}
\mds
\section{Tests with ``boxes''}
\label{Boxes}
\subsection{The link with the simplifying assumption}
\label{Link_SA}
As we have seen in Remark \ref{ex_simple}, we do not have $C_{s,I|J} = C_{I}$ in general, even though $C_I(\u_I)=C_{I|J}(\u_I| \X_J \in \RR^{d-p})$ for every $\u_I$.
This hints that there are some subtle relations between conditional copulas, depending on whether the conditioning event is pointwise or a measurable subset.
Actually, to test $\Hc_0$ in Section~\ref{Tests_SA}, we have relied on kernel estimates and smoothing parameters, at least to evaluate conditional marginal distributions empirically.
To avoid the curse of dimensionality (when $d-p$ is ``large'', i.e. larger than three in practice),
it is tempting to replace the pointwise conditioning events $\X_J=\x_J$ with $\X_J\in A_J$ for some Borel subsets $A_J\subset \RR^{d-p}$, $\PP (\X_J \in A_J) > 0$.
As a shorthand notation, we shall write ${\mathcal A}_J$ for the set of all such $A_J$. We call them ``boxes'' because choosing $(d-p)$-dimensional rectangles
(i.e. intersections of half-spaces delimited by orthogonal hyperplanes) is natural,
but our definitions are still valid for arbitrary Borel subsets in $\RR^{d-p}$.
Technically speaking, we will assume that the functions $\x_J \mapsto \1(\x_J\in A_J)$ belong to a Donsker class, in order to apply uniform CLTs without any hurdle.
Actually, working with $\X_J$-``boxes'' instead of pointwise conditioning will simplify the picture a lot.
{\color{black} Indeed, the evaluation of conditional cdfs' given $\X_J\in A_J$ does not require kernel smoothing, bandwidth choices, or
other techniques of curve estimation that deteriorate the optimal rates of convergence.}
\mds
Note that, by definition of the conditional copula of $\X_I$ given $(\X_J\in A_J)$, we have
\begin{eqnarray*}
\lefteqn{
\PP( \X_I \leq \x_I | \X_J \in A_J )} \\
&=&C_{I|J}^{A_{J}}\left(\PP( X_{1} \leq
x_{1} | \X_J \in A_J ),\ldots,\PP( X_{p} \leq x_{p} | \X_J \in A_J )|
\X_J\in A_J\right),
\end{eqnarray*}
for every point $\x_I\in \RR^p$ and every subset
$A_J$ in ${\mathcal A}_J$.
So, it is tempting to replace $\Hc_0$ by
$$\widetilde \Hc_0: C_{I|J}^{A_{J}}(\u_I | \X_J \in A_J) \, \text{does not depend on} \, A_J \in {\mathcal A}_J, \text{for any } \u_I.$$
{\color{black} For any $\x_J$, consider a sequence of boxes $(A^{(n)}_J(\x_J) )$ s.t. $\cap_n A^{(n)}_J(\x_J) = \{\x_J\}$. If the law of $\X$ is sufficiently regular, then
$\lim_n C_{I|J}^{A_{J}^{(n)}}(\u_I | \X_J \in A_J^{(n)}) = C_{I|J}(\u_I | \X_J =\x_J)$ for any $\u_I$. Therefore, $\tilde\Hc_0$ implies $\Hc_0$. This is stated formally in the next proposition.
\mds
\begin{prop}
Assume that the function $h:\RR^{d} \rightarrow [0,1]$,
defined by $h(\y):=\PP(\X_I \leq \y_I | \X_J=\y_J)$ is continuous everywhere.
Then, for every $\x_J\in \RR^{d-p}$ and any sequence of boxes $(A^{(n)}_J(\x_J) )$ such that $\cap_n A^{(n)}_J(\x_J) = \{\x_J\}$, we have
$$\lim_n C_{I|J}^{A_{J}^{(n)}(\x_J)}(\u_I | \X_J \in A_J^{(n)}(\x_J)) = C_{I|J}(\u_I | \X_J =\x_J),$$
for every $\u_I\in [0,1]^p$.
\end{prop}
{\it Proof:}
Consider a particular $\u_I\in [0,1]^p$.
If one component of $\u_I$ is zero, the result is obviously satisfied.
If one component of $\u_I$ is one, this component does not play any role.
Therefore, we can restrict ourselves on $\u_I\in (0,1)^p$.
By continuity, there exists $\x_I\in \RR^p$ s.t. $u_i=F_i(x_i|\x_J)$ for every $i=1,\ldots,p$.
Let $(x_i^{(n)})$ be the sequences such that
$u_i=F_i(x^{(n)}_i|\X_J\in A_J^{(n)})$ for every $n$ and every $i=1,\ldots,p$.
First, let us show that $x^{(n)}_i \to x_i$ when $n$ tends to infinity.
Indeed, by the definition of conditional probabilities (Shiryayev 1984, p.220), we have
$$ u_i=\PP(X_i \leq x_i^{(n)} | \X_J\in A^{(n)}_J ) = \frac{1}{\PP(\X_J\in A_J^{(n)})} \int_{\{\y_J\in A_J^{(n)}\}} \PP(X_i \leq x_i^{(n)} | \X_J=\y_J)\, d\PP_{\X_J}(\y_J) ,$$
and
\begin{eqnarray*}
\lefteqn{ u_i = \PP(X_i\leq x_i |\X_J=\x_J)=
\frac{1}{\PP(\X_J\in A_J^{(n)})} \int_{\{\y_J\in A_J^{(n)}\}} \PP(X_i \leq x_i^{(n)} | \X_J=\x_J)\, d\PP_{\X_J}(\y_J) }\\
&+& \PP( X_i\leq x_i |\x_J ) - \PP(X_i\leq x_i^{(n)} | \x_J). \hspace{5cm}
\end{eqnarray*}
By subtracting the two latter identities, we deduce
\begin{eqnarray}
\lefteqn{
\frac{1}{\PP(\X_J\in A_J^{(n)})} \int_{\{\y_J\in A_J^{(n)}\}}
\left[ \PP(X_i \leq x_i^{(n)} | \X_J=\y_J) - \PP(X_i \leq x_i^{(n)} | \X_J=\x_J) \right]\, d\PP_{\X_J}(\y_J) \nonumber}\\
&=& \PP(X_i\leq x_i |\x_J) - \PP(X_i\leq x_i^{(n)} |\x_J). \label{Dinis} \hspace{5cm}
\end{eqnarray}
But, by assumption, $F_i(t |\y_J)$ tends towards $F_i(t|\x_J)$ when
$\y_J $ tends to $\x_J$, for any $t$ (pointwise convergence). A straightforward application of Dini's Theorem shows that the latter convergence is uniform on $\RR$: $\| F_i(\cdot | \y_J) - F_i(\cdot | \x_J) \|_{\infty}$ tends to zero when $\y_J \to \x_J$.
From~(\ref{Dinis}), we deduce that $\PP(X_i\leq x_i^{(n)} |\x_J)\to \PP(X_i\leq x_i |\x_J) $. By the continuity of $F_i(\cdot |\x_J)$, we get $x_i^{(n)}\to x_i$, for any $i=1,\ldots,p$.
\mds
Second, let us come back to conditional copulas: setting
$\x_I^{(n)}:=(x_1^{(n)},\ldots,x_p^{(n)})$, we have
\begin{eqnarray*}
\lefteqn{ C^{A_J^{(n)}}_{I|J} (\u_I | A_J^{(n)} ) - C_{I|J}(\u_I | \x_J) }\\
&=&
C^{A_J^{(n)}}_{I|J} (F_1(x_1^{(n)}|A_J^{(n)}),\ldots, F_p(x_p^{(n)}|A_J^{(n)}) | A_J^{(n)} )
- C_{I|J}(F_1(x_1|\x_J),\ldots, F_p(x_p|\x_J) | \x_J)\\
&=& F_{I|J}(\x_I^{(n)} | A_J^{(n)}) - F_{I|J}(\x_I | \x_J) \\
&=& \frac{1}{\PP(\X_J\in A_J^{(n)})} \int_{\{\y_J\in A_J^{(n)}\}} \left[\PP(\X_I\leq \x_I^{(n)} |\y_J)- \PP(\X_I\leq \x_I |\x_J) \right] \, d\PP_{\X_J}(\y_J).
\end{eqnarray*}
Since $\x_I^{(n)}$ tends to $\x_I$ when $n\to \infty$ and invoking the continuity of $h$ at $(\x_I,\x_J)$, we get that
$C^{A_J^{(n)}}_{I|J} (\u_I | A_J^{(n)} ) \to C_{I|J}(\u_I | \x_J)$ when $n\to \infty$. $\Box$
}
\mds
Unfortunately, the converse is false. Counter-intuitively, $\tilde\Hc_0$ does not lead to a consistent test of the simplifying assumption. Indeed, under $\Hc_0$, we can see that $ C_{I|J}^{A_{J}}(\u_I | \X_J \in A_J)$ {\bf depends on $A_J$} in general, even if $ C_{I|J}(\u_I | \X_J =\x_J)$ {\bf does not depend on $\x_J$}!
\mds
This is due to the nonlinear transform between conditional (univariate and multivariate) distributions and conditional copulas. In other words, for a usual $d$-dimensional cdf $H$, we have
\begin{equation}
\label{condsubsetdistr}
H(\x_I | \X_J \in A_J)=\frac{1}{\PP(A_J)} \int_{A_J} H(\x_I | \X_J =\x_J) \, d\PP_{\X_J}(\x_J),
\end{equation}
for every measurable subset $A_J \in {\mathcal A}_J$ and $\x_I\in \RR^p$. By contrast, in general, for conditional copulas,
\begin{equation}
\label{condsubsetcopula}
C_{I|J}^{A_{J}}(\u_I | \X_J \in A_J)\neq
\frac{1}{\PP(A_J)} \int_{A_J} C_{I|J}(\u_I | \X_J =\x_J) \, d\PP_{\X_J}(\x_J),
\end{equation}
for $\u_I\in [0,1]^p$. And even if we assume $\Hc_0$, we have in general,
\begin{equation}
\label{condsubsetcopula_0}
C_{I|J}^{A_{J}}(\u_I | \X_J \in A_J)\neq
\frac{1}{\PP(A_J)} \int_{A_J} C_{s,I|J}(\u_I) \, d\PP_{\X_J}(\x_J) = C_{s,I|J}(\u_I).
\end{equation}
\mds
As a particular case, taking $A_J = \RR^{d-p}$, this means again that $C_{I}(\u_I) \neq C_{s,I|J}(\u_I)$.
\mds
Let us check this rather surprising feature with the example of Remark~\ref{ex_simple} for another subset $A_J$. Recall that $\Hc_0$ is true and that $C_{s,1,2|3}(u,v)=uv$ for every
$u,v \in [0,1]$. Consider the subset $(X_3 \leq a)$, for any real number $a$. The probability of this event is $\Phi(a)$.
Now, let us verify that
$$ uv \neq H(F_{1|3}^- (u | X_3\leq a),F_{2|3}^- (v | X_3\leq a) | X_3\leq a),$$
for some $u,v$ in $(0,1)$. Clearly, for every real number $x_k$, we have
$$\PP(X_k \leq x_k | X_3\leq a)=
\frac{1}{\Phi(a)}\int_{-\infty}^a \Phi(x_k - z)\phi(z) \, dz, \quad k=1,2,\; \text{and}$$
$$ \PP(X_1\leq x_1,X_2\leq x_2 | X_3\leq a)= \frac{1}{\Phi(a)}\int_{-\infty}^a \Phi(x_1-z)\Phi(x_2-z)\phi(z)\,dz.$$
In particular, $\PP(X_k \leq 0 | X_3\leq a)= (1+\Phi(-a))/2$. Therefore, set $u^*=v^*=(1+\Phi(-a))/2$ and
we get
\begin{eqnarray*}
\lefteqn{ H(F_{1|3}^- (u^* | X_3\leq a),F_{2|3}^- (v^* | X_3\leq a) | X_3\leq a)= H(0,0 |X_3\leq a) }\\
&=&\frac{1}{3}\left(1 + \Phi(-a)+ \Phi^2(-a) \right)\neq u^* v^*.
\end{eqnarray*}
In this example, $ C_{s,1,2|3}(\cdot)\neq C_{1,2|3}^{]-\infty,a]}(\cdot | X_3 \leq a),$
for every $a$, even if $\Hc_0$ is satisfied.
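As a sanity check, both closed-form values above can be confirmed by differentiation in $a$: setting $q:=\Phi(-a)$ and using $\Phi(a)=1-q$,
$$\frac{d}{da}\left[\Phi(a)\,\frac{1+q}{2}\right] = \phi(a)\,\frac{1+q}{2} - \Phi(a)\,\frac{\phi(a)}{2} = \phi(a)\, q = \frac{d}{da}\int_{-\infty}^a \Phi(-z)\phi(z)\,dz ,$$
$$\frac{d}{da}\left[\Phi(a)\,\frac{1+q+q^2}{3}\right] = \frac{\phi(a)}{3}\left( 1+q+q^2 - (1-q)(1+2q)\right) = \phi(a)\, q^2 = \frac{d}{da}\int_{-\infty}^a \Phi^2(-z)\phi(z)\,dz ,$$
and both sides of each identity vanish when $a\to-\infty$. This yields $\PP(X_k \leq 0 | X_3\leq a)= (1+\Phi(-a))/2$ and $H(0,0|X_3\leq a) = \left(1+\Phi(-a)+\Phi^2(-a)\right)/3$.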
\bigskip
Nonetheless, getting back to the general case, we can easily provide an equivalent of Equation (\ref{condsubsetdistr}) for general conditional copulas, i.e. without assuming $\Hc_0$.
\begin{prop}
\label{condsubsetcopgeneral}
For all $\u_I \in [0,1]^p$ and all $ A_J \in {\mathcal A}_J$,
\begin{align*}
&C_{I|J}^{A_{J}} (\u_I | \X_J \in A_J) = \frac{1}{\PP(A_J)}
\int_{A_J} \psi(\u_I , \x_J, A_J) d\PP_{\X_J}(\x_J),\; \text{with}
\end{align*}
\begin{align*}
\psi(&\u_I , \x_J, A_J) \nonumber \\
&= C_{I|J} \Bigg(
F_{1|J} \Big( F_{1|J}^- (u_1 | \X_J \in A_J) \big| \X_J=\x_J \Big)
, \dots,
F_{p|J} \Big( F_{p|J}^- (u_p | \X_J \in A_J) \big| \X_J=\x_J \Big)
\Bigg| \X_J = \x_J \Bigg).
\end{align*}
\end{prop}
{\color{black}
{\it Proof:} From (\ref{condsubsetdistr}), we get:
\begin{align*}
H&(\x_I | \X_J \in A_J) \\
&=\frac{1}{\PP(A_J)} \int_{A_J} H(\x_I | \X_J =\x_J) \, d\PP_{\X_J}(\x_J) \\
&=\frac{1}{\PP(A_J)} \int_{A_J} C_{I|J} \Big(
F_{1|J}(x_{1} | \X_J = \x_J ) , \ldots , F_{p|J}(x_{p} | \X_J = \x_J )
\, \big| \, \X_J = \x_J
\Big) \, d\PP_{\X_J}(\x_J).
\end{align*}
We can conclude by using the following definition of the conditional copula
\begin{align*}
C_{I|J}^{A_{J}} (\u_I | \X_J \in A_J) =
H(F_{1|J}^- (u_1 | \X_J \in A_J) , \dots, F_{p|J}^- (u_p | \X_J \in A_J) | \X_J \in A_J).\;\; \Box
\end{align*}
}
\medskip
Now, we understand why (\ref{condsubsetcopula}) (and (\ref{condsubsetcopula_0}) under $\Hc_0$) are not identities:
the conditional copulas, given the subset $A_J$, still depend on the conditional margins of $\X_I$ given $\X_J$ pointwise in general.
\mds
Note that, if $X_i$ is independent of $\X_J$ for every $i=1,\ldots,p$, then, for any such $i$,
\begin{equation*}
F_{i|J} \Big( F_{i|J}^- (u_i | \X_J \in A_J) \big| \X_J=\x_J \Big)
= F_{i} \Big( F_{i}^- (u_i) \Big)= u_i,
\label{indep_XiXJ}
\end{equation*}
{\color{black} and we can revisit the identity of Proposition~\ref{condsubsetcopgeneral}: under $\Hc_0$, we have
\begin{eqnarray*}
\lefteqn{C_{I|J}^{A_{J}} (\u_I | \X_J \in A_J)
= \frac{1}{\PP(A_J)} \int_{A_J} C_{I|J}(\u_I | \X_J=\x_J) \, d\PP_{\X_J}(\x_J) }\\
&=& \frac{1}{\PP(A_J)} \int_{A_J} C_{s,I|J}(\u_I ) \, d\PP_{\X_J}(\x_J) = C_{s,I|J}(\u_I).
\end{eqnarray*}
This means $\Hc_0$ and $\tilde \Hc_0$ are equivalent.} We consider such circumstances as very peculiar, and they should not be confused with a genuine test of $\Hc_0$. Therefore, we advise conducting a preliminary test of independence between $\X_I$ and $\X_J$ (or at least between $X_i$ and $\X_J$ for any $i=1,\ldots,p$) before trying to test $\Hc_0$ itself.
\mds
Now, let us revisit the characterisation of $\Hc_0$ in terms of the independence property, as in Subsection~\ref{IndepProp_SA}.
The latter analysis is confirmed by the equivalent of Proposition \ref{prop_indep_H0} in the case of conditioning subsets $A_J$. Now, the
relevant random vector would be
\begin{equation*}
\Z_{I|A_J} := \left(F_{1|J}(X_{1}|\X_J \in A_J) , \dots,F_{p|J}(X_{p}|\X_J \in A_J) \right),
\label{def:Z_I_AJ}
\end{equation*}
that has straightforward empirical counterparts.
Then, it is tempting to test
\begin{equation*}
\widetilde \Hc_0^{*}: \Z_{I|A_J} \text{ and } (\X_J \in A_J)\;\text{are independent for every Borel subset}\; A_J\subset \RR^{d-p}.
\label{atester**}
\end{equation*}
Nonetheless, it can be proved easily that this is not a test of $\Hc_0$, unfortunately.
\begin{prop}
$\Z_{I|A_J}$ and $(\X_J \in A_J)$ are independent for every measurable subset $A_J\subset \RR^{d-p}$ iff $\X_I$ and $\X_J$ are independent.
\end{prop}
\mds
{\it Proof:}
For any measurable subset $A_J$ and any $\u_I\in [0,1]^p$, under $\widetilde \Hc_0^{*}$, we have
$$ \PP\left( \Z_{I|A_J} \leq \u_I, \X_J\in A_J\right)
= \PP\left( \Z_{I|A_J} \leq \u_I \right) \PP ( \X_J\in A_J ) .$$
Consider $\x_I\in \RR^p$. Due to the continuity of the conditional cdfs', there exists $u_k$ s.t.
$F_k(x_k |\X_J\in A_J)=u_k$, $k=1,\ldots,p$. Then, using the invertibility of $x\mapsto F_k(x | \X_J\in A_J)$, we get
$ \PP\left( \Z_{I|A_J} \leq \u_I , \X_J\in A_J \right)
= \PP\left( \X_I \leq \x_I, \X_J\in A_J \right).$
This implies that $\widetilde \Hc_0^{*}$ is equivalent to the following property: for every $\x_I\in \RR^p$ and $A_J$,
$$ \PP\left( X_I\leq \x_I, \X_J\in A_J\right) =
\PP\left( \X_I\leq \x_I \right)\PP\left( \X_J\in A_J\right).\; \Box$$
\mds
{\color{black} The previous result shows that a test of $\tilde\Hc_0^*$ is a test of independence between $\X_I$ and $\X_J$. When the latter assumption is satisfied, $\tilde \Hc_0$ and then $\Hc_0$ are true too, but the converse is false.}
\mds
Previously, we have exhibited a simple trivariate model where $\Hc_0$ is satisfied when $\X_I$ and $\X_J$ are not independent.
Then, we see that it is not reasonable to test whether the mapping $A_J \mapsto C_{I|J}^{A_{J}} (\cdot | \X_J \in A_J)$ is constant
over ${\mathcal A}_J$, the set of {\bf all} $A_J$ such that $\PP_{\X_J}(A_J) > 0$, with the idea of testing $\Hc_0$.
\mds
Nonetheless, one can weaken the latter assumption,
and restrict oneself to a {\bf finite} family $\bar {{\mathcal A}}_J$ of subsets with positive probabilities. For such a family, we could test the assumption
$$\bar \Hc_0:
A_J \mapsto C_{I|J}^{A_{J}} (\, \cdot \, | \X_J \in A_J)
\, \text{is constant over} \; \bar{{\mathcal A}}_J .$$
\mds
To fix the ideas and w.l.o.g., we will consider a given family of disjoint subsets $\bar{\Ac}_J = \{A_{1,J},\ldots,A_{m,J}\}$ in $\RR^{d-p}$ hereafter.
Note the following consequence of Proposition~\ref{condsubsetcopgeneral}.
\begin{prop}
Assume that, for all $A_J \in \bar {{\mathcal A}}_J$ and for all $i \in I$,
\begin{equation}
F_{i|J} ( x | \X_J=\x_J ) = F_{i|J} (x | \X_J \in A_J), \;\; \forall \x_J \in A_J, x \in \RR.
\label{cnd_technik}
\end{equation}
Then, $\Hc_0$ implies $\bar\Hc_0$.
\end{prop}
\mds
Obviously, if the family $\bar {{\mathcal A}}_J$ is too big, then~(\ref{cnd_technik}) will be too demanding: $\bar \Hc_0$ will be close to a test of independence between $\X_I$ and $\X_J$, and no longer a test of $\Hc_0$. Moreover, the chosen subsets in the family $\bar{\Ac}_J$ do not need to be disjoint, even if this would be a natural choice.
As a special case, if $\RR^{d-p} \in \bar{{\mathcal A}}_J$, the previous condition is equivalent to the independence between $X_i$ and $\X_J$ for every $ i \in I$.
\mds
{\color{black} Note that~(\ref{cnd_technik}) does not imply that the vector of explanatory variables $\X_J$ should be discretized. Indeed, the full model requires the specification of the underlying conditional copula too, arbitrarily and independently of the conditional margins. For instance, we can choose a Gaussian conditional copula whose parameter is a continuous function of $\X_J$, even if~(\ref{cnd_technik}) is fulfilled. And the law of $\X_I$ given $\X_J$ will depend on the current value of $\X_J$.}
\mds
A test of $\bar\Hc_0$ may be relevant in a lot of situations, besides technical arguments such as the absence of smoothing.
First, the case of discrete (or discretized) explanatory variables $\X_J$ is frequent.
When $\X_J$ is discrete and takes a value among $\{\x_{1,J},\ldots,\x_{m,J}\}$, set $A_{k,J}=\{\x_{k,J}\}$, $k=1,\ldots,m$.
Then, there is identity between testing $\Hc_0$ and $\bar\Hc_0$, with $\bar\Ac_J=\{ A_{1,J},\ldots,A_{m,J}\}$.
Second, the level of precision and sharpness of a copula model is often lower than that of the models for the (conditional) margins.
To illustrate this idea, a lot of complex and subtle models are available to explain the dynamics of asset volatilities, whereas the dynamics of cross-asset dependencies are often a lot more basic and without clear-cut empirical findings.
Therefore, it makes sense to simplify conditional copula models compared to conditional marginal models. This can be done by considering only a few possible conditional copulas, associated with some events $(\X_J\in A_{k,J})$, $k=1,\ldots,m$.
For example, Jondeau and Rockinger (2006) (the first paper that introduced conditional dependence structures, besides Patton (2006a)) proposed a Gaussian copula parameter that takes a finite number of values randomly, depending on the realizations of some past asset returns.
Third, similar situations occur with most Markov-switching copula models, where a finite set of copulas is involved. In such models,
the (unobservable, in general) underlying state of the economy determines the index of the box: see Cholette et al. (2009), Wang et al. (2013), St\"ober and Czado (2014), Fink et al. (2016), among others.
\mds
Therefore, testing $\bar\Hc_0$ is of interest per se.
Even if this is not equivalent to $\Hc_0$ (i.e. the simplifying assumption) formally, the underlying intuitions are close.
And, particularly when the components of the conditioning variable $\X_J$ are numerous, it can make sense to restrict the information set of the underlying conditional copula to a fixed number of conveniently chosen subsets $ A_J$. And the constancy of the underlying copula when $\X_J$ belongs to such subsets is valuable in a lot of practical situations. Therefore, in the next subsections, we study some specific tests of $\bar\Hc_0$ itself.
\subsection{Non-parametric tests with ``boxes''}
\label{NPApproach_m_SA}
To specify such tests, we first need to estimate the conditional marginal cdfs', for instance by
\begin{equation*}
\hat F_{k|J}(x|\X_J \in A_J)
:= \frac{ \sum_{i=1}^n
\1 (X_{i,k} \leq x, \X_{i,J} \in A_J) }
{\sum_{i=1}^n \1 (\X_{i,J} \in A_J)},
\end{equation*}
for every real $x$ and $k=1,\ldots,p$. Similarly the joint law of $\X_I$ given $(\X_J \in A_J)$ may be estimated by
\begin{equation*}
\hat F_{I|J}(\x_I|\X_J \in A_J)
:= \frac{ \sum_{i=1}^n
\1 (X_{i,I} \leq \x_I, \X_{i,J} \in A_J) }
{\sum_{i=1}^n \1 (\X_{i,J} \in A_J)}\cdot
\end{equation*}
The conditional copula given $(\X_J\in A_J)$ will be estimated by
$$ \hat C_{I|J}^{A_J}(\u_I | \X_J \in A_J)
:= \hat F_{I|J}(
\hat F^-_{1|J}(u_1|\X_J \in A_J), \dots,
\hat F^-_{p|J}(u_p|\X_J \in A_J) |\X_J \in A_J).$$
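For illustration only, a minimal Python sketch of this estimator (assuming NumPy $\geq 1.22$ for the \texttt{method} argument, $p=2$, and a box of the form $A_J=[lo,hi)$ for a univariate conditioning variable) could be:
\begin{verbatim}
import numpy as np

def cond_copula_box(X, lo, hi, u1, u2):
    """hat C_{I|J}^{A_J}(u1, u2 | X_J in [lo, hi)) for X = (X1, X2, XJ)."""
    sel = (X[:, 2] >= lo) & (X[:, 2] < hi)
    X1, X2 = X[sel, 0], X[sel, 1]
    # generalized inverses of the conditional marginal ecdfs
    x1 = np.quantile(X1, u1, method="inverted_cdf")  # hat F^-_{1|J}(u1 | box)
    x2 = np.quantile(X2, u2, method="inverted_cdf")  # hat F^-_{2|J}(u2 | box)
    return np.mean((X1 <= x1) & (X2 <= x2))          # hat F_{I|J}(x1, x2 | box)
\end{verbatim}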
Therefore, it is easy to imagine tests of $\bar\Hc_0$, for instance
\begin{equation}
\bar\Tc_{KS,n} := \sup_{\u_I \in [0,1]^p} \sup_{k,l=1,\ldots,m}
| \hat C_{I|J}^{A_{k,J}}(\u_I | \X_J\in A_{k,J})
- \hat C_{I|J}^{A_{l,J}}(\u_I | \X_J\in A_{l,J})|,
\label{bar_Tc_KS}
\end{equation}
\begin{equation}
\bar\Tc_{CvM,n} := \sum_{k,l=1}^m \int \left(
\hat C_{I|J}^{A_{k,J}}(\u_I | \X_J\in A_{k,J}) -
\hat C_{I|J}^{A_{l,J}}(\u_I | \X_J\in A_{l,J}) \right)^2 w(d\u_I) ,
\label{bar_Tc_CvM}
\end{equation}
for some nonnegative weight functions $w$, or even
\begin{equation}
\bar\Tc_{dist,n} := \sum_{k,l=1}^m dist\left(
\hat C_{I|J}^{A_{k,J}}(\cdot | \X_J\in A_{k,J}) ,
\hat C_{I|J}^{A_{l,J}}(\cdot | \X_J\in A_{l,J})\right) ,
\label{bar_Tc_dist}
\end{equation}
where $dist(\cdot,\cdot)$ denotes a distance between cdfs' on $[0,1]^p$.
More generally, define the matrix
\begin{equation*}
\widehat M(\bar \Ac_J) := \bigg[ \1(k \neq l) \;
dist \Big(
\hat C_{I|J}^{A_{k,J}}(\cdot | \X_J\in A_{k,J}) ,
\hat C_{I|J}^{A_{l,J}}(\cdot | \X_J\in A_{l,J})
\Big) \bigg]_{1 \leq k,l \leq m},
\end{equation*}
and any statistic of the form $||\widehat M (\bar \Ac_J)||$ can be used as a test statistic of $\bar\Hc_0$, when $||\cdot||$ is a norm on the set of $m\times m$-matrices.
Obviously, it is easy to introduce similar statistics based on copula densities instead of cdfs'.
\subsection{Parametric test statistics with ``boxes''}
\label{ParApproach_m_SA}
When we work with subsets $A_J\subset \RR^{d-p}$
instead of pointwise conditioning events $(\X_J=\x_J)$, we can adapt all the
previous parametric test statistics of Subsection~\ref{ParApproach_SA}. Nonetheless, the framework will be slightly modified.
\mds
Let us assume that,
for every $A_J \in \bar \Ac_J $, $C_{I|J}^{A_{J}} (\cdot |\X_J \in A_J)$ belongs to the same parametric copula family $ \Cc=\{C_\theta,\theta\in \Theta\}$.
In other words,
$C_{I|J}^{A_{J}} (\cdot | \X_J \in A_J) = C_{\theta(A_J)}(\cdot)$ for every $A_J\in \bar \Ac_J $.
Therefore, we could test the constancy of the mapping $A_J\mapsto \theta(A_J)$, i.e. test
$$\bar\Hc^c_0: \text{the function } k \in \{ 1, \dots, m \} \mapsto \theta(A_{k,J}) \text{ is
a constant called} \; \theta_0^b.$$
Clearly, for every $A_J \in \bar{\Ac}_J $, we can
estimate $\theta(A_J)$ by
\begin{equation*}
\hat\theta(A_J):= \arg\max_{\theta\in\Theta} \sum_{i=1}^n \log c_\theta \left(
\hat F_{1|J}(X_{i,1}|\X_{i,J} \in A_J),\ldots,\hat F_{p|J}(X_{i,p}|\X_{i,J}
\in A_J )\right)\1( \X_{i,J}\in A_J).
\label{def_hattheta0}
\end{equation*}
It can be proved that the estimate $\hat\theta(A_J)$ is consistent and asymptotically normal, by revisiting the proof of Theorem 1 in Tsukahara (2005). Here, the only difference w.r.t. the latter paper is induced by the random sample size, which modifies the limiting distributions. The proof is left to the reader.
\mds
Under the null hypothesis $\bar \Hc_0^c$, the parameter of the copula of
$(F_{1|J}(X_{1}|\X_{J} \in A_{k,J}),\ldots, F_{p|J}(X_{p}|\X_{J} \in A_{k,J}))$ given $(\X_J\in A_{k,J})$ is the same for any $k=1,\ldots,m$.
It will be denoted by $\theta_0^b$, and we can still estimate it by the semi-parametric procedure
\begin{equation*}
\hat\theta_0^b:= \arg\max_{\theta\in\Theta} \sum_{k=1}^m\sum_{i=1}^n \log c_\theta \left(
\hat F_{1|J}(X_{i,1}|\X_{i,J} \in A_{k,J}),\ldots, \hat F_{p|J}(X_{i,p}|\X_{i,J}
\in A_{k,J} )\right)\1( \X_{i,J}\in A_{k,J}).
\label{def_hatthetaAkJ}
\end{equation*}
Obviously, under some conditions of regularity and under $\bar\Hc^c_0$, it can be proved that
$\hat\theta_0^b$ is consistent and asymptotically normal, by adapting the results of Tsukahara (2005).
\mds
For convenience, let us define the ``box index'' function $k(\x_J) := \sum_{k=1}^m k \1 \{ \x_{J}\in A_{k,J} \} ,$ for any $\x_J\in \RR^{d-p}$.
In other words, $k(\x_J)$ is the index of the box $A_{k,J}$ that contains $\x_{J}$; it equals zero when no box in $\bar\Ac_J$ contains $\x_J$.
Let us introduce the r.v. $Y_i := k(\X_{i,J})$,
which stores all the information needed concerning the conditioning with respect to the variables $\X_{i,J}$.
We can then define the empirical pseudo-observations as
\begin{eqnarray*}
\Z_{i,I|Y} & := & \sum_{k=1}^m \left(
F_{1|J}(X_{i,1}|\X_{J}\in A_{k,J}), \dots,
F_{p|J}(X_{i,p}|\X_{J}\in A_{k,J}) \right)
\1 \{ \X_{i,J}\in A_{k,J} \} \\
& = & \left( F_{1|J}(X_{i,1}|\X_{J}\in A_{k(\X_{i,J}),J}), \dots,
F_{p|J}(X_{i,p}|\X_{J}\in A_{k(\X_{i,J}),J}) \right) \\
& = & \left( F_{1|Y}(X_{i,1}|Y_i), \dots,
F_{p|Y}(X_{i,p}|Y_i)\right) ,
\end{eqnarray*}
for any $i=1,\ldots,n$.
Since we do not observe the conditional marginal cdfs', we define the observed pseudo-observations that we calculate in practice: for $i=1, \dots, n,$
\begin{equation*}
\hat \Z_{i,I|Y} := \left(
\hat F_{1|J} (X_{i,1} | \X_{J} \in A_{Y_i,J}) ,\dots,
\hat F_{p|J} (X_{i,p} | \X_{J} \in A_{Y_i,J}) \right).
\end{equation*}
Note that we can then rewrite the previous estimators as
\begin{equation*}
\hat\theta(A_{k,J}) = \arg\max_{\theta\in\Theta} \sum_{i=1}^n \log c_\theta \left(
\hat \Z_{i,I|Y} \right) \1(Y_i = k),\;
\text{and}\;
\hat\theta_0^b = \arg\max_{\theta\in\Theta} \sum_{i=1}^n \log c_\theta \left( \hat \Z_{i,I|Y} \right).
\end{equation*}
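To fix ideas, here is a purely illustrative sketch of the computation of $\hat\theta(A_{k,J})$ and $\hat\theta_0^b$ in the case of a Gaussian conditional copula, where \texttt{Z} is assumed to hold the pseudo-observations $\hat\Z_{i,I|Y}$ (rescaled into $(0,1)$, e.g. by the factor $n/(n+1)$), \texttt{Y} the box indices $Y_i$, and \texttt{m} the number of boxes:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def gauss_copula_loglik(rho, Z):
    """Log-likelihood of the Gaussian copula with correlation rho."""
    x, y = norm.ppf(Z[:, 0]), norm.ppf(Z[:, 1])
    return np.sum(-0.5 * np.log(1 - rho**2)
                  + (2 * rho * x * y - rho**2 * (x**2 + y**2))
                  / (2 * (1 - rho**2)))

def fit_rho(Z):
    res = minimize_scalar(lambda r: -gauss_copula_loglik(r, Z),
                          bounds=(-0.99, 0.99), method="bounded")
    return res.x

theta_hat_box = {k: fit_rho(Z[Y == k]) for k in range(1, m + 1)}  # theta(A_k)
theta_hat_0b = fit_rho(Z)                                         # theta_0^b
\end{verbatim}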
\mds
Now, let us revisit some of the previously proposed test statistics in the case of ``boxes''.
\begin{itemize}
\item Tests based on the comparison between $\hat\theta(\cdot)$ and $\hat\theta_0^b$:
\begin{equation}
\bar\Tc_{\infty}^{c} := \sqrt{n} \max_{k=1,\ldots,m} \| \hat\theta(A_{k,J}) - \hat\theta_0^b \|,\;
\bar\Tc_{2}^{c} := n \sum_{k=1}^m \|\hat\theta(A_{k,J}) - \hat\theta_0^b\|^2 \omega_k,
\label{bar_Tc_KS_c}
\end{equation}
for some weights $\omega_k$.
\item Tests based on the comparison between $C_{\hat\theta(\cdot)}$ and $C_{\hat\theta_0^b}$:
\begin{equation}
\bar\Tc_{dist}^{c} := \sum_{k=1}^m dist( C_{\hat\theta(A_{k,J})} , C_{\hat\theta_0^b} ) \omega_k,
\label{bar_Tc_dist_c}
\end{equation}
\end{itemize}
and others.
\subsection{Bootstrap techniques for tests with boxes}
\label{Boot_m_SA}
In the same way as in the previous section, we will need bootstrap schemes to evaluate the limiting laws of the test statistics of $\bar\Hc_0$ or $\bar\Hc^c_0$ under the null.
All the nonparametric resampling schemes of Subsection \ref{resampling_sch} (in particular Efron's usual bootstrap) can be used in this framework, replacing the conditional pseudo-observations $\hat\Z_{i,I|J}$
by $\hat\Z_{i,I|Y}$, $i=1,\ldots,n$.
The parametric resampling schemes of Subsection \ref{resampling_sch} can also be applied to the framework of ``boxes'', replacing $\hat \theta_0$ by $\hat \theta_0^b$ and $\hat \theta(\x_J)$ by $\hat \theta(A_J)$.
In the parametric case, the bootstrapped estimates are denoted by $\hat\theta^*_{0}$ and $\hat\theta^*(A_{J})$. They are
the equivalents of $\hat\theta_{0}^b$ and $\hat\theta(A_{J})$, replacing $(\hat \Z_{i,I|Y},Y_i)$ by $(\Z^*_{i},Y^*_i)$.
\mds
The bootstrapped statistics will also be changed accordingly.
Writing them explicitly is a rather straightforward exercise and we do not provide the details, contrary to Subsection~\ref{Boot_SA}.
For example, the bootstrapped statistic corresponding to~(\ref{bar_Tc_KS_c}) is
\begin{equation*}
\big(\bar\Tc_{2}^{c} \big)^*:= n \sum_{k=1}^m \|
\hat\theta^*(A_{k,J}) - \hat\theta(A_{k,J}) - \hat\theta_0^* + \hat\theta_0^b\|^2 \omega_k,
\label{T2comp1BootNP}
\end{equation*}
where $\hat\theta_0^*$ is the result of the program $\arg\max_\theta\sum_{i=1}^n \log c_\theta \left( \hat \Z^*_{i,I|Y} \right)$, in the case of Efron's
nonparametric bootstrap.
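A sketch of the corresponding bootstrap loop (again purely illustrative, reusing \texttt{fit\_rho}, \texttt{Z}, \texttt{Y}, \texttt{m} and the estimates of the previous sketch, with given weights \texttt{omega}; for simplicity the pseudo-observations are resampled directly, whereas they should in principle be recomputed on each bootstrap sample) could be:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = len(Y)
idx = rng.integers(n, size=n)       # Efron's bootstrap draw
Zs, Ys = Z[idx], Y[idx]
theta_star_0 = fit_rho(Zs)
stat = sum(omega[k] * (fit_rho(Zs[Ys == k]) - theta_hat_box[k]
                       - theta_star_0 + theta_hat_0b) ** 2
           for k in range(1, m + 1))
boot_stat = n * stat                # (bar T_2^c)^*
\end{verbatim}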
\mds
As we noticed in Remark~\ref{tricky_schemes}, some changes are required when dealing with the ``parametric independent'' bootstrap.
Indeed, under the alternative,
we observe $\hat\theta^*(A_{k,J}) - \hat\theta_0^* \approx 0$,
because we have precisely generated a bootstrap sample under $\bar\Hc^c_0$.
As a consequence, the law of $\big(\bar\Tc_{2}^{c} \big)^*$ would be close to the law of $\bar\Tc_{2}^{c}$ but under the alternative, providing very small powers.
Therefore, convenient bootstrapped test statistics of $\bar\Hc^c_0$ under the ``parametric independent'' scheme will be of the type
\begin{equation*}
\big(\bar\Tc_{2}^{c} \big)^{**}:= n \sum_{k=1}^m \|
\hat\theta^*(A_{k,J}) - \hat\theta_0^* \|^2 \omega_k.
\label{T2comp1BootNP2}
\end{equation*}
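In the same vein, a sketch of $\big(\bar\Tc_{2}^{c} \big)^{**}$ under this scheme (ours, reusing the notation of the previous sketches): $Y^*$ is resampled nonparametrically, $\Z^*$ is drawn from the Gaussian copula with parameter $\hat\theta_0^b$, and no recentering terms appear.
\begin{verbatim}
rho0 = theta_hat_0b
Ys = Y[rng.integers(n, size=n)]               # bootstrap the box indices
G1 = rng.normal(size=n)                       # draw Z* under C_{theta_0^b}
G2 = rho0 * G1 + np.sqrt(1 - rho0**2) * rng.normal(size=n)
Zs = norm.cdf(np.column_stack([G1, G2]))
theta_star_0 = fit_rho(Zs)
boot_stat2 = n * sum(omega[k] * (fit_rho(Zs[Ys == k]) - theta_star_0) ** 2
                     for k in range(1, m + 1))
\end{verbatim}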
Such a result is justified theoretically by the following theorem.
\begin{thm}
\label{thm:ParIndepBoot}
Assume that $\bar \Hc^c_0$ is satisfied, and that we apply the parametric independent bootstrap. Set
\begin{equation*}
\Theta_{n,0} := \sqrt{n}
\big( \hat \theta_0 - \theta_0 \big), \quad
\Theta_{n,k} := \sqrt{n}
\big( \hat \theta (A_{k,J}) - \theta_0 \big), \quad k=1, \dots, m,
\end{equation*}
\begin{equation*}
\Theta^*_{n,0} := \sqrt{n}
\big( \hat \theta^*_{0} - \theta_0 \big),\;\text{and}\;
\Theta^*_{n,k} := \sqrt{n}
\big( \hat \theta^*(A_{k,J}) - \theta_0 \big), \quad k=1, \dots, m.
\end{equation*}
Then there exist two independent and identically distributed random vectors $\big( \Theta_0, \dots, \Theta_m \big)$
and $\big( \Theta^{\perp}_0, \dots, \Theta^{\perp}_m \big)$, and a real number $a_0$ such that
\begin{equation*}
\Big( \Theta_{n,0}, \dots, \Theta_{n,m}, \Theta^*_{n,0}, \dots, \Theta^*_{n,m} \Big)
\Longrightarrow
\Big( \Theta_0, \dots, \Theta_m,
\Theta^{\perp}_0 + a_0 \Theta_0, \dots,
\Theta^{\perp}_m + a_0 \Theta_0 \Big).
\end{equation*}
\end{thm}
The proof of this theorem is postponed to Appendix~\ref{Section_Proofs}.
\mds
As a consequence of the latter result, applying the parametric independent bootstrap procedures for some test statistics based on comparisons between
$\hat \theta_0$ and the $\hat\theta (A_{k,J})$ is valid.
For instance, $\bar\Tc_{2}^{c}$ and $\big( \bar\Tc_{2}^{c} \big)^{**}$ will converge jointly in distribution to a pair of independent and identically distributed variables.
Indeed, we have
\begin{eqnarray*}
\lefteqn{ \left( \bar\Tc_{2}^{c} \, , \,
\big( \bar\Tc_{2}^{c} \big)^{**} \right)
= \left( n \sum_{k=1}^m \|
\hat \theta_{n,0}^b - \hat \theta_{n} (A_{k,J})
\|^2 \omega_k\, , \,
n \sum_{k=1}^m \|
\hat \theta^*_{n,0} - \hat \theta_{n}^* (A_{k,J})
\|^2 \omega_k \right) }\\
&=& \left( n \sum_{k=1}^m \|
\hat \theta_{n,0}^b - \theta_0 + \theta_0 - \hat \theta_{n} (A_{k,J})
\|^2 \omega_k\, , \,
n \sum_{k=1}^m \|
\hat \theta^*_{n,0} - \theta_0 + \theta_0 - \hat \theta_{n}^* (A_{k,J})
\|^2 \omega_k\right)\\
&\Longrightarrow &
\left( \sum_{k=1}^m \| \Theta_0 - \Theta_k
\|^2 \omega_k\, , \,
\sum_{k=1}^m \|
\Theta^{\perp}_0 + a_0 \Theta_0 -
\Theta^{\perp}_k - a_0 \Theta_0 \|^2 \omega_k \right).
\end{eqnarray*}
The same reasoning applies with $\bar\Tc_{\infty}^{c}$ and $\bar\Tc_{dist}^{c}$, for sufficiently regular copula families.
\begin{rem}
We have to stress that the first-level bootstrap, i.e. resampling among the conditioning variables $\X_{i,J}$, $i=1,\ldots,n$ is surely necessary to obtain the latter result.
Indeed, it can be seen that the key Proposition~\ref{prop:bootstrapped_Donsker_th} is no longer true otherwise, because the limiting covariance functions of the two corresponding processes $\GG_n$ and $\GG_n^*$ will not be the same: see Remark~\ref{cov_boot_v2} below.
\end{rem}
\section{Numerical applications}
\label{NumericalApplications}
Now, we would like to evaluate the empirical performances of some of the previous tests by simulation.
Such an exercise has been carried out extensively by Genest et al. (2009) and Berg (2009) in the case of goodness-of-fit tests for unconditional copulas.
Our goal is not to replicate such experiments in the case of conditional copulas and for tests of the simplifying assumption.
Indeed, we have proposed dozens of test statistics and numerous bootstrap schemes.
Moreover, testing the simplifying assumption through $\Hc_0$ or some ``box-type'' problems through $\bar\Hc_0$ doubles the scale of the task.
Finally, in the former case, we depend on smoothing parameters that induce additional degrees of freedom for the fine tuning of the experiments
(note that Genest et al. (2009) and Berg (2009)
refrained from considering tests that require
additional smoothing parameters, such as the pivotal test statistics proposed in Fermanian (2005)). In our opinion, an exhaustive simulation experiment should be the topic of (at least)
one additional paper. Here, we will restrict ourselves to some partial numerical elements.
They should convince readers that the methods and techniques we have discussed previously provide fairly good results and can be implemented in practice safely.
\mds
Hereafter, we consider bivariate conditional copulas and a single conditioning variable, i.e. $p=2$ and $d=3$.
The sample sizes will be $n=500$, unless otherwise specified.
Concerning the bootstrap, we will resample $N=200$ times to calculate approximated p-values.
Each experiment has been repeated $500$ times to calculate the percentages of rejection. {\color{black} The computations have been made on a standard laptop, and, for the non-parametric bootstrap, they took an average time of $14.1$ seconds for $\Ic_{\chi,n}$; $26.9$s for $\Tc^{0,m}_{CvM,n}$; $103$s for $\Ic_{2,n}$; $265$s for $\Tc_2^c$; and $0.922$s for $\bar \Tc_2^c$.}
\mds
In terms of model specification, the margins of $\X = (X_1,X_2,X_3)$ will depend on $X_3$ as
$$ X_1 \sim \Nc(X_3,1),\; X_2 \sim \Nc(X_3,1) \; \text{and} \; X_3 \sim \Nc(0,1).$$
We have studied the following conditional copula families: given $X_3=x$,
\begin{itemize}
\item the Gaussian copula model, with a correlation parameter $\theta(x)$,
\item the Student copula model, with $4$ degrees of freedom and a correlation parameter $\theta(x)$,
\item the Clayton copula model, with a parameter $\theta(x)$,
\item the Gumbel copula model, with a parameter $\theta(x)$,
\item the Frank copula model, with a parameter $\theta(x)$.
\end{itemize}
In every case, we calibrate $\theta(x)$ such that the conditional Kendall's tau $\tau(x)$ satisfies $\tau(x) = \Phi(x)\tau_{\max}$, for some constant $\tau_{\max}\in (0,1]$.
By default, $\tau_{\max}$ is equal to one.
In this case, the random Kendall's tau $\tau(X_3)$ is uniformly distributed on $[0,1]$.
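A minimal Python sketch of this data-generating process (illustrative only), in the Gaussian-copula case, uses the classical relation $\rho = \sin(\pi\tau/2)$ between the Kendall's tau and the correlation parameter of a Gaussian copula:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, tau_max = 500, 1.0
X3 = rng.normal(size=n)
rho = np.sin(np.pi * norm.cdf(X3) * tau_max / 2)  # tau(x) = Phi(x) tau_max
# (U1, U2) from the Gaussian copula with parameter rho(X3):
G1 = rng.normal(size=n)
G2 = rho * G1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
U1, U2 = norm.cdf(G1), norm.cdf(G2)
X1 = X3 + norm.ppf(U1)                            # X1 | X3 ~ N(X3, 1)
X2 = X3 + norm.ppf(U2)                            # X2 | X3 ~ N(X3, 1)
\end{verbatim}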
\mds
{\it Test of $\Hc_0$:} we calculate the percentage of rejections of $\Hc_0$, when the sample is drawn under the true law (level analysis) or when it is drawn under the same parametric copula family, but with varying parameters (power analysis). For example, when the true law is a Gaussian copula with a constant parameter $\rho$ corresponding to $\tau = 1/2$, we draw samples under the alternative through a bivariate Gaussian copula whose random parameters are given by $\tau(X_3)= \Phi(X_3)$.
The chosen test statistics are $\Tc_{CvM}^0$, $\tilde \Tc_{CvM}^0$ (nonparametric tests of $\Hc_0$), $\Ic_{\chi,n}$ and $\Ic_{2,n}$ (nonparametric tests of $\Hc_0$ based on the independence property) and $\Tc_2^c$ (a parametric test of $\Hc_0^c$).
To compute these statistics, we use the estimator of the simplified copula defined in Equation (\ref{estimator_meanCI_J}).
\mds
{\it Test of $\bar\Hc_0$:} in the case of the test with boxes, the data-generating process will be
$$ X_1 \sim \Nc( \gamma(X_3),1),\; X_2 \sim \Nc(\gamma(X_3),1) \; \text{and} \; X_3 \sim \Nc(0,1),$$
where $\gamma(x) = \Phi^{-1} \left( \lfloor m\Phi(x) \rfloor /m\right)$, so that the boxes are all of equal probability. As $m \to \infty$, we recover the continuous model for which $\gamma(x) = x$.
\mds
In the same way, we calibrate the parameter $\theta(x)$ of the conditional copulas such that the conditional Kendall's tau satisfies
$\tau(X_3) = \lfloor m\Phi(X_3) \rfloor /m$.
\mds
{\color{black} The choice of ``the best'' boxes $A_{1,J}, \dots, A_{m,J}$ is not an easy task.
This problem happens frequently in statistics (think of Pearson's chi-square test of independence, for instance), and there is no universal answer.
Nonetheless, in some applications, intuition can be fuelled by the context.
For example, in finance, it makes sense to test whether past positive returns induce different conditional dependencies between current returns than past negative returns.
And, as a general ``by default'' rule, we can divide the space of $\X_J$ into several boxes of equal (empirical) probabilities. This trick is particularly relevant when the conditioning variable is univariate.
Therefore, in our example, we have chosen $m=5$ boxes of equal empirical probability for $X_3$, with equal weights.
}
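In code, such equal-probability boxes can be obtained from empirical quantiles (again a purely illustrative sketch, reusing \texttt{X3} from the previous sketch):
\begin{verbatim}
import numpy as np

m = 5
edges = np.quantile(X3, np.linspace(0, 1, m + 1))  # box boundaries
edges[0], edges[-1] = -np.inf, np.inf
Y = np.digitize(X3, edges[1:-1]) + 1               # box index in {1,...,m}
\end{verbatim}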
\mds
We have only evaluated $\bar \Tc_{2}^c$ for testing $\bar\Hc_0^c$.
In the following tables, for the parametric tests,
\begin{itemize}
\item ``bootNP'' means the usual nonparametric bootstrap ;
\item ``bootPI'' means the parametric independent bootstrap (where $\Z_{I|J}$ is drawn under $C_{\hat \theta_0}$ and $\X_J$ under the usual
nonparametric bootstrap);
\item ``bootPC'' means the parametric conditional bootstrap (nonparametric bootstrap for $\X_J$, and $\X_I$ is
sampled from the estimated conditional copula $C_{\hat \theta(\X^*_J)}$);
\item ``bootPseudoInd'' means the pseudo-independent bootstrap (nonparametric bootstrap for $\X_J$, and
draw $\hat \Z^*_{I|J}$ independently, among the pseudo-observations $\hat \Z_{j,I|J}$);
\item ``bootCond'' means the conditional bootstrap (nonparametric bootstrap for $\X_J$, and $\X_I$ is sampled from the estimated conditional law of $\X_I$ given $\X^*_J$).
\end{itemize}
\mds
Concerning tests of $\Hc_0$, the results are relatively satisfying. For the nonparametric tests and those based on the independence property (Tables~\ref{NP_tests_level} and~\ref{NP_tests_power}), the rejection rates are large when $\tau_{\max}=1$, and the theoretical levels ($5\%$) are underestimated (not a problematic feature in practice).
This is still the case for tests of the simplifying assumption under a parametric copula model through $\Tc_2^c$: see Tables~\ref{Param_tests_level} and~\ref{Param_tests_power}. The three bootstrap schemes provide similar numerical results. Recall that the bootstrapped statistic is $\left(\Tc_2^c \right)^{**}$ with bootPI (Remark \ref{tricky_schemes}). Tests of $\bar\Hc_0$ under a parametric framework and through $\bar\Tc_2^c$ confirm such observations.
{\color{black} To evaluate the accuracy of the bootstrap approximations asymptotically, we have compared the empirical distribution of some test statistics and their bootstrap versions under the null hypothesis for two bootstrap schemes (see Figures~\ref{fig:QQplot_bar_T2c_bootNP} and~\ref{fig:QQplot_bar_T2c_bootPI}). For the nonparametric bootstrap, the two distributions begin to match each other at $n = 5000$, whereas $n = 500$ is enough for the parametric independent bootstrap.}
\mds
We have tested the influence of $\tau_{\max}$: the smaller this parameter, the smaller the percentage of rejections under the alternative, because the simulated model tends to induce lower dependencies of the copula parameters w.r.t. $X_3$: see Figures~\ref{fig:I_chi_tau_max},
\ref{fig:I2n_tau_max}, \ref{fig:T2c_tau_max}, and~\ref{fig:bar_T2c_tau_max}.
{\color{black} Note that, on each of these figures, the leftmost point corresponds to a conditional Kendall's tau that is constant and equal to $0$ (because $\tau_{\max}=0$), whereas the rejection percentages in Tables~\ref{NP_tests_level} and~\ref{Param_tests_level} correspond to a conditional Kendall's tau that is constant and equal to $0.5$. As the two data-generating processes are not the same, the rejection percentages can differ even if both are under the null hypothesis. Nevertheless, in every case, our empirical sizes converge to $0.05$ as the sample size increases. When $n=5000$, we found that the percentages of rejection are between $4$\% and $6$\%.}
\mds
We have not tried to exhibit an ``asymptotically optimal'' bandwidth selector for our particular testing problem.
This could be the task for further research. We have preferred a basic ad-hoc procedure.
In our test statistics, we smooth w.r.t. $F_3(X_3)$ (or its estimate, to be specific), whose law is uniform on $(0,1)$.
A reasonable bandwidth $h$ is given by the so-called rule-of-thumb in kernel density estimation, i.e. $h^* = \sigma(F_3(X_3)) / n^{1/5} = 1/ (\sqrt{12}\, n^{1/5}) \approx 0.083$.
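Indeed, with $n=500$, we get $n^{1/5}\approx 3.47$ and $\sigma(F_3(X_3))=1/\sqrt{12}\approx 0.289$, so that $h^*\approx 0.289/3.47 \approx 0.083$.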
Such a choice has provided reasonable results. The typical influence of the bandwidth choice on the test results is illustrated in Figure~\ref{fig:Rejection_T0_CvM_h}.
In general, the latter $h^*$ belongs to reasonably wide intervals of ``convenient'' bandwidth values, so that the performances of our
considered tests are not very sensitive to the bandwidth choice.
\mds
{\color{black} To avoid boundary problems, we have slightly modified the test statistics: we remove the observations $i$ such that $F_3(X_{i,3}) \leq h$ or $F_3(X_{i,3}) \geq 1-h$. This corresponds to changing the integrals (resp. max) on $[0,1]$ to integrals (resp. max) on $[h, 1-h]$.
}
\begin{table}[htb]
\centering
\begin{tabular}{c|cccc}
Family & $\Tc_{CvM,n}^{0}$ (\ref{TcOCvMm})
& $\tilde \Tc_{CvM,n}^{0}$ (\ref{TcOCvM_bis})
& $\Ic_{\chi,n}$ (\ref{IcChi})
& $\Ic_{2,n}$ (\ref{Ic2n}) \\
\hline
Gaussian & 0 & 0 & 0 & 0 \\
Student & 0 & 0 & 0 & 0 \\
Clayton & 0 & 0 & 1 & 0 \\
Gumbel & 1 & 1 & 0 & 1 \\
Frank & 0 & 0 & 0 & 0 \\
\end{tabular}
\caption{Rejection percentages under the null (nonparametric tests, nonparametric bootstrap bootNP).}
\label{NP_tests_level}
\end{table}
\begin{table}[htb]
\centering
\begin{tabular}{c|cccc}
Family & $\Tc_{CvM,n}^{0}$ (\ref{TcOCvMm})
& $\tilde \Tc_{CvM,n}^{0}$ (\ref{TcOCvM_bis})
& $\Ic_{\chi,n}$ (\ref{IcChi})
& $\Ic_{2,n}$ (\ref{Ic2n}) \\
\hline
Gaussian & 98 & 100 & 100 & 93 \\
Student & 100 & 99 & 98 & 90 \\
Clayton & 99 & 99 & 99 & 98 \\
Gumbel & 99 & 98 & 100 & 95 \\
Frank & 98 & 100 & 98 & 50 \\
\end{tabular}
\caption{Rejection percentages under the alternative (nonparametric tests, nonparametric bootstrap bootNP).}
\label{NP_tests_power}
\end{table}
\begin{table}[htb]
\centering
\begin{tabular}{c|ccc|ccc}
\multirow{2}{*}{Family}
& \multicolumn{3}{c|}{$\Tc_2^c$ (\ref{Tc_infty_C})}
& \multicolumn{3}{c}{$\bar \Tc_2^c$ (\ref{bar_Tc_KS_c})} \\
& bootPI & bootPC & bootNP & bootPI & bootPC & bootNP \\
\hline
Gaussian & 4 & 0 & 0 & 6 & 4 & 1 \\
Student & 6 & 0 & 2 & 4 & 5 & 3 \\
Clayton & 7 & 0 & 1 & 7 & 1 & 1 \\
Gumbel & 3 & 1 & 0 & 9 & 2 & 2 \\
Frank & 4 & 0 & 6 & 3 & 5 & 1 \\
\end{tabular}
\caption{Rejection percentages under the null (parametric tests).}
\label{Param_tests_level}
\end{table}
\begin{table}[htb]
\centering
\begin{tabular}{c|ccc|ccc}
\multirow{2}{*}{Family}
& \multicolumn{3}{c|}{$\Tc_2^c$ (\ref{Tc_infty_C})}
& \multicolumn{3}{c}{$\bar \Tc_2^c$ (\ref{bar_Tc_KS_c})} \\
& bootPI & bootPC & bootNP & bootPI & bootPC & bootNP \\
\hline
Gaussian & 100 & 100 & 100 & 100 & 100 & 100 \\
Student & 100 & 100 & 100 & 100 & 100 & 100 \\
Clayton & 100 & 62 & 98 & 100 & 98 & 100 \\
Gumbel & 100 & 100 & 34 & 100 & 99 & 76 \\
Frank & 100 & 100 & 100 & 100 & 100 & 100 \\
\end{tabular}
\caption{Rejection percentages under the alternative (parametric tests).}
\label{Param_tests_power}
\end{table}
\FloatBarrier
\begin{figure}
\centering
\includegraphics[width = 11cm]{I_chi_rejection_pct_tauMax}
\caption{Rejection percentages for the statistic $\Ic_\chi$ (\ref{IcChi}) as a function of $\tau_{max}$: we use the Gaussian copula, with a conditional parameter $\theta(x)$ calibrated such that
the conditional Kendall's tau $\tau(x)$ satisfies $\tau(x) = \tau_{max} \cdot \Phi(x)$. Solid line: bootNP. Dashed line: bootPseudoInd. Dotted line: bootCond.}
\label{fig:I_chi_tau_max}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 11cm]{I2n_rejection_pct_tauMax}
\caption{Rejection percentages for the statistic $\Ic_{2,n}$ (\ref{Ic2n}) as a function of $\tau_{max}$: we use the Gaussian copula, with a conditional parameter $\theta(x)$ calibrated such that
the conditional Kendall's tau $\tau(x)$ satisfies $\tau(x) = \tau_{max} \cdot \Phi(x)$. Solid line: bootNP. Dashed line: bootPseudoInd. Dotted line: bootCond.}
\label{fig:I2n_tau_max}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 11cm]{T2c_rejection_pct_tauMax}
\caption{Rejection percentages for the statistic $\Tc_2^c$ (\ref{Tc_infty_C}) as a function of $\tau_{max}$: {\color{black} we use the Gaussian copula, with a conditional parameter $\theta(x)$ calibrated} such that
the conditional Kendall's tau $\tau(x)$ satisfies $\tau(x) = \tau_{max} \cdot \Phi(x)$. Solid line: bootNP. Dashed line: bootPI. Dotted line: bootPC.}
\label{fig:T2c_tau_max}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 11cm]{bar_T2c_rejection_pct_tauMax}
\caption{Rejection percentages for the statistic $\bar \Tc_2^c$ (\ref{bar_Tc_KS_c}) as a function of $\tau_{max}$: we use the Gaussian copula, with a conditional parameter $\theta(x)$ calibrated such that
the conditional Kendall's tau $\tau(x)$ satisfies $\tau(x) = \tau_{max} \cdot \lfloor m\Phi(x) \rfloor /m $. Solid line: bootNP. Dashed line: bootPI. Dotted line: bootPC.}
\label{fig:bar_T2c_tau_max}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 16cm]{QQplot_bar_T2c_bootNP_family_1}
\caption{QQ-plot of a sample of the test statistic $\bar \Tc_2^c$ and a sample of the bootstrap test statistic $(\bar \Tc_2^c)^{*}$ using the non-parametric bootstrap for the Gaussian copula, with different sample sizes and under $\bar \Hc_0^c$ (the conditional Kendall's tau is constant and equal to $0.5$).}
\label{fig:QQplot_bar_T2c_bootNP}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 12cm]{QQplot_bar_T2c_bootPI_family_1}
\caption{QQ-plot of a sample of the test statistic $\bar \Tc_2^c$ and a sample of the bootstrap test statistic $(\bar \Tc_2^c)^{**}$ using the parametric independent bootstrap for the Gaussian copula, with $n = 500$ and under $\bar \Hc_0^c$ (the conditional Kendall's tau is constant and equal to $0.5$).}
\label{fig:QQplot_bar_T2c_bootPI}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 12cm]{T0_CvM_reject_pct_h}
\caption{Rejection percentage for the statistic $\Tc^{0,m}_{CvM,n}$ (\ref{TcOCvMm}) with $m=20$ as a function of $h$. The red (resp. black) line corresponds to the alternative (resp. null) hypothesis.}
\label{fig:Rejection_T0_CvM_h}
\end{figure}
\FloatBarrier
\section{Conclusion}
We have provided an overview of the simplifying assumption problem, under a statistical point of view.
In the context of nonparametric or parametric conditional copula models (with unknown conditional marginal distributions), numerous testing procedures have been proposed.
We have developed the theory towards a slightly different but related approach, where ``box-type'' conditioning events replace pointwise ones. This opens a new field of research that is interesting per se.
Several new bootstrap procedures have been detailed, to evaluate p-values under the null hypothesis in both cases.
In particular, we have proved the validity of one of them (the ``parametric independent'' bootstrap scheme under $\bar\Hc_0$).
\mds
Clearly, there remains a lot of work. We have opened Pandora's box rather than provided definitive answers. Open questions are still numerous:
precise theoretical convergence results for our test statistics (and others!), validity of these new bootstrap schemes, bandwidth choices, empirical performances, etc.
All these dimensions would require further research. We have made a contribution to the landscape of problems related to the simplifying assumption, and proposed a working program for the whole copula community. | {"config": "arxiv", "file": "1612.07349/On_the_simplified_vine_assumption_2017-05-04.tex"} |
TITLE: Can AND, OR and NOT be used to represent any truth table?
QUESTION [0 upvotes]: There are $2^{2^n}$ truth tables for $n$ inputs. We know that NAND and NOR can each be used to represent every truth table in two inputs. What about three inputs? Is it possible or not? If not, why?
REPLY [3 votes]: Yes, any truth-function, no matter how complex, and no matter how many inputs, can be expressed with $\land$, $\lor$, and $\neg$.
We can prove this directly by generating a conjunction for each row where the function evaluates to true, and then disjuncting all of those conjunctions; a moment's thought will make it clear that the resulting expression captures the truth-function (if there is no row where the function evaluates to $T$, then just use the expression $P \land \neg P$).
For example, suppose we have a truth-function whose truth-conditions are given by the following table:
\begin{array}{ccc|c}
P&Q&R&f(P,Q,R)\\
\hline
T&T&T&F\\
T&T&F&T\\
T&F&T&F\\
T&F&F&T\\
F&T&T&T\\
F&T&F&F\\
F&F&T&F\\
F&F&F&F\\
\end{array}
This function is true in rows 2,4, and 5, and thus we generate the terms $P \land Q \land \neg R$, $P \land \neg Q \land \neg R$, and $\neg P \land Q \land R$ respectively. Disjuncting them gives us:
$$(P \land Q \land \neg R) \lor (P \land \neg Q \land \neg R) \lor (\neg P \land Q \land R)$$
This formula is said to be in Disjunctive Normal Form (DNF): it is a generalized disjunction, where each disjunct is a generalized conjunction of literals, and where a literal is either an atomic variable or the negation thereof.
Now, I think it should be clear from this example that you can do this for any truth-function. That is, any function can be represented using a DNF formula by generating a conjunction for each row where the function evaluates to $T$. For example, if the row is one where $P$ is True, $Q$ is True, and $R$ is false, say, then generate the expression $P \land Q \land \neg R$. Once you have an expression for all of these rows, then disjoin them all together for the final expression. And if there is no row where the function evaluates to $T$, then just use the expression $P \land \neg P$, which is in DNF (it can be seen as a generalized disjunction with only one disjunct ... which is a conjunction of literals).
Thus, for any truth-function, there is a DNF expression that captures that function, and DNF only uses $\neg$, $\land$, and $\lor$.
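If it helps, here is a small Python sketch (my own; the function names are made up) that mechanizes the DNF construction from a truth table:

    from itertools import product

    def dnf(f, names):                  # f: tuple of bools -> bool
        terms = []
        for row in product([True, False], repeat=len(names)):
            if f(*row):                 # one conjunction per True row
                lits = [n if v else "~" + n for n, v in zip(names, row)]
                terms.append("(" + " & ".join(lits) + ")")
        return " | ".join(terms) if terms else "(P & ~P)"

    # the table above is true exactly on the rows TTF, TFF, FTT:
    g = lambda P, Q, R: (P, Q, R) in [(True, True, False),
                                      (True, False, False),
                                      (False, True, True)]
    print(dnf(g, ["P", "Q", "R"]))
    # -> (P & Q & ~R) | (P & ~Q & ~R) | (~P & Q & R)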
Just for completeness, I also want to point out that there is something called the Conjunctive Normal Form (CNF): this is a generalized conjunction, where each conjunct is a generalized disjunction of literals.
Can we find an expression equivalent to this function that is in CNF? Yes. There are two ways to do this. First, we can take the existing DNF expression and transform it into an expression that is in CNF, using distribution of $\lor$ over $\land$. In this case, that would actually be a pretty sizeable formula, starting with:
$(P \lor P \lor \neg P) \land (P \lor P \lor Q) \land (P \lor P \lor R) \land (P \lor \neg Q \lor \neg P) \land (P \lor \neg Q \lor Q) \land (P \lor \neg Q \lor R) \land (P \lor \neg R \lor \neg P) \land ...$
(see what I do here? I systematically find all combinations of picking one literal from each of the three disjuncts .. it works like FOIL, if you are familiar with that)
However, another way to do this is to go back to the original truth-table, and to focus on the cases where the function evaluates to False, rather than True. That is, the function is False exactly when:
$$(P \land Q \land R) \lor (P \land \neg Q \land R) \lor (\neg P \land Q \land \neg R) \lor (\neg P \land \neg Q \land R) \lor (\neg P \land \neg Q \land \neg R)$$
is True, and that means that the function is true exactly when:
$$\neg [(P \land Q \land R) \lor (P \land \neg Q \land R) \lor (\neg P \land Q \land \neg R) \lor (\neg P \land \neg Q \land R) \lor (\neg P \land \neg Q \land \neg R)]$$
is True. By DeMorgan, this is equivalent to:
$$(\neg P \lor \neg Q \lor \neg R) \land (\neg P \lor Q \lor \neg R) \land (P \lor \neg Q \lor R) \land (P \lor Q \lor \neg R) \land (P \lor Q \lor R)$$
and that expression is in CNF.
Again, I think it should be clear from this example that you can do this for any truth-function. That is, any function can be represented using a CNF formula by generating a disjunction for each row where the function evaluates to $F$. For example, if the row is one where $P$ is True, $Q$ is True, and $R$ is false, say, then generate the expression $\neg P \lor \neg Q \lor R$ (for this is the negation of $P \land Q \land \neg R$). Once you have an expression for all of these rows, then conjunct them all together for the final expression. If there is no row where the function evaluates to $F$, then just use the expression $P \lor \neg P$, which is in CNF (it can be seen as a generalized conjunction with only one conjunct ... which is a disjunction of literals).
We can also give a proof on the basis of the expressive completeness of the $NAND$. Since $P \ NAND \ Q \Leftrightarrow \neg (P \land Q)$, we can simply take whatever expression built up from only $NAND$'s that captures some truth-function $f$, and change any of those $NAND$'s, with $\neg$'s and $\land$'s, and the result will be equivalent to the original, and thus also capture $f$. Notice that this in fact shows that $\{ \neg, \land \}$ is expressively complete, so we don't need any $\lor$'s (of course, that should be obvious: if $\{ \land, \lor, \neg \}$ is expressively complete, and given that $P \lor Q \Leftrightarrow \neg (\neg P \land \neg Q)$, we can change any $\lor$ in any expression capturing some truth-function $f$ by a bunch of $\land$'s and $\neg$'s, and capture that truth-function as well). And of course we can likewise show that $\{ \lor, \neg \}$ is complete, and that immediately follows from the $NOR$ being complete as well, since $P \ NOR \ Q \Leftrightarrow \neg (P \lor Q)$. | {"set_name": "stack_exchange", "score": 0, "question_id": 2646024}
TITLE: Are there any functions where differentiating the Taylor series yields an accurate approximation for the differentiated function?
QUESTION [0 upvotes]: Considering the Taylor series for $e^x$, you can differentiate it and get the same series. ie $$\frac{d}{dx} (1 + x + \frac{x^2}{2!} + ...) = 1 + x + ...$$
This is obviously a property of $e^x$, thus implying that differentiating the Taylor series yields the series for the differentiated function. Is this a valid property, or just a fluke because it's $e^x$? And does this hold true for other functions with Taylor series--can you differentiate other series and still get an accurate representation of the differentiated function? If you can explain the underlying analysis that would be very helpful too.
Thanks :)
REPLY [2 votes]: If $f(x) = \sum_{n=0}^{\infty}a_nx^n, x\in (-r,r),$ then
$$f'(x) = \sum_{n=1}^{\infty}na_nx^{n-1}, x\in (-r,r).$$
This is a standard result in the theory of Taylor series. | {"set_name": "stack_exchange", "score": 0, "question_id": 3619217} |
TITLE: Measurable sets of $\mathbb R^n$ forming unique absolutely continuous convex combinations?
QUESTION [1 upvotes]: If we consider a finite set $A\subset\mathbb R^n$, uniqueness of the convex decomposition of points in $A$ is equivalent to the absence of $\mu\neq0$ signed measure supported on $A$ such that $\mu(\mathbb R^n) = 0$ and,
$$
\int_{\mathbb R^n}x\mathrm d\mu(x)=0.
$$
My question is, what happens when $A$ is a measurable set of non-null measure and we restrict combinations to be absolutely continuous? More precisely:
Is there a Borel set $A \subset R^n$ of positive (Lebesgue)
measure such that there exists no $\mu\neq0$ signed measure
verifying $|\mu|\leq\lambda_A$ (denoting by $\lambda$ the Lebesgue measure
and by $\lambda_A$ its trace on $A$), $\mu(\mathbb R^n) = 0$ and,
$$\int_{\mathbb R^n}x\mathrm d\mu(x)=0?$$
Typically, as soon as $A$ contains an open set, there exists such $\mu$. On the other hand, $A$ does not need to contain an open set to have non-null measure.
REPLY [0 votes]: There is no such set. Given $A\subseteq\mathbb R^n$ of positive measure, we can pick arbitrarily many disjoint positive-measure subsets $A_j$, $j=1,\ldots ,N$, and consider measures of the form
$$
d\mu = \left( \sum_j c_j \chi_{A_j}\right)\, dx .
$$
The conditions we're trying to satisfy lead to a homogeneous linear system on the $c_j$. We have $N$ variables and $n+1$ equations. A homogeneous linear system with more variables than equations always has a non-trivial solution. | {"set_name": "stack_exchange", "score": 1, "question_id": 422781} |
TITLE: $H:=\{ g^2 : g \in G \}$ is a subgroup of $G$ $\implies $ $H$ is normal in $G$
QUESTION [1 upvotes]: Let $G$ be a group. If $H := \{ g^2 : g \in G\}$ is a subgroup of $G$, then how can one prove that $H$ is normal in $G$?
REPLY [6 votes]: If $h\in G$ then $h^2 \in H$, so for any $g\in G$ we have:
$$gh^2g^{-1} = (ghg^{-1})^2 \in H$$
REPLY [2 votes]: Hint: What is $(hgh^{-1})^n$ equal to, for any $n$? | {"set_name": "stack_exchange", "score": 1, "question_id": 894102} |
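A concrete illustration of the hints (sketch code, not part of the proofs above): in $S_3$ the set of squares is $A_3$, and closure under conjugation can be checked by brute force:

from itertools import permutations

def compose(s, t):  # (s o t)(i) = s(t(i)), permutations as tuples
    return tuple(s[t[i]] for i in range(len(t)))

def inverse(s):
    inv = [0] * len(s)
    for i, j in enumerate(s):
        inv[j] = i
    return tuple(inv)

G = list(permutations(range(3)))
H = {compose(g, g) for g in G}  # the set of squares, here the subgroup A_3
assert all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)
print("H =", H, "is closed under conjugation")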
TITLE: West German Mathematics Olympiad 1981
QUESTION [4 upvotes]: If $n=2^k$ then from a set of $2n-1$ elements we can select $n$ numbers such that their sum is divisible by $n$.
I divided it into two cases: first, if any $n$ of them have the same remainder mod $n$ then we are done; in the other case I am having difficulty.
What if $n$ is any natural number?
REPLY [8 votes]: Induction on $k$:
The claims is clear for $n=2^0$.
Let $k>0$, assume the claim holds for $n'=2^{k-1}$ and let $S$ be a set of $2n-1$ integers.
By induction hypothesis, we can pick $n'$ numbers from the first $2n'-1$ elements of $S$, such that their sum is a multiple of $n'$.
After removing these from $S$, we still have $2n-1-n'=3n'-1$, so from the first $2n'-1$ of these, we can again pick $n'$ such that their sum is a multiple of $n'$. After removing these as well, we are left with $2n-1-2n'=2n'-1$ numbers and again we can pick $n'$ elements such that their sum is a multiple of $n'$.
So now we have three disjoint subsets of $S$ of size $n'$ each and such that each sums to a multiple of $n'$. Note that a multiple of $n'$ is either a multiple of $n$ or an odd multiple of $n'$; one of these two types must occur at least twice among our three subsets. By combining two such like subsets, we arrive at a set of size $n$ with sum a multiple of $n$. | {"set_name": "stack_exchange", "score": 4, "question_id": 2049522} |
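For arbitrary $n$ the statement is the Erdős–Ginzburg–Ziv theorem. For small $n$ it can also be checked by brute force over residues (a sketch, feasible only for small $n$):

from itertools import combinations, combinations_with_replacement

n = 4
# divisibility only depends on the residues mod n, so it suffices to
# check all multisets of 2n - 1 residues
for multiset in combinations_with_replacement(range(n), 2 * n - 1):
    assert any(sum(pick) % n == 0 for pick in combinations(multiset, n)), multiset
print("verified for n =", n)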
TITLE: Conditional distribution of the sum of k, independent ordered draws
QUESTION [1 upvotes]: Suppose I make k independent draws from the same distribution (say uniform).
The distribution of the sum of these order statistics should equal the distribution of the sum of k independent random variables, am I right?
But what if I am also told that the highest of these draws has a value less than (or equal to) x? In other words, what is the expression for the distribution of the sum of k order statistics conditional on the highest of them not exceeding x?
REPLY [1 votes]: If you know that the highest draw is at most $x$, then you know that all $k$ draws are at most $x$, and otherwise they are arbitrary. So in effect you are sampling from your distribution conditioned on being at most $x$.
For example, if your initial distribution is uniform on $[0,1]$ and $x \in [0,1]$, then the expected sum of $k$ independent draws conditioned on the maximum being at most $x$ is the same as the expected sum of $k$ independent draws of a uniform $[0,x]$ variable, which is $kx/2$. | {"set_name": "stack_exchange", "score": 1, "question_id": 1277145} |
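A quick Monte Carlo sketch of the uniform case (parameters chosen arbitrarily for illustration):

import numpy as np

rng = np.random.default_rng(0)
k, x, trials = 5, 0.6, 200_000
samples = rng.uniform(size=(trials, k))
kept = samples[samples.max(axis=1) <= x]   # condition on the maximum being <= x
print(kept.sum(axis=1).mean(), k * x / 2)  # the two values should nearly agree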
TITLE: Generating infinite index subgroups of a free group
QUESTION [4 upvotes]: Let $F$ be a nonabelian finitely generated free group, let $H \leq F$ be a finitely generated subgroup of infinite index and let $x,y \in F \setminus H$. Must there be some $a \in F$ such that $[F : \langle H,a,xay\rangle] = \infty$ ?
REPLY [2 votes]: Take a finite subgraph of the Schreier graph that contains the Stallings graph of $H$ together with the paths for $x$ and $y^{-1}$ read from the base point. Passing to a conjugate, we may assume that some letter is not readable at the base point. Choose a word $a$ that is not readable on this finite graph, begins with the letter not readable at the base point, and ends with the inverse of that letter. Then we can sew $a$ on at the base point and still have a Stallings graph. Now sew $a$ from the end of $x$ to the end of $y^{-1}$ and perform Stallings foldings. Because $a$ cannot be read on the original graph, the folds should not result in a covering space. I'll try to write up the details later. | {"set_name": "stack_exchange", "score": 4, "question_id": 191855} |
\begin{document}
\begin{abstract}
Gröbner bases are a fundamental tool when studying ideals in multivariate
polynomial rings. More recently there has been a growing interest in
transferring techniques from the field case to other
coefficient rings, most notably Euclidean domains and principal ideal rings.
In this paper we will consider multivariate polynomial rings over Dedekind
domains. By generalizing methods from the theory of finitely generated
projective modules, we show that it is possible to describe Gröbner bases
over Dedekind domains in a way similar to the case of principal ideal
domains, both from a theoretical and algorithmic point of view.
\end{abstract}
\maketitle
\section{Introduction}
The theory of Gröbner bases, initiated by Buchberger~\cite{Buchberger1965} plays an
important role not only in mathematical disciplines like algorithmic commutative
algebra and algebraic geometry, but also in related areas of science and
engineering. Although the original approach of Buchberger was restricted to
multivariate polynomials with coefficients in a field,
Trinks~\cite{Trinks1978} and Zacharias~\cite{Zacharias1978} showed that by
generalizing the notions of $S$-polynomials and reduction, Gröbner bases can
also be constructed in the ring case. For coefficient rings that are principal
ideal domains, the approach to constructing Gröbner bases is very close to the
field case has attracted a lot of attention, see for example \cite{Pauer1988, KandriRody1988, Moller1988, Pan1989}, also \cite[Chapter 4]{Adams1994} or \cite[Chapter 10]{Becker1993}.
In this paper, we will investigate Gröbner bases over Dedekind domains, that
is, over integral domains which are locally discrete valuation rings. Despite
the prominent role of Dedekind domains as coefficient rings for example in
arithmetic geometry, not much is known in connection with the construction of
Gröbner bases. Our aim is to show that it is possible to improve upon the
generic algorithms for Noetherian domains. In particular, we will show that using
the notions of pseudo-polynomials and pseudo-Gröbner bases the approach comes
very close to that of principal ideal domains.
The idea of using so called pseudo-objects to interpolate between principal
ideal domains and Dedekind domains has already been successfully applied to the
theory of finitely generated projective modules. Recall that over a principal
ideal domain such modules are in fact free of finite rank. By using the Hermite
and Smith form, working with such modules is as easy as working with finite dimensional
vector spaces over a field. If the ring is merely a Dedekind domain, such
modules are in general not free, rendering the Hermite and Smith form useless. But
since the work of Steinitz~\cite{Steinitz1911, Steinitz1912} it has been known that these modules
are direct sums of projective submodules of rank one. In \cite{Cohen1996} (see
also \cite{Cohen2000}), based upon ideas already present in
\cite{Bosma1991}, a theory of pseudo-elements has been developed, which
enables an algorithmic treatment of this class of modules very close to the
case of principal ideal domains. In particular, a generalized Hermite form
algorithm is described, which allows for similar improvements as the classical
Hermite form algorithm in the principal ideal case, see also \cite{Biasse2017, Fieker2014}.
Now---in contrast to the setting of finitely generated projective modules just described---Gröbner bases do exist if the coefficient ring is a
Dedekind domain.
In~\cite{Adams1997} using a generalized version of Gröbner basis, the structure of ideals in univariate polynomial rings over Dedekind domains is studied.
Apart from that, nothing is published on how to exploit the structure of Dedekind
domains in the algorithmic study of ideals in multivariate polynomial rings.
Building upon the notion of
pseudo-objects, in this paper we will introduce pseudo-Gröbner bases, that will
interpolate more smoothly between the theory of Gröbner bases for Dedekind
domains and principal ideal domains.
Of course the hope is that one can apply more
sophisticated techniques from principal ideal domains to Dedekind domains, for example, signature-based algorithms as introduced in \cite{Eder2017}.
As an illustration of this idea, we prove a simple generalization of the product criterion for pseudo-polynomials.
We will also show how to use the pseudo-Gröbner basis to solve basic tasks from algorithmic commutative algebra,
including the computation of primes of bad reduction.
The paper is organized as follows.
In Section~2 we recall standard notions from multivariate polynomials and translate
them to the context of pseudo-polynomials.
This is followed by a generalization of Gröbner bases in Section~3, where we
present various characterizations of the so called pseudo-Gröbner bases.
In Section~4 by analyzing syzygies of pseudo-polynomials, we prove a variation of Buchberger's criterion.
As a result we obtain a simple to formulate algorithm for computing Gröbner bases.
We also use this syzygy-based approach to prove the generalized product criterion.
In Section~5 we consider the situation over a ring of integers of a number
field and address the omnipresent problem of quickly growing coefficients by
employing classical tools from algorithmic number theory.
In the final section we give some applications to classical problems in algorithmic
commutative algebra and the computation of primes of bad reduction.
\subsection*{Acknowledgments}
The author was supported by Project II.2 of SFB-TRR 195 `Symbolic Tools in
Mathematics and their Application'
of the German Research Foundation (DFG).
\begin{notation*}
Throughout this paper, we will use $R$ to denote a Dedekind domain, that is, a Noetherian integrally closed domain of Krull dimension one, and $K$ to denote its field of fractions.
Furthermore, we fix a multivariate polynomial ring $R[x] = R[x_1,\dotsc,x_n]$ and a monomial ordering $<$ on $R[x]$.
\end{notation*}
\section{Pseudo-elements and pseudo-polynomials}
In this section we recall basic notions from multivariate polynomials and
generalize them in the context of pseudo-polynomials over Dedekind domains.
\subsection{Multivariate polynomials.}
For $\alpha = (\alpha_1,\dotsc,\alpha_n) \in \N^n$, we denote by $x^\alpha$ the
monomial $x_1^{\alpha_1}\dotsm x_n^{\alpha_n}$. We call $\alpha$ the \textit{degree} of $f$ and denote it by $\deg(f)$.
A polynomial $f = c x^\alpha$ with $c \in R$ and $\alpha \in \N^n$ is called a \textit{term}.
For an arbitrary multivariate polynomial $f \in \sum_{\alpha \in \N^n} c_\alpha x^\alpha$ we denote by
$\deg(f) = \max_>\{ \alpha \in \N^n \mid c_\alpha \neq 0\}$ the \textit{degree} of $f$, by $\LM(f) = x^{\deg(f)}$ the \textit{leading monomial}, by $\LC(f) = c_{\deg(\alpha)}$ the \textit{leading coefficient} and by $\LT(f) = c_{\deg(f)}x^{\deg(f)}$ the \textit{leading term} of $f$.
\subsection{Pseudo-elements and pseudo-polynomials.}
A fractional ideal of $R$ is a non-zero finitely generated $R$-submodule of $K$.
Let now $V$ be a vector space over $K$ and $M$ an $R$-submodule of $V$ such that $KM = V$, that is, $M$ contains a $K$-basis of $V$.
Given a fractional ideal $\mathfrak a$ of $R$ and an element $v \in V$ we denote
by $\mathfrak av$ the set $\{ \alpha v \mid \alpha \in \mathfrak a \} \subseteq V$,
which is in fact an $R$-submodule of $V$.
\begin{definition}
A pair $(v, \mathfrak a)$ consisting of an element $v \in V$ and a fractional ideal $\mathfrak a$ of $R$ is called
a \textit{pseudo-element} of $V$. In case $\mathfrak a v \subseteq M$, we call $(v, \mathfrak a)$ a \textit{pseudo-element} of $M$.
\end{definition}
\begin{remark}
The notion of pseudo-objects goes back to Cohen~\cite{Cohen1996}, who introduced them to compute with finitely generated projective modules over Dedekind domains.
Note that in~\cite{Cohen2000} the $R$-submodule $\mathfrak a v$ itself is
defined to be a pseudo-element, whereas with our definition, this
$R$-submodule is only attached to the pseudo-element $(v, \mathfrak a)$.
We choose the slightly modified version to simplify the exposition and to ease notation.
\end{remark}
\begin{lemma}\label{lem:lem1}
Let $V$ be a $K$-vector space.
\begin{enumerate}
\item
For $v, w \in V$ and $\mathfrak a, \mathfrak b, \mathfrak c$ fractional ideals of $R$ we have
$\mathfrak a(\mathfrak b v) = (\mathfrak a \mathfrak b)v$ and $\mathfrak c(\mathfrak av + \mathfrak bw) = (\mathfrak c \mathfrak a)v + (\mathfrak c \mathfrak b)w$.
\item
Let $(v, \mathfrak a)$, $(v_i, \mathfrak a_i)_{1 \leq i \leq l}$ be
pseudo-elements of $V$. If $\mathfrak av \subseteq \sum_{1 \leq i
\leq l} \mathfrak a_i v_i$, then there
exist $a_i \in \mathfrak a_i\mathfrak a^{-1}$, $1 \leq i \leq l$, such that
$v = \sum_{1 \leq i \leq l} a_i v_i$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i): Clear. (ii): Using (i) and by multiplying with $\mathfrak a^{-1}$ we are reduced to the case where $\mathfrak a = R$, that is,
$v \in \sum_{1 \leq i \leq l} \mathfrak a_i v_i$. But then the assertion is clear.
\end{proof}
We will now specialize to the situation of multivariate polynomial rings, where additionally we have the $R[x]$-module structure.
For a fractional ideal $\mathfrak a$ of $R$ we will denote by $\mathfrak a[x]$ the ideal $\{ \sum_{\alpha} c_\alpha x^\alpha \mid c_\alpha \in \mathfrak a\}$ of $R[x]$.
\begin{lemma}\label{lem:rep}
The following hold:
\begin{enumerate}
\item
For fractional ideals $\mathfrak a, \mathfrak b, \mathfrak c$ of $R$ we have $\mathfrak a(\mathfrak b[x]) = (\mathfrak a \mathfrak b)[x]$ and $\mathfrak a(\mathfrak b[x] + \mathfrak c[x]) = (\mathfrak a\mathfrak b)[x] + (\mathfrak a \mathfrak c)[x]$.
\item
If $M$ is an $R[x]$-module and $(v, \mathfrak a)$ a pseudo-element, then $\langle \mathfrak a v \rangle_{R[x]} = \mathfrak a[x]v$.
\item
Let $M$ be an $R[x]$-module and $(v_i,\mathfrak a_i)_{1 \leq i \leq l}$ pseudo-elements of $M$ with $\langle \mathfrak a_i v_i \mid 1 \leq i \leq l\rangle_{R[x]} = M$. Given a pseudo-element $(v, \mathfrak a)$ of $M$, there exist $f_i \in \mathfrak a_i \mathfrak a^{-1}[x]$, $1 \leq i \leq l$, such that $v = \sum_{1 \leq i \leq l} f_i v_i$.
\end{enumerate}
\end{lemma}
\begin{proof}
Item~(i) follows from the distributive properties of ideal multiplication. Proving (ii), (iii) is analogous to Lemma~\ref{lem:lem1}.
\end{proof}
\begin{definition}
A \textit{pseudo-polynomial} of $R[x]$ is a pseudo-element of $R[x]$, that is, a pair $(f, \mathfrak f)$ consisting of a polynomial $f \in K[x]$ and a fractional ideal $\mathfrak f$ of $R$ such that $\mathfrak f \cdot f \subseteq R[x]$.
We call $\mathfrak f \LC(f) \subseteq R$ the \textit{leading coefficient} of $(f, \mathfrak f)$ and denote it by $\LC(f, \mathfrak f)$.
The set $\mathfrak f[x] f \subseteq R[x]$ is called the \textit{ideal generated} by $(f, \mathfrak f)$ and is denoted by $\langle(f, \mathfrak f)\rangle$.
We say that the pseudo-polynomial $(f, \mathfrak f)$ is \textit{zero}, if $f = 0$.
\end{definition}
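To illustrate the definition, consider $R = \Z[\sqrt{-5}]$, the prime ideal $\mathfrak p = (2, 1 + \sqrt{-5})$ and the polynomial $f = \tfrac 12 (1 + \sqrt{-5})\,x \in K[x]$. Since $2 \cdot \tfrac 12 (1 + \sqrt{-5}) = 1 + \sqrt{-5}$ and $(1 + \sqrt{-5}) \cdot \tfrac 12 (1 + \sqrt{-5}) = -2 + \sqrt{-5}$ both lie in $R$, we have $\mathfrak p f \subseteq R[x]$, so $(f, \mathfrak p)$ is a pseudo-polynomial of $R[x]$, although $f$ itself does not have coefficients in $R$. Using $\mathfrak p^2 = (2)$ and $(1 + \sqrt{-5}) = \mathfrak p \cdot (3, 1 + \sqrt{-5})$ one finds $\LC(f, \mathfrak p) = \tfrac 12 (1 + \sqrt{-5}) \mathfrak p = (3, 1 + \sqrt{-5})$, an integral ideal of $R$, in accordance with the following lemma.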
\begin{lemma}
Let $(f, \mathfrak f)$ be a pseudo-polynomial of $R[x]$. Then the following hold:
\begin{enumerate}
\item
The leading coefficient $\LC(f, \mathfrak f)$ is an integral ideal of $R$.
\item
We have $\langle \mathfrak f f \rangle_{R[x]} = \mathfrak f[x] f$.
\end{enumerate}
\end{lemma}
\begin{proof}
Clear.
\end{proof}
\section{Reduction and pseudo-Gröbner bases}
At the heart of the construction of Gröbner bases lies a generalization of the Euclidean division in univariate polynomial rings.
In the context of pseudo-polynomials this takes the following form.
\begin{definition}[Reduction]
Let $(f, \mathfrak f)$ be a non-zero pseudo-polynomial, let $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ be a set of non-zero pseudo-polynomials of $R[x]$ and let $J = \{ 1 \leq i \leq l \mid \LM(g_i) \text{ divides } \LM(f) \}$.
We say that $(f, \mathfrak f)$ \textit{can be reduced modulo} $G$ if $\LC(f, \mathfrak f) \subseteq \sum_{i \in J} \LC(g_i,
\mathfrak g_i)$.
In case $G = \{(g, \mathfrak g)\}$ consists of a single pseudo-polynomial, we say that $(f, \mathfrak f)$ \textit{can be reduced modulo} $(g, \mathfrak g)$.
We define $(f, \mathfrak f)$ to be \textit{minimal with respect to $G$}, if it cannot be reduced modulo $G$.
\end{definition}
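As a simple example, let $R = \Z$, $G = \{(6x, \Z), (10x, \Z)\}$ and $(f, \mathfrak f) = (4x, \Z)$. Then $\LC(f, \mathfrak f) = 4\Z \subseteq 6\Z + 10\Z = 2\Z$, so $(f, \mathfrak f)$ can be reduced modulo $G$, although it cannot be reduced modulo any single element of $G$; writing $4 = (-1) \cdot 6 + 1 \cdot 10$, the one step reduction introduced below yields $(4x - (-6x + 10x), \Z) = (0, \Z)$. In contrast to the field case, reduction thus makes use of the sum of the leading coefficient ideals.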
\begin{lemma}\label{lem:red}
Let $(f, \mathfrak f)$ be a non-zero pseudo-polynomial, $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ a set of
non-zero pseudo-polynomials of $R[x]$ and $J = \{ 1 \leq i \leq l \mid \LM(g_i) \text{ divides }
\LM(f) \}$. Then $(f, \mathfrak f)$ can be reduced modulo $G$ if and only if there exist
$a_i \in \mathfrak g_i \mathfrak f^{-1}$, $i \in J$, such that $\LC(f) =
\sum_{i \in J} a_i \LC(g_i)$.
\end{lemma}
\begin{proof} Set $\mathfrak c = \sum_{i \in J} \LC(g_i, \mathfrak g_i)$.
First assume that $(f, \mathfrak f)$ can be reduced modulo $G$, that is $\LC(f, \mathfrak f) = \mathfrak f \LC(f) \subseteq \mathfrak c$.
Hence $\LC(f) \in \mathfrak c \mathfrak f^{-1} = \sum_{i \in J} \LC(g_i,\mathfrak
g_i) \mathfrak f^{-1}$ and there exist $b_i \in \mathfrak g_i
\mathfrak f^{-1}\LC(g_i)$, $i \in J$, such that $\LC(f) = \sum_{i \in J}
b_i$. Then the elements $a_i = b_i/\LC(g_i) \in \mathfrak g_i \mathfrak
f^{-1}$, $i \in J$, satisfy the claim.
On the other hand, if $\LC(f) = \sum_{i \in J} a_i \LC(g_i)$ for $a_i \in \mathfrak g_i \mathfrak f^{-1}$, then
\[ \LC(f, \mathfrak f) = \mathfrak f \LC(f) \subseteq \sum_{i \in J} \mathfrak f a_i \LC(g_i) \subseteq \sum_{i \in J} \LC(g_i, \mathfrak g_i). \qedhere \]
\end{proof}
\begin{lemma}\label{lem:div}
Let $(f, \mathfrak f)$ and $(g, \mathfrak g)$ be two non-zero pseudo-polynomials of $R[x]$.
Then the following are equivalent:
\begin{enumerate}
\item
$(f, \mathfrak f)$ can be reduced modulo $(g, \mathfrak g)$.
\item
$\mathfrak f[x] \LT(f) \subseteq \mathfrak g[x] \LT(g)$,
\item
$\mathfrak f \LC(f) \subseteq \mathfrak g \LC(g)$ and $\LM(g)$ divides $\LM(f)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) $\Rightarrow$ (ii): By assumption $\LM(g)$ divides $\LM(f)$ and $\LC(f) = \alpha \LC(g)$ for some $\alpha \in \mathfrak g \mathfrak f^{-1}$.
Hence
\[ \mathfrak f[x] \LT(f) = \mathfrak f[x] \LC(f) x^{\deg(f)} = \mathfrak f[x] \alpha x^{\deg(f) - \deg(g)} \LT(g) \subseteq \mathfrak f[x] \alpha \LT(g) \subseteq \mathfrak g[x] \LT(g) .\]
(ii) $\Rightarrow$ (iii): Let $0 \neq \mu \in \mathfrak f \subseteq \mathfrak f[x]$. Since $\mu \LT(f) \in \mathfrak g[x] \LT(g)$ it follows that $\LM(g)$ divides $\LM(f)$ and $\mathfrak f \LC(f) \subseteq \mathfrak g \LC(g)$.
(iii) $\Rightarrow$ (i): Clear.
\end{proof}
\begin{definition}
Let $(f, \mathfrak f)$ be a non-zero pseudo-polynomial and $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ a set of pseudo-polynomials of $R[x]$; assume that $(f,\mathfrak f)$ can be reduced modulo $G$ and that $(a_i)_{i \in J}$ are as in Lemma~\ref{lem:red}. Then we call
$(f - \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i, \mathfrak f)$ a \textit{one step reduction} of $(f, \mathfrak f)$ with respect to $G$ and
we write
\[ (f, \mathfrak f) \xrightarrow{G} \Bigl(f - \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i, \mathfrak f\Bigr). \]
\end{definition}
\begin{lemma}\label{lem:propred}
Let $(h, \mathfrak f)$ be a one step reduction of $(f, \mathfrak f)$ with respect to $G = \{ (g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$.
Denote by $I = \langle G \rangle$ the ideal of $R[x]$ generated by $G$. Then the
following hold:
\begin{enumerate}
\item
The pair $(h, \mathfrak f)$ is a pseudo-polynomial of $R[x]$.
\item
We have $\mathfrak f[x](f - h) \subseteq I$.
\item
We have $\langle (f, \mathfrak f) \rangle \subseteq I$ if and only if $\langle (h, \mathfrak f) \rangle \subseteq I$.
\end{enumerate}
\end{lemma}
\begin{proof}
By definition there exist $J \subseteq \{1,\dotsc,l\}$ and $a_i \in \mathfrak g_i \mathfrak f^{-1}$, $i \in J$, with $\LC(f) = \sum_{i\in J} a_i \LC(g_i)$ and $h = f - \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i$.
(i): We have
\[ \mathfrak f h = \mathfrak f\Bigl(f - \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i\Bigr) \subseteq \mathfrak f f + \sum_{i \in J} \mathfrak f a_i x^{\deg(f) - \deg(g_i)} g_i \subseteq \mathfrak f f + \sum_{i \in J} \mathfrak g_i[x] g_i \subseteq R[x]. \]
(ii): Since $f - h = \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i$ and $a_i \in \mathfrak g_i \mathfrak f^{-1}$ it is clear that $\mathfrak f a_i x^{\deg(f) - \deg(g_i)} g_i \subseteq I$.
(iii): If $\mathfrak f[x]f \subseteq I$, then $\mathfrak f[x]h = \mathfrak f[x]\bigl(f - \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i\bigr) \subseteq I$, since $\mathfrak f a_i \subseteq \mathfrak g_i$.
On the other hand, if $\mathfrak f[x] h \subseteq I$, then
\[ \mathfrak f[x] f = \mathfrak f[x]\Bigl(h + \sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i\Bigr) \subseteq \mathfrak f[x]h + \mathfrak f[x]\Bigl(\sum_{i \in J} a_i x^{\deg(f) - \deg(g_i)} g_i\Bigr) \subseteq I.\qedhere \]
\end{proof}
\begin{definition}
Let $(f, \mathfrak f)$, $(h, \mathfrak f)$ and $G = \{(g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ be non-zero pseudo-polynomials of $R[x]$. We say that
\textit{$(f, \mathfrak f)$ reduces to $(h, \mathfrak f)$ modulo $G$} if there exist pseudo-polynomials $(h_j, \mathfrak f)$, $1 \leq j \leq m$, such that
\[ (f, \mathfrak f) = (h_1,\mathfrak f) \xrightarrow{G} (h_2,\mathfrak f) \xrightarrow{G} \dotsb \xrightarrow{G} (h_m,\mathfrak f) = (h,\mathfrak f). \]
In this case we write $(f,\mathfrak f) \xrightarrow{G}_+ (h,\mathfrak f)$. (The relation $\xrightarrow{G}_+$ is thus the reflexive-transitive closure of $\xrightarrow{G}$.)
\end{definition}
\begin{lemma}\label{lem:2}
If $(f, \mathfrak f)$, $(h, \mathfrak f)$ and $G = \{(g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ are non-zero pseudo-polynomials with
$(f, \mathfrak f) \xrightarrow{G}_+ (h, \mathfrak f)$,
then $\mathfrak f[x](f - h) \subseteq I := \langle G \rangle$. Moreover $\langle (f, \mathfrak f) \rangle \subseteq I$ if and only if $\langle (h, \mathfrak f ) \rangle \subseteq I$.
\end{lemma}
\begin{proof}
Note that $f - h = (h_1 - h_2) + (h_2 - h_3) + \dotsb + (h_{m-1} - h_m)$. Hence the claim follows from Lemma~\ref{lem:propred}~(ii) and (iii).
\end{proof}
\begin{remark}\label{rem:1}
If $(h, \mathfrak f)$ is a one step reduction of $(f, \mathfrak f)$, then $\deg(h) < \deg(f)$ and there exist terms $h_i \in (\mathfrak g_i \mathfrak f^{-1})[x]$, $i \in J$, such that $f - h = \sum_{i \in J} h_i g_i$.
Applying this iteratively we see that if $(h, \mathfrak f)$ is a pseudo-polynomial of $R[x]$ with $(f, \mathfrak f) \xrightarrow{G}_+ (h, \mathfrak f)$ and $h \neq f$, then $\deg(h) < \deg(f)$ and there exist $h_i \in (\mathfrak g_i \mathfrak f^{-1})[x]$, $1 \leq i \leq l$, such that $f - h = \sum_{1 \leq i \leq l} h_i g_i$.
Moreover in both cases we have $\deg(f) = \max_{1 \leq i \leq l} (\deg(h_i g_i))$.
\end{remark}
\begin{definition}
Let $(f, \mathfrak f)$ and $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ be pseudo-polynomials.
The \textit{leading term} $\LT(f, \mathfrak f)$ is defined to be $\mathfrak f \LT(f)$.
Moreover we define the \textit{leading term ideal} of $(f, \mathfrak f)$ and of $G$ as $\Lt(f, \mathfrak f) = \mathfrak f[x]\LT(f) = \langle \LT(f, \mathfrak f) \rangle_{R[x]}$ and $\Lt(G) = \sum_{i = 1}^{l} \Lt(g_i,\mathfrak g_i)$ respectively.
If $F \subseteq R[x]$ is a set of polynomials, then we define $\Lt(F) = \langle \LT(f) \mid f \in F \rangle_{R[x]}$.
\end{definition}
We can now characterize minimality in terms of leading term ideals.
\begin{lemma}\label{lem:3}
Let $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ be non-zero pseudo-polynomials of $R[x]$.
A non-zero pseudo-polynomial $(f, \mathfrak f)$ is minimal with respect to $G$, if and only if $\Lt(f,\mathfrak f) \not\subseteq \Lt(G)$.
\end{lemma}
\begin{proof}
Denote by $J$ the set $\{ i \in \{1,\dotsc,l\} \mid \LM(g_i) \text{ divides } \LM(f) \}$.
Assume first that $(f, \mathfrak f)$ is not minimal, that is, the pseudo-polynomial can be reduced modulo $G$.
Then there exist $a_i \in \mathfrak g_i \mathfrak f^{-1}$, $i \in J$, such that $\LC(f) = \sum_{i \in J} a_i \LC(g_i)$.
For every $i \in J$ there exists a monomial $x^{\alpha_i}$ with $\LM(g_i) x^{\alpha_i} = \LM(f)$.
Hence
\[ \LT(f) = \LM(f)\LC(f) = \sum_{i \in J} a_i \LM(f) \LC(g_i) = \sum_{i \in J} a_i x^{\alpha_i} \LT(g_i). \]
Thus it holds that $\mathfrak f \LT(f) = \sum_{i \in J} \mathfrak f a_i x^{\alpha_i} \LT(g_i) \subseteq \sum_{i \in J} \mathfrak g_i[x] \LT(g_i) \subseteq \Lt(G)$.
This implies $\Lt(f, \mathfrak f) = \mathfrak f[x] \LT(f) \subseteq \Lt(G)$, as claimed.
Now assume that $\Lt(f, \mathfrak f) \subseteq \Lt(G)$.
Let $\alpha \in \mathfrak f$. Since $\alpha \LT(f) \in \Lt(f, \mathfrak f) \subseteq \Lt(G)$, there exist
$h_i \in \mathfrak g_i[x]$, $1 \leq i \leq l$, with $\alpha \LT(f) = \sum_{i = 1}^{l} h_i \LT(g_i)$.
Without loss of generality we may assume that each $h_i$ is a term, say $h_i = a_i x^{\alpha_i}$, where $a_i \in \mathfrak g_i$.
Denote by $J'$ the set $\{i \in \{1,\dotsc,l\} \mid x^{\alpha_i} \LM(g_i) = \LM(f)\}$. Hence we have
\[ \alpha \LT(f) = \sum_{i \in J'} a_i x^{\alpha_i} \LT(g_i) = \sum_{i \in J'} a_i x^{\alpha_i} \LM(g_i) \LC(g_i). \]
Comparing coefficients this yields $\alpha \LC(f) = \sum_{i \in J'} a_i \LC(g_i)$.
Thus $\LC(f, \mathfrak f) = \mathfrak f \LC(f) \subseteq \sum_{i \in J'} \mathfrak g_i \LC(g_i) = \sum_{i \in J'} \LC(g_i, \mathfrak g_i)$.
As $J' \subseteq J$, it follows that $(f, \mathfrak f)$ can be reduced modulo $G$.
\end{proof}
\begin{theorem}\label{thm:rep}
Let $(f, \mathfrak f)$ and $G = \{ (g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$ be pseudo-polynomials.
There exist a pseudo-polynomial $(h, \mathfrak f)$ which is minimal with respect to $G$ and $h_i \in (\mathfrak g_i \mathfrak f^{-1})[x]$, $1 \leq i \leq l$, such that $(f, \mathfrak f) \xrightarrow{G}_+ (h, \mathfrak f)$,
\[ f - h = \sum_{i=1}^{l} h_i g_i, \]
and $\deg(f) = \max\bigl(\max_{1 \leq i \leq l}\deg(h_i g_i), \deg(h)\bigr)$.
\end{theorem}
\begin{proof}
Follows immediately from Lemma~\ref{lem:3} and Remark~\ref{rem:1}.
\end{proof}
We can now generalize the characterization of Gröbner bases to pseudo-Gröbner bases.
\begin{theorem}\label{thm:defgroeb}
Let $I$ be an ideal of $R[x]$ and $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ non-zero pseudo-polynomials of $I$. Then the following are equivalent:
\begin{enumerate}
\item
$\Lt(I) = \Lt(G)$;
\item
For a pseudo-polynomial $(f, \mathfrak f)$ of $R[x]$ we have
$\langle (f, \mathfrak f) \rangle \subseteq I$ if and only if $(f, \mathfrak f)$ reduces to $0$ modulo $G$.
\item
For every pseudo-polynomial $(f, \mathfrak f)$ of $R[x]$ with $\langle (f, \mathfrak f) \rangle \subseteq I$ there exist $h_i \in (\mathfrak g_i \mathfrak f^{-1})[x]$, $1 \leq i \leq l$, such that $f = \sum_{i=1}^{l} h_i g_i$ and $\LM(f) = \max_{1 \leq i \leq l}(\LM(h_ig_i))$.
\item
If $(a_{ij})_{1 \leq j \leq n_i}$ are ideal generators of $\mathfrak g_i$ for $1 \leq i \leq l$,
then the set
\[ \{ a_{ij}g_i \mid 1 \leq i \leq l, 1 \leq j \leq n_i\} \]
is a Gröbner basis of $I$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) $\Rightarrow$ (ii): If $(f, \mathfrak f) \xrightarrow{G}_+ 0$, then Lemma~\ref{lem:2} implies that $\langle( f, \mathfrak f)\rangle \subseteq I$.
Now assume $\langle( f, \mathfrak f)\rangle \subseteq I$.
By Theorem~\ref{thm:rep} there exists a pseudo-polynomial $(r, \mathfrak f)$, which is minimal with respect to $G$, such that
$(f, \mathfrak f) \xrightarrow{G}_+ (r, \mathfrak f)$.
If $r \neq 0$, then Lemma~\ref{lem:3} shows that
$\Lt(r, \mathfrak f) \not\subseteq \Lt(G) = \Lt(I)$.
As $\langle (f, \mathfrak f)\rangle \subseteq I$ we also have
$\langle (r, \mathfrak f)\rangle \subseteq I$ by Lemma~\ref{lem:2} and hence
$\Lt(r, \mathfrak f) \subseteq \Lt(I)$, a contradiction.
Thus $(f, \mathfrak f) \xrightarrow{G}_+ 0$.
(ii) $\Rightarrow$ (iii): Clear from Remark~\ref{rem:1}.
(iii) $\Rightarrow$ (i): We just have to show that $\Lt(I) \subseteq \Lt(G)$. Let $\langle(f, \mathfrak f)\rangle \subseteq I$ and write $f = \sum_{i=1}^{l} h_i g_i$ with $h_i \in (\mathfrak g_i \mathfrak f^{-1})[x]$ and $\LM(f) = \max_{1 \leq i \leq l}(\LM(h_i g_i))$.
Thus $\LT(f) = \sum_{i \in J} \LT(h_i)\LT(g_i)$, where $J = \{ 1 \leq i \leq l \mid \LM(g_ih_i) = \LM(f)\}$.
Since $\LT(h_i) \in (\mathfrak g_i \mathfrak f^{-1})[x]$, for every $\alpha \in \mathfrak f$ we therefore have
\[ \alpha \LT(f) \in \sum_{i \in J} \mathfrak g_i[x] \LT(g_i), \text{ that is, } \Lt(f, \mathfrak f) = \mathfrak f[x] \LT(f) \subseteq \sum_{i \in J} \Lt(g_i, \mathfrak g_i) \subseteq \Lt(G). \]
(iv) $\Leftrightarrow$ (i): This follows from the fact that
\[ \Lt(G) = \Lt(\{ a_{ij}g_i \mid 1 \leq i \leq l, 1 \leq j \leq n_i\}). \qedhere \]
\end{proof}
\begin{definition}\label{def:grob}
Let $I$ be an ideal of $R[x]$. A family $G$ of pseudo-polynomials of $R[x]$ is called a \textit{pseudo-Gröbner basis} of $I$ (with respect to $<$), if $G$ satisfies any of the equivalent conditions of Theorem~\ref{thm:defgroeb}.
\end{definition}
\begin{remark}\label{rem:rem2}
\hfill
\begin{enumerate}
\item
If one replaces pseudo-polynomials by ordinary polynomials in Theorem~\ref{thm:defgroeb}, one recovers the notion of a Gröbner basis of an ideal $I \subseteq R[x]$.
\item
Since $R$ is Noetherian, an ideal $I$ of $R[x]$ has a Gröbner basis $\{g_1,\dotsc,g_l\}$ in the ordinary sense~\cite[Corollary 4.1.17]{Adams1994}.
Recall that this means that $\Lt(g_1,\dotsc,g_l) = \langle \LT(g_1),\dotsc,\LT(g_l) \rangle = \Lt(I)$.
As $\Lt(g_1,\dotsc,g_l)$ is equal to the leading term ideal of $G = \{ (g_i, R) \mid 1 \leq i \leq l\}$, we see at once that $I$ also has a pseudo-Gröbner basis.
\item
In view of Theorem~\ref{thm:defgroeb}~(iv), the notion of pseudo-Gröbner basis is a generalization of~\cite{Adams1997} from the univariate to the multivariate case.
\end{enumerate}
\end{remark}
Recall that a generating set $G$ of an ideal $I$ in $R[x]$ is called a strong Gröbner basis,
if for every non-zero $f \in I$ there exists $g \in G$ such that $\LT(g)$ divides $\LT(f)$.
It is well known that in the case of principal ideal rings a strong Gröbner basis always exists.
We show that when passing to pseudo-Gröbner bases, we can recover this property for Dedekind domains.
\begin{definition}
Let $(f, \mathfrak f)$ and $(g, \mathfrak g)$ be two non-zero pseudo-polynomials in $R[x]$. We say that
$(f,\mathfrak f)$ \textit{divides} $(g, \mathfrak g)$ if $g \mathfrak g[x] \subseteq f \mathfrak f[x]$.
Let $I \subseteq R[x]$ be an ideal. A set $G = \{ (g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$ of pseudo-polynomials in $I$
is a \textit{strong pseudo-Gröbner basis}, if for every non-zero pseudo-polynomial $(f, \mathfrak f)$ in $I$ there
exists $i \in \{1,\dotsc,l\}$ such that $\Lt(f, \mathfrak f) \subseteq \Lt(g_i, \mathfrak g_i)$.
\end{definition}
We now fix non-zero pseudo-polynomials $G = \{(g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$.
For a subset $J \subseteq \{1,\dotsc,l\}$
we define $x_J = \lcm(\LM(g_i) \mid i \in J)$ and $\mathfrak c_J = \sum_{i \in J}\mathfrak g_i \LC(g_i)$.
Let $1 = \sum_{i \in J} a_i \LC(g_i)$ with $a_i \in \mathfrak c_J^{-1} \mathfrak g_i$ for $i \in J$
and define $f_J = \sum_{i \in J} a_i \frac{x_J}{\LM(g_i)} g_i$.
Note that by construction $\LT(f_J) = x_J$.
Finally recall that $J \subseteq \{1,\dotsc,l\}$ is saturated, if for $i \in \{1,\dotsc,l\}$ with $\LM(g_i) \mid x_J$ we have $i \in J$.
\begin{theorem}
Assume that $G = \{(g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$ is a pseudo-Gröbner basis of the ideal $I \subseteq R[x]$.
Then
\[ \{(f_J, \mathfrak c_J) \mid J \subseteq \{1,\dotsc,l\} \text{ saturated} \} \]
is a strong pseudo-Gröbner basis of $I$.
\end{theorem}
\begin{proof}
Let $(f, \mathfrak f)$ be a non-zero pseudo-polynomial in $I$ and let $J = \{ i \in \{1,\dotsc,l\} \mid \LM(g_i) \text{ divides } \LM(f) \}$.
Then $J$ is saturated and since $G$ is a pseudo-Gröbner basis of $I$ we have
\[ \LC(f, \mathfrak f) = \LC(f)\mathfrak f \subseteq \sum_{i \in J} \LC(g_i)\mathfrak g_i = \mathfrak c_J = \LC(f_J, \mathfrak c_J). \]
Furthermore $\LM(f_J) = x_J$ divides $\LM(f)$ and thus $\Lt(f, \mathfrak f) \subseteq \Lt(f_J,\mathfrak c_J)$ by Lemma~\ref{lem:div}.
\end{proof}
\begin{corollary}
Every ideal $I$ of $R[x]$ has a strong pseudo-Gröbner basis.
\end{corollary}
\section{Syzygies}
We already saw in Remark~\ref{rem:rem2}~(ii) that the existence of pseudo-Gröbner bases is a trivial consequence of the fact that Gröbner bases exist whenever the coefficient ring is Noetherian.
The actual usefulness of pseudo-polynomials comes from the richer structure of their syzygies, which can be used to characterize and compute Gröbner bases (see~\cite{Moller1988}).
In this section we will show that, similar to the case of principal ideal rings, the syzygy modules of pseudo-polynomials have a basis corresponding to generalized $S$-polynomials.
\subsection{Generating sets}
Consider a family $G = \{(g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ of non-zero pseudo-polynomials and set $I = \langle G \rangle = \sum_{1 \leq i \leq l} \mathfrak g_i[x] g_i$. The map
\begin{align*} \varphi \colon \mathfrak g_1[x] \times \dots \times \mathfrak g_l[x] \longrightarrow I, \quad (h_1,\dots,h_l) \longmapsto \sum_{i=1}^l h_i g_i
\end{align*}
is a well-defined surjective morphism of $R[x]$-modules.
\begin{definition}
With the notation of the preceding paragraph we call $\ker(\varphi)$ the \textit{syzygies} of $G$ and denote it by $\Syz(G)$.
A \textit{pseudo-syzygy} of $G$ is a pseudo-element of $\Syz(G)$, that is, a pair $((h_1,\dotsc,h_l), \mathfrak h)$ consisting of
polynomials $(h_1,\dotsc,h_l) \in K[x]^l$ such that $\mathfrak h \cdot (h_1,\dotsc,h_l) \subseteq \Syz(G)$.
Equivalently, $\sum_{1 \leq i \leq l} h_i g_i = 0$ and $\mathfrak h h_i \subseteq \mathfrak g_i[x]$ for all $1 \leq i \leq l$.
Assume that the polynomials $g_1, \dotsc,g_l$ are terms. Then we call the pseudo-syzygy $((h_1,\dotsc,h_l), \mathfrak h)$ \textit{homogeneous} if $h_i$ is a term for $1 \leq i \leq l$ and there exists $\alpha \in \N^n$ with $\LM(h_i g_i) = x^\alpha$ for all $1 \leq i \leq l$.
\end{definition}
In the following we will denote by $e_i \in K[x]^l$ the element with components $(\delta_{ij})_{1 \leq j \leq l}$, where $\delta_{ii} = 1$ and $\delta_{ij} = 0$ if $j \neq i$.
\begin{lemma}\label{lem:syzgenset}
Let $G = \{(g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$ be non-zero pseudo-polynomials. Then $\Syz(G)$ has a finite generating set of homogeneous pseudo-syzygies.
\end{lemma}
\begin{proof}
Since $R$ is Noetherian, so is $R[x]$ by Hilbert's basis theorem. In particular $R[x]^l$ is a Noetherian $R[x]$-module. Since the $\mathfrak g_i$ are fractional $R$-ideals, there exists a non-zero $\alpha \in R$ such that $\alpha \mathfrak g_i \subseteq R$ for all $1 \leq i \leq l$. In particular $\mathfrak g_i[x] \subseteq (\frac 1 \alpha R)[x] = (\frac 1 \alpha) R[x]$.
Thus
\[ \mathfrak g_1[x] \times \dots \times \mathfrak g_l[x] \subseteq (1/\alpha) (R[x])^l \cong R[x]^l \]
is a Noetherian $R[x]$-module as well. Thus the $R[x]$-submodule $\Syz(G)$ is finitely generated.
A standard argument shows that $\Syz(G)$ is generated by finitely many homogeneous syzygies $v_1,\dotsc,v_m \in \Syz(G)$.
Hence $\Syz(G) = \langle (v_1,R), \dotsc, (v_m, R)\rangle$ is generated by finitely many homogeneous pseudo-syzygies.
\end{proof}
We can now characterize pseudo-Gröbner bases in terms of syzygies.
\begin{theorem}\label{thm:groebcond2}
Let $G = \{(g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$ be non-zero pseudo-polynomials of $R[x]$ and $B$ a finite generating set of homogeneous pseudo-syzygies of $\Syz((\LT(g_1),\mathfrak g_1),\dots,(\LT(g_l),\mathfrak g_l))$.
Then the following are equivalent:
\begin{enumerate}
\item
$G$ is a pseudo-Gröbner basis of $\langle G \rangle$.
\item
For all $((h_1,\dotsc,h_l),\mathfrak h) \in B$ we have $(\sum_{1 \leq i \leq l} h_i g_i, \mathfrak h) \xrightarrow{G}_+ (0, \mathfrak h)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) $\Rightarrow$ (ii): Since $\mathfrak h h_i \subseteq \mathfrak g_i[x]$ by definition, we know that
$\mathfrak h (\sum_{1 \leq i \leq l} h_i g_i) \subseteq \sum_{1 \leq i \leq l} \mathfrak g_i[x] \cdot g_i = \langle G \rangle$.
Hence the element reduces to zero by Theorem~\ref{thm:defgroeb}~(ii).
(ii) $\Rightarrow$ (i):
We show that $G$ is a pseudo-Gröbner basis by verifying Theorem~\ref{thm:defgroeb}~(iii).
To this end, let $(f, \mathfrak f)$ be a pseudo-polynomial contained in $\langle G \rangle$.
By Lemma~\ref{lem:rep} there exist elements $u_i \in (\mathfrak g_i \mathfrak f^{-1})[x]$, $1 \leq i \leq l$, such that
$f = \sum_{1 \leq i \leq l} u_i g_i$.
We need to show that there exists such a linear combination with $\LM(f) = \max_{1 \leq i \leq l} \LM(u_i g_i)$.
Let $x^\alpha = \max_{1 \leq i \leq l} (\LM(u_i g_i))$ with $\alpha \in \N^n$,
and assume that $x^\alpha > \LM(f)$. We will show that $f$ has a representation with strictly smaller degree.
Denote by $S$ the set $\{ 1 \leq i \leq l \mid \LM(u_ig_i) = x^\alpha\}$.
As $x^\alpha > \LM(f)$ we necessarily have $\sum_{i \in S} \LT(u_i) \LT(g_i) = 0$.
In particular $(\sum_{i \in S} e_i \LT(u_i), \mathfrak f)$ is a homogeneous pseudo-syzygy of $\Syz((\LT(g_1),\mathfrak g_1),\dots,(\LT(g_l),\mathfrak g_l))$ (since $\mathfrak f \cdot \LT(u_i) \subseteq \mathfrak g_i[x]$).
Let now $B = \{((h_{1j},\dots,h_{lj}), \mathfrak h_j) \mid 1 \leq j \leq r\}$ be the finite generating set of homogeneous pseudo-syzygies.
By Lemma~\ref{lem:rep} we can find $f_j \in (\mathfrak h_j \mathfrak f^{-1})[x]$ with
\[ \sum_{i \in S} e_i \LT(u_i) = \sum_{j=1}^r f_j \sum_{i=1}^l e_i h_{ij}. \]
Since each $\LT(u_i)$ is a term, we may assume that each $f_j$ is also a term.
Thus for all $1 \leq i \leq l$, $1 \leq j \leq r$ with $f_j h_{ij}$ non-zero we also have
\[ x^\alpha = \LM(f_j)\LM(h_{ij})\LM(g_i). \]
By assumption, for all $1 \leq j \leq r$ the pseudo-polynomial $(\sum_{1 \leq i \leq l} h_{ij} g_i, \mathfrak h_j)$ reduces to zero with respect to $G$.
Hence by Theorem~\ref{thm:rep} we can find $v_{ij} \in (\mathfrak
g_i\mathfrak h_j^{-1})[x]$, $1 \leq i \leq l$, such that $\sum_{1 \leq i \leq l}
h_{ij}g_i = \sum_{1 \leq i \leq l} v_{ij}g_i$ and
\begin{align}\label{eq} \max_{1 \leq i \leq l} \LM(v_{ij}g_i) = \LM\Bigl(\sum_{i=1}^l h_{ij} g_i\Bigr) < \max_{1 \leq i \leq l} \LM(h_{ij} g_i). \end{align}
The last inequality follows from $\sum_{1 \leq i \leq l} h_{ij} \LT(g_i) = 0$.
For the element $f$ we started with this implies
\[ f = \sum_{i = 1}^l u_i g_i = \sum_{i\in S} \LT(u_i)g_i + \sum_{i \in S} (u_i - \LT(u_i)) g_i + \sum_{i \not\in S} u_i g_i. \]
The first term is equal to
\[ \sum_{i\in S} \LT(u_i)g_i = \sum_{j = 1}^r f_j \sum_{i = 1}^l h_{ij} g_i = \sum_{j = 1}^r \sum_{i = 1}^l f_j h_{ij}g_i = \sum_{j=1}^r \sum_{i=1}^l f_j v_{ij} g_i = \sum_{i=1}^l (\sum_{j=1}^r f_j v_{ij})g_i.\]
Now $f_j \in (\mathfrak h_j \mathfrak f^{-1})[x]$, $v_{ij} \in (\mathfrak g_i \mathfrak h_j^{-1})[x]$, hence $f_j v_{ij} \in (\mathfrak g_i \mathfrak f^{-1})[x]$.
Moreover from~\eqref{eq} we have
\[ \max_{i, j} \LM(f_j)\LM(v_{ij})\LM(g_i) < \max_j \max_i \LM(f_j)\LM(h_{ij}g_i) = x^\alpha. \]
Thus we have found polynomials $\tilde u_i \in (\mathfrak g_i\mathfrak f^{-1})[x]$, $1 \leq i \leq l$, such that
$\max_{1 \leq i \leq l} \LM(\tilde u_i g_i) < \max_{1 \leq i \leq l} \LM(u_i g_i)$ and $f = \sum_{1 \leq i \leq l}\tilde u_i g_i$.
\end{proof}
\begin{prop}\label{prop:redtomonic}
Let $(g_i,\mathfrak g_i)_{1 \leq i \leq l}$ be non-zero pseudo-polynomials of
$R[x]$ and $(a_i)_{1 \leq i \leq l} \in (K^\times)^l$. Consider the map $\Phi \colon
K[x]^l \to K[x]^l$, $\sum_{1 \leq i \leq l} e_i h_i \mapsto \sum_{1\leq i
\leq l} e_i \frac{h_i}{a_i}$. Then the following hold:
\begin{enumerate}
\item
The restriction of $\Phi$ induces an isomorphism
\[ \Syz((g_1,\mathfrak g_1),\dotsc,(g_l,\mathfrak g_l)) \longrightarrow \Syz\left(\Bigl(a_1g_1,\frac{\mathfrak g_1}{a_1}\Bigr),\dotsc,\Bigl(a_l g_l, \frac{\mathfrak g_l}{a_l}\Bigr)\right) \]
of $R[x]$-modules.
\item
If $(h, \mathfrak h)$ is a pseudo-syzygy of $\Syz((g_i,\mathfrak g_i) \mid 1 \leq i \leq l)$, then
$(\Phi(h), \mathfrak h)$ is a pseudo-syzygy of $\Syz((a_i g_i, \frac{\mathfrak g_i}{a_i}) \mid 1 \leq i \leq l)$ and $\Phi(\langle (h, \mathfrak h) \rangle) = \langle(\Phi(h), \mathfrak h)\rangle$.
\end{enumerate}
\end{prop}
\begin{proof}
(i): The map $\Phi$ is clearly $K[x]$-linear. We now show that the image of the syzygies $\Syz((g_i,\mathfrak g_i)_{1 \leq i \leq l})$ under $\Phi$ is contained in $\Syz((a_i g_i, \frac{\mathfrak g_i}{a_i})_{1 \leq i \leq l})$. To this end let $(h_1,\dotsc,h_l) \in \Syz((g_i, \mathfrak g_i)_{1 \leq i \leq l})$, that is, $\sum_{1 \leq i \leq l} h_i g_i = 0$ and $h_i \in \mathfrak g_i[x]$. But then $\sum_{1 \leq i \leq l} \frac{h_i}{a_i} a_i g_i = 0$ and $\frac{h_i}{a_i} \in (\frac{\mathfrak g_i}{a_i})[x]$, that is, $(\frac{h_1}{a_1},\dotsc,\frac{h_l}{a_l}) \in \Syz((a_i g_i, \frac{\mathfrak g_i}{a_i})_{1 \leq i \leq l})$.
As the inverse map is given by $(h_1,\dotsc,h_l) \mapsto (a_1 h_1,\dotsc,a_l h_l)$, the claim follows.
(ii): Follows at once from (i).
\end{proof}
\subsection{Buchberger's algorithm}
\begin{theorem}\label{thm:syzgen}
Let $(a_i x^{\alpha_i}, \mathfrak g_i)_{1 \leq i \leq l}$ be non-zero pseudo-polynomials, where each polynomial is a term. For $1 \leq i, j \leq l$ we define the pseudo-element
\[ s_{ij} =\left(\left( \frac{\lcm(x^{\alpha_i}, x^{\alpha_j})}{x^{\alpha_i}} \frac{1}{a_i} e_i - \frac{\lcm(x^{\alpha_i}, x^{\alpha_j})}{x^{\alpha_j}} \frac{1}{a_j} e_j \right), a_i \mathfrak g_i \cap a_j \mathfrak g_j\right) \]
of $K[x]^l$ and for $1 \leq k \leq l$ we set $S_k = \Syz((a_i x^{\alpha_i}, \mathfrak g_i)_{1 \leq i \leq k})$.
Then the following hold:
\begin{enumerate}
\item
For $1 \leq i, j \leq l$, $i \neq j$, the syzygies $\Syz((a_i x^{\alpha_i}, \mathfrak g_i), (a_j x^{\alpha_j}, \mathfrak g_j))$ are generated by $s_{ij}$.
\item
If $B_{k - 1}$ is a set of pseudo-syzygies generating $S_{k - 1}$, then
\[ B = \{ ((h, 0), \mathfrak h) \mid (h, \mathfrak h) \in B_{k - 1} \} \cup \{ s_{ik} \mid 1 \leq i \leq k - 1\} \]
is a set of pseudo-syzygies generating $S_k$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:redtomonic} we are reduced to the monic case, that is, $a_i = 1$ for $1 \leq i \leq l$.
(i): It is clear that $s_{ij}$ is a pseudo-syzygy of $((x^{\alpha_i}, \mathfrak g_i), (x^{\alpha_j}, \mathfrak g_j))$.
Let now $((h_i, h_j), \mathfrak h)$ be a homogeneous pseudo-syzygy with $h_i =
b_i x^{\beta_i}$, $h_j = b_j x^{\beta_j}$, $\mathfrak h b_i \subseteq \mathfrak
g_i$ and $\mathfrak h b_j \subseteq \mathfrak g_j$. We may further assume
that $b_i \neq 0 \neq b_j$. In particular $x^{\alpha_i}x^{\beta_i} = x^{\alpha_j} x^{\beta_j}$
and we can write $x^{\beta_i} = \lcm(x^{\alpha_i}, x^{\alpha_j})/x^{\alpha_i} \cdot x^\beta$, $x^{\beta_j} =
\lcm(x^{\alpha_i}, x^{\alpha_j})/x^{\alpha_j} \cdot x^{\beta}$ for some monomial $x^\beta$.
We obtain
\[(h_i, h_j) = x^\beta \left(b_i \frac{\lcm(x^{\alpha_i}, x^{\alpha_j})}{x^{\alpha_i}}, b_j \frac{\lcm(x^{\alpha_i}, x^{\alpha_j})}{x^{\alpha_j}}\right)
= x^\beta b_i \left(\frac{\lcm(x^{\alpha_i}, x^{\alpha_j})}{x^{\alpha_i}}, - \frac{\lcm(x^{\alpha_i}, x^{\alpha_j})}{x^{\alpha_j}}\right), \]
where the last equality follows from $b_i + b_j = 0$.
As $b_i \mathfrak h \subseteq \mathfrak g_i$, $b_i \mathfrak h = b_j \mathfrak h \subseteq \mathfrak g_j$ we obtain $b_i \mathfrak h \subseteq \mathfrak g_i \cap \mathfrak g_j$.
Thus $\langle ((h_i, h_j), \mathfrak h) \rangle_{R[x]} \subseteq \langle s_{ij} \rangle_{R[x]}$.
The claim now follows from Lemma~\ref{lem:syzgenset}.
(ii):
We start again with a homogeneous pseudo-syzygy $(h, \mathfrak h) = ((h_1,\dotsc,h_{k}), \mathfrak h)$ of $S_k$ of degree $x^\beta$.
We write $h_i = b_i x^{\beta_i}$ with $\mathfrak h b_i \subseteq \mathfrak g_i$, $1 \leq i \leq k$.
Since in case $b_k = 0$ we have that $((h_1,\dotsc,h_{k-1}), \mathfrak h)$ is a pseudo-syzygy in $S_{k-1}$, we can assume that $b_k \neq 0$.
Let $J = \{ i \mid 1 \leq i \leq k - 1, \ b_i \neq 0\}$.
Since $(h, \mathfrak h)$ is homogeneous, we have $x^{\beta_i} x^{\alpha_i} = x^\beta$ for all $i \in J \cup \{ k \}$ and in particular $\lcm(x^{\alpha_i}, x^{\alpha_k}) \mid x^\beta$ for all $i \in J$.
Furthermore we have $b_k = - \sum_{i \in J} b_i \in \sum_{i \in J} \langle b_i \rangle_R$ and hence
$\mathfrak h b_k \subseteq \sum_{i \in J} \mathfrak h b_i \subseteq \sum_{i \in J} \mathfrak g_i$.
Since at the same time it holds that $\mathfrak h b_k \subseteq \mathfrak g_k$, we conclude that
$\mathfrak h b_k \subseteq (\sum_{i \in J} \mathfrak g_i) \cap \mathfrak g_k = \sum_{i \in J} (\mathfrak g_i \cap \mathfrak g_k)$.
Hence there exist $c_i \in (\mathfrak g_i \cap \mathfrak g_k) \mathfrak h^{-1}$, $i \in J$, such that $b_k = - \sum_{i \in J} c_i$.
For $1 \leq i, j \leq k$ let us denote $\lcm(x^{\alpha_i}, x^{\alpha_j})$ by $x^{\alpha_{ij}}$.
Now as $x^{\beta}/x^{\alpha_{ik}} \cdot x^{\alpha_{ik}}/x^{\alpha_i} = x^{\beta_i}$ we obtain
\begin{align*}
b_k x^{\beta_k} e_k = \sum_{i \in J} - c_i \frac{x^{\beta}}{x^{\alpha_k}} e_k &= \sum_{i \in J} - c_i \frac{x^{\beta}}{x^{\alpha_{ik}}} \frac{x^{\alpha_{ik}}}{x^{\alpha_k}} e_k
\\ &= \sum_{i \in J} c_i \frac{x^\beta}{x^{\alpha_{ik}}} \left( \frac{x^{\alpha_{ik}}}{x^{\alpha_i}} e_i - \frac{x^{\alpha_{ik}}}{x^{\alpha_k}} e_k\right) - \sum_{i \in J} c_i x^{\beta_i} e_i.
\end{align*}
Hence
\[ h = \sum_{i=1}^{k} e_i b_i x^{\beta_i} = \sum_{i=1}^{k-1} e_i b_i x^{\beta_i}
- \sum_{i \in J} c_i x^{\beta_i} e_i
+ \sum_{i \in J} c_i \frac{x^\beta}{x^{\alpha_{ik}}} \left( \frac{x^{\alpha_{ik}}}{x^{\alpha_i}} e_i - \frac{x^{\alpha_{ik}}}{x^{\alpha_k}} e_k \right). \]
We set
\[ \tilde{h} = \sum_{i=1}^{k-1} e_i b_i x^{\beta_i} - \sum_{i \in J} c_i x^{\beta_i} e_i \quad\text{and}\quad \tilde{\tilde h} = \sum_{i \in J} c_i \frac{x^\beta}{x^{\alpha_{ik}}} \left( \frac{x^{\alpha_{ik}}}{x^{\alpha_i}} e_i - \frac{x^{\alpha_{ik}}}{x^{\alpha_k}} e_k \right). \]
By construction, for all $i \in J$ we have $\mathfrak h c_i \subseteq \mathfrak g_i \cap
\mathfrak g_k$. Together with $J \subseteq \{ 1,\dotsc,k-1\}$ this implies
$\langle (\tilde{\tilde h}, \mathfrak h) \rangle_{R[x]} \subseteq \langle
s_{ik} \mid 1 \leq i \leq k - 1 \rangle_{R[x]}$.
Let $\Phi \colon \sum_{1 \leq i \leq k} e_i h_i \mapsto \sum_{1 \leq i \leq k} h_i x^{\alpha_i}$ be the map defining $S_k$.
As $h, \tilde{\tilde{h}} \in \ker(\Phi)$, the same holds for $\tilde h$.
Using again the property $\mathfrak h c_i \subseteq \mathfrak g_i \cap \mathfrak g_k \subseteq \mathfrak g_i$ we conclude that $(\tilde h, \mathfrak h)$ is a pseudo-syzygy of $(x^{\alpha_i}, \mathfrak g_i)_{1 \leq i \leq k - 1}$.
In particular $\langle ((\tilde h, 0), \mathfrak h) \rangle_{R[x]} \subseteq \langle ((h, 0), \mathfrak h) \mid (h, \mathfrak h) \in B_{k - 1}\rangle_{R[x]}$.
Invoking again Lemma~\ref{lem:syzgenset}, this proves the claim.
\end{proof}
\begin{definition}
Let $(f, \mathfrak f)$, $(g, \mathfrak g)$ be two non-zero pseudo-polynomials of $R[x]$. We call
\[ \left(\left( \frac{\lcm(\LM(f), \LM(g))}{\LM(f)} \frac{1}{\LC(f)} f - \frac{\lcm(\LM(f), \LM(g))}{\LM(g)} \frac{1}{\LC(g)} g \right), \LC(f)\mathfrak f \cap \LC(g)\mathfrak g\right) \]
the \textit{S-polynomial} of $(f, \mathfrak f)$, $(g, \mathfrak g)$ and denote it by
$\spoly((f, \mathfrak f), (g, \mathfrak g))$.
\end{definition}
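For example, over $R = \Z$ with the lexicographic ordering $x > y$, the $S$-polynomial of $(3x^2 + y, \Z)$ and $(2xy + 1, \Z)$ is
\[ \Bigl( \tfrac 13 y (3x^2 + y) - \tfrac 12 x (2xy + 1), \; 3\Z \cap 2\Z \Bigr) = \Bigl( \tfrac 13 y^2 - \tfrac 12 x, \; 6\Z \Bigr), \]
which is indeed a pseudo-polynomial, since $6 \cdot (\tfrac 13 y^2 - \tfrac 12 x) = 2y^2 - 3x \in \Z[x, y]$.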
We can now give the analogue of the classical Buchberger criterion in the case of Dedekind domains.
\begin{corollary}\label{cor:buchcrit}
Let $G = \{ (g_i, \mathfrak g_i) \mid 1 \leq i \leq l\}$ be non-zero pseudo-polynomials of $R[x]$.
Then $G$ is a pseudo-Gröbner basis of $\langle G \rangle$ if and only if $\spoly((g_i, \mathfrak g_i), (g_j,\mathfrak g_j))$ reduces to $0$ modulo $G$ for all $1 \leq i < j \leq l$.
\end{corollary}
\begin{proof}
Applying Theorem~\ref{thm:syzgen}~(ii) inductively, using (i) as the base case, shows that the set $\{ s_{ij} \mid 1 \leq i < j \leq l\}$ is a set of homogeneous pseudo-syzygies generating $\Syz((\LT(g_1),\mathfrak g_1),\dotsc,(\LT(g_l),\mathfrak g_l))$.
The claim now follows from Theorem~\ref{thm:groebcond2}.
\end{proof}
\begin{algorithm}\label{alg:groeb}
Given a family $F = (f_i, \mathfrak f_i)_{1 \leq i \leq l}$ of non-zero pseudo-polynomials, the following steps return a pseudo-Gröbner basis $G$ of $\langle F \rangle$.
\begin{enumerate}
\item
We initialize $\tilde G$ as $\{ ((f_i, \mathfrak f_i), (f_j, \mathfrak f_j)) \mid 1 \leq i < j \leq l \}$ and $G = F$.
\item
While $\tilde G \neq \emptyset$, repeat the following steps:
\begin{enumerate}
\item
Pick $((f, \mathfrak f), (g, \mathfrak g)) \in \tilde G$ and compute $(h, \mathfrak h)$ minimal with respect to $G$ such that $\spoly((f, \mathfrak f), (g, \mathfrak g)) \xrightarrow{G}_+ (h, \mathfrak h)$.
\item
If $h \neq 0$, set $\tilde G = \tilde G \cup \{((f, \mathfrak f), (h, \mathfrak h)) \mid (f, \mathfrak f) \in G \}$ and $G = G \cup \{ (h, \mathfrak h) \}$.
\end{enumerate}
\item
Return $G$.
\end{enumerate}
\end{algorithm}
\begin{proof}[Algorithm~\ref{alg:groeb} is correct]
By Corollary~\ref{cor:buchcrit} it is sufficient to show that the algorithm terminates.
But termination follows as in the field case by considering the ascending chain of leading
term ideals $\Lt(G)$ (in the Noetherian ring $R[x]$) and using Lemma~\ref{lem:3}.
\end{proof}
\subsection{Product criterion}
For Gröbner basis computations a bottleneck of Buchberger's algorithm
is the reduction of the $S$-polynomials and the number of $S$-polynomials one has to consider.
Buchberger himself gave criteria under which certain $S$-polynomials will reduce to $0$.
In \cite{Moller1988, Lichtblau2012} these criteria have been adapted to coefficient rings that are principal ideal rings and Euclidean domains respectively.
We will now show that the product criterion can be easily translated to the setting of pseudo-Gröbner bases.
Recall that in the case $R$ is a principal ideal domain, the product criterion reads as follows: If $f, g$ are non-zero polynomials in $R[x]$ such that $\GCD(\LC(f), \LC(g)) = 1$ and $\GCD(\LM(f), \LM(g)) = 1$, then the $S$-polynomial $\spoly(f, g)$ reduces to zero modulo $\{f, g\}$.
\begin{theorem}
Let $(f, \mathfrak f)$, $(g, \mathfrak g)$ be pseudo-polynomials of $R[x]$ such that $\LM(f)$ and $\LM(g)$ are coprime in $K[x]$ and $\LC(f, \mathfrak f)$ and $\LC(g, \mathfrak g)$ are coprime ideals of $R$. Then the $S$-polynomial $\spoly((f, \mathfrak f), (g, \mathfrak g))$ reduces to $0$ modulo $\{(f, \mathfrak f), (g, \mathfrak g)\}$.
\end{theorem}
\begin{proof}
Denote by $f' = f - \LT(f)$ and $g' = g - \LT(g)$ the tails of $f$ and $g$ respectively.
We consider three cases.
In the first case, let both $f$ and $g$ be terms. Then their $S$-polynomial will be $0$ by definition.
Consider next the case in which $f$ is a term and $g$ is not.
Then a quick calculation, using that the coprimality of $\LC(f)\mathfrak f$ and $\LC(g)\mathfrak g$ implies $\LC(f)\mathfrak f \cap \LC(g)\mathfrak g = \LC(f)\mathfrak f \cdot \LC(g)\mathfrak g$, shows that
\[ (s, \mathfrak s) = \spoly((f, \mathfrak f), (g, \mathfrak g)) = \left(-\frac{1}{\LC(f)\LC(g)} g' f, \; \LC(f) \mathfrak f \cdot \LC(g) \mathfrak g\right). \]
We want to show that $(s, \mathfrak s)$ reduces modulo $\{(f, \mathfrak f)\}$.
Since $\LM(f)$ divides $\LM(s)$ by definition it is sufficient to show that $\LC(s, \mathfrak s) \subseteq \LC(f, \mathfrak f)$, which is equivalent to
$\LC(g') \LC(f) \mathfrak f \mathfrak g \subseteq \LC(f) \mathfrak f$.
But this follows from $\LC(g')\mathfrak g \subseteq R$.
Hence $(s, \mathfrak s)$ reduces modulo $(f, \mathfrak f)$ to
\[ \left(s + \frac{\LT(g')}{\LC(g)\LC(f)} f, \LC(f)\mathfrak f \cdot \LC(g) \mathfrak g\right) = \left(-\frac{1}{\LC(f)\LC(g)}(g' - \LT(g'))f, \LC(f)\mathfrak f \cdot \LC(g) \mathfrak g\right). \]
Applying this procedure recursively, we see that $(s, \mathfrak s)$ reduces to $0$ modulo $\{(f, \mathfrak f)\}$.
Now consider the case where $f$ and $g$ are both not terms, that is, $f' \neq 0 \neq g'$.
Then the $S$-polynomial of $(f, \mathfrak f)$ and $(g, \mathfrak g)$ is equal to
\[ (s, \mathfrak s) = \left( \frac{\LM(g)}{\LC(f)} f - \frac{\LM(f)}{\LC(g)} g, \; \LC(f) \mathfrak f \LC(g) \mathfrak g \right)= \left(\frac{1}{\LC(f)\LC(g)} (f' g - g' f), \; \LC(f) \mathfrak f \LC(g) \mathfrak g\right). \]
Since $\LM(f)$ and $\LM(g)$ are coprime, we have $\LM(f'g) \neq \LM(g'f)$ and therefore $\LM(s)$ is either $\LM(f' g)$ or $\LM(g' f)$.
In particular $\LM(s)$ is either a multiple of $\LM(f)$ or $\LM(g)$.
If $\LM(s) = \LM(g' f)$,
then $\LC(s) = -\LC(g')/\LC(g)$ and $\LC(s, \mathfrak s) = \LC(g') \LC(f) \mathfrak f \mathfrak g$.
As in the second case, $(s, \mathfrak s)$ reduces to
\begin{align*} &\left(\frac{1}{\LC(f)\LC(g)} (f' g - g' f) + \frac{\LT(g')}{\LC(g)\LC(f)} f, \LC(f)\mathfrak f \LC(g)\mathfrak g\right)\\
= &\left(\frac{1}{\LC(f)\LC(g)} ( f' g - (g' - \LT(g'))f), \LC(f)\mathfrak f \LC(g)\mathfrak g\right),
\end{align*}
and similar in the other case.
Note that again, the leading monomial of $f'g - (g' - \LT(g')) f$ is a multiple of $\LM(f)$ or $\LM(g)$.
Inductively this shows that $(s, \mathfrak s) \xrightarrow{\{(f,\mathfrak f), (g, \mathfrak g)\}}_+ 0$.\qedhere
\end{proof}
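As an illustration, take $R = \Z$ with the lexicographic ordering $x > y$ and consider $(f, \mathfrak f) = (2x + y, \Z)$ and $(g, \mathfrak g) = (3y^2 + 1, \Z)$: the leading monomials $x$ and $y^2$ are coprime and $2\Z$, $3\Z$ are coprime ideals. The $S$-polynomial is
\[ \Bigl( \tfrac 12 y^2 (2x + y) - \tfrac 13 x (3y^2 + 1), \; 6\Z \Bigr) = \Bigl( \tfrac 12 y^3 - \tfrac 13 x, \; 6\Z \Bigr), \]
and one checks directly that it reduces to $(\tfrac 12 y^3 + \tfrac 16 y, 6\Z)$ modulo $(f, \mathfrak f)$ and then to $0$ modulo $(g, \mathfrak g)$, as predicted by the theorem.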
\section{Coefficient reduction}
Although in contrast to $\Q[x]$ the naive Gröbner basis computation of
an ideal $I$ of $\Z[x]$ is free of denominators, the problem of quickly growing
coefficients is still present.
In case a non-zero element $N \in I \cap \Z$ is known this problem can be
avoided: By adding $N$ to the generating set under consideration, all
intermediate results can be reduced modulo $N$, leading to tremendous improvements in runtime, see~\cite{Eder2018}.
In this section we will describe a similar strategy for the computation of
pseudo-Gröbner bases in case the coefficient ring is the ring of integers of a
finite number field. Although this is
quite similar to the integer case, we now have to deal with the growing size of
the coefficients of polynomials themselves as well as with the size of the
coefficient ideals.
\subsection{Admissible reductions}
We first describe the reduction operations that are allowed during a Gröbner
basis computation for arbitrary Dedekind domains.
\begin{prop}\label{prop:red}
Let $R$ be a Dedekind domain and $(f, \mathfrak f)$ a non-zero pseudo-polynomial of $R[x]$.
\begin{enumerate}
\item
If $(g, \mathfrak g)$ is a pseudo-polynomial of $R[x]$ with $\mathfrak f
f = \mathfrak g g$, then $(f, \mathfrak f)$ reduces to $0$ modulo $(g,
\mathfrak g)$.
\item
Write $f = \sum_{1 \leq i \leq d} c_{\alpha_i} x^{\alpha_i}$ with $c_{\alpha_i} \neq 0$.
Assume that $g = \sum_{1 \leq i \leq d} \bar c_{\alpha_i} x^{\alpha_i} \in R[x]$ is a polynomial and $\mathfrak N$ a fractional ideal of
$R$ such that $c_{\alpha_i} - \bar c_{\alpha_i} \in
\mathfrak N \mathfrak f^{-1}$ for $1 \leq i \leq d$.
Then $(f, \mathfrak f)$ reduces to $0$ modulo $\{(g,
\mathfrak f), (1, \mathfrak N)\}$.
\end{enumerate}
\end{prop}
\begin{proof}
(i): By assumption $\LM(f) = \LM(g)$. Moreover, as $\frac{\LC(f)}{\LC(g)} \in
\mathfrak g \mathfrak f^{-1}$ we see that $(f, \mathfrak f)$ reduces to
\[ \left(f - \frac{\LC(f)}{\LC(g)}\frac{\LM(f)}{\LM(g)} g, \mathfrak f\right) = (0, \mathfrak f). \]
(ii):
We first consider the case that $\LM(f) \neq \LM(g)$. By assumption this implies that $\LC(f) \in \mathfrak N \mathfrak f^{-1}$ and
$(f, \mathfrak f)$ reduces to $(f - \LC(f)\LM(f), \mathfrak f)$ modulo $(1, \mathfrak N)$.
Since we also have $(f - \LC(f)\LM(f)) - g \in \mathfrak N\mathfrak f^{-1}[x]$,
we may iterate and thus assume that either $f - \LC(f)\LM(f) = 0$, in which case we are finished, or
$\LM(f) = \LM(g)$.
In the latter case, we use $\LC(f) - \LC(g) \in \mathfrak N\mathfrak f^{-1}$ and
$\LC(f) = 1 \cdot \LC(g) + (\LC(f) - \LC(g))\cdot 1$
to conclude that $(f, \mathfrak f)$ reduces to $(\tilde f, \mathfrak f)$ modulo $\{(g, \mathfrak f), (1, \mathfrak N)\}$,
where $\tilde f = f - g - (\LC(f) - \LC(g))\LM(f)$.
Since the polynomial $\tilde f$ satisfies $\tilde f \in \mathfrak N\mathfrak f^{-1}[x]$, it reduces to $0$ modulo $(1, \mathfrak N)$.
\end{proof}
Since our version of Buchberger's algorithm rests on S-polynomials reducing to $0$ (see Corollary~\ref{cor:buchcrit}),
the previous result immediately implies the correctness of the following modification of Algorithm~\ref{alg:groeb}.
\begin{corollary}\label{cor:red}
Assume that $F = (f_i,\mathfrak f_i)_{1 \leq i \leq l}$ is a family of pseudo-polynomials such that $\langle F \rangle$ contains a non-zero ideal $\mathfrak N$ of $R$.
After adding $(1, \mathfrak N)$ to $F$, include in Algorithm~\ref{alg:groeb} the following step after (a):
\begin{enumerate}
\item[(a')]
Let $(g_1, \mathfrak g_1)$ be a non-zero pseudo-polynomial
with $\mathfrak g_1 g_1 = \mathfrak h h$.
Now let $g_1 = \sum_{i}c_{\alpha_i}x^{\alpha_i}$ with $c_{\alpha_i} \neq 0$.
Find a polynomial $g_2 = \sum_{i}\bar c_{\alpha_i} x^{\alpha_i}$ with $c_{\alpha_i} - \bar c_{\alpha_i} \in \mathfrak N\mathfrak g_1^{-1}$ for all $i$ and replace $(h, \mathfrak h)$ by $(g_2, \mathfrak g_1)$.
\end{enumerate}
Then the resulting algorithm is still correct.
\end{corollary}
\subsection{The case of rings of integers.}
It remains to describe how to use the previous results to bound the size of the intermediate pseudo-polynomials.
Since this question is meaningless in the general setting of Dedekind domains, we now restrict to the case where $R$ is the ring of integers of a number field $K/\mathbf Q$.
We assume that $I \subseteq R[x]$ is an ideal which contains a non-zero ideal $\mathfrak N$ of $R$.
In view of Proposition~\ref{prop:red}, we want to solve the following two problems for a given non-zero pseudo-polynomial $(f, \mathfrak f)$ of $R[x]$.
\begin{enumerate}
\item
Find a pseudo-polynomial $(g, \mathfrak g)$ of $R[x]$ with $\mathfrak g$ small such that $\mathfrak f f = \mathfrak g g$.
\item
Find a pseudo-polynomial $(g, \mathfrak f)$ of $R[x]$, such that $g$ has small coefficients, every monomial of $g$ is a monomial of $f$, and $f - g \in \mathfrak N \mathfrak f^{-1}[x]$.
\end{enumerate}
We will now translate this problem to the setting of pseudo-elements in
projective $R$-modules of finite rank, where the analogous problems
are already solved in the context of generalized Hermite form algorithms.
To this end, let $f = \sum_{1 \leq i \leq d} c_{\alpha_i} x^{\alpha_i}$, $c_{\alpha_i} \neq 0$, and consider
\[ \pi \colon K[x] \longrightarrow K^d, \, \sum_{\alpha \in \N^n} c_\alpha x^{\alpha} \longmapsto (c_{\alpha_i})_{1 \leq i \leq d}, \quad \iota \colon K^d \longrightarrow K[x], \, (c_{\alpha_i})_{1 \leq i \leq d} \longmapsto \sum_{i=1}^d c_{\alpha_i} x^{\alpha_i} . \]
Using these $K$-linear maps, we can think of pseudo-polynomials having the same
support as $f$ as projective $R$-submodules of $K^d$ of rank one, that is, as pseudo-elements in $K^d$.
Moreover, if $\mathfrak f \pi(f) = \mathfrak g w$ for some $w \in K^d$ and
fractional ideal $\mathfrak g$ of $R$, then $\mathfrak f f =
\mathfrak g \iota(w)$.
In particular, by setting $v = (v_i)_{1 \leq i \leq d} = \pi(f) \in K^d$, problems~(i) and~(ii) are equivalent to the following two number theoretic problems:
\begin{enumerate}
\item[(i')] Find a pseudo-element $(w, \mathfrak g)$ of $K^d$ with $\mathfrak g$ small such that $\mathfrak fv = \mathfrak gw$.
\item[(ii')] Find a pseudo-element $(w, \mathfrak f)$ of $K^d$, such that $w_i$ is small and $v_i - w_i \in \mathfrak N\mathfrak f^{-1}$ for all $1 \leq i \leq d$.
\end{enumerate}
Hence, we can reduce pseudo-polynomials by applying the following two
algorithms to the coefficient ideal and the coefficients respectively.
Both are standard tools in algorithmic algebraic number theory, see \cite{Biasse2017} for a discussion including a complexity analysis.
\begin{lemma}
Let $\mathfrak N$ be a non-zero ideal of $R$.
\begin{enumerate}
\item
There exists an algorithm that, given a fractional ideal $\mathfrak a$ of $R$ and a vector $v \in K^d$, determines an ideal $\mathfrak b$ of $R$ and a vector $w \in K^d$ such that $\mathfrak a v = \mathfrak b w$ and the norm $\#(R/\mathfrak b)$ is bounded by a constant that depends only on the field $K$ (and not on $\mathfrak a$ or $v$).
\item
There exists an algorithm that, given a non-zero ideal $\mathfrak f$ of $R$ and an element $\alpha$ of $K$, determines an element $\beta \in K$ such that $\alpha - \beta \in \mathfrak N\mathfrak f^{-1}$ and the size of $\beta$ is bounded by a constant that depends only on the field $K$ and the norms $\#(R/\mathfrak N)$ and $\#(R/\mathfrak f)$.
\end{enumerate}
\end{lemma}
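For orientation (a toy illustration): in the rational case $R = \Z$, $\mathfrak f = \Z$ and $\mathfrak N = N\Z$, part (ii) is just division with remainder, as one may take $\beta = \alpha - N m$ with $m$ the nearest integer to $\alpha/N$, so that $|\beta| \leq N/2$.
Roughly speaking, the general case replaces this by reduction of $\alpha$ modulo the lattice $\mathfrak N \mathfrak f^{-1} \subseteq K$, which is why the bound depends on the norms $\#(R/\mathfrak N)$ and $\#(R/\mathfrak f)$.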
\begin{remark}
Recall that $\mathfrak N$ is a non-zero ideal of $R$ such that $\mathfrak N \subseteq I$, where $I$ is the ideal of $R[x]$ for which we want to find a pseudo-Gröbner basis.
The preceding discussion together with Corollary~\ref{cor:red} implies that
during Buchberger's algorithm (Algorithm~\ref{alg:groeb}), we can reduce the intermediate results so that
the size of all pseudo-polynomials is bounded by a constant depending only on
$\mathfrak N$ and $K$.
\end{remark}
\begin{remark}
Assume that $I \subseteq R[x]$ is an ideal. Then there exists a non-zero ideal $\mathfrak N$ of $R$ contained in $I$ if and only if $K[x] = \langle I \rangle_{K[x]}$.
In that case, one can proceed as follows to find such an ideal $\mathfrak N$.
Let $F = (f_i, \mathfrak f_i)_{1 \leq i \leq l}$ be a generating set of pseudo-polynomials of $I$.
Using classical Gröbner basis computations and the fact that $1 \in \langle I \rangle_{K[x]}$
we can compute $a_i \in K$, $1 \leq i \leq l$, such that $1 = \sum_{1 \leq i \leq l} a_i f_i$.
Next we determine $d \in R$ such that $da_i \in \mathfrak f_i$ for all $1 \leq i \leq l$.
Then
\[ d = \sum_{i=1}^l da_if_i \in \sum_{i=1}^l \mathfrak f_i f_i \subseteq I \]
and thus the non-zero ideal $\mathfrak N = dR$ satisfies $\mathfrak N \subseteq I$.
\end{remark}
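As a toy illustration over $R = \Z$: for $F = \{(x+2, \Z), (x, \Z)\}$ we have $1 = \tfrac{1}{2}(x+2) - \tfrac{1}{2}x$, so $a_1 = \tfrac{1}{2}$, $a_2 = -\tfrac{1}{2}$ and $d = 2$ is a valid choice; indeed $2 = (x+2) - x \in I$ and thus $\mathfrak N = 2\Z \subseteq I$.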
\section{Applications}
We give a few applications of pseudo-Gröbner bases to classical problems from
algorithmic commutative algebra as well as to the problem of computing primes of bad reduction.
\subsection{Ideal membership and intersections}
\begin{prop}
Let $I$ be an ideal of $R[x]$ given by a finite generating set of non-zero (pseudo-)polynomials.
There exists an algorithm that, given a polynomial $f$ (respectively a pseudo-polynomial $(f, \mathfrak f)$), decides whether $f \in I$ (respectively whether $\langle (f, \mathfrak f) \rangle \subseteq I$).
\end{prop}
\begin{proof}
Since $f \in I$ if and only if $\langle (f, R)\rangle \subseteq I$, we can restrict to the case of pseudo-polynomials.
After computing a pseudo-Gröbner basis of $I$ using Algorithm~\ref{alg:groeb}, we can use Theorem~\ref{thm:defgroeb}~(ii) to decide membership.
\end{proof}
Next we consider intersections of ideals, where, as in the case of fields, we use an elimination ordering.
\begin{prop}\label{prop:intersect}
Consider $R[x, y]$ with an elimination order in which the $y$ variables are larger than the $x$ variables.
Let $G = \{(g_i,\mathfrak g_i) \mid 1 \leq i \leq l\}$ be a pseudo-Gröbner basis of an ideal $I \subseteq R[x, y]$.
Then $\{ (g_i, \mathfrak g_i) \mid g_i \in K[x] \}$
is a pseudo-Gröbner basis of $I \cap R[x]$.
\end{prop}
\begin{proof}
Follows from Theorem~\ref{thm:defgroeb}~(iv) and the corresponding result for Gröbner bases, see \cite[Theorem 4.3.6]{Adams1994}.
\end{proof}
\begin{corollary}
Let $I$, $J$ be two ideals of $R[x]$ given by finite generating sets of non-zero (pseudo-)polynomials.
Then there exists an algorithm that computes a finite generating set of pseudo-polynomials of $I \cap J$.
\end{corollary}
\begin{proof}
This follows from Proposition~\ref{prop:intersect} and the classical fact that
$I \cap J = \langle wI, (1 - w)J \rangle_{R[x, w]} \cap R[x]$,
where $w$ is an additional variable (see~\cite[Proposition 4.3.9]{Adams1994}).
\end{proof}
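As a toy illustration over $R = \Z$ in one variable: for $I = \langle x \rangle$ and $J = \langle x+2 \rangle$, eliminating $w$ from $\langle wx, (1-w)(x+2) \rangle_{\Z[x, w]}$ recovers $I \cap J = \langle x(x+2) \rangle$.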
\begin{corollary}\label{coro:intersectR}
Let $I \subseteq R[x]$ be an ideal.
Then there exists an algorithm for computing $I \cap R$.
\end{corollary}
\subsection{Primes of bad reduction}
It seems to be well known that, in the case where $R = \Z$, the primes of bad reduction of a variety can be determined by computing Gröbner bases of ideals corresponding to singular loci.
Due to the lack of references we give a proof of this folklore result and show how it relates to pseudo-Gröbner bases.
Assume that $X \subseteq
\PP_{R}^n$ is a subscheme which is flat over $\Spec(R)$, has smooth generic fiber $X_K$ and is pure of dimension $k$.
Our aim is to determine the primes of bad reduction of $X$, that is, we want to find all points $\mathfrak p \in \Spec(R)$ such that the special fiber $X_{\mathfrak p}$ is not smooth.
By passing to an affine cover, we may assume that $X$ is a closed subscheme $V(f_1,\dotsc,f_l)$ of $\mathbf A_R^n$, where $f_1,\dotsc,f_l \in R[x]$.
Let $\mathfrak p \in \Spec(R)$, $\mathfrak p \neq 0$ and denote by $k_{\mathfrak p} = R/\mathfrak p$ the residue field.
Let $J = (\frac{\partial f_i}{\partial x_j})_{1\leq i \leq l, 1 \leq j \leq n}$ be the Jacobian matrix.
\begin{theorem}
Let $X = V(f_1,\dotsc,f_l)$ and $I$ the ideal of $R[x]$ generated by $f_1,\dotsc,f_l$ and the $(n-k)$-minors of $J$.
Then $X_\mathfrak p \subseteq \mathbf A_{k_\mathfrak p}^n$ is smooth if and only if $\mathfrak p$ does not divide $I \cap R$.
\end{theorem}
\begin{proof}
The flatness condition implies that $X_\mathfrak p$ has dimension $k$. By the Jacobian criterion (\cite[Chapter 4, Theorem 2.14]{Liu2002}), $X_\mathfrak p$ is smooth if and only if $J_\mathfrak p(p)$ has rank $n - k$ for all $p \in X_\mathfrak p (\bar k_\mathfrak p)$, where $J_\mathfrak p = (\frac{\partial \bar f_i}{\partial x_j})_{1 \leq i \leq l, 1 \leq j \leq n}$ is the Jacobian of $\bar f_1,\dotsc,\bar f_l$.
Thus $X_\mathfrak p$ is smooth if and only if the ideal of $k_\mathfrak p[x]$ generated by $\bar f_1,\dotsc,\bar f_l$ and the $(n - k)$-minors of $J_\mathfrak p$ is equal to $k_\mathfrak p[x]$.
Hence $X_\mathfrak p$ is smooth if and only if the ideal $(I, \mathfrak p)$ of $R[x]$ is equal to $R[x]$.
Now $(I, \mathfrak p) \subsetneq R[x]$ if and only if there exists a maximal ideal $M$ of $R[x]$ containing $(I, \mathfrak p)$. But in this case the kernel $R \cap M$ of the projection $R \to R[x]/M$ contains $\mathfrak p$ and must therefore be equal to $\mathfrak p$. As $\mathfrak p \subseteq (I, \mathfrak p) \cap R \subseteq M \cap R = \mathfrak p$, the existence of $M$ is equivalent to $(I, \mathfrak p) \cap R = \mathfrak p$, that is, $I \cap R \subseteq \mathfrak p$.
\end{proof}
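As a toy illustration of the theorem over $R = \Z$: for the affine curve $X = V(y^2 - x^3 - 1) \subseteq \mathbf A_{\Z}^2$ (so $n = 2$ and $k = 1$), the ideal is $I = \langle y^2 - x^3 - 1, 3x^2, 2y \rangle$.
From $2x^3 + 2 = y \cdot (2y) - 2\,(y^2 - x^3 - 1) \in I$ and $6x^3 = 2x \cdot (3x^2) \in I$ we get $6 = 3\,(2x^3+2) - 6x^3 \in I \cap \Z$; since the fibers over $2$ and $3$ are singular (at $(0,1)$ and $(-1,0)$, respectively) while all other fibers are smooth, the theorem forces $I \cap \Z \subseteq 6\Z$, whence $I \cap \Z = 6\Z$.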
Combining this with the previous subsection, we see that the primes of bad reduction can easily be characterized via pseudo-Gröbner bases. Note that this does not yet determine the primes themselves, since one additionally has to determine the prime ideal factors.
\begin{corollary}
Let $X = V(f_1,\dotsc,f_l)$ and $I$ the ideal of $R[x]$ generated by $f_1,\dotsc,f_l$ and the $(n-k)$-minors of the Jacobian matrix $J$.
Let $\{ (g_i,\mathfrak g_i) \mid 1 \leq i \leq m\}$ be a pseudo-Gröbner basis of $I$ and $\mathfrak N = \sum \mathfrak g_i g_i \subseteq R$, where the sum is over all $1 \leq i \leq m$ such that $g_i \in K$. Then $\mathfrak p$ is a prime of bad reduction of $X$ if and only if $\mathfrak p$ divides $\mathfrak N$.
\end{corollary}
\begin{example}
To have a small non-trivial example, we look at an elliptic curve defined over a number field.
Although there are other techniques to determine the primes of bad reduction, we will do so
using pseudo-Gröbner bases.
Consider the number field $K = \Q(\sqrt{10})$ with ring of integers $\mathcal O_K = \Z[a]$, where $a = \sqrt{10}$. Let $E/K$ be the elliptic curve defined by
\[ f = y^2 - x^3 - (1728a+3348)x - (44928a-324432) \in K[x, y]. \]
Note that this is a short Weierstrass equation of the elliptic curve with
label \texttt{6.1-a2} from the LMFDB~(\cite{lmfdb}).
To determine the places of bad reduction, we consider the ideal
\[ I = \left\langle f, \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right\rangle \subseteq \mathcal O_K[x, y]. \]
Applying Algorithm~\ref{alg:groeb} we obtain a pseudo-Gröbner basis $G$, which together with Corollary~\ref{coro:intersectR} allows us to compute
\[ I \cap R = \langle 940369969152, 437864693760a + 71663616\rangle \subseteq \mathcal O_K. \]
The ideal $I \cap R$ has norm $67390312367240773632 = 2^{31} \cdot 3^{22}$ and factors as
\[ I \cap R = \langle 2, a\rangle^{31} \cdot \langle 3, a + 2 \rangle^{15} \cdot \langle 3, a+4\rangle^7. \]
Thus the primes of bad reduction are $\langle 2, a \rangle$, $\langle 3, a + 2 \rangle$ and $\langle 3, a + 4 \rangle$.
Note that the conductor of $E$ is divisible only by $\langle 2, a \rangle$ and $\langle 3, a + 2 \rangle$ (the model we chose is not minimal at $\langle 3, a + 4 \rangle$).
In fact this can be seen by determining the primes of bad reduction of the model $y^2 = x^3 + \frac 1 3(64a + 124)x + \frac 1{27}(1664a - 12016)$, which is minimal at $\langle 3, a + 4 \rangle$ (computed with \textsc{Magma} \cite{Bosma1997}).
\end{example}
\bibliography{pseudo_grobner}
\bibliographystyle{amsalpha}
\end{document} | {"config": "arxiv", "file": "1906.08555/pseudo_grobner.tex"} |
\begin{document}
\title[Multi-valued graphs]
{The space of embedded minimal surfaces of fixed genus in a $3$-manifold II;
Multi-valued graphs in disks}
\author{Tobias H. Colding}
\address{Courant Institute of Mathematical Sciences and MIT\\
251 Mercer Street\\
New York, NY 10012 and 77 Mass. Av., Cambridge, MA 02139}
\author{William P. Minicozzi II}
\address{Department of Mathematics\\
Johns Hopkins University\\
3400 N. Charles St.\\
Baltimore, MD 21218}
\thanks{The first author was partially supported by NSF Grant DMS 9803253
and an Alfred P. Sloan Research Fellowship
and the second author by NSF Grant DMS 9803144
and an Alfred P. Sloan Research Fellowship.}
\email{colding@cims.nyu.edu and minicozz@math.jhu.edu}
\maketitle
\numberwithin{equation}{section}
\section{Introduction} \label{s:s0}
This paper is the second in a series where we attempt to give a
complete description of the space of all embedded minimal surfaces of
fixed genus in a fixed (but arbitrary)
closed $3$-manifold. The key for understanding
such surfaces is to understand the local structure in a
ball and in particular the structure of an embedded minimal
disk in a ball in $\RR^3$.
We show here that if the curvature of such a disk
becomes large at some point, then it contains an almost flat
multi-valued graph nearby that continues almost all the way to the
boundary.
Let $\cP$ be the universal cover of the
punctured plane $\CC \setminus \{ 0 \}$ with global (polar)
coordinates $(\rho , \theta)$. An $N$-valued graph $\Sigma$ over
the annulus $D_{r_2} \setminus D_{r_1}$ (see fig. \ref{f:1}) is a (single-valued) graph
over
\begin{equation} \label{e:defmvg}
\{ (\rho ,\theta ) \in \cP \, | \, r_1 < \rho < r_2 {\text{ and }}
|\theta| \leq \pi \, N \} \, .
\end{equation}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{shn1.pstex_t}
\caption{A multi-valued graph.} \label{f:1}
\end{minipage}
\end{figure}
\begin{Thm} \label{t:blowupwinding0}
Given $N\in \ZZ_+$, $\epsilon > 0$, there exist
$C_1,\,C_2>0$ so: Let
$0\in \Sigma^2\subset B_{R}\subset \RR^3$ be an embedded minimal
disk, $\partial \Sigma\subset \partial B_{R}$. If
$\max_{B_{r_0} \cap \Sigma}|A|^2\geq 4\,C_1^2\,r_0^{-2}$ for some
$R>r_0>0$, then there exists
(after a rotation)
an $N$-valued graph $\Sigma_g \subset \Sigma$ over $D_{R/C_2}
\setminus D_{2r_0}$ with gradient $\leq \epsilon$
and
$\Sigma_g \subset \{ x_3^2 \leq \epsilon^2 \, (x_1^2 + x_2^2) \}$.
\end{Thm}
This theorem is modeled on one half of the helicoid and its rescalings.
Recall that the helicoid is
the minimal surface $\Sigma^2$ in $\RR^3$
parameterized by
\begin{equation} \label{e:helicoid}
(s\,\cos t,s\sin t,t)
\end{equation}
where $s,t\in\RR$. By one half of the helicoid we mean the
multi-valued graph given by requiring that $s>0$ in \eqr{e:helicoid}.
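In the coordinates $(\rho , \theta)$ on $\cP$, this half is exactly the multi-valued graph of the function $u(\rho , \theta) = \theta$ over all of $\cP$; since $|\nabla u| = 1/\rho$, the sheets flatten out away from the axis, which is the behavior reflected in the gradient bound $\leq \epsilon$ of Theorem \ref{t:blowupwinding0}.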
Theorem \ref{t:blowupwinding0} will
follow by combining a blow up result with \cite{CM3}.
This blow up result says that if an embedded
minimal disk in a ball has large curvature at a point, then it
contains a small almost flat multi-valued graph nearby, that is:
\begin{Thm} \label{t:blowupwindinga}
See fig. \ref{f:2}.
Given $N , \omega>1$, and $\epsilon > 0$, there exists
$C=C(N,\omega,\epsilon)>0$ so: Let
$0\in \Sigma^2\subset B_{R}\subset \RR^3$ be an embedded minimal
disk, $\partial \Sigma\subset \partial B_{R}$. If
$\sup_{B_{r_0} \cap \Sigma}|A|^2\leq 4\,C^2\,r_0^{-2}$
and $|A|^2(0)=C^2\,r_0^{-2}$ for some $0<r_0<R$, then there exist
$ \bar{R} < r_0 / \omega$ and (after a rotation)
an $N$-valued graph $\Sigma_g \subset \Sigma$ over $D_{\omega \bar{R} }
\setminus D_{\bar{R} }$ with gradient $\leq \epsilon$, and
$\dist_{\Sigma}(0,\Sigma_g) \leq 4 \, \bar{R}$.
\end{Thm}
Recall that by the middle sheet $\Sigma^M$ of an $N$-valued graph $\Sigma$
we mean the portion over
\begin{equation}
\{ (\rho ,\theta ) \in \cP \, | \, r_1 < \rho < r_2 {\text{ and }}
0 \leq \theta \leq 2 \, \pi \} \, .
\end{equation}
The result that we need from \cite{CM3}
(combining theorem 0.3 and lemma II.3.8 there) is:
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{shn3.pstex_t}
\caption{Theorem \ref{t:blowupwindinga} -
finding a small multi-valued graph in
a disk near a point of large curvature.} \label{f:2}
\end{minipage}\begin{minipage}[t]{0.5\textwidth}
\centering\input{shn2.pstex_t}
\caption{Theorem \ref{t:spin4ever2} - extending a small multi-valued graph
in a disk.} \label{f:3}
\end{minipage}
\end{figure}
\begin{Thm} \label{t:spin4ever2}
\cite{CM3}; see fig. \ref{f:3}.
Given $N_1$ and $\tau > 0$, there exist $N , \Omega, \epsilon > 0$ so:
If $\Omega \, r_0 < 1 < R_0 / \Omega$, $\Sigma \subset B_{R_0}$ is
an embedded minimal disk with $\partial \Sigma \subset \partial B_{R_0}$,
and $\Sigma$ contains an $N$-valued minimal graph $\Sigma_g$ over
$D_1 \setminus D_{r_0}$ with gradient $\leq \epsilon$ and
$\Sigma_g \subset \{ x_3^2 \leq \epsilon^2 (x_1^2 + x_2^2) \}$, then
$\Sigma$ contains an $N_1$-valued graph $\Sigma_d$ over
$D_{R_0/\Omega} \setminus D_{r_0}$ with gradient $\leq \tau$ and
$(\Sigma_g)^M \subset \Sigma_d$.
\end{Thm}
As a consequence of Theorem \ref{t:blowupwinding0}, we will
show that if $|A|^2$ is blowing up for a sequence of embedded minimal disks,
then there is a smooth minimal graph through this point in the limit of a
subsequence
(Theorem \ref{t:stablim} below).
Theorems \ref{t:blowupwinding0}, \ref{t:blowupwindinga},
\ref{t:spin4ever2}, and \ref{t:stablim}
are local and are for simplicity
stated and proven only for $\RR^3$ with the flat metric; however,
with only very minor changes, they can easily be seen to hold for
a sufficiently small ball in any
given fixed Riemannian $3$-manifold.
Let $x_1 , x_2 , x_3$ be the standard coordinates on $\RR^3$ and
$\Pi : \RR^3 \to \RR^2$ orthogonal projection to $\{ x_3 = 0 \}$.
For $y \in S \subset \Sigma \subset \RR^3$ and $s > 0$, the
extrinsic and intrinsic balls and tubes are
\begin{alignat}{2}
B_s(y) &= \{ x \in \RR^3 \, | \, |x-y| < s \} \, , \, & T_s(S) &=
\{ x \in \RR^3 \, | \, \dist_{\RR^3} (x , S) < s \} \, , \\
\cB_s(y) &= \{ x \in \Sigma \, | \, \dist_{\Sigma} (x , y) < s \}
\, , \, & \cT_s (S) &= \{ x \in \Sigma \, | \, \dist_{\Sigma} (x ,
S) < s \} \, .
\end{alignat}
$D_s$ denotes the disk $B_s(0) \cap \{ x_3 = 0 \}$.
$\K_{\Sigma}$ denotes the sectional curvature of a smooth compact surface
$\Sigma$, and when
$\Sigma$ is immersed, $A_{\Sigma}$ will be its second fundamental form.
When $\Sigma$ is oriented, $\nn_{\Sigma}$ is the unit normal.
\section{Poincar\'e and Caccioppoli
type inequalities for area and curvature}
\label{s:wtotcurv}
In this section, we will first estimate the area of a surface
(not necessarily minimal) in terms of its
total curvature; see Corollary \ref{c:poincare}.
This should be seen as
analogous to a Poincar\'e
inequality (for functions), and will be used similarly
later in this paper.
After that,
we will bound the curvature by the area for a minimal disk;
see Corollary \ref{c:lcacc}. This
inequality is
similar to a Caccioppoli inequality and, unlike the Poincar\'e type
inequality, relies on the minimality of the surface. Finally, we will apply
these inequalities to show a strengthened (intrinsic) version of a
result of Schoen and Simon.
\begin{Lem} \label{l:li1}
If $\cB_{r_0} (x) \subset \Sigma^2$
is disjoint from the cut locus of $x$,
\begin{align}
\text{Length}(\partial \cB_{r_0})-2\pi r_0&
=-\int_0^{r_0}\int_{\cB_{\rho}}\K_{\Sigma}\, ,
\label{e:oi1a}\\
\Area (\cB_{r_0}(x))- \pi \, r_0^2
&= - \int_{0}^{r_0} \int_0^{\tau} \int_{\cB_{\rho}(x)}
\K_{\Sigma}\, . \label{e:oi1}
\end{align}
\end{Lem}
\begin{proof}
For $0 < t \leq r_0$, by the Gauss-Bonnet
theorem,
\begin{equation} \label{e:oi2}
\frac{d}{dt} \int_{\partial \cB_t } 1
= \int_{\partial \cB_t} \kg =
2 \, \pi - \int_{\cB_t} \K_{\Sigma}
\, ,
\end{equation}
where $\kg$ is the geodesic curvature of $\partial \cB_t$.
Integrating \eqr{e:oi2} gives the lemma.
\end{proof}
\begin{Cor} \label{c:poincare}
If $\cB_{r_0} (x) \subset \Sigma^2$
is disjoint from the cut locus of $x$,
\begin{equation} \label{e:oi6}
\Area (\cB_{r_0}(x)) \leq \pi \, r_0^2 - \frac{1}{2} \, r_0^2 \,
\int_{\cB_{r_0}(x)} \min \{ \K_{\Sigma},0\} \, \, .
\end{equation}
\end{Cor}
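In particular, if $\Sigma$ is minimal, then $\K_{\Sigma} = - |A|^2/2 \leq 0$ and \eqr{e:oi6} reads
\begin{equation}
\Area (\cB_{r_0}(x)) \leq \pi \, r_0^2 + \frac{r_0^2}{4} \,
\int_{\cB_{r_0}(x)} |A|^2 \, .
\end{equation}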
\begin{Cor} \label{c:lcacc}
If $\Sigma^2\subset \RR^3$ is immersed and minimal,
$\cB_{r_0} \subset \Sigma^2$ is a disk, and
$\cB_{r_0} \cap \partial \Sigma = \emptyset$,
\begin{align}
t^2\int_{\cB_{{r_0}-2\,t}} |A|^2
&\leq r_0^{2}\int_{\cB_{r_0}}|A|^2\,(1-r/r_0)^2/2=
\int_{0}^{r_0} \int_0^{\tau} \int_{\cB_{\rho}(x)} |A|^2\notag\\
&=2\,(\Area \, (\cB_{r_0}) - \pi \, {r_0}^2) \leq r_0 \,
{\text{Length}} (\partial \cB_{r_0}) - 2 \pi \, r_0^2
\, .\label{e:o1.23}
\end{align}
\end{Cor}
\begin{proof}
Since $\Sigma$ is minimal, $|A|^2=-2\,\K_{\Sigma}$ and
hence by Lemma \ref{l:li1}
\begin{equation} \label{e:bb1}
t^2 \, \int_{\cB_{{r_0}-2\,t}} |A|^2 \leq
t \, \int_{0}^{{r_0}-t} \int_{\cB_{\rho}} |A|^2 \leq
\int_{0}^{{r_0}} \int_0^{\tau} \int_{\cB_{\rho}} |A|^2 =
2 \, ( \Area \, (\cB_{r_0}) - \pi \, {r_0}^2 ) \, .
\end{equation}
The first equality follows by integration by parts twice
(using the coarea formula).
To get the last inequality in \eqr{e:o1.23}, note that
$\frac{d^2}{dt^2}{\text{Length}} (\partial \cB_{t})\geq 0$ by
\eqr{e:oi2} (since $\K_{\Sigma} = -|A|^2/2 \leq 0$); hence
$\frac{d}{dt}{\text{Length}} (\partial \cB_{t})\geq
{\text{Length}} (\partial \cB_{t})/t$
and consequently $\frac{d}{dt}\left({\text{Length}}
(\partial \cB_{t}) /t\right)\geq 0$.
Integrating ${\text{Length}} (\partial \cB_{t}) \leq t \,
{\text{Length}} (\partial \cB_{r_0})/r_0$ over $[0, r_0]$ gives
$2 \, \Area (\cB_{r_0}) \leq r_0 \, {\text{Length}} (\partial \cB_{r_0})$,
from which the last inequality follows.
\end{proof}
The following lemma and its corollary generalize the main
result of \cite{ScSi}:
\begin{Lem} \label{l:scsi}
Given $C$, there exists $\epsilon>0$ so if
$\cB_{9 s}\subset \Sigma\subset \RR^3$ is an embedded minimal disk,
\begin{equation} \label{e:tcb0}
\int_{\cB_{9 s}}|A|^2 \leq C {\text{ and }}
\int_{\cB_{9 s}\setminus \cB_s} |A|^2 \leq \epsilon \, ,
\end{equation}
then $\sup_{\cB_{s}} |A|^2 \leq s^{-2}$.
\end{Lem}
\begin{proof}
Observe first that for $\epsilon$ small, \cite{CiSc} and \eqr{e:tcb0} give
\begin{equation} \label{e:ciscann}
\sup_{\cB_{8 s}\setminus \cB_{2s}} |A|^2 \leq C_1^2 \,
\epsilon \, s^{-2} \, .
\end{equation}
By \eqr{e:oi1a} and \eqr{e:tcb0}
\begin{equation} \label{e:lbd}
{\text{Length}} (\partial \cB_{2s}) \leq
( 4 \pi + C) \, s \, .
\end{equation}
We will next use \eqr{e:ciscann} and \eqr{e:lbd} to show that,
after rotating $\RR^3$, $\cB_{8 s}\setminus \cB_{2s}$ is
(locally) a graph over $\{ x_3 = 0 \}$ and furthermore $|\Pi
(\partial \cB_{8\,s})| > 3 \, s$. Combining these two facts with
embeddedness, the lemma will then follow easily from Rado's
theorem.
By \eqr{e:lbd}, $\diam (\cB_{8s} \setminus \cB_{2s}) \leq (12 +
2 \pi + C/2)s$. Hence, integrating \eqr{e:ciscann} gives
\begin{equation} \label{e:gmapaa}
\sup_{x,x' \in \cB_{8s} \setminus \cB_{2s}} \dist_{\SS^2} (\nn(x') , \nn(x) )
\leq C_1 \, \epsilon^{1/2} \, ( 12 + 2 \pi + C/2) \, .
\end{equation}
We can therefore rotate $\RR^3$ so that
\begin{equation} \label{e:gmapa}
\sup_{\cB_{8s} \setminus \cB_{2s}} |\nabla x_3|
\leq C_2 \, \epsilon^{1/2} \, ( 1 + C) \, .
\end{equation}
Given $y \in \partial \cB_{2s}$, let $\gamma_y$ be the outward normal
geodesic from $y$ to $\partial \cB_{8s}$ parametrized by arclength
on $[0,6s]$. Integrating \eqr{e:ciscann} gives
\begin{equation} \label{e:ciscr1}
\int_{\gamma_y |_{[0,t]}} |k_g^{\RR^3}| \leq
\int_{\gamma_y |_{[0,t]} } |A| \leq C_1 \, \epsilon^{1/2} \, t / s \, ,
\end{equation}
where $k_g^{\RR^3}$ is the geodesic curvature of $\gamma_y$ in
$\RR^3$.
Combining \eqr{e:gmapa} with \eqr{e:ciscr1} gives (see fig. \ref{f:4})
\begin{equation} \label{e:gmapa3}
\langle \nabla |\Pi ( \cdot) - \Pi(y)| , \gamma_y' \rangle > 1 -
C_3 \, \epsilon^{1/2} \, ( 1 + C) \, .
\end{equation}
Integrating \eqr{e:gmapa3}, we get that for $\epsilon$ small,
$|\Pi (\partial \cB_{8\,s})| > 3 \, s$.
See fig. \ref{f:5}.
Combining $|\Pi (\partial \cB_{8\,s})| > 3 \, s$ and
\eqr{e:gmapa}, it follows that, for $\epsilon $ small,
$\Pi^{-1} (\partial D_{2s}) \cap
\cB_{8s}$ is a collection of immersed multi-valued graphs over
$\partial D_{2s}$. Since $\cB_{8s}$ is embedded, $\Pi^{-1}
(\partial D_{2s}) \cap \cB_{8s}$ consists of disjoint embedded
circles which are graphs over $\partial D_{2s}$; this is the only
use of embeddedness. Since $x_1^2+x_2^2$ is subharmonic on the
disk $\cB_{8\,s}$, these circles bound disks in $\cB_{8\,s}$ which
are then graphs by Rado's theorem (see, e.g., \cite{CM1}). The
lemma now follows easily from \eqr{e:gmapa} and the mean value
inequality.
\end{proof}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow3.pstex_t}
\caption{Proof of Lemma \ref{l:scsi}:
By \eqr{e:gmapa} and
\eqr{e:ciscr1}, each $\gamma_y$ is almost a
horizontal line segment of length $6s$. Therefore,
$|\Pi (\partial \cB_{8\,s})| > 3 \, s$.} \label{f:4}
\end{minipage}\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow4.pstex_t}
\caption{Proof of Lemma \ref{l:scsi}:
$\Pi^{-1} (\partial D_{2s})\cap \cB_{8s}$ is a union
of graphs over $\partial D_{2s}$. Each bounds a
graph in $\Sigma$ over
$D_{2s}$ by Rado's theorem.}
\label{f:5}
\end{minipage}
\end{figure}
\begin{Cor} \label{c:scsi}
Given $C_I$, there exists $C_P$ so
if $\cB_{2s}\subset \Sigma\subset \RR^3$ is an embedded
minimal disk with
\begin{equation} \label{e:tcbp}
\int_{\cB_{2s}} |A|^2 \leq C_I \, ,
\end{equation}
then $\sup_{\cB_{s}} |A|^2 \leq C_P \, s^{-2}$.
\end{Cor}
\begin{proof}
Let $\epsilon > 0$ be given by Lemma \ref{l:scsi} with $C=C_I$
and then let $N$ be the least integer greater than $C_I / \epsilon$.
Given $x \in \cB_s$, there exists $1\leq j \leq N$ with
\begin{equation} \label{e:cth}
\int_{\cB_{9^{1-j} s}(x) \setminus \cB_{9^{-j} s}(x)} |A|^2 \leq
C_I / N \leq \epsilon \, .
\end{equation}
Combining \eqr{e:tcbp} and \eqr{e:cth}, Lemma \ref{l:scsi} gives that
$|A|^2(x) \leq (9^{-j} s)^{-2} \leq 9^{2N} \, s^{-2}$.
\end{proof}
We close this section with a generalization to
surfaces of higher genus; see Theorem \ref{t:scsig} below.
This will not be used in this paper
but will be useful in \cite{CM6}. First we need:
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow6.pstex_t}
\caption{Lemma \ref{l:replacement}:
A curve $\sigma$ and a broken geodesic $\sigma_1$
homotopic to $\sigma$ in $\cT_{r_0}(\sigma)$.}
\label{f:6}
\end{minipage}
\end{figure}
\begin{Lem} \label{l:replacement}
Let $\Sigma$ be a surface and $\sigma \subset \Sigma$ a
simple closed curve with length $\leq C\,r_0$. If for all
$y\in \sigma$ the ball $\cB_{r_0}(y)$ is a disk disjoint
from $\partial \Sigma$, then
there is a broken geodesic
$\sigma_1 \subset \Sigma $ homotopic to
$\sigma$ in $\cT_{r_0}(\sigma)$
and with $\leq C+1$ breaks; see fig. \ref{f:6}. If $\Sigma$ is
an annulus with $\K_{\Sigma} \leq 0$ and $\sigma$ separates $\partial \Sigma$,
then $\sigma_1$ contains a simple curve $\sigma_2$ homotopic to $\sigma$ with
$\leq C+2$ breaks.
\end{Lem}
\begin{proof}
Parametrize $\sigma$ by arclength so that
$\sigma(0)=\sigma(\text{Length}(\sigma))$.
Let
$0=t_0<\cdots<t_n= \text{Length}(\sigma)$ be a subdivision with
$t_{i+1}-t_i \leq r_0$ and $n \leq C+1$. Since $\cB_{r_0}(y)$
is a disk for all $y\in \sigma$, it follows that we can replace
$\sigma$ with a broken geodesic $\sigma_1$ with breaks at
the points $\sigma(t_i)$ and which is homotopic to $\sigma$
in $\cT_{r_0}(\sigma)$.
Suppose also now that $\Sigma$ is
an annulus with $\K_{\Sigma} \leq 0$ and $\sigma$ is topologically nontrivial.
Let $[a,b]$ be a maximal interval so that $\sigma_1 |_{[a,b]}$ is simple.
We are done if $\sigma_1 |_{[a,b]}$ is homotopic to $\sigma$.
Otherwise, $\sigma_1 |_{[a,b]}$ bounds a disk in $\Sigma$ and the Gauss-Bonnet
theorem implies that $\sigma_1 |_{(a,b)}$ contains a break. Hence,
replacing $\sigma_1$ by $\sigma_1 \setminus \sigma_1 |_{(a,b)}$
gives a subcurve homotopic to $\sigma$ but does not
increase the number of breaks. Repeating
this eventually gives $\sigma_2$.
\end{proof}
Given a surface $\Sigma$ with boundary $\partial \Sigma$, we will
define the {\it genus} of $\Sigma$ ($\Genus (\Sigma)$) to be the
genus of the closed surface $\hat{\Sigma}$ obtained by adding a
disk to each boundary circle. For example, the disk and the
annulus are both genus zero; on the other hand, a closed surface
of genus $g$ with $k$ disks removed has genus $g$.
In contrast to Corollary \ref{c:scsi} (and the results preceding it),
the next result concerns surfaces intersected with extrinsic balls.
Below, $\Sigma_{0,s}$ is the component of $B_{s} \cap
\Sigma$ with $0 \in \Sigma_{0,s}$.
\begin{Thm} \label{t:scsig}
Given $C_a , g$, there exist $C_c , C_r$ so: If $0
\in \Sigma\subset B_{r_0}$ is an embedded minimal surface
with $\partial \Sigma\subset \partial B_{r_0}$, $\Genus(\Sigma)
\leq g$, $\Area (\Sigma) \leq C_a \, r_0^2$, and for each $C_r
\,r_0 \leq s \leq r_0$,
$\Sigma \setminus \Sigma_{0,s}$ is topologically an annulus,
then $\Sigma$ is a disk and
$\sup_{\Sigma_{0, C_r \, r_0}} |A|^2 \leq C_c \, r_0^{-2}$.
\end{Thm}
\begin{proof}
By the coarea formula, we can find $r_0/2 \leq r_1 \leq 3r_0 / 4$
with ${\text{Length}}(\partial B_{r_1} \cap \Sigma) \leq 4 \, C_a \, r_0$.
It is easy to see from the maximum principle that
$\cB_{r_0/4}(y)$ is a disk for each $y \in
\partial B_{r_1} \cap \Sigma$ (we will take $C_r < 1/4$).
Applying Lemma \ref{l:replacement} to $\partial
\Sigma_{0,r_1} \subset \Sigma \setminus \Sigma_{0,r_0/4}$,
we get a simple broken geodesic
$\sigma_2 \subset \cT_{r_0/4}(\partial \Sigma_{0,r_1})$
homotopic to $\partial
\Sigma_{0,r_1}$ and with $\leq 16 \, C_a+2$ breaks.
Consequently, the Gauss-Bonnet
theorem gives
\begin{equation}
\int_{\Sigma_{0,r_0/4}} |A|^2
= - 2 \int_{\Sigma_{0,r_0/4}} K_{\Sigma} \leq
8 \, \pi \, g + 2 \, \int_{\sigma_2} |k_g|
\leq 8 \, \pi ( g + 4\, C_a + 1) \, .
\end{equation}
For $\epsilon > 0$, arguing as in Corollary \ref{c:scsi} gives $r_2$ with
$\int_{\Sigma_{0,5r_2} \setminus \Sigma_{0,r_2}} |A|^2 \leq \epsilon^2$
so, by \cite{CiSc},
\begin{equation} \label{e:pcw}
\sup_{\Sigma_{0,4r_2} \setminus \Sigma_{0,2r_2}} |A|^2
\leq C \, \epsilon^2 \, r_2^{-2} \, .
\end{equation}
Using the area
bound, $\partial \Sigma_{0,3r_2}$ can be covered by $C C_a$ intrinsic
balls $\cB_{r_2/4}(x_i)$ with $x_i \in \partial \Sigma_{0,3r_2}$
(by the maximum principle, each $\cB_{r_2/4}(x_i)$ is a disk).
Hence, since $\partial \Sigma_{0,3r_2}$ is connected,
any two points in $\partial \Sigma_{0,3r_2}$
can be joined
by a curve in $\Sigma_{0,4r_2} \setminus \Sigma_{0,2r_2}$ of length
$\leq C \, r_2$.
Integrating \eqr{e:pcw} twice then gives a plane $P \subset \RR^3$
with $\partial \Sigma_{0,3r_2} \subset T_{C \, \epsilon \, r_2}(P)$.
By the convex hull property,
$0 \in \Sigma_{0,3r_2} \subset T_{C \, \epsilon \, r_2}(P)$.
Hence, since $\partial \Sigma_{0,3r_2}$ is
connected and embedded, $\partial \Sigma_{0,3r_2}$ is a
graph over the boundary
of a convex domain for $\epsilon$ small.
The standard existence theory and Rado's theorem give
a minimal graph $\Sigma_g$ with $\partial \Sigma_g =
\partial \Sigma_{0,3r_2}$. By translating $\Sigma_g$ above
$\Sigma_{0,3r_2}$ and sliding it down to the first point of
contact, and then repeating
this from below, it follows easily from the strong maximum
principle that $\Sigma_g = \Sigma_{0,3r_2}$, completing the proof.
\end{proof}
\section{Finding large nearly stable pieces} \label{s:s1a}
We will collect here some results on stability of
minimal surfaces which will be used later to conclude that
certain sectors
are nearly stable.
The basic point is that
two disjoint but nearby embedded minimal surfaces satisfying a
priori curvature estimates must be nearly stable (made precise
below). We start by recalling the definition of $\delta_s$-stability.
Let again $\Sigma\subset \RR^3$ be an embedded oriented minimal surface.
\begin{Def} \label{d:almst}
($\delta_s$-stability).
Given $\delta_s \geq 0$, set
\begin{equation}
L_{\delta_s} = \Delta + (1-\delta_s) |A|^2 \, ,
\end{equation}
so that $L_0$ is the usual Jacobi operator on $\Sigma$. A domain
$\Omega \subset \Sigma$ is $\delta_s$-stable if $\int \phi \,
L_{\delta_s} \phi \leq 0$ for any compactly supported Lipschitz
function $\phi$ (i.e., $\phi \in C_0^{0,1}(\Omega)$).
\end{Def}
It follows that $\Omega$ is $\delta_s$-stable if and only if, for
all $\phi \in C_0^{0,1}(\Omega)$, we have the $\delta_s$-stability
inequality:
\begin{equation} \label{e:stabineq}
(1-\delta_s) \int |A|^2 \phi^2 \leq \int |\nabla \phi |^2 \, .
\end{equation}
Since the Jacobi equation is the linearization of the minimal
graph equation over $\Sigma$, standard calculations give:
\begin{Lem} \label{l:mge}
There exists $\delta_g > 0$ so that if $\Sigma$ is minimal and $u
$ is a positive solution of the minimal graph equation over
$\Sigma$ (i.e., $\{ x + u(x) \, \nn_{\Sigma} (x) \, |\, x \in
\Sigma \}$ is minimal) with $|\nabla u| + |u| \, |A| \leq
\delta_g$, then $w = \log u$ satisfies on $\Sigma$
\begin{equation} \label{e:wlog2}
\Delta w = - |\nabla w|^2 + \dv (a \nabla w) + \langle \nabla w ,
a \nabla w \rangle + \langle b , \nabla w \rangle + (c-1) |A|^2 \,
,
\end{equation}
for functions $a_{ij} , b_j , c$ on $\Sigma$ with $|a| , |c| \leq
3 \, |A| \, |u| + |\nabla u|$ and $|b| \leq 2 \, |A| \, |\nabla
u|$.
\end{Lem}
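Note that when $\Sigma$ is a plane, $|A| \equiv 0$ forces $b = 0$ and kills the last term, so \eqr{e:wlog2} reduces to
$\Delta w = - |\nabla w|^2 + \dv (a \nabla w) + \langle \nabla w , a \nabla w \rangle$,
the equation for the logarithm of a positive solution of a divergence form graph equation; the term $(c-1)\,|A|^2$ records the interaction with the curvature of $\Sigma$.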
The following slight modification of
a standard argument (see, e.g., proposition 1.26 of \cite{CM1}) gives
a useful sufficient condition for $\delta_s$-stability of a domain:
\begin{Lem} \label{l:fiscm}
There exists $\delta > 0$ so: If $\Sigma$ is minimal
and $u > 0$ is a solution of the minimal graph equation over
$\Omega \subset \Sigma$ with $|\nabla u| + |u| \, |A| \leq
\delta$, then
$\Omega$ is $1/2$-stable.
\end{Lem}
\begin{proof}
Set $w=\log u$ and choose a cutoff function
$\phi \in C_0^{0,1}(\Omega)$.
Applying Stokes' theorem
to $\dv ( \phi^2 \, \nabla w - \phi^2 \, a \, \nabla w)$,
substituting \eqr{e:wlog2}, and using
$|a| , |c| \leq 3 \, \delta ,
|b| \leq 2 \, \delta \, |\nabla w|$ gives
\begin{align} \label{e:lapw2}
(1 - 3 \, \delta) \, \int \phi^2 \, |A|^2
&\leq - \int \phi^2 \, |\nabla w|^2 + \int \phi^2 \,
\langle \nabla w , b + a \, \nabla w \rangle
+ 2 \int \phi \langle \nabla \phi , \nabla w
- a \, \nabla w \rangle \notag \\
&\leq (5 \delta - 1 ) \, \int
\phi^2 \, |\nabla w|^2 + 2 (1 + 3 \delta)
\int |\phi \, \nabla w| \, |\nabla \phi|
\, .
\end{align}
The lemma now follows easily from
the absorbing inequality.
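For concreteness: applying $2ab \leq \epsilon \, a^2 + \epsilon^{-1} b^2$ with
$a = |\phi \, \nabla w|$, $b = (1+3\delta) \, |\nabla \phi|$, and $\epsilon = 1 - 5\delta$
to the last term of \eqr{e:lapw2} gives
\begin{equation}
(1 - 3 \, \delta) \, \int \phi^2 \, |A|^2 \leq
\frac{(1+3\delta)^2}{1-5\delta} \, \int |\nabla \phi|^2 \, ,
\end{equation}
so that for $\delta$ small $\int \phi^2 \, |A|^2 \leq 2 \int |\nabla \phi|^2$, that is, $\Omega$ is $1/2$-stable.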
\end{proof}
We will use Lemma \ref{l:fiscm} to see that disjoint embedded minimal
surfaces that are close are nearly stable (Corollary \ref{c:delst} below).
Integrating $|\nabla \dist_{\SS^2}( \nn (x) , \nn )| \leq
|A|$ on geodesics gives
\begin{equation} \label{e:gmap}
\sup_{x' \in \cB_s(x)} \dist_{\SS^2} (\nn(x') , \nn(x) )
\leq s \, \sup_{\cB_s(x)} |A| \, .
\end{equation}
By \eqr{e:gmap},
we can choose $0 < \rho_2 < 1/4$ so: If
$\cB_{2s}(x) \subset \Sigma$, $s \, \sup_{\cB_{2s}(x)} |A| \leq 4 \,
\rho_2$, and $t \leq s$, then the component
$\Sigma_{x,t}$ of $B_{t}(x) \cap \Sigma$ with $x \in \Sigma_{x,t}$
is a graph over $T_x \Sigma$ with gradient $\leq t/s$ and
\begin{equation} \label{e:gmap1}
\inf_{x' \in \cB_{2s}(x)} |x'-x| / \dist_{\Sigma}(x,x') > 9/10 \, .
\end{equation}
One consequence is that if
$t \leq s$ and we translate $T_x \Sigma$ so that $x \in T_x \Sigma$, then
\begin{equation} \label{e:gmap2}
\sup_{x' \in \cB_t(x)} \, | x' - T_x \Sigma|
\leq t^2 / s \, .
\end{equation}
\begin{Lem} \label{l:dnl}
There exist $C_0 , \rho_0 > 0$ so:
If $\rho_1 \leq \min \{\rho_0,\rho_2\}$,
$\Sigma_1 , \Sigma_2 \subset \RR^3$ are oriented
minimal surfaces, $|A|^2\leq 4$ on each $\Sigma_i$,
$x \in \Sigma_1 \setminus \cT_{4\rho_2}(\partial \Sigma_1)$, $y
\in B_{\rho_1}(x) \cap \Sigma_2 \setminus \cT_{4\rho_2}(\partial
\Sigma_2)$, and $\cB_{2 \rho_1}(x) \cap \cB_{2 \rho_1}(y) =
\emptyset$, then $\cB_{\rho_2}(y)$ is the graph $\{ z + u(z) \,
\nn (z) \}$ over a domain containing $\cB_{\rho_2/2}(x)$ with $u
\ne 0$ and $|\nabla u| + 4 \, |u| \leq C_0 \, \rho_1$.
\end{Lem}
\begin{proof}
Since $\rho_1 \leq \rho_2$,
\eqr{e:gmap1} implies that
$\cB_{ 2\rho_2}(x) \cap \cB_{ 2\rho_2}(y) = \emptyset$.
If $t\leq 9 \rho_2 / 5$, then $|A|^2\leq 4$ implies that the
components $\Sigma_{x,t} , \Sigma_{y,t}$ of $B_{t}(x) \cap
\Sigma_1 , B_{t}(y) \cap \Sigma_2$, respectively, with $x\in
\Sigma_{x,t} , y\in \Sigma_{y,t}$, are graphs with gradient $\leq
t/(2\rho_2)$ over $T_x \Sigma_1 , T_y \Sigma_2$ and have
$\Sigma_{x,t} \subset \cB_{ 2\rho_2}(x) , \Sigma_{y,t} \subset
\cB_{ 2\rho_2}(y)$. The last conclusion implies that
$\Sigma_{x,t} \cap \Sigma_{y,t} = \emptyset$.
It now follows that $\Sigma_{x,t} , \Sigma_{y,t}$ are
graphs over the same plane. Namely,
if we set
$\theta = \dist_{\SS^2} (\nn(x) , \{ \nn(y) , - \nn(y) \} )$, then
\eqr{e:gmap2}, $|x-y| < \rho_1$,
and $\Sigma_{x,t} \cap \Sigma_{y,t} = \emptyset$ imply that
\begin{equation} \label{e:nnclose1}
\rho_1 - (t/2 -\rho_1) \, \sin \theta + t^2 / (2 \rho_2) >
- t^2 / (2\rho_2 )\, .
\end{equation}
Hence, $\sin \theta < \rho_1 / (t/2 -\rho_1) + t^2 /
[(t/2 -\rho_1)\rho_2]$. For $\rho_0 / \rho_2$ small, $\cB_{\rho_2}(y)$ is a
graph with bounded gradient over $T_x \Sigma_1$. The lemma now
follows easily using the Harnack inequality.
\end{proof}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow7.pstex_t}
\caption{Corollary \ref{c:delst}: Two sufficiently close disjoint
minimal surfaces with bounded curvatures must be nearly stable.}
\label{f:7}
\end{minipage}\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow8.pstex_t}
\caption{The set $VB$ in \eqr{e:bplus}. Here
$x \in VB$ and $y \in \Sigma\setminus VB$.}
\label{f:8}
\end{minipage}
\end{figure}
Combining Lemmas \ref{l:fiscm} and \ref{l:dnl} gives:
\begin{Cor} \label{c:delst}
See fig. \ref{f:7}. Given $C_0 , \, \delta > 0$,
there exists
$\epsilon (C_0 , \delta) > 0$ so that
if $p_i \in \Sigma_i \subset \RR^3$ ($i=1,2$) are embedded minimal surfaces,
$\Sigma_1 \cap \Sigma_2 = \emptyset$,
$\cB_{2 R}(p_i) \cap \partial \Sigma_i = \emptyset$,
$ |p_1 - p_2| < \epsilon \, R$, and
\begin{equation} \label{e:curvhinc}
\sup_{\cB_{2 R}(p_i)} |A|^2 \leq
C_0 \, R^{-2} \, ,
\end{equation}
then $\cB_{R} (\tilde{p}_i) \subset \tilde{\Sigma}_i$ is $\delta$-stable where
$\tilde{p}_i$ is the point over $p_i$ in the universal cover $\tilde{\Sigma}_i$
of $\Sigma_i$.
\end{Cor}
The next result gives a decomposition
of an embedded minimal surface with bounded curvature into a portion with
bounded area and a union of disjoint $1/2$-stable domains.
\begin{Lem} \label{l:deltstb}
There exists $C_1$ so:
If $0 \in \Sigma \subset B_{2R} \subset \RR^3$ is an
embedded minimal surface with
$\partial \Sigma \subset \partial B_{2R}$, and $|A|^2 \leq 4$, then
there exist disjoint
$1/2$-stable subdomains $\Omega_j \subset \Sigma$ and
a function $\chi \leq 1$ which vanishes on $B_R \cap \Sigma \setminus
\cup_j \Omega_j$ so that
\begin{align} \label{e:abd}
\Area ( \{ x \in B_R \cap \Sigma \, | \, \chi (x) < 1 \} ) &
\leq C_1 \, R^3 \, , \\
\int_{\cB_R} |\nabla \chi|^2 & \leq C_1 \, R^3 \, . \label{e:chi}
\end{align}
\end{Lem}
\begin{proof}
We can assume that $R > \rho_2$ (otherwise $B_R \cap \Sigma$ is stable).
Let $\delta > 0$ be from Lemma \ref{l:fiscm}
and $C_0 , \rho_0$ be from Lemma \ref{l:dnl}. Set
$\rho_1 = \min \{ \rho_0 / C_0 , \delta / C_0 , \rho_2 / 4 \}$.
Given $x \in B_{2R - \rho_1} \cap \Sigma$,
let $\Sigma_{x}$ be the component of $B_{\rho_1}(x)\cap \Sigma$
with $x \in \Sigma_{x}$ and let $B_x^{+}$ be the component
of $B_{\rho_1}(x) \setminus \Sigma_{x}$
which $\nn(x)$ points into.
See fig. \ref{f:8}. Set
\begin{equation} \label{e:bplus}
\vb = \{ x \in B_R \cap \Sigma \, | \,
B_x^{+} \cap \Sigma \setminus
\cB_{4 \, \rho_1 }(x) = \emptyset \}
\end{equation}
and let $\{ \Omega_j \}$ be the
components of $B_{R} \cap \Sigma \setminus \overline{\vb}$.
Choose a maximal disjoint collection $\{ \cB_{\rho_1}(y_i) \}_{1 \leq i \leq \nu}$
of balls centered in
$\vb$. Hence, the union of the balls
$\{ \cB_{2 \, \rho_1}(y_i) \}_{1 \leq i \leq \nu}$ covers $\vb$.
Further,
the ``half-balls'' $B_{\rho_1/2}(y_i) \cap
B_{y_i}^{+} $ are pairwise disjoint. To see this, suppose that
$|y_i - y_j| < \rho_1$ but $y_j \notin \cB_{2 \rho_1}(y_i)$.
Then, by \eqr{e:gmap1},
$y_j \notin \cB_{8 \rho_1}(y_i)$ so
$\cB_{4 \rho_1}(y_j) \cap B_{y_i}^+ = \emptyset$ and
$\cB_{4 \rho_1}(y_i) \cap B_{y_j}^+ = \emptyset$; the triangle inequality
then implies that
$B_{\rho_1/2}(y_i) \cap
B_{y_i}^{+} \cap B_{\rho_1/2}(y_j) \cap
B_{y_j}^{+} = \emptyset$ as claimed.
By \eqr{e:gmap}--\eqr{e:gmap2}, each
$B_{\rho_1/2}(y_i) \cap
B_{y_i}^{+}$
has volume approximately $\rho_1^{3}$
and is contained in $B_{2R}$ so that
$\nu \leq C \, R^3$.
Define the function $\chi$ on $\Sigma$ by
\begin{equation}
\chi (x)=
\begin{cases}
0 & \hbox{ if } x \in \vb \, , \\
\dist_{\Sigma} (x, \vb ) / \rho_1
& \hbox{ if } x\in \cT_{\rho_1}(\vb)\setminus \vb \, , \\
1 & \hbox{ otherwise }\, .
\end{cases}
\end{equation}
Since $\cT_{\rho_1}(\vb) \subset
\cup_{i=1}^{\nu}
\cB_{3 \, \rho_1}(y_i)$, $|A|^2 \leq 4$, and $\nu \leq C \, R^3$,
we get \eqr{e:abd}. Combining \eqr{e:abd} and $|\nabla \chi| \leq \rho_1^{-1}$
gives \eqr{e:chi} (taking $C_1$ larger).
It remains to show that each $\Omega_j$ is $1/2$-stable.
Fix $j$. By construction, if $x \in \Omega_j$, then
there exists $y_x \in B_x^{+} \cap \Sigma \setminus
\cB_{4 \, \rho_1 }(x)$ minimizing $|x - y_x|$ in
$B_x^{+} \cap \Sigma$.
In particular, by Lemma \ref{l:dnl},
$\cB_{\rho_2}(y_x)$ is the graph
$\{ z + u_{x}(z) \, \nn (z) \}$ over a domain containing $\cB_{\rho_2/2}(x)$
with $u_x > 0$ and
$|\nabla u_x| + 4 \, |u_x| \leq \min \{
\delta , \rho_0 \}$.
Choose a maximal disjoint collection of balls $\cB_{\rho_2 / 6}(x_i)$
with $x_i \in \Omega_j$ and let $u_{x_i} > 0$ be the corresponding
functions defined on $\cB_{\rho_2 / 2}(x_i)$.
Since $\Sigma$ is embedded (and compact) and $|u_{x_i}| < \rho_0$,
Lemma \ref{l:dnl} implies that
$u_{x_i}(x) = \min \{ t > 0 \, | \, x + t \, \nn(x) \in \Sigma \}$ for
$x \in \cB_{\rho_2 / 2}(x_{i})$.
Hence, $u_{x_{i_1}}(x) = u_{x_{i_2}}(x)$ for
$x \in \cB_{\rho_2 / 2}(x_{i_1}) \cap
\cB_{\rho_2 / 2}(x_{i_2})$.
Note that
$\cT_{\rho_2 / 6} (\Omega_j) \subset \cup_i \cB_{\rho_2 / 2}(x_i)$.
We conclude that the $u_{x_i}$'s give a well-defined function
$u_j> 0$ on $\cT_{\rho_2/6}(\Omega_j)$ with $|\nabla u_j| + |u_j| \, |A| \leq
\delta$.
Finally,
Lemma \ref{l:fiscm} implies that each $\Omega_j$ is
$1/2$-stable.
\end{proof}
\section{Total curvature and area of embedded minimal disks} \label{s:s1}
Using the decomposition of Lemma \ref{l:deltstb}, we next
obtain polynomial bounds
for the area and total curvature of intrinsic balls
in embedded minimal
disks with bounded curvature.
\begin{Lem} \label{l:vgr}
There exists $C_1$ so
if $0 \in \Sigma \subset B_{2\, R}$ is an
embedded minimal
disk, $\partial \Sigma \subset \partial B_{2\, R}$,
$|A|^2 \leq 4$, then
\begin{equation} \label{e:tcgr}
\int_{0}^R
\int_{0}^t \int_{\cB_s} |A|^2 \, ds \, dt =
2( \Area (\cB_{R})- \pi \, R^2)
\leq 6 \,\pi \, R^2 + 20 \, C_1 \, R^5 \, .
\end{equation}
\end{Lem}
\begin{proof}
Let $C_1$, $\chi$, and $\cup_j \Omega_j$ be given by
Lemma \ref{l:deltstb}.
Define $\psi$ on $\cB_R$
by $\psi =\psi (\dist_{\Sigma}(0, \cdot) ) =
1- \dist_{\Sigma}(0, \cdot) /R$, so $\chi \psi$
vanishes off of $\cup_j \Omega_j$.
Using $\chi \psi$ in the $1/2$-stability inequality, the
absorbing inequality and \eqr{e:chi}
give
\begin{align} \label{e:stab1}
\int |A|^2 \chi^2 \psi^2 & \leq 2 \, \int |\nabla ( \chi \psi)|^2 =
2 \, \int \, \left( \chi^2 |\nabla \psi|^2 + 2 \chi \, \psi \langle
\nabla \chi , \nabla \psi \rangle + \psi^2 |\nabla \chi|^2 \right)
\notag \\ & \leq 6 \, C_1 \, R^3 + 3 \int \chi^2 |\nabla \psi|^2
\leq 6 \, C_1 \, R^3 + 3\,R^{-2} \Area (\cB_R)\, .
\end{align}
Using \eqr{e:abd} and $|A|^2 \leq 4$, we get
\begin{equation} \label{e:stab3}
\int |A|^2 \psi^2
\leq 4 \, C_1 \, R^3 + \int |A|^2 \chi^2 \psi^2
\leq 10 \, C_1 \, R^3 + 3 \, R^{-2} \, \Area \, (\cB_R) \, .
\end{equation}
The lemma follows
from \eqr{e:stab3} and Corollary \ref{c:lcacc}.
\end{proof}
The polynomial growth allows us to find large intrinsic balls
with a fixed doubling:
\begin{Cor} \label{c:choosing}
There exists $C_2$ so that
given $\beta , R_0 > 1$, we get $R_{2}$
so: If $0 \in \Sigma \subset B_{R_{2}}$ is an embedded
minimal disk, $\partial \Sigma \subset \partial B_{R_2}$,
$|A|^2(0) = 1$, and $|A|^2 \leq 4$,
then there exists $R_0 < R < R_2 / (2 \, \beta)$ with
\begin{equation}
\int_{\cB_{3 \, R}} |A|^2 +
\beta^{-10} \, \int_{\cB_{2 \, \beta \, R}} |A|^2 \leq C_{2} \, R^{-2} \,
\Area \, ( \cB_{R} ) \, . \label{e:fe3}
\end{equation}
\end{Cor}
\begin{proof}
Set $\ca (s) = \Area (\cB_s)$.
Given $m$,
Lemma \ref{l:vgr} gives
\begin{equation} \label{e:fe1}
\left( \min_{1 \leq n \leq m} \frac{\ca ( (4\beta)^{2n} \, R_0)}
{\ca ( (4\beta)^{2n-2} \, R_0)} \right)^m \leq
\frac{\ca ((4\beta)^{2m} \, R_0 )}{\ca (R_0)} \leq C_1' \,
(4\beta)^{10m} \, R_0^3 \, .
\end{equation}
Fix $m$ with $C_1' \, R_0^3 < 2^{m}$ and
set
$R_2 = 2 \, (4\beta)^{2m} \, R_0$. By \eqr{e:fe1}, there exists
$R_1 = (4\beta)^{2n-2} \, R_0$ with $1 \leq n \leq m$ so
\begin{equation} \label{e:fe2}
\frac{\ca ( (4\beta)^{2} \, R_1)}
{\ca ( R_1)}
\leq 2 \, (4 \beta)^{10} \, .
\end{equation}
For simplicity, assume that $\beta = 4^q$ for $q \in \ZZ^+$.
As in \eqr{e:fe1}, \eqr{e:fe2}, we get $0 \leq j \leq q$ with
\begin{equation} \label{e:fe2a}
\frac{\ca ( 4^{j+1} \, R_1)}
{\ca ( 4^{j} \, R_1)}
\leq
\left[ \frac{\ca ( 4 \beta \, R_1)}
{\ca ( R_1)} \right]^{1/(q+1)}
\leq
2^{1/(q+1)} \, 4^{10} \, .
\end{equation}
Set $R = 4^{j} \, R_1$.
Combining \eqr{e:fe2}, \eqr{e:fe2a}, and
Corollary \ref{c:lcacc} gives
\eqr{e:fe3}.
\end{proof}
\section{The local structure near the axis} \label{s:s2}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow9.pstex_t}
\caption{The intrinsic sector over a curve $\gamma$
defined in \eqr{e:defsip}.}
\label{f:9}
\end{minipage}
\end{figure}
Given $\gamma \subset \partial \cB_{r}$, define the intrinsic
sector, see fig. \ref{f:9},
\begin{equation} \label{e:defsip}
S_{R}(\gamma) = \{ \exp_0 (v) \, | \, r \leq |v| \leq r + R {\text{ and }}
\exp_0 (r \, v / |v|) \in \gamma \} \, .
\end{equation}
The key for proving Theorem \ref{t:blowupwindinga} is to
find $n$ large intrinsic sectors with a scale-invariant
curvature bound.
To do this,
we first
use Corollary \ref{c:scsi} to bound
${\text{Length}}(\partial \cB_R)/R$ from below for $R \geq R_0$.
Corollary \ref{c:choosing} gives $R_3 >R_0$
and $n$ long disjoint curves $\tilde{\gamma}_i \subset \partial \cB_{R_3}$
so the sectors over $\tilde{\gamma}_i$ have bounded
$\int |A|^2$. Corollary \ref{c:scsi} gives the
curvature bound.
Once we have these sectors, for $n$ large,
two must be
close and hence, by Lemmas \ref{l:fiscm} and \ref{l:dnl},
$1/2$-stable. The $N$-valued graph is then given
by corollary II.1.34 of \cite{CM3}:
\begin{Cor} \label{c:uselater}
\cite{CM3}.
Given $\omega > 8, 1 > \epsilon > 0, C_0$, and $N$, there exist $m_1 ,
\Omega_1$ so: If $0 \in \Sigma$ is an embedded minimal disk,
$\gamma \subset \partial \cB_{r_1}$ is a
curve, $\int_{\gamma} k_g < C_0 \, m_1$, ${\text{Length}}(\gamma) =
m_1 \, r_1$, and $\cT_{r_1 / 8} (S_{\Omega_1^2 \, \omega \, r_1 } (\gamma))$
is $1/2$-stable, then
(after rotating $\RR^3$) $S_{ \Omega_1^2 \, \omega \, r_1}
(\gamma)$ contains an $N$-valued graph $\Sigma_N$ over $D_{\omega \,
\Omega_1 \, r_1} \setminus D_{\Omega_1 \, r_1}$ with gradient
$\leq \epsilon$, $|A| \leq \epsilon / r$, and
$\dist_{S_{ \Omega_1^2 \, \omega \, r_1}
(\gamma)} ( \gamma , \Sigma_N ) < 4 \, \Omega_1 \, r_1$.
\end{Cor}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow10.pstex_t}
\caption{Equation \eqr{e:defsiptilde} divides a punctured
ball into
sectors $\tilde S_i$.}
\label{f:10}
\end{minipage}
\end{figure}
\begin{proof}
(of Theorem \ref{t:blowupwindinga}).
Rescale by $C/r_0$ so that $|A|^2(0)=1$ and
$|A|^2 \leq 4$ on $B_C$.
Let $C_2$ be from Corollary \ref{c:choosing} and then let
$m_1 , \Omega_1 > \pi$ be
given by Corollary \ref{c:uselater} with $C_0$ there
$=2 \, C_2 + 2$. Fix $a_0$ large (to be
chosen). By Corollaries \ref{c:lcacc}, \ref{c:scsi},
there exists $R_0 = R_0(a_0)$ so
that for any $R_3 \geq R_0$
\begin{equation} \label{e:a5}
a_0 \, R_3 \leq R_3/4 \, \int_{\cB_{R_3/2}} |A|^2 \leq
{\text{Length}} (\partial \cB_{R_3})
\, .
\end{equation}
Set $\beta = 2 \Omega_1^2 \, \omega$.
Corollaries \ref{c:lcacc}, \ref{c:choosing} give $R_2 =
R_2 (R_0 , \beta)$ so if $C \geq R_2$, then
there is $R_0 < {R_3} < R_2 / (2 \, \beta)$ with
\begin{equation}
\int_{\cB_{3 \, {R_3}}} |A|^2 +
\beta^{-10} \, \int_{\cB_{2 \, \beta \, {R_3}}} |A|^2 \leq
C_2 \, {R_3}^{-2} \, \Area (\cB_{R_3} ) \leq
C_2 \, {\text{Length}} (\partial \cB_{R_3})
/ (2{R_3})
\, . \label{e:a4}
\end{equation}
Using \eqr{e:a5}, choose $n$ so that
\begin{equation} \label{e:fixn}
a_0 \, {R_3} \leq 4 \, m_1 \, n \, {R_3} < {\text{Length}} (\partial \cB_{R_3})
\leq 8 \, m_1 \, n \, {R_3} \, ,
\end{equation}
and fix $2n$ disjoint curves $\tilde \gamma_i \subset
\partial \cB_{{R_3}}$ with length $2 \, m_1 \, {R_3}$.
Define the intrinsic sectors (see fig. \ref{f:10})
\begin{equation} \label{e:defsiptilde}
\tilde S_i = \{ \exp_0 (v) \, | \, 0<|v| \leq 2 \, \beta \, {R_3} {\text{ and }}
\exp_0 ({R_3} \, v / |v|) \in \tilde \gamma_i \} \, .
\end{equation}
Since the $\tilde S_i$'s are disjoint, combining \eqr{e:a4} and \eqr{e:fixn} gives
\begin{equation} \label{e:a6}
\sum_{i=1}^{2n} \left( \int_{\cB_{3\,{R_3}} \cap \tilde S_i} |A|^2 + \beta^{-10} \,
\int_{\tilde S_i} |A|^2 \right) \leq 4 \, C_2 \, m_1 \, n \, .
\end{equation}
Hence, after reordering the $\tilde \gamma_i$, we can assume that for
$1 \leq i \leq n$
\begin{equation} \label{e:a7a}
\int_{\cB_{3\,{R_3}} \cap \tilde S_i} |A|^2 + \beta^{-10} \,
\int_{\tilde S_i} |A|^2 \leq 4 \, C_2 \, m_1 \, .
\end{equation}
Using the Riccati comparison theorem, there are curves $\gamma_i \subset
\partial \cB_{2{R_3}} \cap \tilde S_i$ with length $2 \, m_1 \, {R_3}$ so that if
$y \in S_i=S_{\beta R_3}(\gamma_i) \subset \tilde S_i$,
then
$\cB_{\dist_{\Sigma}(0,y)/2}(y) \subset \tilde S_i$.
Hence, by Corollary
\ref{c:scsi} and
\eqr{e:a7a}, we get for $y \in S_i$ and $i\leq n$
\begin{equation} \label{e:a9}
\sup_{\cB_{\dist_{\Sigma}(0,y)/4}(y)} |A|^2 \leq
C_3 \, \dist_{\Sigma}^{-2} (0,y) \, ,
\end{equation}
where $C_3 = C_3 (\beta , m_1 )$. For $i \leq n$,
\eqr{e:a7a} and the Gauss-Bonnet theorem yield
\begin{equation} \label{e:a7b}
\int_{\gamma_i} k_g \leq 2 \, \pi + 2 \, C_2 \, m_1 < (2\, C_2 + 2) \, m_1 \, .
\end{equation}
By \eqr{e:a9} and a Riccati comparison argument,
there exists $C_4 = C_4 (C_3)$ so that for $i\leq n$
\begin{equation} \label{e:a7bb}
1 / (2{R_3}) \leq \min_{\gamma_i} k_g \leq \max_{\gamma_i} k_g \leq C_4 / {R_3} \, .
\end{equation}
Applying Lemma \ref{l:dnl} repeatedly (and using \eqr{e:a9}),
it is easy to see that there exists $\alpha > 0$
so that if $i_1 < i_2 \leq n$ and
\begin{equation} \label{e:getcloseA}
\dist_{C^{1}([0,2m_1],\RR^3)}
(\gamma_{i_1}/{R_3} , \gamma_{i_2}/{R_3} )
\leq \alpha \, ,
\end{equation}
then
$\{ z + u(z) \, \nn(z) \, | \, z \in \cT_{{R_3}/4} (S_{i_1}) \} \subset
\cup_{y \in S_{i_2}} \cB_{\dist_{\Sigma}(0,y)/4}(y)$
for a function $u \ne 0$ with
\begin{equation} \label{e:graphca}
|\nabla u| + |A| \, |u| \leq C_0' \, \dist_{C^{1}([0,2m_1],\RR^3)}
(\gamma_{i_1}/{R_3} , \gamma_{i_2}/{R_3} ) \, \, .
\end{equation}
Here $\dist_{C^{1}([0,2m_1],\RR^3)} (\gamma_{i_1}/{R_3} , \gamma_{i_2}/{R_3} )$
is the scale-invariant $C^1$-distance between the
curves.
Next, we use compactness to show that
\eqr{e:getcloseA} must hold for $n$ large.
Namely, since each
$\gamma_i / {R_3} \subset B_{2}$ is parametrized by arclength on
$[0,2 m_1]$ and has a uniform
$C^{1,1}$ bound
by \eqr{e:a7bb}, this set of maps is compact by
the Arzela-Ascoli theorem. Hence, there exists $n_0$
so that
if $n \geq n_0$, then \eqr{e:getcloseA} holds
for some $i_1 < i_2 \leq n$. In particular,
\eqr{e:graphca} and
Lemma \ref{l:fiscm} imply that $S_{i_1}$ is
$1/2$-stable for $n$ large (now choose $a_0, R_0 , R_2$).
After rotating $\RR^3$, Corollary \ref{c:uselater} gives
the $N$-valued graph
$\Sigma_g \subset S_{i_1}$ over $D_{2 \omega \, \Omega_1 \, {R_3}} \setminus
D_{ 2 \Omega_1 \, {R_3}}$ with gradient $\leq \epsilon$, $|A| \leq \epsilon / r$,
and $\dist_{\Sigma} (0 , \Sigma_g) \leq 8 \, \Omega_1 \, {R_3}$.
Rescaling by $r_0/C$,
the theorem follows with $\bar{R} = 2 \Omega_1 \, {R_3} r_0 / C$.
\end{proof}
\begin{Cor} \label{c:blowupwinding}
Given $N > 1$ and $\tau > 0$, there exist $\Omega > 1$ and
$C > 0$ so: Let
$0\in \Sigma^2\subset B_{R}$ be an embedded minimal
disk, $\partial \Sigma\subset \partial B_{R}$. If
$R>r_0>0$ with
$\sup_{B_{r_0} \cap \Sigma}|A|^2\leq 4\,C^2\,r_0^{-2}$
and $|A|^2(0)=C^2\,r_0^{-2}$, then there exists
(after a rotation)
an $N$-valued graph $\Sigma_g \subset \Sigma$ over $D_{R/\Omega}
\setminus D_{r_0}$ with gradient $\leq \tau$,
$\dist_{\Sigma}(0,\Sigma_g) \leq 4 \, r_0$, and
$\Sigma_g \subset \{ x_3^2 \leq \tau^2 \, (x_1^2 + x_2^2) \}$.
\end{Cor}
\begin{proof}
This follows immediately by combining
Theorems \ref{t:blowupwindinga} and \ref{t:spin4ever2}.
\end{proof}
\begin{Pro} \label{p:blowupgap}
See fig. \ref{f:11}.
There exists $\beta > 0$ so:
If $\Sigma_g \subset \Sigma$ is as in
Theorem \ref{t:blowupwindinga},
then the separation between the sheets of
$\Sigma_g$ over $\partial D_{\bar{R}}$ is at least $\beta \, \bar{R}$.
\end{Pro}
\begin{proof}
This follows easily from the curvature bound,
Lemma \ref{l:dnl},
the Harnack inequality, and
estimates for $1/2$-stable surfaces.
\end{proof}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{blow11.pstex_t}
\caption{Proposition \ref{p:blowupgap}:
The initial separation is inversely proportional
to the maximum of $|A|$.}
\label{f:11}
\end{minipage}
\end{figure}
\section{The blow up}
Combining Corollary \ref{c:blowupwinding} and a blowup argument
will give Theorem \ref{t:blowupwinding0}.
\begin{Lem} \label{l:bup}
If $0 \in \Sigma \subset B_{r_0}$, $\partial \Sigma \subset \partial
B_{r_0}$, and
$\sup_{B_{r_0/2} \cap \Sigma}|A|^2\geq 16\,C^2\,r_0^{-2}$, then there exist
$y \in \Sigma$ and $r_1 < r_0 - |y|$ with $|A|^2 (y)
= C^2 \,r_1^{-2}$ and
$\sup_{B_{r_1}(y) \cap \Sigma}|A|^2\leq 4 \, C^2 \,r_1^{-2}$.
\end{Lem}
\begin{proof}
Set $F(x)=(r_0-|x|)^2\, |A|^2(x)$.
Since $F \geq 0$, $F|\partial B_{r_0}\cap \Sigma=0$, and
$\Sigma$ is compact,
$F$ achieves its maximum at $y \in \partial
B_{r_0-\sigma}\cap \Sigma$ with $0< \sigma \leq r_0$.
Since
$\sup_{B_{r_0/2} \cap \Sigma}|A|^2\geq 16 \,C^2\,r_0^{-2}$,
\begin{equation} \label{e:o2.0a}
F(y)=\sup_{B_{r_0}\cap \Sigma} F \geq
4 \, C^2 \, .
\end{equation}
To get the first claim, define $r_1>0$ by
\begin{equation} \label{e:o2.1a}
r_1^2\,|A(y)|^2= C^2\, .
\end{equation}
Since $F(y)=\sigma^2 \, |A(y)|^2 \geq 4 \, C^2$, we have
$2 \, r_1 \leq \sigma$. Finally, by \eqr{e:o2.0a},
\begin{equation} \label{e:o2.3a}
\sup_{B_{r_1}(y)\cap \Sigma}
\left( \frac{\sigma}{2} \right) ^2\, |A|^2
\leq \sup_{B_{\frac{\sigma}{2}}(y)\cap \Sigma}
\left( \frac{\sigma}{2} \right) ^2\, |A|^2
\leq \sup_{B_{\frac{\sigma}{2}}(y)\cap \Sigma} F
\leq \sigma^2\, |A(y)|^2\, .
\end{equation}
In particular, $\sup_{B_{r_1}(y)\cap \Sigma} |A|^2 \leq 4\, |A(y)|^2
= 4 \, C^2 \, r_1^{-2}$.
\end{proof}
\begin{proof}
(of Theorem \ref{t:blowupwinding0}).
This follows immediately from Corollary \ref{c:blowupwinding} and
Lemma \ref{l:bup}.
\end{proof}
If
$y_i\in \Sigma_i$ is a sequence of minimal disks with $y_i \to y$ and $|A|(y_i)$
blowing up, then we can take $r_0 \to 0$ in
Theorem \ref{t:blowupwinding0}. Combining this with the sublinear
growth of the separation between the sheets from
\cite{CM3}, we will get in Theorem \ref{t:stablim}
a smooth limit through $y$.
Below $\Sigma^{0,2\pi}_{r,s} \subset \Sigma$ is the ``middle sheet'' over
$\{ (\rho, \theta) \, | \, 0 \leq \theta
\leq 2 \pi , \, r \leq \rho \leq s \}$.
The sublinear growth is given by proposition II.2.12 of \cite{CM3}:
\begin{Pro} \label{l:grades1}
\cite{CM3}. See fig. \ref{f:12}.
Given $\alpha > 0$, there exist $\delta_p > 0 , N_g > 5 $ so:
If $\Sigma$ is an $N_g$-valued minimal graph over
$D_{\e^{N_g} \, R} \setminus D_{\e^{-N_g} \, R}$ with gradient
$\leq 1$ and $0 < u < \delta_p \, R$ is a solution of the minimal graph
equation over $\Sigma$ with $|\nabla u| \leq 1$,
then for $R \leq s \leq 2 \, R$
\begin{align} \label{e:wantit}
\sup_{ \sztp_{R,2R} } |A_{\Sigma}| &+
\sup_{ \sztp_{R,2R} } |\nabla u| / u \leq
\alpha / (4\,R) \, , \\ \label{e:slg}
\sup_{ \sztp_{R,s} } u &\leq (s/R)^{\alpha} \, \sup_{
\sztp_{R,R} } u \, .
\end{align}
\end{Pro}
\begin{Thm} \label{t:stablim}
See fig. \ref{f:13}.
There exists $\Omega > 1$ so:
Let $y_i\in \Sigma_i \subset B_{R}$ with $\partial \Sigma_i \subset
\partial B_{R}$ be embedded minimal disks where $y_i \to 0$. If $|A_{\Sigma_i}|(y_i)
\to \infty$, then, after a rotation and passing to a subsequence,
there exist
$\epsilon_i \to 0$, $\delta_i \to 0$, and $2$-valued minimal
graphs $\Sigma_{d,i} \subset \{ x_3^2 \leq x_1^2 + x_2^2 \} \cap
\Sigma_i$ over $D_{R/\Omega} \setminus D_{\epsilon_i}$
with gradient $\leq 1$, and separation at most $\delta_i \, s$ over
$\partial D_s$. Finally, the $\Sigma_{d,i}$ converge (with
multiplicity two) to a smooth minimal graph through $0$.
\end{Thm}
\begin{proof}
The first part follows immediately from
taking $r_0 \to 0$ in
Theorem \ref{t:blowupwinding0}. When $s$ is small, the bound
on the separation follows from the gradient bound.
The separation then grows less than linearly
by Proposition \ref{l:grades1}, giving the bound for large
$s$ and showing that the $\Sigma_{d,i}$ close up in the limit.
In particular, the $\Sigma_{d,i}$ converge to a minimal graph
$\Sigma'$ over $D_{R/\Omega} \setminus \{ 0 \}$ with gradient $\leq 1$
and $\Sigma' \subset \{ x_3^2 \leq x_1^2 + x_2^2 \}$. By
a standard removable singularity theorem,
$\Sigma' \cup \{ 0 \}$ is a smooth minimal graph over
$D_{R/\Omega}$.
\end{proof}
\begin{figure}[htbp]
\setlength{\captionindent}{20pt}
\begin{minipage}[t]{0.5\textwidth}
\centering\input{shn7.pstex_t}
\caption{The sublinear growth of the separation $u$ of
the multi-valued graph $\Sigma$:
$u(2R) \leq 2^{\alpha} \, u(R) $ with $\alpha < 1$.}
\label{f:12}
\end{minipage}\begin{minipage}[t]{0.5\textwidth}
\centerline{\input{blow13.pstex_t}}
\caption{Theorem \ref{t:stablim}: As $|A_{\Sigma_i}|(y_i) \to \infty$ and
$y_i \to y$, $2$-valued
graphs converge to a graph through $y$.
(The upper sheets of the $2$-valued graphs collapse to the lower sheets.)}
\label{f:13}
\end{minipage}
\end{figure}
\appendix | {"config": "arxiv", "file": "math0210086/blowup106.tex"} |
\section{Complex Integration by Substitution}
Tags: Contour Integration
\begin{theorem}
Let $\left[{a \,.\,.\, b}\right]$ be a [[Definition:Closed Real Interval|closed real interval]].
Let $\phi: \left[{a \,.\,.\, b}\right] \to \R$ be a [[Definition:Real Function|real function]] which has a [[Definition:Derivative|derivative]] on $\left[{a \,.\,.\, b}\right]$.
Let $f: A \to \C$ be a [[Definition:Continuous Complex Function|continuous complex function]], where $A$ is a [[Definition:Subset|subset]] of the [[Definition:Image of Mapping|image]] of $\phi$.
If $\phi \left({a}\right) \le \phi \left({b}\right)$, then:
:$\displaystyle \int_{\phi \left({a}\right)}^{\phi \left({b}\right)} f \left({t}\right) \, \mathrm d t = \int_a^b f \left({\phi \left({u}\right)}\right) \phi' \left({u}\right) \, \mathrm d u$
If $\phi \left({a}\right) > \phi \left({b}\right)$, then:
:$\displaystyle \int_{\phi \left({b}\right)}^{\phi \left({a}\right)} f \left({t}\right) \, \mathrm d t = -\int_a^b f \left({\phi \left({u}\right)}\right) \phi' \left({u}\right) \, \mathrm d u$
\end{theorem}
\begin{proof}
Let $\operatorname{Re}$ and $\operatorname{Im}$ denote [[Definition:Real Part|real parts]] and [[Definition:Imaginary Part|imaginary parts]] respectively.
Let $\phi \left({a}\right) \le \phi \left({b}\right)$.
Then:
{{begin-eqn}}
{{eqn | l = \int_{\phi \left({a}\right)}^{\phi \left({b}\right)} f \left({t}\right) \, \mathrm d t
| r = \int_{\phi \left({a}\right)}^{\phi \left({b}\right)} \operatorname{Re} \left({f \left({t}\right) }\right) \, \mathrm d t + i \int_{\phi \left({a}\right)}^{\phi \left({b}\right)} \operatorname{Im} \left({f \left({t}\right) }\right) \, \mathrm d t
| c = Definition of [[Definition:Complex Riemann Integral|Complex Riemann Integral]]
}}
{{eqn | r = \int_a^b \operatorname{Re} \left({ f \left({ \phi \left({u}\right) }\right) }\right) \phi' \left({u}\right) \, \mathrm d u + i \int_a^b \operatorname{Im} \left({ f \left({ \phi \left({u}\right) }\right) }\right) \phi' \left({u}\right) \, \mathrm d u
| c = [[Integration by Substitution]]
}}
{{eqn | r = \int_a^b \operatorname{Re} \left({ f \left({ \phi \left({u}\right) }\right) \phi' \left({u}\right) }\right) \, \mathrm d u + i \int_a^b \operatorname{Im} \left({ f \left({ \phi \left({u}\right) }\right) \phi' \left({u}\right) }\right) \, \mathrm d u
| c = [[Multiplication of Real and Imaginary Parts]]
}}
{{eqn | r = \int_a^b f \left({\phi \left({u}\right)}\right) \phi' \left({u}\right) \, \mathrm d u
}}
{{end-eqn}}
Let $\phi \left({a}\right) > \phi \left({b}\right)$.
Then:
{{begin-eqn}}
{{eqn | l = \int_{\phi \left({b}\right)}^{\phi \left({a}\right)} f \left({t}\right) \, \mathrm d t
| r = \int_{\phi \left({b}\right)}^{\phi \left({a}\right)} \operatorname{Re} \left({ f \left({t}\right) }\right) \, \mathrm d t + i \int_{\phi \left({b}\right)}^{\phi \left({a}\right)} \operatorname{Im} \left({ f \left({t}\right) }\right) \, \mathrm d t
| c = Definition of [[Definition:Complex Riemann Integral|Complex Riemann Integral]]
}}
{{eqn | r = \int_b^a \operatorname{Re} \left({f \left({ \phi \left({u}\right) }\right) }\right) \phi' \left({u}\right) \, \mathrm d u + i \int_b^a \operatorname{Im} \left({f \left({ \phi \left({u}\right) }\right) }\right) \phi' \left({u}\right) \, \mathrm d u
| c = [[Integration by Substitution]]
}}
{{eqn | r = -\int_a^b \operatorname{Re} \left({f \left({ \phi \left({u}\right) }\right) }\right) \phi' \left({u}\right) \, \mathrm d u - i \int_a^b \operatorname{Im} \left({f \left({ \phi \left({u}\right) }\right) }\right) \phi' \left({u}\right) \, \mathrm d u
| c = Definition of [[Definition:Definite Integral|Definite Integral]]
}}
{{eqn | r = -\int_a^b f \left({\phi \left({u}\right)}\right) \phi' \left({u}\right) \, \mathrm d u
}}
{{end-eqn}}
{{qed}}
\end{proof}
| {"config": "wiki", "file": "thm_6245.txt"} |
TITLE: Prove roots of quadratic are rational
QUESTION [1 upvotes]: Given the equation:
$$(3m-5)x^2-3m^2x+5m^2=0$$
Prove the roots are rational given $m$ is rational.
I have tried finding the discriminant of the quadratic, however this has not been fruitful and ends up being very ugly. I have tried setting the discriminant equal to $a^2$, however I don't know if this is the right thing to do.
REPLY [0 votes]: You could always spot that $x=m$ is a solution.
To find this you can use the rational root theorem, and note that the leading coefficient $3m-5$ is irreducible: if the roots are rational for every $m$, there should be a factorisation $\left(\left(3m-5\right)x+a\right)\left(x+b\right)$ with $ab=5m^2$, so there is not far to look. Indeed
$$(3m-5)x^2-3m^2x+5m^2=(x-m)\bigl((3m-5)x-5m\bigr),$$
so (when $3m\neq 5$; otherwise the equation is linear) the roots are $x=m$ and $x=\frac{5m}{3m-5}$, both rational for rational $m$.
Otherwise just use the quadratic formula to find the roots. | {"set_name": "stack_exchange", "score": 1, "question_id": 2438708} |
TITLE: Extract a variable from the following summation
QUESTION [1 upvotes]: Is it possible to extract the variable a from the following summation and hence write the summation as a times 'something', or similar?
$$n/a = \sum_{i=1}^n 1/(a+x_i)$$
REPLY [1 votes]: Let $g(a,x_1,\dots,x_n)=\sum_{i=1}^n \frac{1}{(a+x_i)}$
If you could write $g(a,x_1,\dots,x_n) = a f(x_1,\dots,x_n)$ for some function $f$, then the second partial derivative of $g$ with respect to $a$ would be zero. However, it isn't zero, so no such factorisation is possible: there is no way to factor out the $a$. | {"set_name": "stack_exchange", "score": 1, "question_id": 2233617}
TITLE: Maximum degree of a polynomial $\sum\limits_{i=0}^n a_i x^{n-i}$ such that $a_i = \pm 1$ for every $i$, with only real zeroes
QUESTION [1 upvotes]: What is the maximum degree of a polynomial of the form $\sum\limits_{i=0}^n a_i x^{n-i}$ with $a_i = \pm 1$ for $0 \leq i \leq n$, such that all the zeros are real?
How would I manipulate that scary sigma? I'm stuck. Solutions are greatly appreciated.
REPLY [2 votes]: The maximum degree is $3$.
Let $\mathcal{E} = \{ +1, -1 \}$. Let's say for some $n \ge 3$, there is a polynomial $p(x) = \sum_{k=0}^n a_k x^k \in \mathcal{E}[x]$ all of whose roots are real. Let $\lambda_1, \ldots,\lambda_n$ be the roots. By Vieta's formulas, we have
$$\sum_{k=1}^n \lambda_k = -\frac{a_{n-1}}{a_n} \in \mathcal{E}
\quad\text{ and }\quad
\sum_{1 \le j < k \le n} \lambda_j \lambda_k = \frac{a_{n-2}}{a_n} \in \mathcal{E}$$
This implies
$$\sum_{k=1}^n\lambda_k^2 = \left(\sum_{k=1}^n\lambda_k\right)^2 - 2\sum_{1\le j < k \le n}\lambda_j\lambda_k
= \left(\frac{a_{n-1}}{a_n}\right)^2 - 2\frac{a_{n-2}}{a_n} = 3 \text{ or } - 1
$$
Since $\lambda_k$ are all real, we can rule out the second case. As a result,
$\sum\limits_{k=1}^n \lambda_k^2 = 3$.
Applying a similar argument to $x^n p\left(\frac1x\right)$, whose roots are the $\lambda_k^{-1}$ (no root vanishes, since $a_0 = \pm 1$), we find
$\sum\limits_{k=1}^n \lambda_k^{-2} = 3$.
Combining these two results and noticing that $\lambda^2 + \lambda^{-2} \ge 2$ for every nonzero real number $\lambda$, we have
$$6 = 3 + 3 = \sum_{k=1}^n \lambda_k^2 + \lambda_k^{-2} \ge \sum_{k=1}^n 2 = 2n\quad\implies\quad 3 \ge n$$
This implies the maximum degree is at most $3$.
As pointed by Jack D'Aurizio in comment more than a year ago, there is a polynomial with degree $3$ in $\mathcal{E}[x]$
$$1 - x - x^2 + x^3 = (x+1)(x-1)^2$$
whose roots are all real. This means the maximum degree is indeed $3$. | {"set_name": "stack_exchange", "score": 1, "question_id": 1967570} |
TITLE: prove : if $G$ is a finite group of order $n$ and $p$ is the smallest prime dividing $|G|$ then any subgroup of index $p$ is normal
QUESTION [10 upvotes]: Prove:
If $G$ is a finite group of order $n$ and $p$ is the smallest prime dividing $ |G| $ then any subgroup of index $p$ is normal
where $ |G| $ is the order of $G$
This is a result in Abstract Algebra by Dummit and Foote at page 120 .
The proof is produced but there is some points which is not obivous for me !
First, on page 121 in the proof, it says that all divisors of $(p-1)!$ are less than $p$.
Why is this true? Can anyone explain?
Second, why does "every prime divisor of $k$ is greater than or equal to $p$" force $k=1$?
Why is $k = 1$ under this condition?
Third, if $ k=1 $
Then, the order of $ K$ = the order of $H$
Why does this mean that $ K=H $ in this case??
Can you help in explaining these three things?
REPLY [4 votes]: We can use the Cayley Theorem to show that:
Theorem: Let the group $G$ have a subgroup $H$ of index $n$, then there is a normal subgroup $K$ of $G$ such that $$K\subseteq H,~~ m=[G:K]<\infty,~~m\mid n!$$
Now let $H\leq G$ with $[G:H]=p$, where $p$ is the smallest prime dividing $|G|$. According to the above theorem there is a normal subgroup $K$ of $G$ contained in $H$ such that $[G:K]\big|p!$. But $[G:K]\big||G|$ also, so every prime factor of $[G:K]$ is at least $p$; since $[G:K]$ divides $p!=p\cdot(p-1)!$ and every prime factor of $(p-1)!$ is less than $p$, this forces $[G:K]\in\{1,p\}$. As $K\subseteq H$ we have $[G:K]\geq[G:H]=p$, hence $$[G:K]=p$$ and this means that $K=H$. | {"set_name": "stack_exchange", "score": 10, "question_id": 289305}
TITLE: Taking a theorem as a definition and proving the original definition as a theorem
QUESTION [46 upvotes]: Gian-Carlo Rota's famous 1991 essay, "The pernicious influence of mathematics upon philosophy" contains the following passage:
Perform the following thought experiment. Suppose that you are given two formal presentations of the same mathematical theory. The definitions of the first presentation are the theorems of the second, and vice versa. This situation frequently occurs in mathematics. Which of the two presentations makes the theory 'true'? Neither, evidently: what we have is two presentations of the same theory.
Rota's claim that "this situation frequently occurs in mathematics" sounds reasonable to me, because I feel that I have frequently encountered authors who, after proving a certain theorem, say something like, "This theorem can be taken to be the definition of X," with the implicit suggestion that the original definition of X would then become a theorem. However, when I tried to come up with explicit examples, I had a lot of trouble. My question is, does this situation described by Rota really arise frequently in the literature?
There is a close connection between this question and another MO question about cryptomorphisms. But I don't think the questions are exactly the same. For instance, different axiomatizations of matroids comprise standard examples of cryptomorphisms. It is true that one can take (say) the circuit axiomatization of a matroid and prove basis exchange as a theorem, or one can take basis exchange as an axiom and prove the circuit "axioms" as theorems. But these equivalences are all pretty easy to prove; in Oxley's book Matroid Theory, they all appear in the introductory chapter. As far as I know, none of the theorems in later chapters have the property that they could be taken as the starting point for matroid theory, with (say) basis exchange becoming a deep theorem. What I'm wondering is whether there are cases in which a significant piece of theory really is developed in two different ways in the literature, with a major theorem of Presentation A being taken as a starting point for Presentation B, and the definitions of Presentation A being major theorems of Presentation B.
Let me also mention that I don't think that reverse mathematics is quite what Rota is referring to. Brouwer's fixed-point theorem can be shown to imply the weak Kőnig's lemma over RCA0, but as far as I know, nobody seriously thinks that it makes sense to take Brouwer's fixed-point theorem as an axiom when developing the basics of analysis or topology.
EDIT: In another MO question, someone quoted Bott as referring to "the old French trick of turning a theorem into a definition". I'm not sure if Bott and Rota had exactly the same concept in mind, but it seems related.
REPLY [2 votes]: In set theory, measurable cardinals can be defined in two quite different ways. In the original 1930 definition, an uncountable cardinal $\kappa$ is measurable if its powerset $\mathcal{P}(\kappa)$ has a nonprincipal ultrafilter closed under meets of $<\kappa$-many elements.
In the 1960s, enough tools, techniques and concepts were developed to prove a theorem characterizing measurables in what seems to be an intrinsically 2nd-order way that quantifies over proper classes:
$\kappa$ is measurable iff there is a (nontrivial) elementary embedding $j\colon V\rightarrow M$ of the universe $V$ into a model $M$ such that $\kappa$ is the critical point of $j$ — the least ordinal $\alpha$ such that $j(\alpha) \neq \alpha$.
For expository purposes the original definition is usually preferred. But it's the latter characterization that has been generalized to define yet-larger cardinals, and it's not uncommon to see authors adopting it as their definition of 'measurable' in contexts where stronger notions are also considered. | {"set_name": "stack_exchange", "score": 46, "question_id": 404882} |
\begin{document}
\title[]{Fokker-Planck Equation and Path Integral Representation of Fractional Ornstein-Uhlenbeck Process with Two Indices}
\author{Chai Hok Eab}
\address{
Department of Chemistry
Faculty of Science, Chulalongkorn University
Bangkok 10330, Thailand
}
\ead{Chaihok.E@chula.ac.th}
\author{S.C. Lim}
\address{
Faculty of Engineering, Multimedia University
63100 Cyberjaya, Selangor Darul Ehsan, Malaysia
}
\ead{sclim47@gmail.com}
\begin{abstract}
This paper considers the Fokker-Planck equation and path integral formulation of the fractional Ornstein-Uhlenbeck process parametrized by two indices.
The effective
Fokker-Planck equation of this process is derived from the associated fractional Langevin equation.
Path integral representation of the process is constructed and the basic quantities are evaluated.
\end{abstract}
\pacs{02.50.Ey, 05.10.Gg, 05.40.-a}
\maketitle
\input{introduction}
\input{fracOU2}
\input{pathOU2}
\input{conclude}
\appendix
\input{FPeqPotential}
\input{pathOU2detail}
\input{CaputoRL}
\section*{References}
\bibliographystyle{iopart-num}
\bibliography{biblioPathOU2}
\end{document} | {"config": "arxiv", "file": "1405.0653/main.tex"} |
TITLE: show: $\overline{\overline X} = \overline X$
QUESTION [0 upvotes]: is my proof correct?
Definition:
Let $X\subset\mathbb R$ and let $x'\in\mathbb R$. We say that $x'$ is an adherent point of $X$ iff $\forall\epsilon>0\ \exists x\in X \text{ s.t. } d(x',x)\leq\epsilon$. The closure of $X$ is denoted $\overline X$ and is defined to be the set of all the adherent points of $X$.
show: $\overline{\overline X} = \overline X$
suppose $\exists z \in \overline{\overline X}~~\text{and}~~z \notin \overline X$
let $\epsilon>0$. since $z\in\overline{\overline X}$, $\exists x' \in \overline X~~\text{s.t.}~~|z-x'|\leq \frac{\epsilon}{2}$
but we also know that $\exists x \in X~~\text{s.t.}~~|x'-x|\leq \frac{\epsilon}{2}$
hence, $|z-x|\leq |z-x'|+|x'-x| \leq \epsilon$
hence z is an adherent point of X ($z\in \overline X$). but this is a contradiction with the above condition on $z$
hence $\overline{\overline X}\subseteq\overline X$; since every point of $\overline X$ is trivially an adherent point of $\overline X$, the reverse inclusion holds as well, so $\overline{\overline X} = \overline X$
REPLY [0 votes]: The closure of a set is by definition the intersection of all closed sets containing the set.
But any intersection of closed sets is closed, so the closure is closed, and the closure of a closed set is itself. (This is equivalent to your adherent-point definition: the set of adherent points of $X$ is exactly the smallest closed set containing $X$.) | {"set_name": "stack_exchange", "score": 0, "question_id": 1016682}
TITLE: Does the spiral Theta = L/R have a name?
QUESTION [0 upvotes]: Note: I intentionally left the equation in the title in plain text instead of MathJax, so it is searchable.
Here is a spiral's equation in polar coordinates:
$$\theta=L/r,$$
and in Cartesian coordinates:
$$(x,y) = \left(r\cdot\cos\frac Lr,\, r\cdot\sin\frac Lr\right)$$
for $0 < r < \infty$ and some positive constant $L$.
It appeared here, at Math SE, as an answer to the Spiral equation question.
Does this curve have a name?
REPLY [1 votes]: As @YvesDaoust said in a comment it is called a hyperbolic spiral, or a reciproke spiral as the circle inversion of an Archimedean spiral
Wikipedia has a couple of images, depending on whether you look at one or two arms of the hyperbola underlying $r=\frac a \varphi$ | {"set_name": "stack_exchange", "score": 0, "question_id": 3766344} |
TITLE: Calculus interval problem please explain how to solve it
QUESTION [1 upvotes]: Suppose $f(x)>0$ for all $x$ in $[0,10]$. Express the area under the curve $f(x) $for $0≤x≤10$ as the limit of a sum, using the value of $f(x)$ at right endpoints of your intervals.
Could you explain how to solve it?
REPLY [3 votes]: Your functions $f$ is positive everywhere on $[0,10]$. By definition, the area under the $f$ on the interval $[0,10]$ is
$$ \int\limits_0^{10} f dx = \lim_{ n \to \infty } \sum_{i=1}^n f( \frac{10i}{n}) \frac{10}{n} $$ | {"set_name": "stack_exchange", "score": 1, "question_id": 592187} |
TITLE: Why is this a valid proof for the harmonic series?
QUESTION [3 upvotes]: The other day, I encountered the Harmonic series which I though is an interesting concept. There were many ways to prove that it's divergent and the one I really liked was rather simple but did the job nevertheless. But a proof used integrals and proved that it diverges. But when using integrals, aren't we also calculating fractions with a decimal denominator? I'm not really familiar with calculus and usually only do number theory but doesn't integral find the area under a continuous curve?
Thanks in advance for any help!
EDIT Here's the integral proof:
$$\int_1^{\infty} 1 / x \,dx= {\infty}$$
REPLY [12 votes]: Consider the following summation:
$$
A=1\cdot 1+\frac12\cdot1+\frac13\cdot1+\frac14\cdot1+...
$$
That's a sum of the areas of an infinite number of rectangles of height $\frac1n$, where $n\in\mathbb{N}$, and width 1. Do you agree that's nothing more than the harmonic series? Moreover, the area under the curve $f(x)=\frac{1}{x}$ from $1$ to $\infty$ is clearly less than $A$ and lies totally within $A$: the rectangle of height $\frac1n$ over $[n,n+1]$ contains the piece of the region under the curve there, since $\frac1x\leq\frac1n$ on $[n,n+1]$. However, the area under the curve $f(x)=\frac{1}{x}$ from $1$ to $\infty$ doesn't add up to a finite number. It goes to infinity. What should be happening with a piece of area that's even larger than that? Obviously, it must also be infinite. Therefore, the harmonic series diverges. | {"set_name": "stack_exchange", "score": 3, "question_id": 3207598}
TITLE: Expectation value with plane waves
QUESTION [1 upvotes]: Hey guys Im a little confused with the concept of plane waves and how to perform an expectation value. Let me show you by an example. Suppose you have a wave function of the form
$\psi_{\boldsymbol{p}_{0}}(x)=f(p_{0})e^{\frac{i}{\hbar}\boldsymbol{p}_{0}\cdot\boldsymbol{x}}$
where $\boldsymbol{p}_{0}=(0,0,p_{0})$ and suppose you want to perform an expectation value of the position of the particle, that is
$<x>=f^{2}(p_{0})\int\,d^{3}\boldsymbol{x}\,x\,e^{\frac{i}{\hbar}\boldsymbol{p}_{0}\cdot\boldsymbol{x}}e^{-\frac{i}{\hbar}\boldsymbol{p}_{0}\cdot\boldsymbol{x}}=f^{2}(p_{0})\int\,d^{3}\boldsymbol{x}\,x$
which I think is nonsense. But if you define an arbitrary momentum vector $\boldsymbol{p}'=(p_{1}',p_{2}',p_{3}')$, and perform the transition probability
\begin{align}
\left<\psi_{\boldsymbol{p}'}(\boldsymbol{x})|\,x\,|\psi_{\boldsymbol{p}_{0}}(\boldsymbol{x})\right>&=f(p')f(p_{0})\int\,d^{3}\boldsymbol{x}\,xe^{-\frac{i}{\hbar}(\boldsymbol{p}'-\boldsymbol{p}_{0})\cdot\boldsymbol{x}}=f(p')f(p_{0})\int\,d^{3}\boldsymbol{x}\left( i\hbar\frac{\partial}{\partial p_{x}'}\right)e^{-\frac{i}{\hbar}(\boldsymbol{p}'-\boldsymbol{p}_{0})\cdot\boldsymbol{x}}= \\
&=i\hbar f(p')f(p_{0})\frac{\partial}{\partial p_{x}'}\delta^{3}(\boldsymbol{p}'-\boldsymbol{p}_{0})=-i\hbar\delta^{3}(\boldsymbol{p}'-\boldsymbol{p}_{0})\frac{\partial}{\partial p_{x}'}\left( f(p')\right)f(p_{0})
\end{align}
where I made use of the property $f(x)\delta'(x)=-f'(x)\delta(x)$. So now with my new expression I have a meaningful result and I can evaluate at $\boldsymbol{p}'=\boldsymbol{p}_{0}$ and get a result that I wasn't able to get with the first method. What am I doing wrong, or is the second way the correct way to do it? Thanks!
REPLY [0 votes]: which I think is nonsense
Formally, shouldn't the 2nd equation be
$$\langle x \rangle = \frac{\langle\psi_{\mathbf p_0}|\hat x|\psi_{\mathbf p_0}\rangle}{\langle\psi_{\mathbf p_0}|\psi_{\mathbf p_0}\rangle} = \frac{f^2(p_0)\int d\mathbf x^3 x}{f^2(p_0)\int d\mathbf x^3} = \frac{\int d\mathbf x^3 x}{\int d\mathbf x^3} = \lim_{\tau \rightarrow \infty}\frac{\int_{-\tau}^{\tau} d\mathbf x^3 x}{\int_{-\tau}^{\tau} d\mathbf x^3} = \lim_{\tau \rightarrow \infty}\frac{0}{2\tau} = 0 $$ | {"set_name": "stack_exchange", "score": 1, "question_id": 162835} |
\begin{document}
\section{Examples of combinatorial projection maps}\label{sec:combinatorial-examples}
In this section we list combinatorial projection maps for several common examples of groups $G\le S_n$. Notice that in each of the four examples of concrete groups below, implementation via a suitable sorting function circumvents the need to perturb inputs initially.
\subsection{The symmetric group}\label{subsec:S_n}
If $G= S_n$, let $N=\{1,\dots,n\}$ and we can choose the base $B=(1,2,\dots,n-1)$. The ascending projection $\pi_\uparrow(x)$ permutes the entries so that they increase from left-to-right, and the descending projection $\pi_\downarrow(x)$ permutes the entries so that they decrease.
\subsection{The alternating group}
If $G=A_n<S_n$ is the group of even permutations, we can choose $B=(1,2,\dots,n-2)$ and the ascending (resp.\ descending) projection permutes the entries of $x$ so that the first $n-2$ entries increase (resp.\ decrease) from left-to-right, and the last two entries are greater than or equal to all the other entries. If $x$ contains repeated entries then the last two entries can also be ordered to be increasing (resp. decreasing); otherwise their relative order depends on whether the permutation $s_{\hat{x}}$ which maps $i\mapsto\hat{x}_i$ for $1\le i\le n$, is an even or odd permutation (see \cref{subsec:perturb} for the definition of $\hat{x}_i$).
\subsection{The cyclic group}
If $G=\Z_n\le S_n$ is the cyclic group generated by the permutation $(1\;2\;\cdots\;n)$, we can choose the base $B=(1)$. The ascending (resp.\ descending) projection cyclically permutes the entries of $x$ so that the first entry is less (resp.\ greater) than or equal to all other entries of $x$.
\subsection{The dihedral group}\label{subsec:dihedral}
If $G=D_n\le S_n$ is the dihedral group generated by
\[
s_1=(1\;2\;\cdots\;n)\quad \textrm{and} \quad s_2=(2\;n)(3\;(n-1))(4\;(n-2))\cdots,
\]
we can choose base $B=(1,2)$. The ascending (resp.\ descending) projection cyclically permutes the entries of $x$ via $s_1$ so that the first entry is less (resp.\ greater) than or equal to all other entries of $x$, and then if the final entry is less (resp.\ greater) than the second entry, it applies the permutation $s_2$.
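In the same illustrative spirit (again ours; ties between equal minimal entries are broken arbitrarily below, whereas the text resolves them via the perturbed vector $\hat{x}$ of \cref{subsec:perturb}), the cyclic projection of the previous subsection and the dihedral ascending projection just described can be sketched as:
\begin{verbatim}
# Cyclic projection for Z_n: rotate so the first entry is minimal.
def pi_up_cyclic(x):
    k = min(range(len(x)), key=lambda i: x[i])
    return x[k:] + x[:k]

# Dihedral projection for D_n: rotate as above, then apply s_2
# (fix position 1, reverse positions 2..n) if the last entry is
# smaller than the second entry.
def pi_up_dihedral(x):
    y = pi_up_cyclic(x)
    if y[-1] < y[1]:
        y = [y[0]] + y[:0:-1]
    return y

pi_up_cyclic([2, 5, 1, 4])    # [1, 4, 2, 5]
pi_up_dihedral([1, 5, 2, 3])  # [1, 3, 2, 5]
\end{verbatim}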
\subsection{Products of groups acting on products of spaces}
Suppose $G=\prod_{j=1}^rH_j$ where $H_j\le S_{n_j}$ acts on $\bigoplus_{j=1}^r\R^{n_j}$ by each $H_j$ acting by permutations on the corresponding space $\R^{n_j}$ and trivially everywhere else. Let $B_j=\left(b_j^{(1)},\dots,b_j^{(k_j)}\right)\subset\{1,\dots,n_j\}=N_j$ be a base for $H_j$ acting on $\R^{n_j}$, then
\[
B=\left (b_1^{(1)},\dots,b_1^{(k_1)},b_2^{(1)},\dots,b_2^{(k_2)},\dots,b_r^{(1)},\dots,b_r^{(k_r)}\right )
\]
is a base for $G$. Let $\pi_{j\uparrow}\colon \R^{n_j}\to \R^{n_j}$ be the ascending projection corresponding to $B_j$. Then define $\pi_\uparrow=\bigoplus_{j=1}^r\pi_{j\uparrow}$, to be the projection which equals $\pi_{j\uparrow}$ when restricted to $\R^{n_j}$. Similarly $\pi_\downarrow=\bigoplus_{j=1}^r\pi_{j\downarrow}$.
\subsection{Products of groups acting on tensors of spaces}\label{subsec:tensors}
Suppose $G=\prod_{j=1}^rH_j$ where $H_j\le S_{n_j}$ acts on $\bigotimes_{j=1}^r\R^{n_j}$ by each $H_j$ acting by permutations on the $j$th component of $\bigotimes_{j=1}^r\R^{n_j}$, and trivially on the other components. For each $1\le j\le r$ let $B_j=\left (b_j^{(1)},\dots,b_j^{(k_j)}\right)\subset\{1,\dots,n_j\}=N_j$ be a base for $H_j$ acting on $\R^{n_j}$, and furthermore (for convenience) assume that $b_j^{(1)}=1$. Then choose $B\subset\prod_{j=1}^rN_j\eqqcolon N$ to be
\begin{align*}
B=\Bigl((1,\dots,1),\;&\left (b_1^{(2)},1,\dots,1\right ),\dots,\left (b_1^{(k_1)},1,\dots,1\right ),\\
&\hphantom{(b_1^{(2)},1,}\vdots\hphantom{\dots,1),\dots,(b_1^{(k_1)},1}\vdots\\
&\left (1,\dots,1,b_r^{(2)}\right ),\dots,\left (1,\dots,1,b_r^{(k_r)}\right )\Bigr),
\end{align*}
where a $1$ in the $j$th position of an element of $B$ should be thought of as $b_j^{(1)}$. Suppose $x=(x_{l_1\cdots l_r})_{l_1\cdots l_r}\in \bigotimes_{j=1}^r\R^{n_j}$, and let $\hat{x}$ be defined as in \cref{subsec:perturb}. Choose $(m_1,\dots,m_r)\in N$ to be the index in the $G$-orbit of $\mathds{1}=(1,\dots,1)$ with minimal entry in $\hat{x}$. For $1\le j\le r$ define $\hat{x}_j\coloneqq (\hat{x}_{m_1\cdots l_j\cdots m_r})_{1\le l_j\le n_j}\in \R^{n_j}$, which is the restriction of $\hat{x}$ to the $\R^{n_j}$-vector containing the entry $\hat{x}_{m_1\cdots m_j\cdots m_r}$. Then define
\[
\phi_\uparrow\colon \textstyle\bigotimes_{j=1}^r\R^{n_j}\to G\colon x\mapsto (\phi_{1\uparrow}(\hat{x}_1),\dots,\phi_{r\uparrow}(\hat{x}_r)),
\]
where $\phi_{j\uparrow}\colon \R^{n_j}\to H_j$ is the function defined for $H_j$ acting on $\R^{n_j}$, and similarly define $\phi_\downarrow(x)$. Then as before, $\pi_\uparrow(x)\coloneqq \phi_\uparrow(x)\cdot x$ and $\pi_\downarrow(x)\coloneqq \phi_\downarrow(x)\cdot x$.
\end{document} | {"config": "arxiv", "file": "2202.02164/sections_arxiv/app-combinatorial-examples.tex"} |
\begin{document}
\title[Hofmann-Lawson duality for locally small spaces]{Hofmann-Lawson duality for locally small spaces}
\author[A. Pi\k{e}kosz]{Artur Pi\k{e}kosz}
\address{{\rm Department of Applied Mathematics, Cracow University of Technology,\\
ul. Warszawska 24, 31-155 Krak\'ow, Poland\\
email: \textit{pupiekos@cyfronet.pl}
}}
\date{\today}
\keywords{Hofmann-Lawson duality, Stone-type duality, locally small space, spatial frame, continuous frame.}
\subjclass[2010]{Primary: 06D22, 06D50. Secondary: 54A05, 18F10.}
\begin{abstract}
We prove versions of the spectral adjunction, a Stone-type duality and Hofmann-Lawson duality
for locally small spaces with bounded continuous mappings.
\end{abstract}
\maketitle
\section{Introduction}
Hofmann-Lawson duality belongs to the most important dualities in lattice theory.
It was stated in \cite{HL} (with the reference to the proof in \cite{HK}).
Its versions for various types of structures are numerous (see, for example, \cite{E}).
The concept of a locally small space comes from that of Grothendieck topology through generalized topological spaces in the sense of Delfs and Knebusch (see \cite{DK,P1,P}). Locally small spaces were used in o-minimal homotopy theory (\cite{DK,P3}) as underlying structures of locally definable spaces. A simple language for locally small spaces was introduced and used in \cite{P2} and \cite{P}, compare also \cite{PW}. It is analogical to the language of Lugojan's generalized topology (\cite{Lu}) or Cs\'{a}sz\'{a}r's generalized topology (\cite{C}) where a family of subsets of the underlying set is satisfying some, but not all, conditions for a topology.
However treating locally small spaces as topological spaces with additional structure seems to be more useful.
While Stone duality for locally small spaces was discussed in \cite{P-stone}, we consider in this paper the spectral adjunction, a Stone-type duality and Hofmann-Lawson duality.
The set-theoretic axiomatics of this paper is what Saunders Mac Lane calls the standard
Zermelo-Fraenkel axioms (actually with the axiom of choice) for set theory plus the existence of a set which is a universe, see \cite{MacLane}, page 23.
\textbf{Notation.}
We shall use a special notation for family intersection
$$ \mc{U} \cap_1 \mc{V} = \{ U \cap V : U \in \mc{U}, V \in \mc{V} \}.$$
\section{Preliminaries on frames and their spectra.}
First, we set the notation and collect basic material on frame theory that can be found in \cite{G2, CLD, HL, PP,PS}.
\begin{defi}
A \textit{frame} is a complete distributive lattice $L$ satisfying the (possibly infinite) distributive law $a\wedge \bigvee_{i\in I} b_i =\bigvee_{i\in I} (a\wedge b_i)$
for $a, b_i \in L$ and $i\in I$.
\end{defi}
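A standard guiding example: for any topological space $(X,\tau_X)$, the topology $\tau_X$ is a frame, with joins given by unions, finite meets by intersections, and the distributive law reducing to
$$U \cap \bigcup_{i\in I} V_i = \bigcup_{i\in I} (U\cap V_i).$$
(Arbitrary meets exist, as the interior of the intersection, but need not distribute over arbitrary joins.)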
\begin{defi}A \textit{frame homomorphism} is a lattice homomorphism between frames preserving all joins. The category of frames and frame homomorphisms will be denoted by
$\mathbf{Frm}$. The category of frames and right Galois adjoints of frame homomorphisms will be denoted by $\mathbf{Loc}$.
\end{defi}
\begin{defi}
For a frame $L$, its \textit{spectrum} $Spec(L)$ is the set of non-unit primes of $L$:
$$Spec(L)=\{ p\in L\setminus \{1\}:(p=a\wedge b)\implies (p=a \mbox{ or } p=b)\}.$$
For $a\in L$, we define
$$\Delta_L(a)=\{ p\in Spec(L): a\not\leq p\}.$$
For a subset $S\subseteq L$, we set
$$\Delta_L(S)=\{\Delta_L(a):a\in S \}.$$
\end{defi}
\begin{fact}[{{\cite[Prop. V-4.2]{G2}}}] \label{topol}
For any frame $L$ and $a,b,a_i \in L$ for $i\in I$, we have
$$ \Delta_L(0)=\emptyset, \qquad \Delta_L(1)=Spec(L),$$
$$\Delta_L(\bigvee_{i\in I} a_i) = \bigcup_{i\in I} \Delta_L(a_i), \quad \Delta_L(a\wedge b)=\Delta_L(a) \cap \Delta_L(b).$$
Consequently, the mapping $\Delta_L: L\ni a\mapsto \Delta_L(a)\in \Delta_L(L)$ is a surjective frame homomorphism.
\end{fact}
\begin{defi}
The set $\Delta_L (L)$ is a topology on $Spec(L)$, called the \textit{hull-kernel topology}.
\end{defi}
\begin{rem}
In this paper (as opposed to \cite{P-stone}), being sober implies being $T_0$ (Kolmogorov).
\end{rem}
\begin{fact}[{{\cite[Ch. II, Prop. 6.1]{PP}, \cite[V-4.4]{G2}}}] \label{sober}
For any frame $L$, the topological space $(Spec(L),\Delta_L(L))$ is sober.
\end{fact}
\begin{defi}
A frame $L$ is \textit{spatial} if
$Spec(L)$ \textit{order generates} (or: \textit{inf-generates}) $L$, which means that
each element of $L$ is a meet of primes.
\end{defi}
\begin{fact}[{{\cite[Ch. II, Prop. 5.1]{PP}}}] \label{iso-spatial}
If $L$ is a spatial frame, then $\Delta_L$ is an isomorphism of frames.
\end{fact}
\begin{defi}[{{\cite[p. 286]{HL}}}]
For $a,b \in L$, we say that $b$ is \textit{well-below} $a$ (or: $b$ is \textit{way below} $a$) and write $b \ll a$ if for each (up-)directed set $D\subseteq L$ such that $a\leq \sup D$ there exists $d\in D$ such that $b\leq d$.
\end{defi}
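In the example $L=\tau_X$ this has a familiar covering reformulation: $U \ll V$ if and only if every open cover of $V$ contains a finite subfamily covering $U$ (directed sets of opens correspond to open covers closed under finite unions). In particular, if $U\subseteq K\subseteq V$ for some compact set $K$, then $U\ll V$; when $X$ is locally compact, every pair $U\ll V$ arises in this way (cf.\ \cite{G2}).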
\begin{defi}
A frame is called \textit{continuous} if for each element $a\in L$ we have
$$a=\bigvee \{ b\in L: b \ll a \}.$$
\end{defi}
\begin{fact}[{{\cite[Ch. VII, Prop. 6.3.3]{PP}, \cite[I-3.10]{G2}}}]
Every continuous frame is spatial.
\end{fact}
The following two facts lead to Hofmann-Lawson duality.
\begin{fact}[{{\cite[Thm. V-5.5]{G2}, \cite[Ch. VII, 6.4.2]{PP}}}] \label{6.4.2}
If $L$ is a continuous frame, then $(Spec(L),\Delta_L(L))$ is a sober locally compact topological space
and $\Delta_L$ is an isomorphism of frames.
\end{fact}
\begin{fact}[{{\cite[Thm. V-5.6]{G2}, \cite[Ch. VII, 5.1.1]{PP}}}] \label{5.1.1}
If $(X,\tau_X)$ is a sober locally compact topological space, then $\tau_X$ is a continuous frame.
\end{fact}
Finally, we recall the classical Hofmann-Lawson duality theorem.
\begin{thm}[{{\cite[Thm. 2-7.9]{PS}}}, {{\cite[Thm. 9.6]{HL}}}] \label{HL}
The category $\mathbf{ContFrm}$ of continuous frames with frame homomorphisms and the category $\mathbf{LKSob}$ of locally compact sober spaces and continuous maps are dually equivalent.
\end{thm}
\section{Categories of consideration.}
Now we present the basic facts on the theory of locally small spaces.
\begin{defi}[{{\cite[Definition 2.1]{P}}}]\label{lss}
A \textit{locally small space}
is a pair $(X,\mc{L}_X)$, where $X$ is any set and $\mc{L}_X\subseteq \mc{P}(X)$ satisfies the following conditions:
\begin{enumerate}
\item[(LS1)] \quad $\emptyset \in \mc{L}_X$,
\item[(LS2)] \quad if $L,M \in \mc{L}_X$, then $L\cap M, L\cup M \in \mc{L}_X$,
\item[(LS3)] \quad $\forall x \in X \: \exists L_x \in \mc{L}_X \:\: x\in L_x$ (i. e., $\bigcup \mc{L}_X = X$).
\end{enumerate}
The family $\mc{L}_X$ will be called a \textit{smopology} on $X$ and
elements of $\mc{L}_X$ will be called \textit{small open} subsets (or \textit{smops}) in $X$.
\end{defi}
\begin{rem}
Each topological space $(X,\tau_X)$ may be expanded in many ways to a locally small space by choosing a suitable basis $\mc{L}_X$ of the topology $\tau_X$ such that
$\mc{L}_X$ is a sublattice in $\tau_X$ containg the empty set.
\end{rem}
\begin{defi}[{{\cite[Def. 2.9]{P}}}]
For a locally small space $(X,\mc{L}_X)$ we define the family
$$ \mc{L}_X^{wo} =\mbox{the unions of subfamilies of }\mc{L}_X $$
of \textit{weakly open} sets.
Then $\mc{L}_X^{wo}$ is the smallest topology containing $\mc{L}_X$.
\end{defi}
\begin{exam} The nine families of subsets of $\mb{R}$ from Example 2.14 in \cite{P}
(compare Definition 1.2 in \cite{PW}) are smopologies and share the same
family of weakly open sets (the natural topology on $\mb{R}$).
Analogously, Definition 4.3 in \cite{PW2} shows many generalized topological spaces induced by smopologies on $\mb{R}$ with the same family of weakly open sets (the Sorgenfrey topology).
\end{exam}
\begin{defi}[{{\cite[Def. 31]{P-stone}}}]
A locally small space $(X,\mc{L}_X)$ is called $T_0$ (or \textit{Kolmogorov}) if the family $\mc{L}_X$ \textit{separates points}, which means
$$\forall x,y\in X \quad (x\neq y)\implies \exists V\in \mc{L}_X \mid V \cap \{x,y\} \mid =1.$$
\end{defi}
\begin{defi}
Assume $(X, \mc{L}_X)$ and $(Y,\mc{L}_Y)$ are locally small spaces.
Then a mapping $f:X \to Y$ is:
\begin{enumerate}
\item[$(a)$] \textit{bounded} (\cite[Definition 2.40]{P}) if $\mc{L}_X$ is a refinement of $f^{-1}(\mc{L}_Y)$,
\item[$(b)$] \textit{continuous} (\cite[Definition 2.40]{P}) if
$f^{-1}(\mc{L}_Y) \cap_1 \mc{L}_X \subseteq \mc{L}_X $,
\item[$(c)$] \textit{weakly continuous} if
$f^{-1}(\mc{L}_Y^{wo}) \subseteq \mc{L}_X^{wo}$.
\end{enumerate}
The category of locally small spaces and their bounded continuous mappings is denoted by \textbf{LSS} (\cite[Remark 2.46]{P}).
The full subcategory of $T_0$ locally small spaces is denoted by $\mathbf{LSS}_0$
(\cite[Def. 33]{P-stone}).
\end{defi}
\begin{defi} We have the following full subcategories of $\mathbf{LSS}_0$:
\begin{enumerate}
\item the category $\mathbf{SobLSS}$ generated by the topologically sober
objects $(X,\mc{L}_X)$, i.e., such object that the topological space $(X, \mc{L}_X^{wo})$ is sober (compare \cite[Def. 3.1]{PW}).
\item the category $\mathbf{LKSobLSS}$ generated by the topologically locally compact sober objects.
\end{enumerate}
\end{defi}
\begin{exam}
For $X=\mathbb{R}^2$, we take
$$ \mc{L}_{lin}=\mbox{ the smallest smopology containing all straight lines,} $$
$$ \mc{L}_{alg}= \mbox{ the smallest smopology containing all proper algebraic subsets.}$$
Then $( \mathbb{R}^2,\mc{L}_{lin} )$ and $( \mathbb{R}^2 ,\mc{L}_{alg} )$
are two different topologically locally compact sober
locally small spaces with the same topology of weakly open sets (the discrete topology).
\end{exam}
Now we introduce some categories constructed from \textbf{Frm}.
\begin{defi}
The category $\mathbf{FrmS}$ consists of pairs $(L,L_s)$ with $L$ a frame and $L_s$
a sublattice with zero \textit{sup-generating} $L$ (this means: every member of $L$ is the supremum of a subset of $L_s$) as objects and dominating compatible frame homomorphisms $h:(L,L_s)\to (M,M_s)$ as morphisms. Here a frame homomorphism $h:L\to M$ is called
\begin{enumerate}
\item
\textit{dominating} if
$\quad \forall m\in M_s \quad \exists l\in L_s\quad h(l) \wedge m = m$\\
(then we shall also say that $h(L_s)$ \textit{dominates} $M_s$).
\item \textit{compatible} if
$\quad \forall m\in M_s \quad \forall l\in L_s \quad h(l)\wedge m \in M_s$\\
(then we shall also say that $h(L_s)$ \textit{is compatible with} $M_s$).
\end{enumerate}
\end{defi}
\begin{rem} \label{on}
If $h:L \to M$ is a frame homomorphism satisfying $h(L_s)=M_s$, then
$h: (L,L_s)\to (M,M_s)$ is dominating compatible.
\end{rem}
\begin{defi}
The category $\mathbf{LocS}$ consists of pairs $(L,L_s)$ with $L$ a frame and $L_s$
a sublattice with zero {sup-generating} $L$ as objects and
\textit{special localic maps} (i.e., right Galois adjoints $h_*:(M,M_s)\to (L,L_s)$ of dominating compatible frame homomorphisms $h:(L,L_s)\to (M,M_s)$) as morphisms.
\end{defi}
\begin{rem} \label{iso}
The categories $\mathbf{FrmS}^{op}$ and $\mathbf{LocS}$ are isomorphic.
\end{rem}
\begin{defi} We introduce the following categories:
\begin{enumerate}
\item
the full subcategory \textbf{SpFrmS} in $\mathbf{FrmS}$
generated by objects $(L,L_s)$ where $L$ is a spatial frame,
\item
the full subcategory \textbf{SpLocS} in $\mathbf{LocS}$
generated by objects $(L,L_s)$ where $L$ is a spatial frame,
\item
the full subcategory $\mathbf{ContFrmS}$ in $\mathbf{FrmS}$
generated by objects $(L,L_s)$ where $L$ is a continuous frame,
\item the full subcategory $\mathbf{ContLocS}$ in $\mathbf{LocS}$
generated by objects $(L,L_s)$ where $L$ is a continuous frame.
\end{enumerate}
\end{defi}
\section{The spectral adjunction.}
\begin{thm}[the spectral adjunction] \label{adjunction}
The categories $\mathbf{LSS}$ and $\mathbf{LocS}$ are adjoint.
\end{thm}
\begin{proof}
\textbf{Step 1:} Defining functor $\Omega: \mathbf{LSS} \to \mathbf{LocS}$.
Functor $\Omega: \mathbf{LSS} \to \mathbf{LocS}$ is defined by
$$ \Omega (X,\mc{L}_X)=(\mc{L}^{wo}_X, \mc{L}_X), \quad
\Omega(f) =(\mc{L}^{wo} f)_{*} , $$
where $\mc{L}^{wo} f:\mc{L}^{wo}_Y \to \mc{L}^{wo}_X$ is given by $(\mc{L}^{wo} f)(W)=f^{-1}(W)$ for a bounded continuous $f:(X, \mc{L}_X)\to (Y, \mc{L}_Y)$ .
For any locally small space $(X,\mc{L}_X)$, the pair
$(\mc{L}^{wo}_X, \mc{L}_X)$ consists of a frame and a sublattice with zero that
sup-generates the frame.
For a bounded continuous map $f:(X,\mc{L}_X) \to (Y,\mc{L}_Y)$
the frame homomorphism $\mc{L}^{wo} f:\mc{L}^{wo}_Y \to \mc{L}^{wo}_X$ is always:
\begin{enumerate}
\item
dominating (because $f$ is bounded):
$$\forall W\in \mc{L}_X \ \exists V\in \mc{L}_Y\ (\mc{L}^{wo} f)(V) \cap W = W,$$
\item
compatible ({because $f$ is continuous}):
$$\forall W\in \mc{L}_X\ \forall V\in \mc{L}_Y\ (\mc{L}^{wo} f)(V)\cap W \in \mc{L}_X.$$
\end{enumerate}
The mapping $(\mc{L}^{wo} f)_{*}:\mc{L}_X^{wo}\to \mc{L}_Y^{wo}$, defined by the condition
$$(\mc{L}^{wo}f)_{*}(W)=\bigcup \{V\in \mc{L}^{wo}_Y : f^{-1}(V)\subseteq W\}=\bigcup (\mc{L}^{wo}f)^{-1}(\downarrow W),$$
is a special localic map.
Clearly, $\Omega$ preserves identities and compositions.
\noindent \textbf{Step 2:} Defining functor
$\Sigma:\mathbf{LocS} \to \mathbf{LSS}$.
Functor $\Sigma:\mathbf{LocS} \to \mathbf{LSS}$ is defined by
$$\Sigma (L,L_s)=(Spec(L), \Delta_L (L_s) ),$$
$$ \Sigma (h_{*}) =h_{*}|_{Spec(M)}:Spec(M)\to Spec(L).$$
(Notice that $\Sigma (h_{*})$ is always well defined by \cite[Prop. 4.5]{G2}.)
The pair $(Spec(L), \{ \Delta_L (a) \}_{a\in L_s})$ is always a topologically sober locally small space by Facts \ref{topol} and \ref{sober}.
For $ h:(L,L_s)\to (M,M_s)$ a dominating compatible frame homomorphism,
$\Sigma(h_*)=h_{*}|_{Spec(M)}$ is always a bounded continuous mapping between locally small spaces from $(Spec(M), \Delta_M(M_s))$ to $(Spec(L),\Delta_L(L_s))$:
\begin{enumerate}
\item Take any $a\in M_s$. Since $h$ is dominating, for some $b\in L_s$ we have
$h_*|_{Spec(M)} (\Delta_M(a)) \subseteq h_*|_{Spec(M)}(\Delta_M(h(b))) \subseteq \Delta_L(b)$. This is why
$ h_*|_{Spec(M)}(\Delta_M(M_s))$ refines $\Delta_L(L_s)$.
\item For any $d\in L_s$, $f\in M_s$, we have
$$(h_*|_{Spec(M)})^{-1}(\Delta_L(d)) \cap \Delta_M(f) = \Delta_M (h(d) \wedge f) \in \Delta_M(M_s).$$
This is why $ (h_*|_{Spec(M)})^{-1}(\Delta_L(L_s))\cap_1 \Delta_M(M_s)\subseteq \Delta_M(M_s).$
\end{enumerate}
Clearly, $\Sigma$ preserves identities and compositions.
\noindent \textbf{Step 3:} There exists a natural transformation $\sigma$ from $\Omega \Sigma$ to $Id_{\mathbf{LocS}}$.
We define the mapping $\sigma_L: (\Delta_L(L),\Delta_L(L_s))\to (L,L_s)$ by the formula
$$\sigma_L=(\Delta_L)_*: \Delta_L(L)=\tau( \Delta_L (L_s) ) \ni A \to \bigvee \Delta_L^{-1} (\downarrow\!\! A) \in L.$$
This $\sigma_L$ is an (injective) morphism in $\mathbf{LocS}$ since $\Delta_L$ is a (surjective) dominating compatible frame homomorphism:
\begin{enumerate}
\item $\Delta_L (L_s)$ obviously dominates $\Delta_L(L_s)$.
\item Take any $D\in \Delta_L (L_s)$ and $f\in L_s$. Choose $d\in L_s$ such that $\Delta_L(d)=D$. Then $D\cap \Delta_L(f)=\Delta_L (d\wedge f) \in \Delta_L (L_s)$, so $\Delta_L$ is compatible.
\end{enumerate}
For a special localic map $h_*:(M,M_s)\to (L,L_s)$ and $a\in M$,
we have
$$\sigma_L \circ \Omega\Sigma(h_*) (\Delta_M(a))=\sigma_L \circ (\mc{L}^{wo}h_*|_{Spec(M)})_{*} (\Delta_M(a))=$$
$$\sigma_L(\bigvee \{Z\in \Delta_L (L): (h_*|_{Spec(M)})^{-1}(Z)\subseteq \Delta_M(a)\})=
h_*(a)= h_*\circ \sigma_M (\Delta_M(a)),$$
so $\sigma$ is a natural transformation.
\textbf{Step 4:} There exists a natural transformation $\lambda $ from $Id_{\mathbf{LSS}}$ to $ \Sigma\Omega$.
We define $\lambda_X: (X,\mc{L}_X)\to (Spec(\mc{L}^{wo}_X),\Delta(\mc{L}_X))$,
where $\Delta=\Delta_{\mc{L}^{wo}_X}$, by
$$\lambda_X:X\ni x \mapsto \ext_X \{x\} \in Spec(\mc{L}^{wo}_X),$$
which is a bounded continuous map:
\begin{enumerate}
\item Take any $W\in \mc{L}_X$. Then $\lambda_X(W)=\{ \ext\{x\}:x\in W\}$, which is contained in $\Delta(W)=\{ V\in Spec(\mc{L}^{wo}_X): W \not\subseteq V \}$.
This is why
$ \lambda_X(\mc{L}_X) \mbox{ refines } \Delta(\mc{L}_X)$.
\item Take any $W\in \mc{L}_X$. Since $x\in W$ iff $\ext\{x\}\in \Delta(W)$,
we have $W=\lambda_X^{-1}(\Delta(W))$. This is why
$\lambda_X^{-1} (\Delta(\mc{L}_X))\cap_1 \mc{L}_X\subseteq \mc{L}_X$.
\end{enumerate}
For a bounded continuous map $f:(X,\mc{L}_X) \to (Y,\mc{L}_Y)$, we have
$$(\Sigma\Omega(f)\circ \lambda_X)(x)=(\mc{L}^{wo}f)_{*} (\ext_X\{ x\} )=
\bigcup \{ Z\in Spec(\mc{L}^{wo}_X): x\notin f^{-1}(Z)\}$$
$$=\ext_Y\{f(x)\}=(\lambda_Y \circ f)(x).$$
This means $\lambda$ is a natural transformation.
\textbf{Step 5:} Functor $\Omega$ is a left adjoint of functor $\Sigma$.
We are to prove that
$$\sigma_L|_{Spec(\Delta(L))}\circ \lambda_{Spec(L)}=id_{Spec(L)}, \qquad
\sigma_{\mc{L}^{wo}_X}\circ (\mc{L}^{wo}\lambda_X)_*=id_{\mc{L}^{wo}_X}.$$
For $p\in Spec(L)$, we have
$$ \sigma_L|_{Spec(\Delta(L))}\circ \lambda_{Spec(L)}(p)=\sigma_L|_{Spec(\Delta(L))}(\ext_{Spec(L)}\{ p\})=id_{Spec(L)}(p).$$
For $W\in \mc{L}^{wo}_X$, we have
$$\sigma_{\mc{L}^{wo}_X}\circ (\mc{L}^{wo}\lambda_X)_*(W)=\sigma_{\mc{L}^{wo}_X}
(\Delta_{\mc{L}^{wo}_X}(W))=id_{\mc{L}^{wo}_X}(W).$$
\end{proof}
\section{A Stone-type duality.}
\begin{thm} \label{stone}
The categories $\mathbf{SobLSS}$, $\mathbf{SpLocS}$ and $\mathbf{SpFrmS}^{op}$ are equivalent.
\end{thm}
\begin{proof} Assume $(X,\mc{L}_X)$ is an object of $\mathbf{SobLSS}$.
Then $\lambda_X:(X,\mc{L}_X) \to (Spec(\mc{L}_X^{wo}),\Delta(\mc{L}_X))$ is a homeomorphism
by \cite[Ch. II, Prop. 6.2]{PP} and $\lambda_X(\mc{L}_X)=\Delta(\mc{L}_X)$. Hence
$\lambda_X$ is an isomorphism in $\mathbf{SobLSS}$.
Assume $(L,L_s)$ is an object of $\mathbf{SpFrmS}$. Then, by Fact \ref{iso-spatial},
$\Delta_L$ is an isomorphism of frames
and $\Delta_L^{-1}(\Delta_L(L_s) )=L_s$, so, by Remark \ref{on}, both $\Delta_L: (L,L_s)\to (\Delta_L(L), \Delta_L(L_s) )$ and $\Delta_L^{-1}$ are dominating compatible frame homomorphisms.
Hence $\sigma_L=(\Delta_L)_*$ is an isomorphism in $\mathbf{SpLocS}$.
Restricted $\sigma:\Omega\Sigma \rightsquigarrow Id_{\mathbf{SpLocS}}$
and $\lambda: Id_{\mathbf{SobLSS}} \rightsquigarrow \Sigma\Omega$ are natural isomorphisms. Hence
$\mathbf{SobLSS}$ and $\mathbf{SpLocS}$ are equivalent.
Similarly to Remark \ref{iso}, categories $\mathbf{SpLocS}$ and $\mathbf{SpFrmS}^{op}$ are isomorphic.
\end{proof}
\section{Hofmann-Lawson duality.}
In this section we give a new version of Theorem \ref{HL}.
\begin{lem} \label{5.5}
Let $(L,L_s)$ be an object of $\mathbf{ContFrmS}$. Then
$(Spec(L),\Delta(L_s))$ is an object of $\mathbf{LKSobLSS}$.
\end{lem}
\begin{proof}
By the proof of Theorem \ref{adjunction}, $(Spec(L),\Delta(L_s))$ is a locally small space.
By Facts \ref{sober} and \ref{6.4.2}, this space is topologically sober locally compact.
\end{proof}
\begin{lem} \label{5.6}
For an object $(X,\mc{L}_X)$ of $\mathbf{LKSobLSS}$,
the pair $(\mc{L}_X^{wo},\mc{L}_X)$ is an object of $\mathbf{ContFrmS}$ (so also of $\mathbf{ContLocS}$).
\end{lem}
\begin{proof}
By the proof of Theorem \ref{adjunction}, $(\mc{L}_X^{wo},\mc{L}_X)$
is an object of $\mathbf{SpFrmS}$. By Fact \ref{5.1.1}, $\mc{L}_X^{wo}$ is a continuous frame.
\end{proof}
\begin{thm}[Hofmann-Lawson duality for locally small spaces]
The categories
$\mathbf{LKSobLSS}$,
$\mathbf{ContLocS}$ and $\mathbf{ContFrmS}^{op}$
are equivalent.
\end{thm}
\begin{proof} By lemmas \ref{5.5}, \ref{5.6}, the restricted functors
$$\Sigma: \mathbf{ContLocS} \to \mathbf{LKSobLSS}, \qquad
\Omega:\mathbf{LKSobLSS} \to \mathbf{ContLocS}$$
are well defined.
The restrictions
$$\lambda: Id_{\mathbf{LKSobLSS}} \rightsquigarrow \Sigma\Omega ,\qquad
\sigma: \Omega\Sigma \rightsquigarrow Id_{\mathbf{ContLocS}}$$
of natural isomorphisms from Theorem \ref{stone} are natural isomorphisms.
Obviously, $\mathbf{ContLocS}$ and $\mathbf{ContFrmS}^{op}$ are isomorphic.
\end{proof}
\begin{rem} Further equivalences of categories may be obtained using Exercise V-5.24 in \cite{G2} (or Exercise V-5.27 in \cite{CLD}).
\end{rem}
\begin{exam}
Consider the space $(\mb{R}, \mc{L}_{l^+om})$ from \cite[Ex. 2.14(4)]{P} where
$$\mc{L}_{l^+om}=\mbox{the finite unions of bounded from above open intervals}$$
and the following functions:
\begin{enumerate}
\item the function $-\id: \mb{R} \ni x \mapsto -x \in \mb{R}$ is continuous but is not bounded, so $\mc{L}^{wo}(-\id):(\tau_{nat},\mc{L}_{l^+om}) \to (\tau_{nat}, \mc{L}_{l^+om})$ is a compatible frame homomorphism but is not dominating,
\item the function $\sin: \mb{R} \ni x \mapsto \sin(x) \in \mb{R}$ is bounded an weakly continuous but not continuous, so $\mc{L}^{wo}\sin:(\tau_{nat},\mc{L}_{l^+om}) \to (\tau_{nat}, \mc{L}_{l^+om})$ is a dominating frame homomorphism but is not compatible,
\item the function $\arctan: \mb{R} \ni x \mapsto \arctan(x) \in \mb{R}$ is bounded continuous, so $\mc{L}^{wo}\arctan:(\tau_{nat},\mc{L}_{l^+om}) \to (\tau_{nat}, \mc{L}_{l^+om})$ is a dominating compatible frame homomorphism,
\item the function $\frac{1}{\exp}: \mb{R} \ni x \mapsto \exp(-x) \in \mb{R}$
is continuous but not bounded, so the mapping $\mc{L}^{wo}\frac{1}{\exp}:(\tau_{nat},\mc{L}_{l^+om}) \to (\tau_{nat}, \mc{L}_{l^+om})$ is a compatible frame homomorphism but is not dominating.
\end{enumerate}
\end{exam} | {"config": "arxiv", "file": "2009.02966.tex"} |
TITLE: How is this function closed?
QUESTION [0 upvotes]: I am reading the appendix of a convex optimization book. In Example A.1 it is written that $$f(x)=\begin{cases}x\log(x) & \text{if } x>0\\ 0 & \text{if } x=0\end{cases}$$ Its domain is $[0,\infty)$, and the function is continuous. The boundary of the domain is the empty set (please point out if I am wrong about this). So since the empty set is a subset of every set, the domain of $f$ is closed. But at the same time, since the intersection of any set with the empty set is empty, the domain of $f$ is also open. So if the domain is closed then the function $f$ is closed, but in this case the domain is also open, so I do not understand how $f$ is closed. Any help in this regard will be much appreciated. Thanks in advance.
REPLY [0 votes]: Nope. The boundary of the domain is the set $\{0\}$. Every open ball centered at zero intersects $[0,\infty)$ and $(-\infty,0)=\mathbb{R}\setminus [0,\infty)$. Moreover, the set $[0,\infty)$ is closed. Take a point $x\in\mathbb{R}\setminus [0,\infty)$, then, if $\varepsilon=\dfrac{|x|}{2}$ then $B_\varepsilon(x)\subseteq\mathbb{R}\setminus [0,\infty)$, i.e., $\mathbb{R}\setminus [0,\infty)$ is open, thus, $[0,\infty)$ is closed.
The previous argument holds if your whole space is $\mathbb{R}$ with the usual topology. But if your whole space is $[0,\infty)$, then you're correct: the boundary is the empty set and $[0,\infty)$ is open and closed.
Finally, the closedness or openness of your function doesn't follow from the properties of the domain (being closed, for example). Do you have the correct definitions of a closed and an open function? | {"set_name": "stack_exchange", "score": 0, "question_id": 2618451}
\begin{document}
\sloppy
\maketitle
\begin{abstract}
Consider a self-similar space $X$. A typical situation is that $X$ looks
like several copies of itself glued to several copies of another space $Y$,
and that $Y$ looks like several copies of itself glued to several copies of
$X$---or the same kind of thing with more than two spaces. Thus, we have a
system of simultaneous equations in which the right-hand sides (the
gluing instructions) are `higher-dimensional formulas'.
This idea is developed in detail in~\cite{SS1} and~\cite{SS2}. The present
informal seminar notes explain the theory in outline.
\end{abstract}
\noindent
I want to tell you about a very general theory of self-similar objects that
I've been developing recently. In principle this theory can handle
self-similar objects of any kind whatsoever---algebraic, analytic,
geometric, probabilistic, and so on. At present it's the topological case
that I understand best, so that's what I'll concentrate on today. This
concerns the self-similar or fractal spaces that you've all seen pictures
of.
Some of the most important self-similar spaces in mathematics are Julia
sets. For the purposes of this talk you don't need to know the definition
of Julia set, but the bare facts are these: to every complex rational
function $f$ there is associated a closed subset $J(f)$ of the Riemann
sphere $\Cx \cup \{\infty\}$, its Julia set, which almost always has a
highly intricate, fractal appearance. If you look in a textbook on complex
dynamics, you'll find theorems about `local self-similarity' of Julia sets.
For example, given almost any point $z$ in a Julia set, points locally
isomorphic to $z$ occur densely throughout the set \cite[Ch.~4]{Mil}. On
the other hand, the kind of self-similarity I'm going to talk about today
is the dual idea, `global self-similarity', where you say that the
\emph{whole} space looks like several copies of itself stuck together---or
some statement of the kind. So it's a top-down, rather than bottom-up,
point of view.
A long-term goal is to develop the algebraic topology of self-similar spaces.
The usual invariants coming from homotopy and homology are pretty much
useless (e.g.\ for a fractal subset of the plane all you've got is $\pi_1$,
which is usually either trivial or infinite-dimensional), but a description
by global self-similarity is discrete and so might provide useful
invariants at some point in the future.
This theory is about self-similarity as an \emph{intrinsic} structure on an
object: there is no reference to an ambient space, and in fact no ambient
space at all. This is like doing group theory rather than representation
theory, or the theory of abstract manifolds rather than the theory of
manifolds embedded in $\mathbb{R}^n$. We can worry about representations
later. For instance, the Koch snowflake is just a circle for us: its
self-similar aspect is the way it's embedded in the plane.
Later I'll show you the general language of self-similarity, but first here
are some concrete examples to indicate the kind of situation that I want to
describe.
\section{First example: a Julia set}
Let's look at one particular Julia set in detail: Figure~\ref{fig:Julia}(a).
\begin{figure}
\setlength{\unitlength}{1mm}
\begin{picture}(120,50)
\cell{21}{50.5}{t}{\includegraphics{whole4marked.ps}}
\cell{65}{7}{b}{\includegraphics{half4marked.ps}}
\cell{104}{7}{b}{\includegraphics{third130marked.ps}}
\cell{21}{-2}{b}{\textrm{(a)}}
\cell{65}{-2}{b}{\textrm{(b)}}
\cell{49}{6.5}{t}{1}
\cell{63.2}{6.5}{t}{2}
\cell{66.8}{6.5}{t}{3}
\cell{81}{6.5}{t}{4}
\cell{104}{-2}{b}{\textrm{(c)}}
\cell{96}{6.5}{t}{1}
\cell{88.5}{31}{b}{2}
\cell{119}{31}{b}{3}
\cell{112}{6.5}{t}{4}
\end{picture}
\caption{(a) The Julia set of $z \goesto (2z/(1 + z^2))^2$; (b), (c)
certain subsets}
\label{fig:Julia}
\end{figure}
I'll write $I_1$ for this Julia set. It clearly has reflectional symmetry
in a horizontal axis, so if we cut at the four points shown then we get a
decomposition
\begin{equation} \label{eq:I1}
I_1 =
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(18,20)(-9,-10)
\cell{0.1}{0.1}{c}{\includegraphics{X1new.eps}}
\numcell{-7.7}{2}{1}
\numcell{-0.9}{1.8}{2}
\numcell{0.9}{1.8}{3}
\numcell{7.6}{2}{4}
\numcell{-7.7}{-2}{1}
\numcell{-0.9}{-1.8}{2}
\numcell{0.9}{-1.8}{3}
\numcell{7.6}{-2}{4}
\cell{0}{6}{c}{I_2}
\cell{0}{-6}{c}{I_2}
\end{picture}
\end{array}
\end{equation}
where $I_2$ is a certain space with four marked points (or `basepoints').
Now consider $I_2$ (Figure~\ref{fig:Julia}(b)). Cutting at the points
shown gives a decomposition
\begin{equation} \label{eq:I2}
I_2
=
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(52,30)(-26,0)
\cell{0.2}{0}{b}{\includegraphics{X2new.eps}}
\numcell{-23.6}{2.5}{1}
\numcell{-5}{7.8}{2}
\numcell{-2.3}{12.5}{3}
\numcell{-3}{28.5}{4}
\numcell{23.8}{2.5}{1}
\numcell{5.2}{7.8}{2}
\numcell{2.5}{12.5}{3}
\numcell{3}{28.5}{4}
\numcell{-2.8}{1.7}{1}
\numcell{-3.3}{4.8}{2}
\numcell{3.3}{4.8}{3}
\numcell{2.8}{1.7}{4}
\cell{-13}{16}{c}{I_2}
\cell{13}{16}{c}{I_2}
\cell{0}{2}{b}{I_3}
\end{picture}
\end{array}
\end{equation}
where $I_3$ is another space with four marked points. Next, consider $I_3$
(Figure~\ref{fig:Julia}(c)); it decomposes as
\begin{equation} \label{eq:I3}
I_3
=
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(32,20)(-16,0)
\cell{0}{0}{b}{\includegraphics{X3new.eps}}
\numcell{-3}{5.5}{1}
\numcell{-10}{2.5}{2}
\numcell{-13}{18}{3}
\numcell{-3}{15}{4}
\numcell{3}{5.5}{1}
\numcell{10}{2.5}{2}
\numcell{13}{18}{3}
\numcell{3}{15}{4}
\cell{-8}{10}{c}{I_3}
\cell{8}{10}{c}{I_3}
\end{picture}
\end{array}.
\end{equation}
Here we can stop, since no new spaces are involved. Or nearly: there's a
hidden role being played by the one-point space $I_0$, since that's what we've
been gluing along, and I'll record the trivial equation
\begin{equation} \label{eq:I0}
I_0 = I_0.
\end{equation}
What we have here is a system of four simultaneous equations, with the
unusual feature that the right-hand sides are not algebraic formulas of the
usual type, but rather `2-dimensional formulas' expressing how the spaces
are glued together.
(There's a conceptual link between this and the world of $n$-categories,
where there are 2-dimensional and higher-dimensional morphisms which you're
allowed to compose or `glue' in various ways. Both can be regarded as a
kind of higher-dimensional algebra. The \latin{cognoscenti} will see a
technological link too: in both contexts the gluing can be described by
pullback-preserving functors on categories of presheaves.)
The simultaneous equations \bref{eq:I1}--\bref{eq:I0} can be expressed as
follows. First, we have our spaces $I_1, I_2, I_3$ with their marked
points, which together form a functor from the category
\[
\scat{A} =
\left( \ \
\begin{diagram}[height=1.7em,width=3em,noPS]
& &1 \\
& & \\
0 &\pile{\rTo\\ \rTo\\ \rTo\\ \rTo}
&2 \\
&\pile{\rdTo\\ \rdTo\\ \rdTo\\ \rdTo}
& \\
& &3 \\
\end{diagram}
\ \
\right)
\]
to the category $\Set$ of sets (or a category of spaces, but
let's be conservative for the moment). Second, the gluing
formulas define a functor
\[
G: \ftrcat{\scat{A}}{\Set} \go \ftrcat{\scat{A}}{\Set},
\]
where $\ftrcat{\scat{A}}{\Set}$ is the category of functors $\scat{A} \go
\Set$: given $X \in \ftrcat{\scat{A}}{\Set}$, put
\begin{eqnarray*}
(G(X))_1 &= &
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(18,20)(-9,-10)
\cell{0.1}{0.1}{c}{\includegraphics{X1new.eps}}
\numcell{-7.7}{2}{1}
\numcell{-0.9}{1.8}{2}
\numcell{0.9}{1.8}{3}
\numcell{7.6}{2}{4}
\numcell{-7.7}{-2}{1}
\numcell{-0.9}{-1.8}{2}
\numcell{0.9}{-1.8}{3}
\numcell{7.6}{-2}{4}
\cell{0}{6}{c}{X_2}
\cell{0}{-6}{c}{X_2}
\end{picture}
\end{array}
=
(X_2 \amalg X_2)/\sim,
\\
(G(X))_2 &= &
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(52,31)(-26,0)
\cell{0.2}{0}{b}{\includegraphics{X2new.eps}}
\numcell{-23.6}{2.5}{1}
\numcell{-5}{7.8}{2}
\numcell{-2.3}{12.5}{3}
\numcell{-3}{28.5}{4}
\numcell{23.8}{2.5}{1}
\numcell{5.2}{7.8}{2}
\numcell{2.5}{12.5}{3}
\numcell{3}{28.5}{4}
\numcell{-2.8}{1.7}{1}
\numcell{-3.3}{4.8}{2}
\numcell{3.3}{4.8}{3}
\numcell{2.8}{1.7}{4}
\cell{-13}{16}{c}{X_2}
\cell{13}{16}{c}{X_2}
\cell{0}{2}{b}{X_3}
\end{picture}
\end{array}
=
(X_2 \amalg X_2 \amalg X_3)/\sim ,
\end{eqnarray*}
and so on. (I've drawn the pictures as if $X_0$ were a single point.)
Then the simultaneous equations assert precisely that
\[
I \iso G(I)
\]
---$I$ is a fixed point of $G$.
Although these simultaneous equations have many solutions ($G$ has many
fixed points), $I$ is in some sense the universal one. This means that the
simple diagrams \bref{eq:I1}--\bref{eq:I0} contain just as much information
as the apparently very complex spaces in Figure~\ref{fig:Julia}: for given
the system of equations, we recover these spaces as the universal solution.
Caveats: we're only interested in the intrinsic, topological aspects of
self-similar spaces, not how they're embedded into an ambient space (in
this case, the plane) or the metrics on them.
Next we have to find some general way of making rigorous the idea of
`gluing formula'; so far I've just drawn pictures. We have a small
category $\scat{A}$ whose objects index the spaces involved, and I claim
that the self-similarity equations are described by a functor $M:
\scat{A}^\op \times \scat{A} \go \Set$ (a `2-sided $\scat{A}$-module').
The idea is that for $b, a \in \scat{A}$,
\begin{eqnarray*}
M(b, a) &= &
\{\textrm{copies of the }b\textrm{th space used in the gluing formula} \\
& &
\ \,\textrm{for the }a\textrm{th space} \}.
\end{eqnarray*}
Take, for instance, our Julia set. In the gluing formula for $I_2$, the
one-point space $I_0$ appears 8 times, $I_1$ doesn't appear at all, $I_2$
appears twice, and $I_3$ appears once, so, writing $n$ for an
$n$-element set,
\[
M(0, 2) = 8,
\diagspace
M(1, 2) = \emptyset,
\diagspace
M(2, 2) = 2,
\diagspace
M(3, 2) = 1.
\]
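Reading equation \bref{eq:I1} in the same way (and assuming, as Figure
\ref{fig:Julia} suggests, that the two copies of $I_2$ are glued along
corresponding basepoints), the gluing formula for $I_1$ uses four copies of
the one-point space and two copies of $I_2$:
\[
M(0, 1) = 4,
\diagspace
M(1, 1) = \emptyset,
\diagspace
M(2, 1) = 2,
\diagspace
M(3, 1) = \emptyset.
\]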
(It's easy to get confused between the arrows $b \go a$ in $\scat{A}$ and
the elements of $M(b, a)$. The arrows of $\scat{A}$ say nothing whatsoever
about the gluing formulas, although they determine where gluing may
\emph{potentially} take place. The elements of $M$ embody the gluing
formulas themselves.)
This is an example of what I'll call a `self-similarity system':
\begin{defn} \label{defn:sss}
A \demph{self-similarity system} is a small category $\scat{A}$ together
with a functor $M: \scat{A}^\op \times \scat{A} \go \Set$ such that
\begin{enumerate}
\item \label{item:Mfinite}
for each $a \in \scat{A}$, the set $\coprod_{c, b \in \scat{A}} \scat{A}(c,
b) \times M(b, a)$ is finite
\item \label{item:Mpbf}
(a condition to be described later).
\end{enumerate}
\end{defn}
Part~\bref{item:Mfinite} says that in the system of simultaneous equations,
each right-hand side is a gluing of only a \emph{finite} family of spaces.
So we might have infinitely many spaces (in which case $\scat{A}$ would be
infinite), but each one is described as a finite gluing. The condition is
more gracefully expressed in categorical language: `for each $a$, the
category of elements of $M(\dashbk, a)$ is finite'.
As in our example, any self-similarity system $(\scat{A}, M)$ induces an
endofunctor $G$ of $\ftrcat{\scat{A}}{\Set}$. This works as follows.
First note that if $A$ is a ring (not necessarily commutative), $Y$ a right
$A$-module, and $X$ a left $A$-module, there is a tensor product $Y
\otimes_A X$ (a mere abelian group). Similarly, if $\scat{A}$ is a small
category, $Y: \scat{A}^\op \go \Set$ a contravariant functor, and $X:
\scat{A} \go \Set$ a covariant functor, there is a tensor product
\[
Y \otimes X
=
\left( \coprod_{a \in \scat{A}} Ya \times Xa \right) / \sim
\]
(a mere set): see~\cite[IX.6]{CWM}. So if $(\scat{A}, M)$ is a
self-similarity system then there is an endofunctor $G = M \otimes \dashbk$
of $\ftrcat{\scat{A}}{\Set}$ defined by
\[
(M \otimes X)(a)
=
M(\dashbk, a) \otimes X
=
\left( \coprod_{b \in \scat{A}} M(b, a) \times Xb \right) / \sim.
\]
($X \in \ftrcat{\scat{A}}{\Set}$, $a \in \scat{A}$). We are interested in
finding a fixed point of $G$ that is in some sense `universal'.
\section{Second example: Freyd's Theorem}
The second example I'll show you comes from a very different direction. In
December 1999, Peter Freyd posted a message~\cite{Fre} on the categories
mailing list that caused a lot of excitement, especially among the
theoretical computer scientists.
We'll need some terminology. Given a category $\cat{C}$ and an endofunctor
$G$ of $\cat{C}$, a \demph{$G$-coalgebra} is an object $X$ of $\cat{C}$
together with a map $\xi: X \go GX$. (For instance, if $\cat{C}$ is a
category of modules and $GX = X \otimes X$ then a $G$-coalgebra is a
coalgebra---not necessarily coassociative---in the usual sense.) A
\demph{map} $(X, \xi) \go (X', \xi')$ of coalgebras is a map $X \go X'$ in
$\cat{C}$ making the evident square commute. Depending on what $G$ is, the
category of $G$-coalgebras may or may not have a terminal object, but if it
does then it's a fixed point:
\begin{lemma}[Lambek~\cite{Lam}]
Let $\cat{C}$ be a category and $G$ an endofunctor of $\cat{C}$. If $(I,
\iota)$ is terminal in the category of $G$-coalgebras then $\iota: I \go
GI$ is an isomorphism.
\end{lemma}
\begin{proof}
Short and elementary.
\done
\end{proof}
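(For completeness, here is the usual argument. Since $(I, \iota)$ is a
coalgebra, so is $(GI, G\iota)$, and terminality provides a unique coalgebra
map $h: (GI, G\iota) \go (I, \iota)$. Note that $\iota$ is itself a coalgebra
map $(I, \iota) \go (GI, G\iota)$, so the composite $h \circ \iota$ is a
coalgebra endomorphism of the terminal coalgebra and is therefore the
identity; then
\[
\iota \circ h = Gh \circ G\iota = G(h \circ \iota) = 1_{GI}
\]
because $h$ is a coalgebra map. So $\iota$ is an isomorphism with inverse
$h$.)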
Here's what Freyd said, modified slightly. Let $\cat{C}$ be the category
whose objects are diagrams $X_0 \parpair{u}{v} X_1$ where $X_0$ and $X_1$
are sets and $u$ and $v$ are injections with disjoint images; then an
object of $\cat{C}$ can be drawn as
\[
\setlength{\unitlength}{1mm}
\begin{picture}(30,14)(-15,-5)
\cell{0}{0}{c}{\includegraphics{Xbare.eps}}
\cell{-10}{0}{c}{X_0}
\cell{10}{0}{c}{X_0}
\cell{0}{5.5}{b}{X_1}
\end{picture}
\]
where the copies of $X_0$ on the left and the right are the
images of $u$ and $v$ respectively. A map $X \go X'$ in
$\cat{C}$ consists of functions $X_0 \go X'_0$ and $X_1 \go X'_1$
making the evident two squares commute. Now, given $X \in
\cat{C}$ we can form a new object $GX$ of $\cat{C}$ by gluing
two copies of $X$ end to end:
\[
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(49,14)(-24.5,-5)
\cell{0}{0}{c}{\includegraphics{GXbare.eps}}
\cell{0}{0}{c}{X_0}
\cell{-20}{0}{c}{X_0}
\cell{20}{0}{c}{X_0}
\cell{-10}{5.5}{b}{X_1}
\cell{10}{5.5}{b}{X_1}
\end{picture}
\end{array}.
\]
Formally, $GX$ is defined by pushout:
\[
\begin{diagram}[size=2em]
& & & &(GX)_1 & & & & \\
& & &\ruTo & &\luTo & & & \\
& &X_1 & &\textrm{pushout} &
&X_1 & & \\
&\ruTo<u& &\luTo>v& &\ruTo<u& &\luTo>v& \\
(GX)_0 = X_0& & & &X_0 & & & &X_0. \\
\end{diagram}
\]
For example, the unit interval with its endpoints distinguished forms an
object
\[
I = \left( \{\star\} \parpair{0}{1} [0,1] \right)
\]
of $\cat{C}$, and $GI$ is naturally described as an interval of
length 2, again with its endpoints distinguished:
\[
GI = \left( \{\star\} \parpair{0}{2} [0,2] \right).
\]
So there is a coalgebra structure $\iota: I \go GI$ on $I$ given by
multiplication by two. Freyd's Theorem says that this is, in fact, the
\emph{universal} example of a $G$-coalgebra:
\begin{thm}[Freyd$+\epsln$] \label{thm:Freyd}
$(I, \iota)$ is terminal in the category of $G$-coalgebras.
\end{thm}
I won't go into the proof, but clearly it's going to have to involve the
completeness of the real numbers in an essential way. Once you've worked
out what `terminal coalgebra' is saying, it's easy to see that the proof is
going to be something to do with binary expansions. Note that although
$\iota$ is an isomorphism (as predicted by Lambek's Lemma), this by no
means determines $(I, \iota)$: consider, for instance, the unique coalgebra
satisfying $X_0 = X_1 = \emptyset$, or the evident coalgebra in which $X_0 =
\{\star\}$ and $X_1 = [0,1] \cap \{ \textrm{dyadic rationals} \}$.
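To indicate where the binary expansions enter (this is only a sketch of one
natural way to organize the proof): given a coalgebra $(X, \xi)$ and $x \in
X_1$, the element $\xi(x) \in (GX)_1$ lies in the left-hand or the right-hand
copy of $X_1$; record a digit $b_1(x) \in \{0, 1\}$ accordingly, and iterate
on the resulting element of $X_1$. The unique coalgebra map $X \go I$ is then
forced to send $x$ to
\[
\sum_{n \geq 1} b_n(x)\, 2^{-n} \in [0, 1].
\]
The points where $\xi(x)$ lands on the overlap of the two copies are
precisely where the two expansions of a dyadic rational arise, and making all
of this precise uses the completeness of the reals.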
The striking thing about Freyd's result is that we started with just some
extremely primitive notions of set, function, and gluing---and suddenly,
out popped the real numbers. What excited the computer scientists was that
it suggested new ways of representing the reals. But its relevance for us
here is that it describes the self-similarity of the unit interval---in
other words, the fact that it's isomorphic to two copies of itself stuck
end to end. We'll take this idea of Freyd, describing a very simple
self-similar space as a terminal coalgebra, and generalize it dramatically.
Freyd's Theorem concerns the self-similarity system $(\scat{A}, M)$ in
which
\[
\scat{A} =
\left( 0 \parpairu 1 \right)
\]
and $M: \scat{A}^\op \times \scat{A} \go \Set$ is given by
\[
\begin{diagram}
M(\dashbk, 0):& &\{\star\} &\pile{\lTo\\ \lTo} &\emptyset \\
& &\dTo<0 \dTo>1 & &\dTo \dTo \\
M(\dashbk, 1):& &\{ 0, \dhalf, 1\}&
\pile{\lTo^{\inf}\\ \lTo_{\sup}} &
\{ [0, \dhalf], [\dhalf, 1] \}. \\
\end{diagram}
\]
(Here $M(0, 1)$ is just a $3$-element set and $M(1, 1)$ a $2$-element set,
but their elements have been named suggestively.) The category $\cat{C}$
is a full subcategory of $\ftrcat{\scat{A}}{\Set}$, and the endofunctor $G$
of $\cat{C}$ is the restriction of the endofunctor $M \otimes \dashbk$ of
$\ftrcat{\scat{A}}{\Set}$.
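It is worth unwinding the tensor product in this example to check that it
really does reproduce $G$. Write $u$ for the arrow of $\scat{A}$ acting on
$M(\dashbk, 1)$ by $\inf$, and $v$ for the one acting by $\sup$. Then
$(M \otimes X)(0) = M(0,0) \times X_0 = X_0$, and
\[
(M \otimes X)(1)
=
\left( M(0, 1) \times X_0 \ \amalg \ M(1, 1) \times X_1 \right) / \sim,
\]
where the relation identifies $(0, x) \sim ([0, \dhalf], ux)$, $(\dhalf, x)
\sim ([0, \dhalf], vx) \sim ([\dhalf, 1], ux)$, and $(1, x) \sim ([\dhalf, 1],
vx)$ for $x \in X_0$. So $(M \otimes X)(1)$ consists of two copies of $X_1$
glued end to end, the image of $v$ in the first copy being identified with
the image of $u$ in the second: this is exactly $(GX)_1$.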
The only thing that looks like a barrier to generalization is the condition
that $u, v: X_0 \go X_1$ are injective with disjoint images (which is the
difference between $\cat{C}$ and $\ftrcat{\scat{A}}{\Set}$). If this were
dropped then $(\{\star\} \parpairu \{\star\})$ would be the terminal
coalgebra, so the theorem would degenerate entirely. It turns out that the
condition is really a kind of flatness.
A module $X$ over a ring is called `flat' if the functor $\dashbk \otimes
X$ preserves finite limits. There is an analogous definition when $X$ is a
functor, but actually we want something weaker:
\begin{defn}
Let $\scat{A}$ be a small category. A functor $X: \scat{A} \go \Set$ is
\demph{nondegenerate} if the functor
\[
\dashbk \otimes X: \pshf{\scat{A}} \go \Set
\]
preserves finite \emph{connected} limits. Write $\ndSet{\scat{A}}$ for the
category of nondegenerate functors $\scat{A} \go \Set$ and natural
transformations between them.
\end{defn}
It looks as if this is very abstract, in that to show $X$ was
nondegenerate you'd need to test it against all possible finite connected
limits in $\pshf{\scat{A}}$, but in fact there's an equivalent explicit
condition. This can be used to show that in the case at hand, a functor
$X: \scat{A} \go \Set$ is nondegenerate precisely when the two functions $u,
v: X_0 \go X_1$ are injective with disjoint images.
The missing condition~\bref{item:Mpbf} in the definition of self-similarity
system is that for each $b \in \scat{A}$, the functor $M(b, \dashbk):
\scat{A} \go \Set$ is nondegenerate. This guarantees that the endofunctor
$M \otimes \dashbk$ of $\ftrcat{\scat{A}}{\Set}$ restricts to an
endofunctor of $\ndSet{\scat{A}}$. A terminal coalgebra for this
restricted endofunctor is called a \demph{universal solution} of the
self-similarity system; if one exists, it's unique up to canonical
isomorphism. Lambek's Lemma implies that if $(I, \iota)$ is a universal
solution then, as the terminology suggests, $M \otimes I \iso I$. Freyd's
Theorem describes the universal solution of a certain self-similarity
system.
Before we move on, I'll show you a version of Freyd's Theorem in which the
unit interval is characterized not just as a set but as a topological
space. The simplest thing would be to change `set' to `space' and
`function' to `continuous map' throughout the above. Unsurprisingly, this
gives a boring topology on $[0,1]$: the indiscrete one, as it happens. But
all we need to do to get the usual topology is to insist that the maps $u$
and $v$ are closed. That is:
\setcounter{bean}{\value{thm}}
\begin{trivlist}
\item
\textbf{Theorem \arabic{bean}$'$}
\textit{Define $\cat{C'}$ and $G'$ as $\cat{C}$ and
$G$ were defined above, changing `set' to `space' and `function' to
`continuous map', and adding the condition that $u$ and $v$ are closed
maps. Then the terminal $G'$-coalgebra is $(I,\iota)$ where $[0,1]$ is
equipped with the Euclidean topology.}
\end{trivlist}
Generally, a functor $X: \scat{A} \go \Top$ is \demph{nondegenerate} if its
underlying $\Set$-valued functor is nondegenerate and $Xf$ is a closed map
for every map $f$ in $\scat{A}$. This gives a notion of \demph{universal
topological solution}, just as in the set-theoretic scenario. So
Theorem~\arabic{bean}$'$ describes the universal topological solution of
the Freyd self-similarity system.
\section{Results}
Just as some systems of equations have no solution, some self-similarity
systems have no universal solution. But it's easy to tell
whether there is one:
\begin{thm}
A self-similarity system has a universal solution if and only if it
satisfies a certain condition \So.
\end{thm}
I won't say what \So\ is, but it is completely explicit. So too is the
construction of the universal solution when it does exist; it is similar
in spirit to constructing the real numbers as infinite decimals, although
smoother than that would suggest.
Let $(\scat{A}, M)$ be a self-similarity system with universal solution
$(I, \iota)$. Then there is a canonical topology on each space $Ia$, with
the property that all the maps $If$ are continuous and closed and all the
maps $\iota_a$ are continuous. Again, the topology can be defined in a
completely explicit way.
\begin{thm} \label{thm:topo-terminal}
$(I, \iota)$ with this topology is the universal topological solution.
\end{thm}
Call a space \demph{self-similar} if it is homeomorphic to $Ia$ for some
self-similarity system $(\scat{A}, M)$ and some $a \in \scat{A}$, where
$(I, \iota)$ is the universal solution of $(\scat{A}, M)$. There is a
`recognition theorem' giving a practical way to recognize universal
solutions, and this generates some examples of self-similar spaces:
\begin{itemize}
\item $[0,1]$, as in the Freyd example
\item $[0,1]^n$ for any $n\in\nat$; more generally, the product of two
self-similar spaces is self-similar
\item the $n$-simplex $\Delta^n$ for any $n\in\nat$, by barycentric
subdivision
\item the Cantor set (isomorphic to two disjoint copies of itself)
\item Sierpi\'nski's gasket, and many other spaces defined by iterated
function systems.
\end{itemize}
The proof of Theorem~\ref{thm:topo-terminal} involves showing that each of
the spaces $Ia$ is compact and metrizable (or equivalently, compact
Hausdorff with a countable basis of open sets). So every self-similar
space is compact and metrizable. The shock is that the converse holds:
\begin{thm} \label{thm:ss-cm}
For topological spaces,
\[
\textrm{self-similar}
\iff
\textrm{compact metrizable}.
\]
\end{thm}
This looks like madness, so let me explain.
First, the result is non-trivial: the classical result that every nonempty
compact metrizable space is a continuous image of the Cantor set can be
derived as a corollary.
Second, the word `self-similar' is problematic (even putting aside the
obvious objection: what could be more similar to a thing than itself?).
When we formalized the idea of a system of self-similarity equations, we
allowed ourselves to have infinitely many equations, even though each
individual equation could involve only finitely many spaces. So there
might be infinite regress: for instance, $X_1$ could be described as a copy
of itself glued to a copy of $X_2$, $X_2$ as a copy of itself glued to a
copy of $X_3$, and so on. Perhaps `recursively realizable' would be better
than `self-similar'.
Finally, this theorem does not exhaust the subject: it characterizes the
spaces admitting \emph{at least one} self-similarity structure, but a space
may be self-similar in several essentially different ways.
There's a restricted version of Theorem~\ref{thm:ss-cm}. Call a space
\demph{discretely self-similar} if it is homeomorphic to one of the spaces
$Ia$ arising from a self-similarity system $(\scat{A}, M)$ in which the
category $\scat{A}$ is discrete (has no arrows except for identities). The
Cantor set is an example: we can take $\scat{A}$ to be the one-object
discrete category.
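(To spell this out: with a single object $0$, the module is determined by the
set $M(0,0)$, which we take to have two elements. A functor $\scat{A} \go
\Set$ is then just a set $X$, every such functor is nondegenerate, and
$M \otimes X \iso X \amalg X$; so the self-similarity equation reads
$I \iso I \amalg I$. One checks that the universal solution is the set
$\{0, 1\}^{\mathbb{N}}$ of binary sequences, with coalgebra map reading off
the first term, and that the canonical topology is the product topology, so
$I$ is indeed the Cantor set.)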
\begin{thm}
For topological spaces,
\[
\textrm{discretely self-similar}
\iff
\textrm{totally disconnected compact metrizable}.
\]
\end{thm}
Totally disconnected compact Hausdorff spaces are the same as profinite
spaces, and the metrizable ones are those that can be written as the limit
of a \emph{countable} system of finite discrete spaces. For instance, the
underlying space of the absolute Galois group
$\textrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ is discretely
self-similar.
If you find the general notion of self-similarity too inclusive, you may
prefer to restrict to finite categories $\scat{A}$, which gives the notion
of \demph{finite self-similarity}. This means that the system of equations
is finite. A simple cardinality argument shows that almost all compact
metrizable spaces are not finitely self-similar: there are only countably
many finite self-similarity systems, but uncountably many homeomorphism
classes of compact metrizable spaces.
I'll finish with two conjectures. They both say that certain types of
compact metrizable space \emph{are} finitely self-similar.
\begin{conj}
Every finite simplicial complex is finitely self-similar.
\end{conj}
I strongly believe this to be true. The standard simplices $\Delta^n$ are
finitely self-similar, and if we glue a finite number of them along faces
then the result should be finitely self-similar too. For example, by
gluing two intervals together we find that the circle is finitely
self-similar. Note that any manifold is as locally self-similar as could
be, in the sense of the introduction: every point is locally isomorphic to
every other point.
More tentatively,
\begin{conj}
The Julia set $J(f)$ of any complex rational function $f$ is finitely
self-similar.
\end{conj}
This brings us full circle: it says that in the first example, we could
have taken any rational function $f$ and seen the same type of behaviour:
after a finite number of decompositions, no more new spaces $I_n$ appear.
Both $J(f)$ and its complement are invariant under $f$, so $f$ restricts to
an endomorphism of $J(f)$ and this endomorphism is, with finitely many
exceptions, a $\deg(f)$-to-one mapping. This suggests that $f$ itself
should provide the global self-similarity structure of $J(f)$, and that if
$(\scat{A}, M)$ is the corresponding self-similarity system then the sizes
of $\scat{A}$ and $M$ should be bounded in terms of the degree of $f$.
\paragraph*{Acknowledgements}
I thank those who have given me the opportunity to speak on this:
Francis Borceux, Robin Chapman, Eugenia Cheng, Sjoerd Crans, Iain
Gordon, John Greenlees, Jesper Grodal, Peter May, Carlos Simpson,
and Bertrand To\"en. I am very grateful to Jon Nimmo for
creating Figure~\ref{fig:Julia}(a). I gratefully acknowledge a
Nuffield Foundation award NUF-NAL 04.
\small | {"config": "arxiv", "file": "math0411343/gsso.tex"} |
TITLE: Root equation - What am I missing?
QUESTION [3 upvotes]: There's a problem of which I know the solution but not the solving process:
$(\sqrt{x} + 7)(\sqrt{x} - 1) = \frac{105}{4}$
I'm convinced that up to:
$x + 6\sqrt{x} - 7 = \frac{105}{4}$
everything is correct.
But afterwards, I never seem to be able to get to the solution of $x = \frac{49}{4}$.
How can this equation be solved?
REPLY [4 votes]: Suggestion:
Multiply the given equation by $4$ to have integer coefficients only. (You may omit this step)
Expand the LHS.
$$4x+24\sqrt{x}-28=105.$$
Isolate the term with a square root on one side.
$$24\sqrt{x}=133-4x.$$
Square both sides (this may introduce an extraneous solution) and simplify. You get a quadratic equation. One of its two solutions satisfies the original equation: $$x_1=\frac{49}{4}.$$
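In detail, squaring $24\sqrt{x}=133-4x$ gives
$$576x = 17689 - 1064x + 16x^2, \qquad\text{that is,}\qquad 16x^2 - 1640x + 17689 = 0,$$
whose solutions are
$$x = \frac{1640 \pm \sqrt{1640^2 - 64\cdot 17689}}{32} = \frac{1640 \pm 1248}{32} \in \left\{\tfrac{361}{4}, \tfrac{49}{4}\right\}.$$
For $x = \frac{361}{4}$ we get $133-4x=-228<0$ while $24\sqrt{x}>0$, so this root is extraneous (introduced by the squaring), leaving $x_1=\frac{49}{4}$. | {"set_name": "stack_exchange", "score": 3, "question_id": 145498}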
\begin{document}
\title[]{Convexity of strata in diagonal pants graphs of surfaces}
\author{J. Aramayona, C. Lecuire, H. Parlier \& K. J. Shackleton}
\address{School of Mathematics, Statistics and Applied Mathematics,
NUI Galway, Ireland}
\email{javier.aramayona@nuigalway.ie}
\urladdr{http://www.maths.nuigalway.ie/$\sim$javier/}
\address{Laboratoire Emile Picard
Universit\'e Paul Sabatier, Toulouse, France}
\email{lecuire@math.ups-tlse.fr}
\urladdr{http://www.math.univ-toulouse.fr/$\sim$lecuire/}
\address{Department of Mathematics, University of Fribourg, Switzerland}
\email{hugo.parlier@unifr.ch}
\urladdr{http://homeweb.unifr.ch/parlierh/pub/}
\address{University of Tokyo IPMU\\ Japan 277-8583}
\email{kenneth.shackleton@impu.jp}
\urladdr{http://ipmu.jp/kenneth.shackleton/}
\begin{abstract}
We prove a number of convexity results for strata of the diagonal pants graph of a surface, in analogy with the extrinsic geometric properties of strata in the Weil-Petersson completion. As a consequence, we exhibit convex flat subgraphs of every possible rank inside the diagonal pants graph.
\end{abstract}
\maketitle
\section{Introduction}
Let $S$ be a connected orientable surface, with empty boundary and negative Euler characteristic. The {\em pants graph}
$\CP(S)$ is the graph whose vertices correspond to homotopy classes of pants decompositions of $S$, and where two vertices are adjacent
if they are related by an elementary move; see Section \ref{prelim} for an expanded definition. The graph $\CP(S)$ is connected, and becomes a geodesic metric space by declaring each edge to have length 1.
A large part of the motivation for the study of $\CP(S)$ stems from the result of Brock \cite{brock} which asserts that $\CP(S)$ is quasi-isometric to $\CT(S)$, the \teichmuller space of $S$ equipped with the Weil-Petersson metric. As such, $\CP(S)$ (or any of its relatives also discussed in this article) is a combinatorial model for \teichmuller space.
By results of Wolpert \cite{wolpert} and Chu \cite{chu}, the space $\CT(S)$ is not complete. Masur \cite{masur} proved that its completion
${\hat \CT}(S)$ is homeomorphic to the {\em augmented \teichmuller space} of $S$, obtained from $\CT(S)$ by extending Fenchel-Nielsen
coordinates to admit zero lengths. The completion ${\hat \CT}(S)$ admits a natural {\em stratified structure}: each stratum $\CT_C(S) \subset {\hat \CT}(S)$ corresponds to a multicurve $C \subset S$, and parametrizes surfaces with {\em nodes} exactly at the elements of $C$.
By Wolpert's result \cite{wolpert2} on the convexity of length functions, $\CT_C(S)$ is convex in ${\hat \CT}(S)$ for all multicurves $C \subset S$.
The pants graph admits an analogous stratification, where the stratum $\CP_C(S)$ corresponding to the multicurve $C$ is the subgraph of $\CP(S)$ spanned by those pants decompositions that contain $C$.
Moreover, Brock's quasi-isometry between $\CP(S)$ and $\CT(S)$ may be chosen so that the image of $\CP_C(S)$ is contained in $\CT_C(S)$.
In light of this discussion, it is natural to study which strata of $\CP(S)$ are convex. This problem was addressed in \cite{APS1, APS2}, where certain families of strata in $\CP(S)$ were proven to be totally geodesic; moreover,
it is conjectured that this is the case for all strata of $\CP(S)$, see Conjecture 5 of \cite{APS1}. As was observed by Lecuire, the validity of this conjecture is equivalent to the existence of only finitely many geodesics between any pair of vertices of $\CP(S)$; we will give a proof of the equivalence of these two problems in the Appendix.
The main purpose of this note is to study the extrinsic geometry of strata in certain graphs of pants decompositions closely related to $\CP(S)$, namely the {\em diagonal pants graph} $\DP(S)$ and the {\em cubical pants graph}. Concisely, $\DP(S)$ (resp. $\CCP(S)$) is obtained from $\CP(S)$ by adding an edge of length 1 (resp. length $\sqrt{k}$) between any two pants decompositions that differ by $k$ {\em disjoint} elementary moves. Note that Brock's result \cite{brock} implies that both $\DP(S)$ and $\CCP(S)$ are quasi-isometric to $\CT(S)$, since they are quasi-isometric to $\CP(S)$. These graphs have recently arisen in the study of metric properties of moduli spaces; indeed, Rafi-Tao \cite{RaTa} use $\DP(S)$ to estimate the Teichm\"uller diameter of the thick part of moduli space, while $\CCP(S)$ has been
used by Cavendish-Parlier to give bounds on the Weil-Petersson diameter of moduli space.
As above, given a multicurve $C\subset S$, denote by $\DP_C(S)$ (resp. $\CCP_C(S))$ the subgraph of $\DP(S)$ (resp. $\CCP(S)$) spanned by those pants decompositions which contain $C$. Our first result is:
\begin{theorem}\label{thm:punctures}
Let $S$ be a sphere with punctures and $C \subset S$ a multicurve. Then $\DP_C(S)$
is convex in $\DP(S)$.
\end{theorem}
We remark that, in general, strata of $\DP(S)$ are not totally geodesic; see Remark \ref{rmk:outside3} below.
The proof of Theorem \ref{thm:punctures} will follow directly from properties of the {\em forgetful maps} between graphs of pants decompositions. In fact, the same techniques will allow us to prove the following more general result:
\begin{theorem}
Let $Y$ be an essential subsurface of a surface $S$ such that $Y$ has the same genus as $S$.
Let $C$ be the union of a pants decomposition of $S \setminus Y$ with all the boundary
components of $Y$. Then:
\begin{enumerate}
\item $\DP_C(S)$ is convex in $\DP(S)$.
\item If $Y$ is connected, then $\CP_C(S)$ and $\CCP_C(S)$ are totally geodesic inside $\CP(S)$ and $\CCP(S)$, respectively.
\end{enumerate}
\label{main2}
\end{theorem}
Next, we will use an enhanced version of the techniques in \cite{APS1} to show an analog of Theorem \ref{thm:punctures} for
general surfaces, provided the multicurve $C$ has {\em deficiency} 1; that is, it has one curve less than a pants decomposition.
Namely, we will prove:
\begin{theorem}\label{thm:general}
Let $C\subset S$ be a multicurve of deficiency $1$. Then:
\begin{enumerate}
\item $\DP_C(S)$
is convex in $\DP(S)$.
\item $\CCP_C(S)$
is totally geodesic in $\CCP(S)$.
\end{enumerate}
\end{theorem}
We observe that part (2) of Theorem \ref{thm:general} implies the main results in \cite{APS1}.
\medskip
We now turn to discuss some applications of our main results. The first one concerns the existence of convex flat subspaces of $\DP(S)$ of every possible rank. Since $\DP(S)$ is quasi-isometric to $\CP(S)$, the results of Behrstock-Minsky \cite{bemi}, Brock-Farb \cite{brfa} and Masur-Minsky \cite{MM2} together yield that $\DP(S)$ admits a quasi-isometrically embedded copy of $\BZ^r$ if and only if $r\le [\frac{3g+p-2}{2}]$. If one considers ${\hat \CT}(S)$ instead of $\DP(S)$, one obtains {\em isometrically} embedded copies of $\BZ^r$ for the exact same values of $r$. In \cite{APS2}, convex copies of $\BZ^2$ were exhibited in $\CP(S)$, and it is unknown whether higher rank convex flats appear. The corresponding question for the diagonal pants graph is whether there exist convex copies of $\BZ^r$ with a modified metric which takes into account the edges added to obtain $\DP(S)$; we will denote this metric space $\DZ^r$. Using Theorems \ref{thm:punctures} and \ref{main2}, we will obtain a complete answer to this question, namely:
\begin{corollary}\label{cor:punctures}
There exists an isometric embedding $\DZ^r \to \DP(S)$ if and only if $r \le [\frac{3g+p-2}{2}]$.
\end{corollary}
Our second application concerns the {\em finite geodesicity} of different graphs of pants decompositions. Combining Theorem \ref{thm:punctures} with the main results of \cite{APS1, APS2}, we will obtain the following:
\begin{corollary}
Let $S$ be the six-times punctured sphere, and let $P,Q$ be two vertices of $\CP(S)$. Then there are only finitely many geodesics in $\CP(S)$ between $P$ and $Q$.
\label{geodesics}
\end{corollary}
The analogue of Corollary \ref{geodesics} for $\DP(S)$ is not true. Indeed, we will observe that if $S$ is a sphere with at least six punctures, one can find pairs of vertices of $\DP(S)$ such that there are infinitely many geodesics in $\DP(S)$ between them. However, we will prove:
\begin{corollary} Given any surface $S$, and any $k$, there exist points in $\DP(S)$ at distance $k$ with only a finite number of geodesics between them.
\label{cor:far}
\end{corollary}
\noindent {\bf Acknowledgements.} The authors would like to thank Jeff Brock and Saul Schleimer for conversations.
\section{Preliminaries}
\label{prelim}
Let $S$ be a connected orientable surface with empty boundary and negative Euler characteristic. The complexity
of $S$ is the number $\kappa(S) =3g-3+p$, where $g$ and $p$ denote, respectively, the genus and number of punctures of $S$.
\subsection{Curves and multicurves} By a curve on $S$ we will mean
a homotopy class of simple closed curves on $S$; we will often blur the distinction between a curve and any of its representatives.
We say that a curve $\alpha \subset S$ is essential if no representative of $\alpha$ bounds a disk with at most one puncture.
The geometric intersection number between two curves $\alpha$ and $\beta$ is defined as \[ i(\alpha, \beta) = \min\{|a \cap b| : a \in \alpha, b \in \beta\}.\]
If $\alpha\neq\beta$ and $i(\alpha, \beta) = 0$ we say that $\alpha$ and $\beta$ are disjoint; otherwise, we say they intersect. We say that the curves $\alpha$ and $\beta$ fill $S$ if $i(\alpha,\gamma)+i(\beta,\gamma)>0$ for all curves $\gamma \subset S$.
A multicurve is a collection of pairwise distinct and pairwise disjoint essential curves. The deficiency of a multicurve $C$ is defined as $\kappa(S) - |C|$, where
$|C|$ denotes the number of elements of $C$.
\subsection{Graphs of pants decompositions}
A pants decomposition $P$ of $S$ is a multicurve that is maximal with respect to inclusion. As such, $P$ consists of exactly $\kappa(S)$ curves, and has a representative $P'$ such that every connected component of $S \setminus P'$ is homeomorphic to a sphere with three punctures, also known as a {\em pair of pants}.
We say that two pants decompositions of $S$ are related by an elementary move
if they have exactly $\kappa(S) -1$ curves in common, and the remaining two curves either fill a one-holed torus and intersect
exactly once, or else they fill a four-holed sphere and intersect exactly twice. Observe that every elementary move determines a unique subsurface of $S$ of complexity 1; we will say that two elementary moves are disjoint if the subsurfaces they determine are disjoint.
As mentioned in the introduction, the {\em pants graph} $\CP(S)$ of $S$ is the simplicial graph whose vertex set is the set of pants decompositions of $S$, considered
up to homotopy, and where two pants decompositions are adjacent in $\CP(S)$ if and only if they are related by an elementary move. We turn $\CP(S)$ into a geodesic metric space by declaring the length of each edge to be 1.
The diagonal pants graph $\DP(S)$ is the simplicial graph obtained from $\CP(S)$ by adding an edge of unit length between any two vertices that differ by $k\geq 2$ disjoint elementary moves.
The cubical pants graph $\CCP(S)$
is obtained from $\CP(S)$ by adding an edge {\em of length} $\sqrt{k}$ between any two vertices that differ
by $k\ge 2$ disjoint elementary moves.
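To compare the three metrics: if $P$ and $Q$ differ by exactly $k$ pairwise disjoint elementary moves, then
\[
d_{\CP(S)}(P,Q) = k, \hspace{.5cm} d_{\DP(S)}(P,Q) = 1, \hspace{.5cm} d_{\CCP(S)}(P,Q) = \sqrt{k}.
\]
In particular, $d_{\DP(S)} \le d_{\CCP(S)} \le d_{\CP(S)}$ on vertices.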
\section{The forgetful maps: proofs of Theorems \ref{thm:punctures} and \ref{main2}}
The idea of the proof of Theorems \ref{thm:punctures} and \ref{main2} is to use the so-called {\em forgetful maps} to define a distance non-increasing
projection from the diagonal pants graph to each of its strata.
\subsection{Forgetful maps} Let $S_1$ and $S_2$ be connected orientable surfaces with empty boundary.
Suppose that $S_1$ and $S_2$ have equal genus, and that $S_2$ has at most as many punctures as $S_1$.
Choosing an identification between the set of punctures of $S_2$ and a subset of the set of punctures of $S_1$ yields a map $\tilde{\psi}:S_1 \to \tilde{S}_1$, where $\tilde{S}_1$ is obtained by forgetting all punctures of $S_1$ that do not correspond to a puncture of $S_2$. Now, $\tilde{S}_1$ and $S_2$ are homeomorphic; by choosing a homeomorphism $\varphi:\tilde{S}_1\to S_2$, we obtain a map $\psi=\varphi\circ \tilde{\psi}: S_1 \to S_2$. We refer to all such maps $\psi$ as {\em forgetful maps}.
Observe that if $\alpha\subset S_1$ is a curve then $\psi(\alpha)$ is a (possibly homotopically trivial) curve on $S_2$. Also, $i(\psi(\alpha),\psi(\beta))\le i(\alpha,\beta)$ for all curves $\alpha, \beta \subset S_1$. As a consequence, if $C$ is a multicurve on $S_1$ then $\psi(C)$ is a multicurve on $S_2$; in particular $\psi$ maps pants decompositions to pants decompositions. Lastly, observe that if $P$ and $Q$ are pants decompositions of $S_1$ that differ by at most $k$ pairwise disjoint elementary moves, then $\psi(P)$ and $\psi(Q)$ differ by at most $k$ pairwise disjoint elementary moves. We summarize these observations as a lemma.
\begin{lemma} Let $\psi:S_1 \to S_2$ be a forgetful map. Then:
\begin{enumerate}
\item If $P$ is a pants decomposition of $S_1$, then $\psi(P)$ is a pants decomposition of $S_2$.
\item If $P$ and $Q$ are related by $k$ disjoint elementary moves in $S_1$, then $\psi(P)$ and $\psi(Q)$ are related by at most $k$ disjoint elementary moves in $S_2$. \hfill$\square$
\end{enumerate}
\label{lem:forget}
\end{lemma}
In light of these properties, we obtain that forgetful maps induce distance non-increasing maps $$\CP(S_1) \to \CP(S_2)$$ and, similarly, $$\DP(S_1) \to \DP(S_2) \hspace{.5cm} \text{ and } \hspace{.5cm} \CCP(S_1)\to \CCP(S_2).$$
\subsection{Projecting to strata} \label{sec:projection} Let $S$ be a sphere with punctures, $C$ a multicurve on $S$, and consider $S \setminus C = X_1 \sqcup \ldots \sqcup X_n$.
For each $i$, we proceed as follows. On each connected component of $S\setminus X_i$ we choose a puncture of $S$. Let $\bar X_i$ be the surface
obtained from $S$ by forgetting all punctures of $S$ in $S\setminus X_i$ except for these chosen punctures. Noting that $\bar X_i$ is naturally homeomorphic to $X_i$, as above
we obtain a map $$\phi_i: \CP(S) \to \CP(X_i),$$ for all $i = 1, \ldots, n$.
We now define a map $$\phi_C: \CP(S) \to \CP_C(S)$$ by setting $$ \phi_C(P)= \phi_1(P)\cup \ldots \cup \phi_n(P) \cup C.$$ Abusing notation, observe that we also obtain
maps $$\phi_C: \DP(S) \to \DP_C(S) \hspace{.5cm} \text{ and } \hspace{.5cm} \phi_C: \CCP(S) \to \CCP_C(S).$$ The following is an immediate consequence of the definitions and Lemma \ref{lem:forget}:
\begin{lemma}
Let $C\subset S$ be a multicurve. Then
\begin{enumerate}
\item If $P$ is a pants decomposition of $S$ that contains $C$, then $\phi_C(P) =P$.
\item For all $P,Q \in \DP(S)$, $d_{\DP(S)}(\phi_C(P), \phi_C(Q)) \le d_{\DP(S)}(P,Q).$ \hfill$\square$
\end{enumerate}
\label{project}
\end{lemma}
\subsection{Proof of Theorem \ref{thm:punctures}} After the above discussion, we are ready to give a proof of our first result:
\begin{proof}[Proof of Theorem \ref{thm:punctures}] Let $P$ and $Q$ be vertices of $\DP_C(S)$, and let $\omega$ be a path in $\DP(S)$ between them. By Lemma \ref{project}, $\phi_C(\omega)$ is a path in $\DP_C(S)$ between $\phi_C(P)=P$ and $\phi_C(Q)=Q$, of length at most that of $\omega$. Hence
$\DP_C(S)$ is convex.
\end{proof}
\begin{remark}[Strata that are not totally geodesic]
{\rm If $S$ has at least 6 punctures, there exist multicurves $C$ in $S$ for which $\DP_C(S)$ fails to be totally geodesic. For instance, let $C= \alpha \cup \beta$, where $\alpha$ cuts off
a four-holed sphere $Z\subset S$ and $\beta \subset Z$. Let $P, Q\in \DP_C(S)$ at distance at least 3, and choose a geodesic path $$P=P_1, P_2, \ldots, P_{n-1}, P_n = Q$$ in $\DP_C(S)$ between them. For each $i= 2, \ldots, n-1$, let $P_i'$ be a pants decomposition obtained from $P_i$ by replacing the curve $\beta$ with a curve $\beta' \subset Z$ such that $P_i$ and $P'_i$ are related by an elementary move for all $i$. Observing that $P$ and $P'_2$ (resp. $P'_{n-1}$ and $Q$) are related by two disjoint elementary moves, and therefore are adjacent in $\DP(S)$, we obtain a geodesic path $$P=P_1, P'_2, \ldots, P'_{n-1}, P_n = Q$$ between $P$ and $Q$ that lies entirely outside $\DP_C(S)$ except at the endpoints. In particular, $\DP_C(S)$ is not totally geodesic. }
\label{rmk:outside}
\end{remark}
\subsection{Proof of Theorem \ref{main2}} Let $Y$ be a subsurface of $S$ such that $Y$ has the same genus as $S$. Each component of $\partial Y$ bounds in $S$ a punctured disc. As in Section \ref{sec:projection}, we may define a projection $$\phi_Y: \CP(S) \to \CP(Y),$$ this time by choosing one puncture of $S$ in each punctured disc bounded by a component of $\partial Y$. Let $C$ be the union of a pants decomposition of $S\setminus Y$ with all the boundary components of $Y$. We obtain a map:
$$\phi_C: \CP(S) \to \CP_C(S)$$ by setting $\phi_C(P)= \phi_Y(P) \cup C.$ Again abusing notation, we also have maps $$\phi_C: \DP(S) \to \DP_C(S) \hspace{.5cm} \text{ and } \hspace{.5cm} \phi_C: \CCP(S) \to \CCP_C(S).$$
\begin{proof}[Proof of Theorem \ref{main2}]
We prove part (2) only, since the proof of part (1) is completely analogous to that of Theorem \ref{thm:punctures}. Since $Y$ is connected, Lemma \ref{lem:forget} implies that, for any pants decompositions $P$ and $Q$ of $S$, the projections $\phi_C(P)$ and $\phi_C(Q)$ differ by at most as many elementary moves as $P$ and $Q$ do.
In other words, the maps $\phi_C: \CP(S) \to \CP_C(S)$ and $\phi_C: \CCP(S) \to \CCP_C(S)$ are distance non-increasing.
Suppose, for contradiction, there exist $P,Q \in \CP_C(S)$ and a geodesic $\omega: P=P_0,P_1, \ldots, P_n=Q$ such that $P_i\in \CP_C(S)$ if and only if $i=0,n$.
Let $\tilde{P}_i = \phi_C(P_i)$ for all $i$, noting that $\tilde{P}_0=P$ and $\tilde{P}_n=Q$. Moreover, since $P$ contains $C$ but $P_1$ does not, we have
$\tilde{P}_1= \phi_C(P_1) = P_0$. Therefore, the projected path $\phi_C(\omega)$ is strictly shorter (both in $\CP(S)$ and $\CCP(S))$ than $\omega$, and the result
follows. \end{proof}
\begin{remark}
{\rm If $S$ has at least $3$ punctures (and genus at least $1$), then there are multicurves $C\subset S$ such that $\DP_C(S)$ is not totally geodesic. One such example is $C=\alpha\cup\beta$, where $\alpha$ cuts off a four-holed sphere $Z\subset S$, and $\beta \subset Z$; compare with Remark \ref{rmk:outside}.}
\label{rmk:outside2}
\end{remark}
\section{Convex Farey graphs: proof of Theorem \ref{thm:general}}
The main goal of this section is to prove that, for any surface $S$ and any multicurve $C\subset S$ of deficiency 1, the stratum $\DP_C(S)$ is convex in $\DP(S)$. This is the analogue for $\DP(S)$ of the main result of \cite{APS1}, which states that any isomorphic copy of a Farey graph in $\CP(S)$ is totally geodesic; compare with Remark \ref{rmk:outside3} below.
\subsection{Geometric subsurface projections}
We recall the definition and some properties of subsurface projections, as introduced by Masur-Minsky \cite{MM2}, in the particular case where the subsurface $F$ has complexity $1$.
Let $C\subset S$ be a deficiency 1 multicurve; as such, $S\setminus C$ contains an incompressible subsurface $F$ of complexity 1, thus either a one-holed torus or a four-holed sphere.
Let $\alpha\subset S$ be a curve that intersects $F$ essentially. Let $a$ be a connected component of $\alpha \cap F$; as such, $a$ is either a curve in $F$, or an arc with endpoints on $\partial F$. Now, there is exactly one curve $\gamma_a$ in $F$ that is disjoint from $a$, and we refer to the pants decomposition $\gamma_a \cup C$ as a projection of $\alpha$.
We write $\pi_C(\alpha)$ to denote the set of all projections of $\alpha$, each counted once; we note that $\pi_C(\alpha)$ depends only on $\alpha$ and $C$. If $D\subset S$ is a multicurve, then $\pi_C(D)$ is the union
of the projections of the elements of $D$. In the special case where a multicurve $D$ does not intersect $F$ essentially, we set $\pi_C(D) = \emptyset$. Finally, observe that if $D$ contains a curve $\beta$ that is contained in $F$, then $\pi_C(D)=\beta\cup C$.\\
We will need the following notation. Given an arc $a\subset F$ with endpoints in $\partial F$, we call it a {\it wave} if its endpoints belong to the same boundary component of $F$; otherwise, we call it a {\it seam}. These two types of arcs are illustrated in Figure \ref{fig:arcs}. Observe that any arc in a one-holed torus is a wave.
\begin{figure}[h]
\leavevmode \SetLabels
\endSetLabels
\begin{center}
\AffixLabels{\centerline{\epsfig{file =Figures/arcs1.pdf,height=4.5cm,angle=0}\hspace{.2cm} \epsfig{file =Figures/arcs2.pdf,height=4.5cm,angle=0}}}
\end{center}
\caption{A wave and a seam on a four-holed sphere, a wave on a one-holed torus (and their projections)} \label{fig:arcs}
\end{figure}
\subsection{Preliminary lemmas}
Unless specified, $C$ will always be a deficiency $1$ multicurve on $S$, and $F$ the unique (up to isotopy) incompressible subsurface of complexity 1 in $S\setminus C$.
Observe that, in this case, $\DP_C(S)$ is isomorphic to $\CP(F)$. We will need some lemmas, similar to those previously used in \cite{APS1}. First, since any two disjoint arcs
in a one-holed torus project to curves that intersect at most once, we have:
\begin{lemma}\label{lem:disjointT}
Suppose $F$ is a one-holed torus, and let $\alpha,\beta\subset S$ be curves with $i(\alpha,\beta)=0$. Then, $\pi_C(\alpha\cup \beta)$ has diameter at most 1 in $\CP(F)$.\hfill $\square$
\end{lemma}
For $F$ a four-holed sphere, we need the following instead:
\begin{lemma}\label{lem:dis2}
Let $F$ be a four-holed sphere, and let $\alpha,\beta\subset F$ be curves with $i(\alpha,\beta)\le 8$. Then $d_{\CP(F)}(\alpha, \beta) \le 2$.
\end{lemma}
\begin{proof}
It is well-known that curves in $F$ correspond to elements of $\BQ \cup \{\infty\}$ expressed in lowest terms. Moreover, if $\alpha=p/q$ and $\beta=r/s$, then $i(\alpha,\beta) = 2|ps-rq|$. Up to the action of ${\rm SL}(2, \BZ)$, we may assume that $\alpha = 0/1$ and $\beta = r/s$. Since $i(\alpha, \beta) \le 8$, then $r\le 4$; suppose, for concreteness, that $r=4$. We are looking for a curve $\gamma=u/v$ with $i(\alpha,\gamma)= i(\beta,\gamma)=2$. Since $i(\alpha, \gamma) =2$, then $u=1$. Since ${\rm gcd}(r,s) =1$, then $s=4k+1$ or $s=4k+3$, for some $k$. In the former case, we set $v = k$ and, in the latter case, we set $v=k+1$. \end{proof}
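For a concrete instance of the proof, consider $\alpha = 0/1$ and $\beta = 4/7$, so that $i(\alpha,\beta) = 2|0\cdot 7 - 1\cdot 4| = 8$. Here $s = 7 = 4\cdot 1 + 3$, so we take $\gamma = 1/2$; indeed $i(\alpha,\gamma) = 2$ and $i(\beta,\gamma) = 2|4\cdot 2 - 1\cdot 7| = 2$, whence $d_{\CP(F)}(\alpha,\beta) \le 2$.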
As a direct consequence of the above, we obtain:
\begin{lemma}\label{lem:disjoint}
Suppose $F$ is a four-holed sphere, and let $\alpha,\beta\subset S$ be curves with $i(\alpha,\beta)=0$. Then, $\pi_C(\alpha\cup \beta)$ has diameter at most 2 in $\CP(F)$. \hfill $\square$
\end{lemma}
We will also need the following refinement of Lemma 9 of \cite{APS1}:
\begin{lemma}\label{lem:twocurves}
Let $P$ be a pants decomposition of $S$ with $\partial F \not\subset P$ and let $\alpha_1 \in P$ be a curve that essentially intersects $F$. Then there exists $\alpha_2\in P\setminus \alpha_1$ that essentially intersects $F$. Furthermore, $\alpha_2$ can be chosen so that $\alpha_1$ and $\alpha_2$ share a pair of pants.
\end{lemma}
\begin{proof}
We prove the first assertion. Suppose, for contradiction, that $\alpha_1$ is the only element of $P$ that essentially intersects $F$. Since $\partial F \not\subset P$ and $P\setminus \alpha_1$ has deficiency 1, then $P\setminus \alpha_1 \cup \partial F$ is a pants decomposition of $S$, which is impossible since its complement contains $F$, which has complexity 1. Thus there is a curve $\tilde{\alpha}\neq \alpha_1$ of $P$ essentially intersecting $F$.
To show that a second curve $\alpha_2$ can be chosen sharing a pair of pants with $\alpha_1$, consider any path $c:[0,1]\longrightarrow F$ with $c(0)\in \alpha_1$ and $c(1)\in \tilde{\alpha}$ and choose $\alpha_2$ to be the curve on which $c(\tau)$ lies, where $\tau = \min\{t | c(t) \in P\setminus \alpha_1 \}$. Thus there is a path between $\alpha_1$ and $\alpha_2$ which intersects $P$ only at its endpoints. Therefore, $\alpha_1$ and $\alpha_2$ share a pair of pants.
\end{proof}
The next lemma will be crucial to the proof of the main result. In what follows, $d_C$ denotes distance in $\DP_C(S)=\CP_C(S)$.
\begin{lemma}\label{lem:mainlemma}
Let $F$ be a four-holed sphere. Let $P,P'$ be pants decompositions at distance $1$ in $\DP(S)$ and let $\alpha\in P$, $\alpha' \in P'$ be curves related by an elementary move. Suppose both curves intersect $F$ essentially, and let $a$ be an arc of $\alpha$ on $F$. Then at least one of the following two statements holds:
\begin{enumerate}[i)]\item\label{casei}There exists $\tilde{\alpha} \in P\cap P'$ and an arc $\tilde{a}$ of $\tilde{\alpha}$ on $F$ such that
$$d_C(\pi_C(a), \pi_C(\tilde{a})) \leq 1.$$
\item\label{caseii} The curve $\alpha'$ satisfies
$$d_C(\pi_C(a), \pi_C(\alpha')) \leq 2.$$
\end{enumerate}
\end{lemma}
\begin{remark} The example illustrated in Figure \ref{fig:lemproof1-2} shows that case \ref{caseii}) does not cover everything. The two arcs labelled $a$ and $a'$, which can indeed be subarcs of curves related by an elementary move, produce projected curves that are at distance $3$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{lem:mainlemma}]
First, we choose representatives of $\alpha$ and $\alpha'$ that realize $i(\alpha,\alpha')$; abusing notation, we denote these representatives by $\alpha$ and $\alpha'$.
Let $a$ (resp. $a'$) be any arc of $\alpha$ (resp. $\alpha'$) in $F$. Now, observe that if $i(a,a')\le 1$ then
$i(\pi_C(a), \pi_C(a')) \leq 8$. The same is true if at least one of $a$ or $a'$ is a wave. In either case, Lemma \ref{lem:dis2} yields that $d_C(\pi_C(a), \pi_C(\alpha')) \leq 2$ and thus the result follows.
Therefore, it suffices to treat the case where $i(a,a')=2$ and both $a,a'$ are seams. In this case, we will show that conclusion \ref{casei}) holds.
\begin{figure}[h]
\leavevmode \SetLabels
\L(.23*.4) $c'$\\
\L(.34*.73) $a$\\
\L(.83*.73) $a$\\
\L(.86*.41) $a'$\\
\L(.18*.73) $c$\\
\L(.075*-.023) $\delta$\\
\L(.57*-.023) $\delta$\\
\endSetLabels
\begin{center}
\AffixLabels{\centerline{\epsfig{file =Figures/lemproof1.pdf,width=6.5cm,angle=0}\hspace{1.5cm} \epsfig{file =Figures/lemproof2.pdf,width=6.5cm,angle=0}}}
\end{center}
\caption{The arcs $a$ and $a'$, shown in the four-holed sphere $F$.} \label{fig:lemproof1-2}
\end{figure}
Consider the subarcs $c\subset a$ and $c'\subset a'$ between the two intersection points of $a$ and $a'$. As $\alpha$ and $\alpha'$ intersect twice and are related by an elementary move, they fill a unique (up to isotopy) incompressible four-holed sphere $X\subset S\setminus(P\cap P')$; in particular, they have algebraic intersection number 0.
As the endpoints of $a'$ must lie on different boundary components of $F$, checking the different possibilities shows that the only topological configuration of the two arcs is the one shown in Figure \ref{fig:lemproof1-2}.
Denote by $\delta$ the curve in the homotopy class of $c \cup c'$. This curve is a boundary curve of $F$, as is illustrated in Figure \ref{fig:lemproof1-2}, but is also one of the boundary curves of $X$. This is because $\delta$ is disjoint from all the boundary curves of $X$, and is not homotopically trivial since $i(\alpha,\alpha')=2$. In $X$,
the curve $\alpha'$ and the arc $c$ determine two boundary curves of $X$, namely $\delta$ and another one we shall denote $\tilde{\delta}$ (see Figure \ref{fig:lemproof3}).
\begin{figure}[h]
\leavevmode \SetLabels
\L(.44*.46) $\alpha$\\
\L(.53*.73) $\alpha'$\\
\L(.6*.38) $c$\\
\L(.67*.96) $\delta$\\
\L(.67*-.025) $\tilde{\delta}$\\
\endSetLabels
\begin{center}
\AffixLabels{\centerline{\epsfig{file =Figures/lemproof3.pdf,width=6.5cm,angle=0}}}
\end{center}
\caption{The surface $X$ determined by $\alpha$ and $\alpha'$} \label{fig:lemproof3}
\end{figure}
Note that the curve $\tilde{\delta}$ by definition shares a pair of pants with $\alpha$ in $P$ and with $\alpha'$ in $P'$.
Now, in $F$ the arc $\tilde{c}$, illustrated in Figure \ref{fig:lemproof4}, is an essential subarc of $\tilde{\delta}$. Clearly,
$$
d_C(\pi_C(\tilde{c}),\pi_C(a))=1;
$$
see again Figure \ref{fig:lemproof4}. As $\tilde{\delta} \in P\cap P'$, we can conclude that \ref{casei}) holds and thus the lemma is proved.
\end{proof}
\begin{figure}[h]
\leavevmode \SetLabels
\L(.41*.46) $\tilde{c}$\\
\L(.855*.7) $\tilde{c}$\\
\L(.64*.72) $a$\\
\L(.6*.35) $\pi(a)$\\
\L(.74*.47) $\pi(\tilde{c})$\\
\endSetLabels
\begin{center}
\AffixLabels{\centerline{\epsfig{file =Figures/lemproof4.pdf,width=13.5cm,angle=0}}}
\end{center}
\caption{The arc $\tilde{c}$ in $F$, together with the projections $\pi(a)$ and $\pi(\tilde{c})$} \label{fig:lemproof4}
\end{figure}
\subsection{Proof of Theorem \ref{thm:general}}
We will prove the following more precise statement, which implies part (1) of Theorem \ref{thm:general}. Again, $d_C$ denotes distance in $\DP_C(S)$ (or, equivalently, in $\CCP_C(S)$, since both are isomorphic).
\begin{theorem}
Let $C\subset S$ be a deficiency 1 multicurve, and let $F$ be the unique (up to isotopy) incompressible surface of complexity 1 in $S\setminus C$. Let $P,Q \in \DP_C(S)$, and let $P=P_0,\hdots,P_n=Q$ be a path in $\DP(S)$ between them. If $\partial F \not\subset P_k$ for at least one $k$,
then $n>d_C(P,Q)$.
\label{thm:themeat}
\end{theorem}
\begin{remark}
In fact, Theorem \ref{thm:themeat} also implies part (2) of Theorem \ref{thm:general}, since $d_{\CCP(S)}(P,Q) \ge d_{\DP(S)}(P,Q)$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:themeat}]
Let $P=P_0,\hdots,P_n=Q$ be a path in $\DP(S)$. It suffices to consider the case where $\partial F$ is not contained in $P_k$ for any $k\notin \{0,n\}$.
The proof relies on projecting to curves in $\pi_C(P_k)$ to produce a path $\tilde{P}_0, \ldots, \tilde{P}_n$ in $\DP_C(S)$, in such a way that $d(\tilde{P}_i,\tilde{P}_{i+1})\leq 1$. The strict inequality will come from the observation that, since $P_0$ contains $\partial F$ but $P_1$ does not, $\pi_C(P_1)= P_0$.
We set $\tilde{P}_0:= P_0$ and $\tilde{P}_1:= \pi_C(P_1)=P_0$. Proceeding inductively, suppose that we have constructed $\tilde{P}_k$ by projecting the arc $a_k$ of the
curve $\alpha_1^k \in P_k$. Let $\alpha_2^k$ be the curve of $P_k$ whose existence is guaranteed by Lemma \ref{lem:twocurves}. We begin with the easier case when $F$ is a one-holed torus.\\
\noindent{\it \underline{The case where $F$ is a one-holed torus}}\\
\noindent We proceed as follows for $k\leq n-1$.
At step $k+1$, at least one of $\alpha_1^k$ and $\alpha_2^k$ belongs to $P_{k+1}$. We set $\tilde{P}_{k+1}$ to be the projection of any arc of such a curve. By Lemma \ref{lem:disjointT}, we have $d_C(\tilde{P}_{k},\tilde{P}_{k+1}) \leq 1$. Since $Q$ contains $\partial F$ but $P_{n-1}$ does not, we have $\pi_C(P_{n-1})=Q$ and therefore $\tilde{P}_{n-1} = Q$. In this way we obtain the desired path, which has length at most $n-2$. \\
\noindent{\it \underline{The case where $F$ is a four-holed sphere}}\\
\noindent We proceed as follows for $k\leq n-2$. There are several cases to consider:
\begin{enumerate}[1.]
\item[{\bf 1.}] If $\alpha_1^k \in P_{k+1}$, then we define $\tilde{P}_{k+1}:=\tilde{P}_{k}$.
\end{enumerate}
\noindent If $\alpha_1^k \notin P_{k+1}$, denote by $\alpha_1^{k+1}$ the curve in $P_{k+1}$ that intersects
$\alpha_1^k $ essentially.
\begin{enumerate}[2.]
\item[{\bf 2.}] Suppose that $\alpha_1^{k+1}$ intersects $F$ essentially. As $\alpha_1^{k+1}$ and $\alpha_1^{k}$ are related by an elementary move, and since $\alpha_1^{k}$ shares a pair of pants with $\alpha_2^{k}$, it follows that $\alpha_2^{k}$ also belongs to $P_{k+1}$ and that it shares a pair of pants with $\alpha_1^{k+1}$.
\begin{enumerate}[i.]\item If $d(\pi_C(a_k), \pi_C(\alpha_1^{k+1}))>2$, then by Lemma \ref{lem:mainlemma}, there exists a curve $\tilde{\alpha}$ in $P_k\cap P_{k+1}$ (not necessarily $\alpha_2^{k}$) with an arc $\tilde{a}$ such that $d(\pi_C(a_k), \pi_C(\tilde{a}))=1$. In this case we set
$
\tilde{P}_{k+1}:= C \cup \pi_C(\tilde{a})
$, noting that $d(\tilde{P}_{k}, \tilde{P}_{k+1})=1$.
\item Otherwise we have $d(\pi_C(a_k), \pi_C(\alpha_1^{k+1}))\leq 2$. Now, at least one of $\alpha_1^{k+1}$ or $\alpha_2^{k}$ belongs to $P_{k+2}$. We define $\tilde{P}_{k+2}$ to be the projection of any arc of $\{ \alpha_1^{k+1}, \alpha_2^{k}\} \cap P_{k+2}$, observing that $d(\tilde{P}_{k}, \tilde{P}_{k+2})\le 2$, by the above observations and/or Lemma \ref{lem:disjoint}. (In the case that $d(\tilde{P}_{k}, \tilde{P}_{k+2})= 2$, we choose any $\tilde{P}_{k+1}$ that contains $\partial F$ and is adjacent to both $\tilde{P}_{k}$ and $\tilde{P}_{k+2}$.)
\end{enumerate}
\end{enumerate}
\begin{enumerate}[1.]
\item[{\bf 3.}] Finally, suppose that $\alpha_1^{k+1}$ does not essentially intersect $F$. By Lemma \ref{lem:twocurves}, $P_{k+1}$ contains a curve $\tilde{\alpha}$ which essentially intersects $F$ and which shares a pair of pants with $\alpha_2^k$. The curve $\tilde{\alpha}$ may or may not belong to $P_k$ but, in either case, observe that it is disjoint from $\alpha_1^k$. As such, by Lemma \ref{lem:disjoint} we have:
$$
d(\pi_C(a_k),\pi_C(\tilde{\alpha})) \leq 2.
$$
As before, at least one of $\tilde{\alpha}$ and $\alpha_2^k$ belongs to $P_{k+2}$, so we choose any arc of $\{ \tilde{\alpha}, \alpha_2^k\} \cap P_{k+2}$ to produce $\tilde{P}_{k+2}$, which is at distance at most $2$ from $\tilde{P}_{k}$.\\
\end{enumerate}
This process will provide us with all $\tilde{P}_k$ up until either $k=n$ or $k=n-1$. We conclude by noticing that if $\tilde{P}_{n-1}$ was obtained as the projection of a curve in $P_{n-1}$ then, as in the initial step, $\tilde{P}_{n-1}=\pi_C(P_{n-1})=P_n$ and we set $\tilde{P}_n:=P_n$.
\end{proof}
Rephrasing Theorem \ref{thm:themeat}, we obtain the following nice property of geodesics in $\DP(S)$:
\begin{corollary}
Let $C\subset S$ be a deficiency 1 multicurve, and let $F$ be the unique (up to isotopy) incompressible surface of complexity 1 in $S\setminus C$. Let $P,Q \in \DP_C(S)$ and let $\omega$ be any geodesic from $P$ to $Q$. Then every vertex of $\omega$ contains $\partial F$.
\end{corollary}
\begin{remark}
{\rm Combining Theorem \ref{thm:general} with a similar argument to that in Remark \ref{rmk:outside} we obtain that, if $\kappa(S) \ge 3$, then $\DP_C(S)$ is not totally geodesic in $\DP(S)$.}
\label{rmk:outside3}
\end{remark}
\section{Consequences}
Let $S$ be a connected orientable surface of negative Euler characteristic and with empty boundary. Let $R=R(S)$ be the number $[\frac{3g+p-2}{2}]$ where $g$ and $p$ are, respectively, the genus and number of punctures of $S$. Note that $R$ is the maximum number of pairwise distinct, pairwise disjoint complexity 1 subsurfaces of $S$. As mentioned in the introduction, by work of Behrstock-Minsky \cite{bemi}, Brock-Farb \cite{brfa}, and Masur-Minsky \cite{MM2}, $R$ is precisely the {\em geometric rank} of the Weil-Petersson completion ${\hat \CT}(S)$ (and thus also of the pants graph $\CP(S)$, by Brock's result \cite{brock}).
We will say that a multicurve $D \subset S$ is {\em rank-realizing} if $S\setminus D$ contains $R$ pairwise distinct, pairwise disjoint incompressible subsurfaces of $S$, each of
complexity 1. We have:
\begin{proposition}
Let $D\subset S$ be a rank-realizing multicurve. Then $\DP_D(S)$ is convex in $\DP(S)$.
\label{prop:rank}
\end{proposition}
\begin{proof}
Let $X_1,\ldots, X_R$ be the $R$ complexity 1 incompressible subsurfaces in $S\setminus D$. Let $P,Q$ be any two vertices of $\DP_D(S)$, and denote by $\alpha_i$ (resp. $\beta_i$) the unique curve in $P$ (resp. $Q$) that is an essential curve in $X_i$. Consider any path $\omega$ between $P$ and $Q$. As in the proof
of Theorem \ref{thm:general}, we may project $\omega$ to a path $\omega_i$ in $\DP(X_i)$ from $\alpha_i$ to $\beta_i$, such that ${\rm length}(\omega_i)\le {\rm length}(\omega)$ for all $i$. In fact, by repeating vertices at the end of some of the $\omega_i$ if necessary, we may assume that all the $\omega_i$ have precisely $N+1$ vertices, with $N \le {\rm length}(\omega)$.
Now, if $\omega_{i,j}$ denotes the $j$-th vertex along $\omega_i$, then $P_j=\omega_{1,j} \cup \ldots \cup \omega_{R,j} \cup D$ is a vertex of $\DP_D(S)$. In this way, we obtain
a path $P=P_0, \ldots, P_N=Q$ that is entirely contained in $\DP_D(S)$ and has length $N\le {\rm length}(\omega)$, as desired.
\end{proof}
As a consequence, we obtain our first promised corollary. Here, $\DZ^r$ denotes the graph obtained from the cubical lattice $\BZ^r$ by adding
the diagonals to every $k$-dimensional cube of $\BZ^r$, for $k\le r$.
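Concretely, a single step in $\DZ^r$ changes any subset of the coordinates by $\pm 1$, so the induced distance between lattice points is the $\ell^\infty$ (Chebyshev) distance:
\[
d_{\DZ^r}(x, y) = \max_{1\le i \le r} |x_i - y_i|.
\]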
\begin{named}{Corollary \ref{cor:punctures}}
There exists an isometric embedding $\DZ^r \to \DP(S)$ if and only if $r \le R$.
\end{named}
\begin{proof}
We exhibit an isometric embedding of $\DZ^R$, as the construction for $r\le R$ is totally analogous. Let $D$ be a rank-realizing multicurve, and let $X_1,\ldots, X_R$ be the $R$ incompressible subsurfaces of complexity 1 in $S\setminus D$. For $i=1, \ldots, R$, choose
a bi-infinite geodesic $\omega_i$ in $\DP(X_i)$. Let $\omega_{i,j}$ denote the $j$-th vertex along $\omega_i$. Then the subgraph of $\DP(S)$ spanned by
$\{\omega_{i,j}\cup D: 1\le i\le R, j\in \BZ\}$ is isomorphic to $\DZ^R$, and is convex in $\DP(S)$ by Proposition \ref{prop:rank}.
Conversely, every $k$-dimensional cube in $\DZ^r$ singles out a unique (up to isotopy) collection of $k$ pairwise distinct, pairwise disjoint incompressible subsurfaces of $S$, each of complexity 1. Thus the result follows.
\end{proof}
Theorem \ref{thm:general} also implies the following:
\begin{named}{Corollary \ref{cor:far}}
Given any surface $S$, and any $k$, there exist points in $\DP(S)$ at distance $k$ with only a finite number of geodesics between them.
\end{named}
\begin{proof}
Let $D$ be a rank-realizing multicurve, and let $X_1,\ldots, X_R$ be the $R$ incompressible subsurfaces of complexity 1 in $S\setminus D$. Let $P\in \DP_D(S)$ and let $\alpha_i$ be the unique curve of $P$ that essentially intersects $X_i$. For all $i=1, \ldots, R$, choose a geodesic ray $\omega_i$ in $\CP(X_i)$ issued from $\alpha_i$, and denote by $\omega_{i,j}$ the $j$-th vertex along $\omega_i$. Let $Q_k= \omega_{1,k} \cup \ldots \cup \omega_{R,k} \cup D$. By Proposition \ref{prop:rank}, $d_{\DP(S)}(P, Q_k) = k$ for all $k$.
By Theorem \ref{thm:themeat}, any geodesic path in $\DP(S)$ between $P$ and $Q_k$ is entirely contained in $\DP_D(S)$. Furthermore, it follows from the choice of $Q_k$ that the projection to $\DP(X_i)$ of a geodesic path in $\DP(S)$ between $P$ and $Q_k$ is a geodesic path in $\DP(X_i)$ between
$\alpha_i$ and $\omega_{i,k}$. The result now follows because $\DP(X_i)$ is isomorphic to a Farey graph, and there are only finitely many geodesic paths between any two points of a Farey graph.
\end{proof}
\section*{Appendix: Strong convexity vs. finite geodesicity}\label{cuicui}
In this section, we prove:
\begin{proposition}
Let $S$ be a connected orientable surface of negative Euler characteristic. We denote by $\CC(S)$ either $\CP(S)$ or $\DP(S)$ or $\CCP(S)$. The following two properties are equivalent:
\begin{enumerate}
\item Given a simple closed curve $\alpha\subset S$, the subset $\CC_\alpha(S)$ of $\CC(S)$ spanned by the vertices corresponding to pants decompositions containing $\alpha$ is totally geodesic,
\item Given two vertices $P,Q\in \CC(S)$, there are finitely many geodesic segments joining $P$ to $Q$ in $\CC(S)$.
\end{enumerate}
\label{prop:cyrilito}
\end{proposition}
\begin{proof}
Let us first show that $1)\Rightarrow 2)$. More precisely, we will show that $[\text{not }2)]\Rightarrow [\text{not } 1)]$. Consider an infinite family $\{\omega_n,n\in\BN\}$ of distinct geodesic arcs joining two points $P,Q\in\CC(S)$ and denote the vertices of $\omega_n$ by $\omega_{n,i}$, $0\leq i\leq q(n)$, where $d_{\CC(S)}(\omega_{n,i-1},\omega_{n,i})=1$ when $\CC(S)=\CP(S)$ or $\DP(S)$ and $d_{\CC(S)}(\omega_{n,i-1},\omega_{n,i})= \sqrt{k}$, for some integer $k \le [\frac{3g+p-2}{2}]$, when $\CC(S)=\CCP(S)$. Notice that $q(n)=d_{\CC(S)}(P,Q)$ if $\CC(S)=\CP(S)$ or $\DP(S)$ but $q(n)\leq d_{\CC(S)}(P,Q)$ may depend on $n$ for $\CC(S)=\CCP(S)$. In the latter case, we first extract a subsequence such that $q(n)$ does not depend on $n$. Let $0<i_0<q(n)=q$ be the smallest index such that $\{\omega_{n,i_0}\}$ contains infinitely many distinct points. Let us extract a subsequence such that $\omega_{n,i}$ does
not depend on $n$ for $i<i_0$ and $\{\omega_{n,i_0}\}\neq \{\omega_{m,i_0}\}$ for any $n\neq m$. Extract a further subsequence such that the set of leaves of $\omega_{n,i_0-1}$ that are not leaves of $\omega_{n,i_0}$ does not depend on $n$ (namely we fix the subsurfaces in which the elementary moves happen). Extract a subsequence one last time so that for any $i$, $\omega_{n,i}$ converges in the Hausdorff topology to a geodesic lamination $\omega_{\infty,i}$.\\
\indent
Since all the pants decompositions $\{\omega_{n,i_0}\}$ are distinct, there is a leaf $\alpha$ of $\{\omega_{n,i_0-1}\}$ which is not a leaf of any $\{\omega_{n,i_0}\}$ and such that the leaves $\alpha_n$ of the pants decompositions $\{\omega_{n,i_0}\}$ that intersect $\alpha$ form an infinite family. The lamination $\omega_{\infty,i_0}$ contains $\alpha$ and some leaves spiraling towards $\alpha$. Since $\omega_{n,i_0}$ and $\omega_{n,i_0+1}$ are adjacent vertices, we have $i(\omega_{n,i_0},\omega_{n,i_0+1}) \le 3g+p-2$ for any $n$. It follows that $\omega_{\infty,i_0+1}$ also contains $\alpha$. If, furthermore, $\omega_{\infty, i_0+1}$ contains some leaves spiraling towards $\alpha$, then $\omega_{\infty, i_0+2}$ contains $\alpha$, and the same holds for $\omega_{\infty, i_0+3}$, etc. On the other hand, $\omega_{\infty,q}=Q$ does not contain any leaves spiraling towards $\alpha$. It follows that there is $i_1>i_0$ such that $\omega_{\infty,i_1}$ contains $\alpha$ as an isolated leaf. Then, for $n$ large enough, $\omega_{n,i_1}$ contains $\alpha$, i.e. $\omega_{n,i_1}\subset \CC_\alpha(S)$. Since $\omega_{n,i_0-1}\subset \CC_\alpha(S)$ and $\omega_{n,i_0}\not\subset \CC_\alpha(S)$, the geodesic segment joining $\omega_{n,i_0-1}$ to $\omega_{n,i_1}$ and passing through $\{\omega_{n,i},i_0\leq i\leq i_1-1\}$ is not contained in $\CC_\alpha(S)$. Thus we have proved that $1)$ is not satisfied.\\
\indent
For the other direction, $2)\Rightarrow 1)$, we will show $[\text{not }1)]\Rightarrow [\text{not } 2)]$. So assume that there is a simple closed curve $\alpha\subset S$ such that $\CC_\alpha(S)$ is not totally geodesic. In particular there is a geodesic segment $\omega$ joining two vertices $P,Q\in \CC_\alpha(S)$ such that $\omega\not\subset\CC_\alpha(S)$. Let $T_\alpha:\CC(S)\rightarrow \CC(S)$ be the automorphism induced by the right Dehn twist along $\alpha$. Then $\CC_\alpha(S)$ is exactly the set of points fixed by $T_\alpha$. Since $\omega\not\subset\CC_\alpha(S)$, $\{T_\alpha^n(\omega):n\in\BZ\}$ is an infinite family of pairwise distinct geodesic segments joining $P$ to $Q$. It follows that $2)$ is not satisfied.
\end{proof}
Combining Proposition \ref{prop:cyrilito}, Theorem \ref{thm:punctures}, and the main results of \cite{APS1,APS2} we obtain the last promised consequence.
\begin{named}{Corollary \ref{geodesics}}
Let $S$ be the six-times punctured sphere. For any $P,Q \in \CP(S)$, there are only finitely many geodesics in $\CP(S)$ between $P$ and $Q$. $\square$
\end{named}
Observe that in the proof above we could have replaced "simple closed curve" by "multi-curve" in $1)$. This, together with Remark \ref{rmk:outside}, implies that the analog of Corollary \ref{geodesics} for the diagonal pants graph is not true.
\begin{document}
\selectlanguage{english}
\maketitle
We establish the existence of a global solution to a family of equations, which are obtained in certain regimes in~\cite{DS-16} as the mean-field evolution of the supercurrent density in a (2D section of a) type-II superconductor with pinning and with imposed electric current. We also consider general vortex-sheet initial data, and investigate the uniqueness and regularity properties of the solution.
\bigskip
\section{Introduction}\label{chap:intro}
We study the well-posedness of the following two evolution models coming from the mean-field limit equations of Ginzburg-Landau vortices: first, for $\alpha\ge0$, $\beta\in\R$, we consider the ``incompressible'' flow
\begin{align}\label{eq:limeqn1}
\partial_tv=\nabla P-\alpha(\Psi+v) \curl v+\beta(\Psi+v)^\bot \curl v,\qquad\Div (av)=0,\qquad\text{in $\R^+\times\R^2$,}
\end{align}
and second, for $\lambda\ge0$, $\alpha>0$, $\beta\in\R$, we consider the ``compressible'' flow
\begin{align}\label{eq:limeqn2}
\partial_tv=\lambda\nabla(a^{-1}\Div(av))-\alpha(\Psi+v) \curl v+\beta(\Psi+v)^\bot \curl v,\qquad\text{in $\R^+\times\R^2$,}
\end{align}
with $v:\R^+\times\R^2\to\R^2$, where $\R^+:=[0,\infty)$, where $\Psi:\R^2\to\R^2$ is a given forcing vector field, and where $a:=e^h$ is determined by a given ``pinning potential'' $h:\R^2\to\R$.
More precisely, we investigate existence, uniqueness and regularity, both locally and globally in time, for the associated Cauchy problems; we also consider vortex-sheet initial data, and we study the degenerate case $\lambda=0$ as well.
As shown in a companion paper~\cite{DS-16} with Serfaty, these equations are obtained in certain regimes as the mean-field evolution of the supercurrent density in a (2D section of a) type-II superconductor described by the 2D Ginzburg-Landau equation with pinning and with imposed electric current --- but without gauge and in whole space, for simplicity.
\subsubsection*{Brief discussion of the model}
Superconductors are materials that in certain circumstances lose their resistivity, which allows permanent supercurrents to circulate without loss of energy. In the case of type-II superconductors, if the external magnetic field is not too strong, it is expelled from the material (Meissner effect), while, if it is much too strong, the material returns to a normal state. Between these two critical values of the external field, these materials are in a mixed state, allowing a partial penetration of the external field through ``vortices'', which are accurately described by the (mesoscopic) Ginzburg-Landau theory.
Restricting ourselves to a 2D section of a superconducting material, it is standard to study for simplicity the 2D Ginzburg-Landau equation on the whole plane (to avoid boundary issues) and without gauge (although the gauge is expected to bring only minor difficulties). We refer e.g. to~\cite{Tinkham,Tilley} for further reference on these models, and to~\cite{SS-book} for a mathematical introduction.
In this framework, in the asymptotic regime of a large Ginzburg-Landau parameter (which is indeed typically the case in real-life superconductors), vortices are known to become point-like, and to interact with one another according to a Coulomb pair potential.
In the mean-field limit of a large number of vortices, the evolution of the (macroscopic) suitably normalized mean-field density $\omega:\R^+\times\R^2\to\R$ of the vortex liquid was then naturally conjectured to satisfy the following Chapman-Rubinstein-Schatzman-E equation~\cite{WE-94,CRS-96}
\begin{align}\label{eq:LZh0}
\partial_t\omega=\Div(|\omega| \nabla (-\triangle)^{-1}\omega),\qquad\text{in $\R^+\times\R^2$},
\end{align}
where $(-\triangle)^{-1}\omega$ is indeed the Coulomb potential generated by the vortices. Although the vortex density $\omega$ is a priori a signed measure, we restrict here (and throughout this paper) to positive measures, $|\omega|=\omega$, so that the above is replaced by
\begin{align}\label{eq:LZh}
\partial_t\omega=\Div(\omega \nabla (-\triangle)^{-1}\omega).
\end{align}
More precisely, the mean-field supercurrent density $v:\R^+\times\R^2\to\R^2$ (linked to the vortex density through the relation $\omega=\curl v$) was conjectured to satisfy
\[\partial_tv=\nabla P-v\curl v,\qquad\Div v=0.\]
(Taking the curl of this equation indeed formally yields~\eqref{eq:LZh}, noting that the incompressibility constraint $\Div v=0$ allows to write $v=\nabla^\bot\triangle^{-1}\omega$.)
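More precisely, since $\curl\nabla P=0$, $\curl(-\omega v)=\Div(\omega v^\bot)$, and $v^\bot=(\nabla^\bot\triangle^{-1}\omega)^\bot=\nabla(-\triangle)^{-1}\omega$, one formally computes
\begin{align*}
\partial_t\omega=\curl(\partial_tv)=\Div(\omega v^\bot)=\Div(\omega\nabla(-\triangle)^{-1}\omega),
\end{align*}
which is precisely~\eqref{eq:LZh}.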
On the other hand, in the context of superfluidity, a conservative counterpart of the usual parabolic Ginzburg-Landau equation is used as a mesoscopic model: this counterpart is given by the Gross-Pitaevskii equation, which is a particular instance of a nonlinear Schr\"odinger equation. At the level of the mean-field evolution of the corresponding vortices, we then need to replace~\eqref{eq:LZh0}--\eqref{eq:LZh} by their conservative versions, thus replacing $\nabla(-\triangle)^{-1}\omega$ by $\nabla^\bot(-\triangle)^{-1}\omega$.
As argued e.g. in~\cite{Aranson-Kramer}, there is also physical interest in rather starting from the ``mixed-flow'' (or ``complex'') Ginzburg-Landau equation, which is a mix between the usual Ginzburg-Landau equation describing superconductivity ($\alpha=1$, $\beta=0$, below), and its conservative counterpart given by the Gross-Pitaevskii equation ($\alpha=0$, $\beta=1$, below). The above mean-field equation for the supercurrent density $v$ is then replaced by the following, for $\alpha\ge0$, $\beta\in\R$,
\begin{align}\label{eq:limeqnS1}
\partial_tv=\nabla P-\alpha v\curl v+\beta v^\bot\curl v,\qquad\Div v=0.
\end{align}
Note that in the conservative case $\alpha=0$, this equation is equivalent to the 2D Euler equation, as becomes clear from the identity $v^\bot\curl v=(v\cdot \nabla)v-\frac12\nabla|v|^2$.
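This identity is readily checked in coordinates: writing $v=(v_1,v_2)$ and $\omega:=\curl v=\partial_1v_2-\partial_2v_1$, we compute
\begin{align*}
(v\cdot\nabla)v_1-\tfrac12\partial_1|v|^2=v_2(\partial_2v_1-\partial_1v_2)=-\omega v_2,\qquad(v\cdot\nabla)v_2-\tfrac12\partial_2|v|^2=v_1(\partial_1v_2-\partial_2v_1)=\omega v_1,
\end{align*}
that is, $(v\cdot\nabla)v-\frac12\nabla|v|^2=\omega v^\bot$.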
The first rigorous deductions of these (macroscopic) mean-field limit models from the (mesoscopic) Ginzburg-Landau equation are due to~\cite{Kurzke-Spirn-14,Jerrard-Soner-15}, and to~\cite{Serfaty-15} for much more general regimes. As discovered by Serfaty~\cite{Serfaty-15}, in some regimes with $\alpha>0$, this limiting equation~\eqref{eq:limeqnS1} is no longer correct, and must be replaced by the following compressible flow
\begin{align}\label{eq:limeqnS2}
\partial_tv=\lambda\nabla (\Div v)-\alpha v\curl v+\beta v^\bot\curl v,
\end{align}
for some $\lambda>0$.
There is some interest in the degenerate case $\lambda=0$ as well, since it is formally expected (although not proven) to be the correct mean-field evolution in some other regimes.
When an electric current is applied to a type-II superconductor, it flows through the material, inducing a Lorentz-like force that makes the vortices move, dissipates energy, and disrupts the permanent supercurrents. As most technological applications of superconducting materials occur in the mixed state, it is crucial to design ways to reduce this energy dissipation, by preventing vortices from moving. For that purpose a common attempt consists in introducing in the material inhomogeneities (e.g. impurities, or dislocations), which are indeed meant to destroy superconductivity locally and therefore ``pin down'' the vortices.
This is usually modeled by correcting the Ginzburg-Landau equations with
a non-uniform equilibrium density
$a:\R^2\to[0,1]$, which locally lowers the energy penalty associated with the vortices (see e.g.~\cite{Chapman-Richardson-97,BFGLV} for further details).
As formally predicted by Chapman and Richardson~\cite{Chapman-Richardson-97}, and first completely proven by~\cite{Jian-Song-01,S-Tice-11} (see also~\cite{Jerrard-Smets-15,Kurzke-Marzuola-Spirn-15} for the conservative case), in the asymptotic regime of a large Ginzburg-Landau parameter, this non-uniform density $a$ translates at the level of the vortices into an effective ``pinning potential'' $h=\log a$, indeed attracting the vortices to the minima of $a$. As shown in our companion paper~\cite{DS-16}, the mean-field equations~\eqref{eq:limeqnS1}--\eqref{eq:limeqnS2} are then replaced by~\eqref{eq:limeqn1}--\eqref{eq:limeqn2}, where the forcing $\Psi$ can be decomposed as $\Psi:=F^\bot-\nabla^\bot h$, in terms of the pinning force $-\nabla h$, and of some vector field $F:\R^2\to\R^2$ related to the imposed electric current (see also~\cite{Tice-10,S-Tice-11}).
\subsubsection*{Relation to previous works}
The simplified model~\eqref{eq:LZh} describes the mean-field limit of the gradient-flow evolution of any particle system with Coulomb interactions~\cite{D-15}. As such, it is related to nonlocal aggregation and swarming models, which have attracted a lot of mathematical interest in recent years (see e.g.~\cite{Bertozzi-Laurent-Leger-12,Carrillo-Choi-Hauray-14} and the references therein); they consist in replacing the Coulomb potential $(-\triangle)^{-1}$ by a convolution with a more general kernel corresponding to an attractive (rather than repulsive) nonlocal interaction.
Equation~\eqref{eq:LZh} was first studied by Lin and Zhang~\cite{Lin-Zhang-00}, who established global existence for vortex-sheet initial data $\omega|_{t=0}\in\Pc(\R^2)$, and uniqueness in some Zygmund space.
To prove global existence for such rough initial data, they proceed by regularization of the data, then passing to the limit in the equation using the compactness given by some very strong a priori estimates obtained by ODE type arguments.
Their approach, which is our main source of inspiration, is described in more detail in the sequel.
When viewing~\eqref{eq:LZh} as a mean-field model for the motion of the Ginzburg-Landau vortices in a superconductor, there is also interest in changing sign solutions and the correct model is then rather~\eqref{eq:LZh0}, for which global existence and uniqueness have been investigated in~\cite{Du-Zhang-03,Masmoudi-Zhang-05}.
In~\cite{Ambrosio-S-08,Ambrosio-Mainini-S-11}, using
an energy approach where the equation is seen as a formal gradient flow in the Wasserstein space of probability measures (\`a la Otto~\cite{Otto-01}), made rigorous by the minimizing movement approach of Ambrosio, Gigli and Savar\'e~\cite{Ambrosio-Gigli-Savare},
analogues of equations~\eqref{eq:LZh0}--\eqref{eq:LZh} were studied in a 2D bounded domain, taking into account the possibility of mass entering or exiting the domain. In the case of nonnegative vorticity $\omega\ge0$, essentially the same existence and uniqueness results are established in that setting in~\cite{Ambrosio-S-08} as for~\eqref{eq:LZh}.
In the case $\omega\ge0$ on the whole plane, still a different approach was developed by Serfaty and V\'azquez~\cite{S-Vazquez-14}, where equation~\eqref{eq:LZh} is obtained as a limit of nonlocal diffusions, and where uniqueness is further established for bounded solutions using transport arguments \`a la Loeper~\cite{Loeper-06}.
Note that no uniqueness is expected to hold for general measure solutions of~\eqref{eq:LZh} (see~\cite[Section~8]{Ambrosio-S-08}). In the present paper, we focus on the case $\omega\ge0$ on the whole plane $\R^2$.
The model~\eqref{eq:limeqnS1} is a linear combination of the gradient-flow equation~\eqref{eq:LZh} (obtained for $\alpha=1$, $\beta=0$), and of its conservative counterpart that is nothing but the 2D Euler equation (obtained for $\alpha=0$, $\beta=1$). The theory for the 2D Euler equation has been well-developed for a long time: global existence for vortex-sheet data is due to Delort~\cite{Delort-91}, while the only known uniqueness result, due to Yudovich~\cite{Yudovich-63}, holds in the class of bounded vorticity (see also~\cite{Bardos-Titi} and the references therein).
As far as the general model~\eqref{eq:limeqnS1} is concerned, global existence and uniqueness results for smooth solutions are easily obtained by standard methods (see e.g.~\cite{Chemin-98}). Although not surprising, global existence for this model is further proven here for vortex-sheet initial data.
On the other hand, the compressible model~\eqref{eq:limeqnS2}, first introduced in~\cite{Serfaty-15}, is completely new in the literature. In \cite[Appendix~B]{Serfaty-15}, only local-in-time existence and uniqueness of smooth solutions are proven in the non-degenerate case $\lambda>0$, using a standard iterative method. In the present paper, a similar local-in-time existence result is obtained for the degenerate parabolic case $\alpha=1$, $\beta=0$, $\lambda=0$, which requires a more careful analysis of the iterative scheme, and global existence with vortex-sheet data is further proven in the non-degenerate parabolic case $\alpha=1$, $\beta=0$, $\lambda>0$.
The general models~\eqref{eq:limeqn1}--\eqref{eq:limeqn2}, introduced in our companion paper~\cite{DS-16}, are inhomogeneous versions of~\eqref{eq:limeqnS1}--\eqref{eq:limeqnS2} with forcing. Since these are new in the literature, the present paper aims at providing a detailed discussion of local and global existence, uniqueness, and regularity issues.
Note that in the conservative regime $\alpha=0$, $\beta=1$, the incompressible model~\eqref{eq:limeqn1} takes the form of an ``inhomogeneous'' 2D Euler equation with ``forcing'': using the identity $v^\bot\curl v=(v\cdot \nabla)v-\frac12\nabla|v|^2$, and setting $\tilde P:=P-\frac12|v|^2$, we indeed find
\[\partial_t v=\nabla\tilde P+\Psi^\bot\curl v+(v\cdot \nabla)v,\qquad\Div(av)=0.\]
We are aware of no work on this modified Euler equation, which seems to have no obvious interpretation in terms of fluid mechanics. As far as global existence issues are concerned, it should be clear from the Delort type identity~\eqref{eq:delort1} below that inhomogeneities give rise to important difficulties: indeed, for $h$ non-constant, the first term $-\frac12|v|^2\nabla^\bot h$ in~\eqref{eq:delort1} does not vanish and is clearly not weakly continuous as a function of $v$ (although the second term is, as in the classical theory~\cite{Delort-91}). Because of that, we obtain no result for vortex-sheet initial data in that case, and only manage to prove global existence for initial vorticity in $\Ld^q(\R^2)$ for some $q>1$.
\subsubsection*{Notions of weak solutions for~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2}}
We first introduce the vorticity formulation of equations~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2}, which will be more convenient to work with. Setting $\omega:=\curl v$ and $\zeta:=\Div(av)$, each of these equations may be rewritten as a nonlinear nonlocal transport equation for the vorticity $\omega$,
\begin{align}\label{eq:limeqn1VF}
\partial_t\omega=\Div(\omega(\alpha(\Psi+v)^\bot+\beta(\Psi+v))),\quad\curl v=\omega,\quad\Div(av)=\zeta,
\end{align}
where in the incompressible case~\eqref{eq:limeqn1} we have $\zeta:=0$, while in the compressible case~\eqref{eq:limeqn2} $\zeta$ is the solution of the following transport-diffusion equation (which is highly degenerate as $\lambda=0$),
\begin{align}\label{eq:limeqn2VF}
\partial_t\zeta-\lambda\triangle\zeta+\lambda\Div(\zeta\nabla h)&=\Div(a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)).
\end{align}
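Both equations follow formally from~\eqref{eq:limeqn1}--\eqref{eq:limeqn2} by taking respectively the curl and the weighted divergence $\Div(a\,\cdot)$, using for any vector field $G$ the elementary 2D identities
\begin{align*}
\curl(-\omega G)=\Div(\omega G^\bot),\qquad\curl(\omega G^\bot)=\Div(\omega G),
\end{align*}
together with the identity $\Div(a\nabla(a^{-1}\zeta))=\triangle\zeta-\Div(\zeta\nabla h)$ for the diffusive term in~\eqref{eq:limeqn2VF}.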
Let us now precisely define our notions of weak solutions for~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2}. (We denote by $\M_\loc^+(\R^2)$ the convex cone of locally finite non-negative Borel measures on $\R^2$, and by $\Pc(\R^2)$ the convex subset of probability measures, endowed with the usual weak-* topology.)
\begin{defin}\label{defin:sol}
Let $h,\Psi\in\Ld^\infty(\R^2)$, $T>0$, and set $a:=e^h$.
\begin{enumerate}[(a)]
\item Given $v^\circ\in\Ld^2_\loc(\R^2)^2$ with $\omega^\circ=\curl v^\circ\in\M_\loc^+(\R^2)$ and $\zeta^\circ:=\Div(av^\circ)\in\Ld^2_\loc(\R^2)$,
we say that $v$ is a {\it weak solution of~\eqref{eq:limeqn2}} on $[0,T)\times\R^2$ with initial data $v^\circ$, if $v\in\Ld^2_\loc([0,T)\times\R^2)^2$ satisfies $\omega:=\curl v\in\Ld^1_\loc([0,T);\M_\loc^+(\R^2))$, $\zeta:=\Div(a v)\in\Ld^2_\loc([0,T);\Ld^2(\R^2))$, $|v|^2\omega\in\Ld^1_\loc([0,T);\Ld^1(\R^2))$ (hence also $\omega v\in\Ld^1_\loc([0,T)\times\R^2)^2$), and satisfies~\eqref{eq:limeqn2} in the distributional sense, that is, for all $\psi\in C^\infty_c([0,T)\times\R^2)^2$,
\begin{align*}
\int\psi(0,\cdot) \cdot v^\circ+\iint v\cdot\partial_t\psi =\lambda\iint a^{-1}\zeta\Div\psi+\iint\psi\cdot(\alpha(\Psi+v)-\beta(\Psi+v)^\bot)\omega.
\end{align*}
\item Given $v^\circ\in\Ld^2_\loc(\R^2)^2$ with $\omega^\circ:=\curl v^\circ\in\M_\loc^+(\R^2)$ and $\Div(av^\circ)=0$, we say that $v$ is a {\it weak solution of~\eqref{eq:limeqn1}} on $[0,T)\times\R^2$ with initial data $v^\circ$, if $v\in\Ld^2_\loc([0,T)\times\R^2)^2$ satisfies $\omega:=\curl v\in\Ld^1_\loc([0,T);\M_\loc^+(\R^2))$, $|v|^2\omega\in\Ld^1_\loc([0,T);\Ld^1(\R^2))$ (hence also $\omega v\in\Ld^1_\loc([0,T)\times\R^2)^2$), $\Div(av)=0$ in the distributional sense, and satisfies the vorticity formulation~\eqref{eq:limeqn1VF} in the distributional sense, that is, for all $\psi\in C^\infty_c([0,T)\times\R^2)$,
\begin{align*}
\int \psi(0,\cdot)\omega^\circ+\iint\omega \partial_t\psi =\iint\nabla\psi\cdot (\alpha(\Psi+v)^\bot+\beta(\Psi+v))\omega.
\end{align*}
\item Given $v^\circ\in\Ld^2_\loc(\R^2)^2$ with $\omega^\circ:=\curl v^\circ\in\M_\loc^+(\R^2)$ and $\Div(av^\circ)=0$, we say that $v$ is a {\it very weak solution of~\eqref{eq:limeqn1}} on $[0,T)\times\R^2$ with initial data $v^\circ$, if $v\in\Ld^2_\loc([0,T)\times\R^2)^2$ satisfies $\omega:=\curl v\in\Ld^1_\loc([0,T);\M_\loc^+(\R^2))$, $\Div(av)=0$ in the distributional sense, and satisfies, for all $\psi\in C^\infty_c([0,T)\times\R^2)$,
\begin{align*}
\int \psi(0,\cdot)\omega^\circ+\iint\omega \partial_t\psi &=\iint\nabla\psi\cdot (\alpha \Psi^\bot+\beta \Psi)\omega+\iint(\alpha\nabla\psi+\beta\nabla^\bot\psi)\cdot \Big(\frac12|v|^2\nabla h+a^{-1}\Div (aS_{v})\Big),
\end{align*}
in terms of the stress-energy tensor $S_{v}:=v\otimes v-\frac12\Id|v|^2$.
\end{enumerate}
\end{defin}
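Note that, as $\omega\ge0$, the assumption $|v|^2\omega\in\Ld^1_\loc([0,T);\Ld^1(\R^2))$ indeed implies $\omega v\in\Ld^1_\loc([0,T)\times\R^2)^2$: by the Cauchy-Schwarz inequality with respect to the measure $\omega$, for any compact set $K\subset\R^2$,
\begin{align*}
\int_K|v|\omega\le\Big(\int_K\omega\Big)^{1/2}\Big(\int_K|v|^2\omega\Big)^{1/2}.
\end{align*}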
\begin{rems}\label{rem:sol}$ $
\begin{enumerate}[(i)]
\item Weak solutions of~\eqref{eq:limeqn2} are defined directly from~\eqref{eq:limeqn2}, and satisfy in particular the vorticity formulation~\eqref{eq:limeqn1VF}--\eqref{eq:limeqn2VF} in the distributional sense. As far as weak solutions of~\eqref{eq:limeqn1} are concerned, they are rather defined in terms of the vorticity formulation~\eqref{eq:limeqn1VF}, in order to avoid compactness and regularity issues related to the pressure $P$. Nevertheless, if $v$ is a weak solution of~\eqref{eq:limeqn1} in the above sense, then under mild regularity assumptions we may use the formula $v=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\omega$ to deduce that $v$ actually satisfies~\eqref{eq:limeqn1} in the distributional sense on $[0,T)\times\R^2$ for some distribution $P$ (cf. Lemma~\ref{lem:pressure} below for details).
\item The definition~(c) of a very weak solution of~\eqref{eq:limeqn1} is motivated as follows (see also the definition of ``general weak solutions'' of~\eqref{eq:LZh} in~\cite{Lin-Zhang-00}). In the purely conservative case $\alpha=0$, there are too few a priori estimates to make sense of the product $\omega v$. As is now common in 2D fluid mechanics (see e.g. \cite{Chemin-98}), the idea is to reinterpret this product in terms of the stress-energy tensor $S_v$, using the following identity: given $\Div(av)=0$, we have for smooth enough fields
\begin{align}\label{eq:delort1}
\omega v=-\frac12|v|^2\nabla^\bot h-a^{-1}(\Div(aS_v))^\bot,
\end{align}
where the right-hand side now makes sense in $\Ld^1_\loc([0,T);W^{-1,1}_\loc(\R^2)^2)$ whenever $v\in\Ld^2_\loc([0,T)\times\R^2)^2$.
In particular, if $\omega\in\Ld^p_\loc ([0,T)\times\R^2)$ and $v\in\Ld^{p'}_\loc([0,T)\times\R^2)$ for some $1\le p\le\infty$, $1/p+1/p'=1$, then the product $\omega v$ makes perfect sense and the above identity~\eqref{eq:delort1} holds in the distributional sense, hence in that case $v$ is a weak solution of~\eqref{eq:limeqn1} whenever it is a very weak solution. In reference to~\cite{Delort-91}, identity~\eqref{eq:delort1} is henceforth called an ``(inhomogeneous) Delort type identity''. A short computation justifying~\eqref{eq:delort1} for smooth fields is included right after this remark.
\end{enumerate}
\end{rems}
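For the reader's convenience, let us record the short computation behind~\eqref{eq:delort1} for smooth fields $v$ with $\Div(av)=0$. A direct computation in coordinates gives $\Div S_v=v\Div v+\omega v^\bot$, while the constraint $\Div(av)=0$ yields $\Div v=-v\cdot\nabla h$. Since $S_v\nabla h=(v\cdot\nabla h)v-\frac12|v|^2\nabla h$, we deduce
\begin{align*}
a^{-1}\Div(aS_v)=\Div S_v+S_v\nabla h=\omega v^\bot-\tfrac12|v|^2\nabla h,
\end{align*}
and applying $\bot$ (with $(v^\bot)^\bot=-v$) yields $a^{-1}(\Div(aS_v))^\bot=-\omega v-\frac12|v|^2\nabla^\bot h$, which is precisely~\eqref{eq:delort1}.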
\subsubsection*{Statement of the main results}
Global existence and regularity results are summarized in the following theorem.
Our approach relies on proving a priori estimates for the vorticity $\omega$ in $\Ld^q(\R^2)$ for some $q>1$. For the compressible model~\eqref{eq:limeqn2}, such estimates are only obtained in the parabolic regime, hence our limitation to that regime. In parabolic cases, particularly strong estimates are available, and existence is then established even for vortex-sheet data, thus completely extending the known theory for~\eqref{eq:LZh} (see~\cite{Lin-Zhang-00,S-Vazquez-14}). Note that the additional exponential growth in the dispersive estimate~\eqref{eq:flat1} below is only due to the forcing $\Psi$.
In the conservative incompressible case, the situation is the most delicate because of a lack of strong enough a priori estimates, and only existence of very weak solutions is expected and proven. As is standard in 2D fluid mechanics (see e.g.~\cite{Chemin-98}), the natural space for the solution $v$ is $\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$ for a given smooth reference field $\bar v^\circ:\R^2\to\R^2$.
\begin{theor}[Global existence]\label{th:main}
Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, $h,\Psi\in W^{1,\infty}(\R^2)^2$, and set $a:=e^h$. Let $\bar v^\circ\in W^{1,\infty}(\R^2)^2$ be some reference map with $\bar \omega^\circ:=\curl\bar v^\circ\in \Pc\cap H^{s_0}(\R^2)$ for some $s_0>1$, and with either $\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^{s_0}(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v^\circ\in \bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ:=\curl v^\circ\in\Pc(\R^2)$, and with either $\Div(av^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)\in\Ld^2(\R^2)$ in the case~\eqref{eq:limeqn2}.
Denoting by $C>0$ any constant depending only on an upper bound on $\alpha$, $|\beta|$ and $\|(h,\Psi)\|_{W^{1,\infty}}$, the following hold:
\begin{enumerate}[(i)]
\item \emph{Parabolic compressible case (that is, \eqref{eq:limeqn2} with $\alpha>0$, $\beta=0$):}\\
There exists a weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$
on $\R^+\times\R^2$ with initial data $v^\circ$, with $\omega:=\curl v\in \Ld^\infty(\R^+;\Pc(\R^2))$ and $\zeta:=\Div(av)\in \Ld^2_\loc(\R^+;\Ld^2(\R^2))$, and with
\begin{align}\label{eq:flat1}
\|\omega^t\|_{\Ld^\infty}\le(\alpha t)^{-1}+C\alpha^{-1}e^{Ct},\qquad\text{for all $t>0$.}
\end{align}
Moreover, if $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, then $\omega\in\Ld^\infty_\loc(\R^+;\Ld^q(\R^2))$.
\item \emph{Parabolic incompressible case (that is, \eqref{eq:limeqn1} with $\alpha>0$, $\beta=0$, or with $\alpha>0$, $\beta\in\R$, $h$ constant):}\\
There exists a weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^{2}(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, with $\omega:=\curl v\in \Ld^\infty(\R^+;\Pc(\R^2))$, and with the dispersive estimate~\eqref{eq:flat1}.
Moreover, if $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, then $\omega\in\Ld^\infty_\loc(\R^+;\Ld^q(\R^2))\cap\Ld^{q+1}_\loc(\R^+;\Ld^{q+1}(\R^2))$.
\item \emph{Mixed-flow incompressible case (that is, \eqref{eq:limeqn1} with $\alpha>0$, $\beta\in\R$):}\\
If $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, there exists a weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^{2}(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap\Ld^q(\R^2))\cap\Ld^{q+1}_\loc(\R^+;\Ld^{q+1}(\R^2))$.
\item \emph{Conservative incompressible case (that is, \eqref{eq:limeqn1} with $\alpha=0$, $\beta\in\R$):}\\
If $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, there exists a very weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in\Ld^\infty_\loc(\R^+;\Pc\cap\Ld^q(\R^2))$. This is a weak solution whenever $q\ge4/3$.
\end{enumerate}
We set $\zeta^\circ,\bar\zeta^\circ,\zeta:=0$ in the incompressible case~\eqref{eq:limeqn1}.
If in addition $\omega^\circ$, $\zeta^\circ\in\Ld^\infty(\R^2)$, then we further have $v\in\Ld^\infty_\loc(\R^+;\Ld^\infty(\R^2)^2)$, $\omega\in\Ld^\infty_\loc(\R^+;\Ld^1\cap\Ld^\infty(\R^2))$, and $\zeta\in\Ld^\infty_\loc(\R^+;\Ld^2\cap\Ld^\infty(\R^2))$.
If $h$, $\Psi$, $\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$ and $\omega^\circ$, $\bar\omega^\circ$, $\zeta^\circ$, $\bar\zeta^\circ\in H^s(\R^2)$ for some $s>1$, then $v\in \Ld^\infty_\loc(\R^+;\bar v^\circ+H^{s+1}(\R^2)^2)$ and $\omega,\zeta\in \Ld^\infty_\loc(\R^+;H^{s}(\R^2)^2)$. If $h$, $\Psi$, $v^\circ\in C^{s+1}(\R^2)^2$ for some non-integer $s>0$, then $v\in\Ld^\infty_\loc(\R^+;C^{s+1}(\R^2)^2)$.
\end{theor}
For the regimes that are not described in the above --- i.e., the degenerate compressible case $\lambda=0$, and the mixed-flow compressible case (as well as the a priori unphysical case $\alpha<0$) ---, only local-in-time existence is proven for smooth enough initial data. Note that in the degenerate case $v$ and $\omega$ are on the same footing in terms of regularity.
\begin{theor}[Local existence]\label{th:mainloc}
Given some $s>1$, let $h,\Psi,\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$, set $a:=e^h$, and let $v^\circ\in \bar v^\circ+H^{s+1}(\R^2)^2$ with $\omega^\circ:=\curl v^\circ$, $\bar\omega^\circ:=\curl\bar v^\circ\in H^s(\R^2)$, and with either $\Div(av^\circ)=\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)$, $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^s(\R^2)$ in the case~\eqref{eq:limeqn2}. The following hold:
\begin{enumerate}[(i)]
\item \emph{Incompressible case (that is, \eqref{eq:limeqn1} with $\alpha,\beta\in\R$):}\\
There exists $T>0$ and a weak solution $v\in\Ld^\infty_\loc([0,T); \bar v^\circ+H^{s+1}(\R^2)^2)$ on $[0,T)\times\R^2$ with initial data $v^\circ$.
\item \emph{Non-degenerate compressible case (that is, \eqref{eq:limeqn2} with $\alpha,\beta\in\R$, $\lambda>0$):}\\
There exists $T>0$ and a weak solution $v\in\Ld^\infty_\loc([0,T); \bar v^\circ+H^{s+1}(\R^2)^2)$ on $[0,T)\times\R^2$ with initial data $v^\circ$.
\item \emph{Degenerate parabolic compressible case (that is, \eqref{eq:limeqn2} with $\alpha\in\R$, $\beta=\lambda=0$):}\\
If $\Psi$, $\bar v^\circ\in W^{s+2,\infty}(\R^2)^2$ and $\omega^\circ$, $\bar\omega^\circ\in H^{s+1}(\R^2)$, there exists $T>0$ and a weak solution $v\in \Ld^\infty_\loc([0,T); \bar v^\circ+ H^{s+1}(\R^2)^2)$ on $[0,T)\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in\Ld^\infty_\loc([0,T);H^{s+1}(\R^2))$.
\end{enumerate}
\end{theor}
Let us finally turn to uniqueness issues. No uniqueness is expected to hold for general weak measure solutions of~\eqref{eq:limeqn1}, as it is already known to fail for the 2D Euler equation (see e.g.~\cite{Bardos-Titi} and the references therein), and as it is also expected to fail for equation~\eqref{eq:LZh} (see~\cite[Section~8]{Ambrosio-S-08}). In both cases, as already explained, the only known uniqueness results are in the class of bounded vorticity. For the general incompressible model~\eqref{eq:limeqn1}, similar arguments as for~\eqref{eq:LZh} are still available and the same uniqueness result holds, while for the non-degenerate compressible model~\eqref{eq:limeqn2} the result is slightly weaker. In the degenerate parabolic case, the result is even worse since $v$ and $\omega$ must then be on the same footing in terms of regularity.
\begin{theor}[Uniqueness]\label{th:mainunique}
Let $\lambda\ge0$, $\alpha,\beta\in\R$, $T>0$, $h,\Psi\in W^{1,\infty}(\R^2)$, and set $a:=e^h$. Let $v^\circ:\R^2\to\R^2$ with $\omega^\circ:=\curl v^\circ\in\Pc(\R^2)$, and with either $\Div(av^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\Div(av^\circ)\in\Ld^2(\R^2)$ in the case~\eqref{eq:limeqn2}.
\begin{enumerate}[(i)]
\item \emph{Incompressible case (that is,~\eqref{eq:limeqn1} with $\alpha,\beta\in\R$):}\\
There exists at most a unique weak solution $v$ on $[0,T)\times\R^2$ with initial data $v^\circ$, in the class of all $w$'s such that $\curl w\in\Ld^\infty_\loc([0,T);\Ld^\infty(\R^2))$.
\item \emph{Non-degenerate compressible case (that is,~\eqref{eq:limeqn2} with $\alpha,\beta\in\R$, $\lambda>0$):}\\
There exists at most a unique weak solution $v$ on $[0,T)\times\R^2$ with initial data $v^\circ$, in the class $\Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)\cap\Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2)^2)$.
\item \emph{Degenerate parabolic compressible case (that is,~\eqref{eq:limeqn2} with $\alpha\in\R$, $\lambda=\beta=0$):}\\
There exists at most a unique weak solution $v$ on $[0,T)\times\R^2$ with initial data $v^\circ$, in the class of all $w$'s in $\Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)\cap \Ld^\infty_\loc([0,T);\Ld^{\infty}(\R^2)^2)$ with $\curl w\in\Ld^2_\loc([0,T);\Ld^2(\R^2))\cap \Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2))$.
\end{enumerate}
\end{theor}
\subsubsection*{Roadmap to the proof of the main results}
We begin in Section~\ref{chap:local} with the local existence of smooth solutions, summarized in Theorem~\ref{th:mainloc} above. In the non-degenerate case $\lambda>0$, the proof follows from a standard iterative scheme as in~\cite[Appendix~B]{Serfaty-15}. It is performed here in Sobolev spaces, but could be done in H\"older spaces as well. In the degenerate parabolic case $\alpha=1$, $\beta=0$, $\lambda=0$, a similar argument holds, but requires a more careful analysis of the iterative scheme.
We then turn to global existence in Section~\ref{chap:global}. In order to pass from local to global existence, we prove estimates for the Sobolev (and H\"older) norms of solutions in terms of the norms of their initial data. As shown in Section~\ref{chap:propagation}, arguing quite similarly to the work by Lin and Zhang~\cite{Lin-Zhang-00} on the simpler model~\eqref{eq:LZh}, such estimates on Sobolev norms essentially follow from an a priori estimate for the vorticity in $\Ld^\infty(\R^2)$.
In~\cite{Lin-Zhang-00} such an a priori estimate for the vorticity was achieved by a simple ODE type argument, using that for~\eqref{eq:LZh} the evolution of the vorticity along characteristics can be explicitly integrated. This ODE argument can still be somehow adapted to the more sophisticated models~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2} in the parabolic case (cf. Lemma~\ref{lem:Lpest}(iii)). This yields the nice dispersive estimate~\eqref{eq:flat1} for the $\Ld^\infty$-norm of the vorticity (through its initial mass $\int\omega^\circ=1$ only), which of course differs from~\cite{Lin-Zhang-00} by the additional exponential growth due to the forcing $\Psi$.
For the incompressible model~\eqref{eq:limeqn1} in the mixed-flow case, such arguments are no longer available, and only a weaker estimate is obtained, controlling the $\Ld^q$-norm of the vorticity (as well as its space-time $\Ld^{q+1}$-norm if $\alpha>0$) by the $\Ld^q$-norm of the initial data, for all $1< q\le\infty$ (cf. Lemma~\ref{lem:Lpvort}). This is instead proven by a careful energy type argument.
In order to handle rougher initial data, we regularize the data and then pass to the limit in the equation, using the compactness given by the available a priori estimates. The simplest energy estimates only give bounds for $v$ in $\bar v^\circ+\Ld^2(\R^2)^2$ and for $\zeta$ in $\Ld^2(\R^2)$. To pass to the limit in the nonlinear term $\omega v$, the additional estimates for the vorticity in $\Ld^q(\R^2)$, $q>1$, then again turn out to be crucial. To get to vortex-sheet initial data in parabolic cases, as in~\cite{Lin-Zhang-00} we make use of some compactness result due to Lions~\cite{Lions-98} in the context of the compressible Navier-Stokes equations. The model~\eqref{eq:limeqn1} in the conservative case $\alpha=0$ is however more subtle because of a lack of strong enough a priori estimates. Only very weak solutions are then expected and obtained (for initial vorticity in $\Ld^q(\R^2)$ with $q>1$), and compactness is in that case proven by hand.
Uniqueness issues are finally addressed in Section~\ref{chap:unique}.
Following Serfaty~\cite[Appendix~B]{Serfaty-15}, a weak-strong uniqueness principle for both~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2} is proven by energy methods in the non-degenerate case $\lambda>0$.
Note that this uniqueness principle is the key to the mean-field limit results for the Ginzburg-Landau vortices in our companion paper~\cite{DS-16}, following the strategy developed by Serfaty~\cite{Serfaty-15}. A much weaker weak-strong uniqueness principle is also obtained in the degenerate parabolic case $\beta=\lambda=0$.
For the incompressible model~\eqref{eq:limeqn1}, uniqueness in the class of bounded vorticity is easily obtained using the approach by Serfaty and V\'azquez~\cite{S-Vazquez-14} for equation~\eqref{eq:LZh}, which consists in adapting the corresponding uniqueness result for the 2D Euler equation due to Yudovich~\cite{Yudovich-63} together with a transport argument \`a la Loeper~\cite{Loeper-06}.
To ease the presentation, the various independent PDE results that are needed in the proofs are isolated in Section~\ref{chap:prelim}, including general a priori estimates for transport and transport-diffusion equations, some global elliptic regularity results, as well as critical potential theory estimates. The interest of such estimates for our purposes should be already clear from the form of the equations in the vorticity formulation~\eqref{eq:limeqn1VF}--\eqref{eq:limeqn2VF}.
\subsubsection*{Notation}
For any vector field $F=(F_1,F_2)$ on $\R^2$, we denote $F^\bot=(-F_2,F_1)$, $\curl F=\partial_1F_2-\partial_2F_1$, and also as usual $\Div F=\partial_1F_1+\partial_2F_2$. Given two linear operators $A,B$ on some function space, we denote by $[A,B]:=AB-BA$ their commutator. For any exponent $1\le p\le\infty$, we denote its H\"older conjugate by $p':=p/(p-1)$. Denote by $B(x,r)$ the ball of radius $r$ centered at $x$ in $\R^d$, and set $B_r:=B(0,r)$ and $B(x):=B(x,1)$. We use the notation $a\wedge b=\min\{a,b\}$ and $a\vee b=\max\{a,b\}$ for all $a,b\in\R$. Given a function $f:\R^d\to\R$, we denote its positive and negative parts by $f^+(x):=0\vee f(x)$ and $f^-(x):=0\vee(-f)(x)$, respectively. We write $\lesssim$ and $\simeq$ for $\le$ and $=$ up to universal constants (unless explicitly stated otherwise). The space of Lebesgue-measurable functions on $\R^d$ is denoted by $\Mes(\R^d)$, the set of Borel probability measures on $\R^d$ is denoted by $\Pc(\R^d)$, and for all $\sigma>0$, $C^\sigma(\R^d)$ stands as usual for the H\"older space $C^{\lfloor\sigma\rfloor,\sigma-\lfloor\sigma\rfloor}(\R^d)$. For $\sigma\in(0,1)$, we denote by $|\cdot|_{C^\sigma}$ the usual H\"older seminorm, and by $\|\cdot\|_{C^\sigma}:=|\cdot|_{C^\sigma}+\|\cdot\|_{\Ld^\infty}$ the corresponding norm. We denote by $\Ld^p_{\uloc}(\R^d)$ the Banach space of functions that are uniformly locally $\Ld^p$-integrable, with norm $\|f\|_{\Ld^p_\uloc}:=\sup_x\|f\|_{\Ld^p(B(x))}$. Given a Banach space $X\subset \Mes(\R^d)$ and $t>0$, we use the notation $\|\cdot\|_{\Ld^p_tX}$ for the usual norm in $\Ld^p([0,t];X)$.
\section{Preliminary results}\label{chap:prelim}
In this section, we establish various PDE results that are needed in the sequel and are of independent interest. As most of them do not depend on the choice of space dimension $2$, they are stated here in general dimension $d$.
We first recall the following useful proxy for a fractional Leibniz rule, which is essentially due to Kato and Ponce (see e.g. \cite[Theorem~1.4]{Gulisashvili-Kon-96}).
\begin{lem}[Kato-Ponce inequality]\label{lem:katoponce-1}
Let $d\ge1$, $s\ge0$, $p\in(1,\infty)$, and $1/p_i+1/q_i=1/p$ with $i=1,2$ and $p_1,q_1,p_2,q_2\in(1,\infty]$. Then, for $f,g\in C^\infty_c(\R^d)$, we have
\[\|fg\|_{W^{s,p}}\lesssim \|f\|_{\Ld^{p_1}}\|g\|_{W^{s,q_1}}+\|g\|_{\Ld^{p_2}}\|f\|_{W^{s,q_2}}.\]
\end{lem}
The following gives a general estimate for the evolution of the Sobolev norms of the solutions of transport equations (see also~\cite[equation~(7)]{Lin-Zhang-00} for a simpler version), which will be useful in the sequel since the vorticity $\omega$ indeed satisfies an equation of this form~\eqref{eq:limeqn1VF}.
\begin{lem}[A priori estimate for transport equations]\label{lem:katoponce}
Let $d\ge1$, $s\ge0$, $T>0$. Given a vector field $w\in \Ld^\infty_\loc([0,T);W^{1,\infty}(\R^d)^d)$, let $\rho\in \Ld^\infty_\loc([0,T);H^s(\R^d))$ satisfy the transport equation $\partial_t\rho=\Div(\rho w)$ in the distributional sense in $[0,T)\times\R^d$.
Further assume $w-W\in \Ld^\infty_\loc([0,T);H^{s+1}(\R^d)^d)$ for some reference map $W\in W^{s+1,\infty}(\R^d)^d$. Then for all $t\in[0,T)$,
\begin{align}
\partial_t\|\rho^t\|_{H^s}&\le 2\|(\nabla w^t,\nabla W)\|_{\Ld^\infty}\|\rho^t\|_{H^s}+ \|\rho^t\|_{\Ld^\infty}\|\Div(w^t-W)\|_{H^s}\label{eq:katoponcecom}\\
&\qquad+\|\rho^t\|_{\Ld^2}\|\Div W\|_{W^{s,\infty}}+\frac12\|\Div w^t\|_{\Ld^\infty}\|\rho^t\|_{H^s},\nonumber
\end{align}
where we use the notation $\|(\nabla w^t,\nabla W)\|_{\Ld^\infty}:=\|\nabla w^t\|_{\Ld^\infty}\vee\|\nabla W\|_{\Ld^\infty}$. Also, for all $t\in[0,T)$,
\begin{align}\label{eq:tsph-1}
\|\rho^t-\rho^\circ\|_{\dot H^{-1}}\le\|\rho\|_{\Ld^\infty_t\Ld^2}\|w\|_{\Ld^1_t\Ld^\infty}.
\end{align}
\end{lem}
\begin{proof}
We split the proof into two steps: we first prove~\eqref{eq:katoponcecom} as a corollary of the celebrated Kato-Ponce commutator estimate, and then we check estimate~\eqref{eq:tsph-1}, which is but a straightforward observation.
\medskip
{\it Step~1: proof of~\eqref{eq:katoponcecom}.}
Let $s\ge0$. The time-derivative of the $H^s$-norm of the solution $\rho$ can be computed as follows, using the notation $\langle \nabla\rangle:=(1+|\nabla|^2)^{1/2}$,
\begin{align*}
\partial_t\|\rho^t\|_{H^s}^2=2\int (\langle\nabla\rangle^s\rho^t) (\langle\nabla\rangle^s\Div(\rho^t w^t))&=2\int (\langle\nabla\rangle^s\rho^t) [\langle\nabla\rangle^s\Div,w^t]\rho^t+2\int (\langle\nabla\rangle^s\rho^t) (w^t\cdot \nabla\langle\nabla\rangle^s\rho^t)\\
&=2\int (\langle\nabla\rangle^s\rho^t) [\langle\nabla\rangle^s\Div,w^t]\rho^t-\int |\langle\nabla\rangle^s\rho^t|^2\Div w^t\\
&\le 2 \|\rho^t\|_{H^s} \|[\langle\nabla\rangle^s\Div,w^t]\rho^t\|_{\Ld^2}+\|(\Div w^t)^-\|_{\Ld^\infty}\|\rho^t\|_{H^s}^2,
\end{align*}
which we may further bound by
\begin{align*}
\partial_t\|\rho^t\|_{H^s}&\le \|[\langle\nabla\rangle^s\Div,w^t-W]\rho^t\|_{\Ld^2}+\|[\langle\nabla\rangle^s\Div,W]\rho^t\|_{\Ld^2}+\frac12\|(\Div w^t)^-\|_{\Ld^\infty}\|\rho^t\|_{H^s}.
\end{align*}
Now we recall the following general form of the Kato-Ponce commutator estimate~\cite[Lemma~X1]{Kato-Ponce-88} (which follows by replacing the use of~\cite{Coifman-Meyer-78} by the later work~\cite{Coifman-Meyer-86} in the proof of~\cite[Lemma~X1]{Kato-Ponce-88}; see also~\cite[Theorem~1.4]{Gulisashvili-Kon-96}): given $p\in(1,\infty)$, and $1/p_i+1/q_i=1/p$ with $i=1,2$ and $p_1,q_1,p_2,q_2\in(1,\infty]$, we have for any $f,g\in C^\infty_c(\R^d)$,
\[\|[\langle\nabla\rangle^s\Div,f]g\|_{\Ld^p}\lesssim \|\nabla f\|_{\Ld^{q_1}}\|g\|_{W^{s,p_1}}+\|g\|_{\Ld^{q_2}}\|\Div f\|_{W^{s,p_2}}.\]
This estimate yields
\begin{align*}
\partial_t\|\rho^t\|_{H^s}&\le \|\rho^t\|_{H^s}\|\nabla(w^t-W)\|_{\Ld^\infty}+ \|\rho^t\|_{\Ld^\infty}\|\Div(w^t-W)\|_{H^s}\\
&\qquad+\|\rho^t\|_{H^s}\|\nabla W\|_{\Ld^\infty}+\|\rho^t\|_{\Ld^2}\|\Div W\|_{W^{s,\infty}}+\frac12\|(\Div w^t)^-\|_{\Ld^\infty}\|\rho^t\|_{H^s},
\end{align*}
and the result~\eqref{eq:katoponcecom} follows.
\medskip
{\it Step~2: proof of~\eqref{eq:tsph-1}.}
Let $\e>0$. We denote by $\hat u$ the Fourier transform of a function $u$ on $\R^d$. Set $G^t:=\rho^t w^t$, so that the equation for $\rho$ takes the form $\partial_t\rho^t=\Div G^t$. Rewriting this equation in Fourier space and testing it against $(\e+|\xi|)^{-2}(\hat\rho^t-\hat\rho^\circ)(\xi)$, we find
\begin{align*}
\partial_t\int (\e+|\xi|)^{-2}|\hat \rho^t(\xi)-\hat\rho^\circ(\xi)|^2d\xi&=2i\int (\e+|\xi|)^{-2}\xi\cdot \hat G^t(\xi)(\overline{\hat \rho^t(\xi)-\hat\rho^\circ(\xi)})d\xi\\
&\le2\int (\e+|\xi|)^{-1}|\hat \rho^t(\xi)-\hat\rho^\circ(\xi)||\hat G^t(\xi)|d\xi,
\end{align*}
and hence, by the Cauchy-Schwarz inequality,
\begin{align*}
\partial_t\bigg(\int (\e+|\xi|)^{-2}|\hat \rho^t(\xi)-\hat\rho^\circ(\xi)|^2d\xi\bigg)^{1/2}&\le\bigg(\int|\hat G^t(\xi)|^2d\xi\bigg)^{1/2}.
\end{align*}
Integrating in time and letting $\e\downarrow0$, we obtain
\[\| \rho^t-\rho^\circ\|_{\dot H^{-1}}\le \|G\|_{\Ld^1_t\Ld^2}\le \|\rho\|_{\Ld^\infty_t\Ld^2}\|w\|_{\Ld^1_t\Ld^\infty},\]
that is,~\eqref{eq:tsph-1}.
\end{proof}
As the evolution of the divergence $\zeta$ in the compressible model~\eqref{eq:limeqn2} is given by the transport-diffusion equation~\eqref{eq:limeqn2VF}, the following parabolic regularity results will be needed.
Note that a variant of item~(ii) below can be found e.g. in~\cite[Section~3.4]{BCD-11}.
Item~(iii) could be substantially refined (weakening the norm of $g$ in time, at the price of a stronger norm in space), but the statement below is already more than enough for our purposes.
\begin{lem}[A priori estimates for transport-diffusion equations]\label{lem:parreg+tsp}
Let $d\ge1$, $T>0$. Let $g\in\Ld^1_\loc([0,T)\times\R^d)^d$, and let $w$ satisfy $\partial_tw-\triangle w+\Div(w\nabla h)=\Div g$ in the distributional sense in $[0,T)\times\R^d$ with initial data $w^\circ$. The following hold:
\begin{enumerate}[(i)]
\item for all $s\ge0$, if $\nabla h\in W^{s,\infty}(\R^d)^d$, $w\in \Ld^\infty_\loc([0,T);H^s(\R^d))$, and $g\in \Ld^2_\loc([0,T);H^s(\R^d)^d)$, then we have for all $t\in[0,T)$,
\[\|w^t\|_{H^s}\le Ce^{Ct}(\|w^\circ\|_{H^s}+\|g\|_{\Ld^2_tH^s}),\]
where the constants $C$ depend only on an upper bound on $s$ and $\|\nabla h\|_{W^{s,\infty}}$;
\item if $\nabla h\in\Ld^\infty(\R^d)$, $w^\circ\in \Ld^2(\R^d)$, $w\in \Ld^\infty_\loc([0,T); \Ld^2(\R^d))$, and $g\in\Ld^2_\loc([0,T);\Ld^2(\R^d))$, then we have for all $t\in[0,T)$,
\[\|w^t-w^\circ\|_{\dot H^{-1}\cap \Ld^2}\le Ce^{Ct}(\|w^\circ\|_{\Ld^2}+\|g\|_{\Ld^2_t\Ld^2}),\]
where the constants $C$ depend only on an upper bound on $\|\nabla h\|_{\Ld^\infty}$;
\item for all $1\le p,q\le\infty$, and all $\frac{dq}{d+q}< s\le q$, $s\ge1$, if $\nabla h\in\Ld^\infty(\R^d)$, $w\in\Ld^p_\loc([0,T);\Ld^q(\R^d))$, and $g\in\Ld^p_\loc([0,T);\Ld^s(\R^d))$, then we have for all $t\in[0,T)$,
\begin{align*}
\|w\|_{\Ld^p_t\Ld^q}&\lesssim (\|w^\circ\|_{\Ld^q}+\kappa^{-1}t^\kappa\|g\|_{\Ld^p_t\Ld^s})\exp\Big(\inf_{2<r<\infty}r^{-1}\big(1+(r-2)^{-r/2}\big)(Ct)^{r/2}\Big),
\end{align*}
where $\kappa:=\frac d2(\frac1d+\frac1q-\frac1s)>0$, and where the constants $C$ depend only on $\|\nabla h\|_{\Ld^\infty}$.
\end{enumerate}
\end{lem}
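Note that the exponential factor in item~(iii) is finite for every $t$; choosing for instance $r=3$ yields the explicit (non-optimal) bound $\exp(\frac23(Ct)^{3/2})$.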
\begin{proof}
We split the proof into three steps, proving items (i), (ii) and~(iii) separately.
\medskip
{\it Step~1: proof of~(i).}
Denote $G:=g-w\nabla h$, so that $w$ satisfies $\partial_tw-\triangle w=\Div G$. Let $\langle\xi\rangle:=(1+|\xi|^2)^{1/2}$, and let $\hat u$ denote the Fourier transform of a function $u$ on $\R^d$. Let $s\ge0$ be fixed, and assume that $\nabla h,w,g$ are as in the statement of~(i) (which implies $G\in\Ld^2_\loc([0,T);H^s(\R^d))$ as shown below). In this step, we use the notation $\lesssim$ for $\le$ up to a constant $C$ as in the statement.
For all $\e>0$, rewriting the equation for $w$ in Fourier space and then testing it against $(\e+|\xi|)^{-2}\langle\xi\rangle^{2s}\partial_t \hat w(\xi)$, we obtain
\begin{align*}
\int(\e+|\xi|)^{-2}\langle\xi\rangle^{2s}|\partial_t\hat w^t(\xi)|^2d\xi+\frac12\int\frac{|\xi|^2}{(\e+|\xi|)^2}\langle\xi\rangle^{2s}\partial_t|\hat w^t(\xi)|^2d\xi=i\int(\e+|\xi|)^{-2}\langle\xi\rangle^{2s} \xi\cdot\hat G^t(\xi)\overline{\partial_t\hat w^t(\xi)}d\xi,
\end{align*}
and hence, integrating over $[0,t]$, and using the inequality $2xy\le x^2+y^2$,
\begin{align*}
&\int_0^t\int(\e+|\xi|)^{-2}\langle\xi\rangle^{2s}|\partial_u\hat w^u(\xi)|^2d\xi du+\frac12\int\frac{|\xi|^2}{(\e+|\xi|)^2}\langle\xi\rangle^{2s}|\hat w^t(\xi)|^2d\xi\\
=~&\frac12\int\frac{|\xi|^2}{(\e+|\xi|)^2}\langle\xi\rangle^{2s}|\hat w^\circ(\xi)|^2d\xi+i\int_0^t\int(\e+ |\xi|)^{-2}\langle\xi\rangle^{2s}\xi\cdot\hat G^u(\xi)\overline{\partial_u\hat w^u(\xi)}d\xi du\\
\le~& \frac12\int\langle\xi\rangle^{2s}|\hat w^\circ(\xi)|^2d\xi+\frac12\int_0^t\int \langle\xi\rangle^{2s}|\hat G^u(\xi)|^2d\xi du+\frac12\int_0^t\int (\e+|\xi|)^{-2}\langle\xi\rangle^{2s}|\partial_u\hat w^u(\xi)|^2d\xi du.
\end{align*}
Absorbing in the left-hand side the last right-hand side term, and letting $\e\downarrow0$, it follows that
\begin{align*}
\int\langle\xi\rangle^{2s}|\hat w^t(\xi)|^2d\xi\le\int\langle\xi\rangle^{2s}|\hat w^\circ(\xi)|^2d\xi+\int_0^t\int \langle\xi\rangle^{2s}|\hat G^u(\xi)|^2d\xi du,
\end{align*}
or equivalently
\[\|w^t\|_{H^s}\le\|w^\circ\|_{H^s}+\|G\|_{\Ld^2_tH^s}.\]
Lemma~\ref{lem:katoponce-1} yields
\begin{align*}
\|G\|_{\Ld^2_tH^s}\le\|g\|_{\Ld^2_tH^s}+\|w\nabla h\|_{\Ld^2_tH^s}&\lesssim\|g\|_{\Ld^2_tH^s}+\|\nabla h\|_{W^{s,\infty}}\|w\|_{\Ld^2_t\Ld^2}+\|\nabla h\|_{\Ld^\infty}\|w\|_{\Ld^2_tH^s}\\
&\lesssim\|g\|_{\Ld^2_tH^s}+\|w\|_{\Ld^2_tH^s},
\end{align*}
so that we obtain
\[\|w^t\|_{H^s}^2\lesssim \|w^\circ\|_{H^s}^2+\|g\|_{\Ld^2_tH^s}^2+\int_0^t\|w^u\|_{H^s}^2du,\]
and item~(i) now follows from the Gr\"onwall inequality.
\medskip
{\it Step~2: proof of~(ii).}
Let $G^t:=g^t-w^t\nabla h$, and let $\nabla h,w^\circ,w,g$ be as in the statement of~(ii). For all $\e>0$, rewriting the equation for $w$ in Fourier space and then integrating it against $(\e+|\xi|)^{-2}(\hat w^t-\hat w^\circ)$, we may estimate
\begin{align*}
&\partial_t\int(\e+|\xi|)^{-2}|(\hat w^t-\hat w^\circ)(\xi)|^2d\xi=2\int(\e+|\xi|)^{-2}\overline{(\hat w^t-\hat w^\circ)(\xi)}\partial_t\hat w^t(\xi)d\xi\\
\le~&-2\int\frac{|\xi|^2}{(\e+|\xi|)^2}|(\hat w^t-\hat w^\circ)(\xi)|^2+2\int\frac{|\xi|^2}{(\e+|\xi|)^2}|(\hat w^t-\hat w^\circ)(\xi)||\hat w^\circ(\xi)|+2\int(\e+ |\xi|)^{-1}|(\hat w^t-\hat w^\circ)(\xi)||\hat G^t(\xi)|d\xi\\
\le~&\int\frac{|\xi|^2}{(\e+|\xi|)^2}|\hat w^\circ(\xi)|^2+\int(\e+ |\xi|)^{-2}|(\hat w^t-\hat w^\circ)(\xi)|^2d\xi+\int(1+ |\xi|^2)^{-1}|\hat G^t(\xi)|^2d\xi,
\end{align*}
that is
\begin{align*}
\partial_t\int(\e+|\xi|)^{-2}|(\hat w^t-\hat w^\circ)(\xi)|^2d\xi&\le\int(\e+ |\xi|)^{-2}|(\hat w^t-\hat w^\circ)(\xi)|^2d\xi+\|w^\circ\|_{\Ld^2}^2+\|G^t\|_{H^{-1}}^2,
\end{align*}
and hence by the Gr\"onwall inequality
\begin{align*}
\int(\e+|\xi|)^{-2}|(\hat w^t-\hat w^\circ)(\xi)|^2d\xi&\le e^{t}\big(\|w^\circ\|_{\Ld^2}^2+\|G\|_{\Ld^2_tH^{-1}}^2\big).
\end{align*}
Letting $\e\downarrow0$, it follows that $w^t-w^\circ\in \dot H^{-1}(\R^2)$ with
\begin{align*}
\|w^t-w^\circ\|_{\dot H^{-1}}\le e^{t}(\|w^\circ\|_{\Ld^2}+\|G\|_{\Ld^2_tH^{-1}})\le e^{t}(\|w^\circ\|_{\Ld^2}+\|g\|_{\Ld^2_tH^{-1}}+\|\nabla h\|_{\Ld^\infty}\|w\|_{\Ld^2_t\Ld^2}).
\end{align*}
Combining this with~(i) for $s=0$, item~(ii) follows.
\medskip
{\it Step~3: proof of~(iii).}
Let $1\le p,q\le\infty$, and assume that $w\in\Ld^p([0,T);\Ld^q(\R^d))$, $\nabla h\in\Ld^\infty(\R^d)$, and $g\in\Ld^p([0,T);\Ld^s(\R^d))$ for $s$ as in the statement. In this step, we use the notation $\lesssim$ for $\le$ up to a constant $C$ as in the statement.
Denoting by $\Gamma^t(x):=Ct^{-d/2}e^{-|x|^2/(4t)}$ the heat kernel, Duhamel's representation formula yields
\[w^t(x)=\Gamma^t\ast w^\circ(x)+\phi_g^t(x)-\int_0^t\int\nabla \Gamma^{u}(y)\cdot \nabla h(x-y)w^{t-u}(x-y)dydu,\]
where we have set
\[\phi_g^t(x):=\int_0^t\int\nabla\Gamma^u(y)\cdot g^{t-u}(x-y)dydu.\]
We find by the triangle inequality
\[\|w^t\|_{\Ld^q}\le \|w^\circ\|_{\Ld^q}\int|\Gamma^t(y)|dy+\|\phi_g^t\|_{\Ld^q}+\|\nabla h\|_{\Ld^\infty}\int_0^t \|w^{t-u}\|_{\Ld^q}\int|\nabla \Gamma^{u}(y)|dydu,\]
hence by a direct computation
\[\|w^t\|_{\Ld^q}\lesssim \|w^\circ\|_{\Ld^q}+\|\phi_g^t\|_{\Ld^q}+\int_0^t \|w^{t-u}\|_{\Ld^q}u^{-1/2}du.\]
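(Here we used that $\nabla\Gamma^u(y)=-\frac{y}{2u}\Gamma^u(y)$, so that $\int|\Gamma^t(y)|dy\lesssim1$ and, by scaling, $\int|\nabla\Gamma^{u}(y)|dy\lesssim u^{-1/2}$.)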
Integrating with respect to $t$, the triangle and the H\"older inequalities yield
\begin{align*}
\|w\|_{\Ld^p_t\Ld^q}&\lesssim t^{1/p}\|w^\circ\|_{\Ld^q}+\|\phi_g\|_{\Ld^p_t\Ld^q}+\bigg(\int_0^t\Big(\int_0^t \mathds1_{u<v}\|w^{v-u}\|_{\Ld^q}u^{-1/2}du\Big)^pdv\bigg)^{1/p}\\
&\lesssim t^{1/p}\|w^\circ\|_{\Ld^q}+\|\phi_g\|_{\Ld^p_t\Ld^q}+\int_0^t\|w\|_{\Ld^p_u\Ld^q}(t-u)^{-1/2}du\\
&\lesssim t^{1/p}\|w^\circ\|_{\Ld^q}+\|\phi_g\|_{\Ld^p_t\Ld^q}+(1-r'/2)^{-1/r'}t^{\frac12-\frac1r}\bigg(\int_0^t\|w\|_{\Ld^p_u\Ld^q}^rdu\bigg)^{1/r},
\end{align*}
for all $r>2$.
Noting that $(1-r'/2)^{-1/r'}\lesssim1+(r-2)^{-1/2}$, and optimizing in $r$, the Gr\"onwall inequality then gives
\begin{align}\label{eq:resparregzetapre}
\|w\|_{\Ld^p_t\Ld^q}&\lesssim (t^{1/p}\|w^\circ\|_{\Ld^q}+\|\phi_g\|_{\Ld^p_t\Ld^q})\exp\Big(\inf_{2<r<\infty}\frac{C^r}r(1+(r-2)^{-r/2})t^{r/2}\Big).
\end{align}
Now it remains to estimate the norm of $\phi_g$.
A similar computation as above yields $\|\phi_g\|_{\Ld^p_t\Ld^q}\lesssim t^{1/2}\|g\|_{\Ld^p_t\Ld^q}$,
but a more careful estimate is needed.
For $1\le s\le q$, we may estimate by the H\"older inequality
\begin{align*}
|\phi_g^t(x)|&\le\int_0^t\bigg(\int|\nabla \Gamma^{u}|^{s'/2}\bigg)^{1/s'}\bigg(\int|\nabla \Gamma^{u}(x-y)|^{s/2} |g^{t-u}(y)|^sdy\bigg)^{1/s}du,
\end{align*}
and hence, by the triangle inequality,
\begin{align*}
\|\phi_g^t\|_{\Ld^q}&\le\int_0^t\bigg(\int|\nabla \Gamma^{u}|^{s'/2}\bigg)^{1/s'}\bigg(\int|\nabla \Gamma^{u}|^{q/2}\bigg)^{1/q}\bigg(\int |g^{t-u}|^s\bigg)^{1/s}du.
\end{align*}
Assuming that $\kappa:=\frac d2\big(\frac1d+\frac 1q-\frac1s\big)>0$ (note that $\kappa\le1/2$ follows from the choice $s\le q$), a direct computation then yields
\begin{align*}
\|\phi_g^t\|_{\Ld^q}&\lesssim \int_0^tu^{\kappa-1}\|g^{t-u}\|_{\Ld^s}du.
\end{align*}
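(Indeed, the Gaussian bound $|\nabla\Gamma^u(y)|\lesssim u^{-(d+1)/2}e^{-|y|^2/(8u)}$ gives $\int|\nabla\Gamma^u|^r\lesssim u^{\frac d2-\frac{(d+1)r}{2}}$ up to a constant depending on $r$, and the resulting power of $u$ is $\frac d2(\frac1{s'}+\frac1q)-\frac{d+1}2=\kappa-1$.)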
Integrating with respect to $t$, we find by the triangle inequality
\begin{align*}
\|\phi_g\|_{\Ld^p_t\Ld^q}&\lesssim \int_0^tu^{\kappa-1}\bigg(\int_0^{t-u}\|g^{v}\|_{\Ld^s}^pdv\bigg)^{1/p}du~\lesssim \kappa^{-1}t^\kappa\|g\|_{\Ld^p_t\Ld^s},
\end{align*}
and the result~(iii) follows from this together with~\eqref{eq:resparregzetapre}.
\end{proof}
Another ingredient that we need is the following string of critical potential theory estimates. The Sobolev embedding for $W^{1,d}(\R^d)$ gives that $\|\nabla\triangle^{-1}w\|_{\Ld^\infty}$ is {\it almost} bounded by the $\Ld^d(\R^d)$-norm of $w$, while the Calderón-Zygmund theory gives that $\|\nabla^2\triangle^{-1}w\|_{\Ld^\infty}$ is {\it almost} bounded by the $\Ld^\infty(\R^d)$-norm of $w$. The following result makes these assertions precise in a quantitative way, somewhat in the spirit of~\cite{Brezis-Gallouet-80}. Item~(iii) can be found e.g. in~\cite[Appendix]{Lin-Zhang-00} in a slightly different form, but we were unable to find items~(i) and~(ii) in the literature. (By $(-\triangle)^{-1}$ we henceforth mean the convolution with the Coulomb kernel, that is, given $w\in C^\infty_c(\R^d)$, $v=(-\triangle)^{-1}w$ is meant as the decaying solution of $-\triangle v=w$.)
\begin{lem}[Potential estimates in $\Ld^\infty$]\label{lem:singint-1}
Let $d\ge2$. For all $w\in C^\infty_c(\R^d)$ the following hold:\footnote{A direct adaptation of the proof shows that in parts~(i) and~(ii) the $\Ld^\infty$-norms in the left-hand sides could be replaced by Hölder $C^\e$-norms with $\e\in[0,1)$: the exponents $d$ in the right-hand sides then need to be replaced by $d/(1-\e)>d$, and an additional multiplicative prefactor $(1-\e)^{-1}$ is further needed.}
\begin{enumerate}[(i)]
\item for all $1\le p<d<q\le\infty$, choosing $\theta\in(0,1)$ such that $\frac1d=\frac\theta p+\frac{1-\theta}q$, we have
\begin{align*}
\|\nabla\triangle^{-1}w\|_{\Ld^\infty}&\lesssim \big((1-d/q)\wedge(1-p/d)\big)^{-1+1/d}\,\|w\|_{\Ld^d}\bigg(1+\log\frac{\|w\|_{\Ld^p}^{\theta}\|w\|_{\Ld^q}^{1-\theta}}{\|w\|_{\Ld^d}}\bigg)^{1-1/d};
\end{align*}
\item if $ w=\Div\xi$ for $\xi\in C^\infty_c(\R^d)^d$, then, for all $d<q\le\infty$ and $1\le p<\infty$, we have
\begin{align*}
\|\nabla\triangle^{-1} w\|_{\Ld^\infty}&\lesssim (1-d/q)^{-1+1/d}\| w\|_{\Ld^{d}}\bigg(1+\log^+\frac{\| w\|_{\Ld^q}}{\| w\|_{\Ld^{d}}}\bigg)^{1-1/d}+p\|\xi\|_{\Ld^p};
\end{align*}
\item for all $0<s\le1$ and $1\le p<\infty$, we have
\begin{align*}
\|\nabla^2\triangle^{-1} w\|_{\Ld^\infty}&\lesssim s^{-1}\| w\|_{\Ld^\infty}\bigg(1+\log\frac{\|w\|_{C^s}}{\| w\|_{\Ld^\infty}}\bigg)+p\| w\|_{\Ld^p}.
\end{align*}
\end{enumerate}
\end{lem}
\begin{proof}
Recall that $-\triangle^{-1} w=g_d\ast w$, where $g_d(x)=c_d|x|^{2-d}$ if $d>2$ and $g_2(x)=-c_2\log|x|$ if $d=2$. The stated result is based on suitable decompositions of this Green integral.
We split the proof into three steps, separately proving items~(i), (ii) and~(iii).
\medskip
{\it Step~1: proof of~(i).}
Let $0<\gamma\le\Gamma<\infty$. The obvious estimate $|\nabla\triangle^{-1} w(x)|\lesssim \int|x-y|^{1-d}| w(y)|dy$ may be decomposed as
\begin{align*}
|\nabla\triangle^{-1} w(x)|&\lesssim\int_{|x-y|<\gamma}|x-y|^{1-d}| w(y)|dy+\int_{\gamma<|x-y|<\Gamma}|x-y|^{1-d}| w(y)|dy+\int_{|x-y|>\Gamma}|x-y|^{1-d}| w(y)|dy.
\end{align*}
Let $1\le p<d<q\le\infty$. We use the H\"older inequality with exponents $(q/(q-1),q)$ for the first term, $(d/(d-1),d)$ for the second, and $(p/(p-1),p)$ for the third, which yields after straightforward computations
\begin{align*}
|\nabla\triangle^{-1} w(x)|&\lesssim (q'(1-d/q))^{-1/q'}\gamma^{1-d/q}\| w\|_{\Ld^q}+(\log (\Gamma/\gamma))^{(d-1)/d}\| w\|_{\Ld^{d}}+(p'(d/p-1))^{-1/p'}\Gamma^{1-d/p}\| w\|_{\Ld^p}.
\end{align*}
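For instance, the first right-hand side term is estimated as follows (the other two are analogous):
\[\bigg(\int_{|z|<\gamma}|z|^{(1-d)q'}dz\bigg)^{1/q'}\simeq\bigg(\frac{\gamma^{q'(1-d/q)}}{q'(1-d/q)}\bigg)^{1/q'}=(q'(1-d/q))^{-1/q'}\,\gamma^{1-d/q},\]
where the radial integral converges precisely because $(1-d)q'+d=q'(1-d/q)>0$ for $q>d$.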
Item~(i) now easily follows, choosing $\gamma^{1-d/q}=\| w\|_{\Ld^{d}}/\| w\|_{\Ld^q}$ and $\Gamma^{d/p-1}=\| w\|_{\Ld^p}/\| w\|_{\Ld^{d}}$, noting that $\gamma\le\Gamma$ follows from interpolation of $\Ld^{d}$ between $\Ld^p$ and $\Ld^q$, and observing that
\[(q'(1-d/q))^{-1/q'}\lesssim (1-d/q)^{-1+1/d},\qquad (p'(d/p-1))^{-1/p'}\lesssim (1-p/d)^{-1+1/d}.\]
\medskip
{\it Step~2: proof of~(ii).}
Let $0<\gamma\le1\le\Gamma<\infty$, and let $\chi_\Gamma$ denote a cut-off function with $\chi_\Gamma=0$ on $B_\Gamma$, $\chi_\Gamma=1$ outside $B_{\Gamma+1}$, and $|\nabla\chi_\Gamma|\le2$. We may then decompose
\begin{align*}
\nabla\triangle^{-1} w(x)=~&\int_{|x-y|<\gamma}\nabla g_d(x-y) w(y)dy+\int_{\gamma\le|x-y|\le\Gamma}\nabla g_d(x-y) w(y)dy\\
&+\int_{\Gamma\le|x-y|\le\Gamma+1}\nabla g_d(x-y)(1-\chi_\Gamma(x-y)) w(y)dy+\int_{|x-y|\ge\Gamma}\nabla g_d(x-y)\chi_\Gamma(x-y) w(y)dy.
\end{align*}
Using $ w=\Div\xi$ and integrating by parts, the last term becomes
\begin{align*}
\int\nabla g_d(x-y)\chi_\Gamma(x-y) w(y)dy&=\int\nabla g_d(x-y) \otimes\nabla\chi_\Gamma(x-y) \cdot\xi(y)dy+\int\chi_\Gamma(x-y) \nabla^2g_d(x-y)\cdot\xi(y)dy.
\end{align*}
Choosing $\Gamma=1$, we may then estimate
\begin{align*}
|\nabla\triangle^{-1} w(x)|\lesssim~&\int_{|x-y|<\gamma}|x-y|^{1-d} |w(y)|dy+\int_{\gamma\le|x-y|\le2}|x-y|^{1-d} |w(y)|dy+\int_{|x-y|\ge1}|x-y|^{-d}|\xi(y)|dy.
\end{align*}
Using the Hölder inequality just as in the proof of~(i) above for the first two terms, with $d<q\le\infty$, and using the Hölder inequality with exponents $(p/(p-1),p)$ for the last line, we obtain, for any $1\le p<\infty$,
\begin{align*}
|\nabla\triangle^{-1} w(x)|\lesssim~&(q'(1-d/q))^{-1/q'}\gamma^{1-d/q}\| w\|_{\Ld^q}+(\log(2/\gamma))^{(d-1)/d}\| w\|_{\Ld^{d}}+(d(p'-1))^{-1/p'}\|\xi\|_{\Ld^p},
\end{align*}
so that item~(ii) follows from the choice $\gamma^{1-d/q}=1\wedge(\| w\|_{\Ld^{d}}/\| w\|_{\Ld^q})$, observing that $(d(p'-1))^{-1/p'}\le p$.
\medskip
{\it Step~3: proof of~(iii).}
Given $0<\gamma\le1$, using the integration by parts
\begin{align*}
\int_{|x-y|<\gamma}\nabla^2g_d(x-y)dy=\int_{|x-y|=\gamma}n\otimes \nabla g_d(x-y)dy,
\end{align*}
and using the notation $(x-y)^{\otimes2}:=(x-y)\otimes(x-y)$, we may decompose
\begin{align*}
|\nabla^2\triangle^{-1} w(x)|&\lesssim\bigg|\int_{|x-y|<\gamma}\frac{(x-y)^{\otimes2}}{|x-y|^{d+2}} w(y)dy\bigg|+\bigg|\int_{\gamma\le|x-y|<1}\frac{(x-y)^{\otimes2}}{|x-y|^{d+2}} w(y)dy\bigg|+\bigg|\int_{|x-y|\ge1}\frac{(x-y)^{\otimes2}}{|x-y|^{d+2}} w(y)dy\bigg|\\
&\lesssim\bigg|\int_{|x-y|<\gamma}\frac{(x-y)^{\otimes2}}{|x-y|^{d+2}}( w(x)- w(y))dy\bigg|+| w(x)|\bigg|\int_{|x-y|=\gamma}\frac{x-y}{|x-y|^{d}}dy\bigg|\\
&\hspace{3cm}+\bigg|\int_{\gamma\le|x-y|<1}\frac{(x-y)^{\otimes2}}{|x-y|^{d+2}} w(y)dy\bigg|+\bigg|\int_{|x-y|\ge1}\frac{(x-y)^{\otimes2}}{|x-y|^{d+2}} w(y)dy\bigg|.
\end{align*}
Let $0<s\le1$ and $1\le p<\infty$. Using the inequality $| w(x)- w(y)|\le|x-y|^s| w|_{C^s}$, and then applying the H\"older inequality with exponents $(1,\infty)$ for the first three terms, and $(p/(p-1),p)$ for the last one, we obtain after straightforward computations
\begin{align*}
|\nabla^2\triangle^{-1} w(x)|&\lesssim s^{-1}\gamma^s| w|_{C^s}+\| w\|_{\Ld^\infty}+|\log\gamma|\| w\|_{\Ld^\infty}+(d(p'-1))^{-1/p'}\| w\|_{\Ld^p}.
\end{align*}
Item~(iii) then follows for the choice $\gamma^s=\| w\|_{\Ld^\infty}/\| w\|_{C^s}\le1$.
\end{proof}
In addition to the Sobolev regularity of solutions, the Hölder regularity is also studied in the sequel, in the framework of the usual Besov spaces $C^s_*(\R^d):=B^s_{\infty,\infty}(\R^d)$ (see e.g.~\cite{BCD-11}). These spaces actually coincide with the usual Hölder spaces $C^s(\R^d)$ only for non-integer $s\ge0$ (for integer $s\ge0$, they are strictly larger than $W^{s,\infty}(\R^d)\supset C^s(\R^d)$ and coincide with the corresponding Zygmund spaces). The following potential theory estimates are then needed both in Sobolev and in Hölder-Zygmund spaces.
\begin{lem}[Potential estimates in Sobolev and in Hölder-Zygmund spaces]\label{lem:pottheoryCsHs}
Let $d\ge2$. For all $w\in C^\infty_c(\R^d)$, the following hold:
\begin{enumerate}[(i)]
\item for all $s\ge0$,
\[\|\nabla\triangle^{-1}w\|_{H^s}\lesssim \|w\|_{\dot H^{-1}\cap H^{s-1}},\qquad\|\nabla^2\triangle^{-1}w\|_{H^s}\lesssim\|w\|_{H^s};\]
\item for all $s\in\R$,
\[\|\nabla\triangle^{-1}w\|_{C^s_*}\lesssim_s\|w\|_{\dot H^{-1}\cap C_*^{s-1}},\qquad\|\nabla^2\triangle^{-1}w\|_{C^s_*}\lesssim_{s}\|w\|_{\dot H^{-1}\cap C^{s}_*},\]
and for all $1\le p<d$ and $1\le q<\infty$,
\[\|\nabla\triangle^{-1}w\|_{C^s_*}\lesssim_{p,s}\|w\|_{\Ld^p\cap\Ld^\infty\cap C^{s-1}_*},\qquad \|\nabla^2\triangle^{-1}w\|_{C^s_*}\lesssim_{q,s}\|w\|_{\Ld^q\cap C^{s}_*},\]
\end{enumerate}
where the subscripts $s,p,q$ indicate the additional dependence of the multiplicative constants on an upper bound on $s$, $(d-p)^{-1}$, and $q$, respectively.
\end{lem}
\begin{proof}
As item~(i) is obvious via Fourier transform, we focus on item~(ii).
Let $s\in\R$, let $\chi\in C^\infty_c(\R^d)$ be fixed with $\chi=1$ in a neighborhood of the origin, and let $\chi(\nabla)$ denote the corresponding pseudo-differential operator. Applying~\cite[Proposition~2.78]{BCD-11} to the operator $(1-\chi(\nabla))\nabla\triangle^{-1}$, we find
\begin{align*}
\|\nabla\triangle^{-1}w\|_{C^s_*}\le\|(1-\chi(\nabla))\nabla\triangle^{-1}w\|_{C^s_*}+\|\chi(\nabla)\nabla\triangle^{-1}w\|_{C^s_*}\lesssim\|w\|_{C^{s-1}_*}+\|\chi(\nabla)\nabla\triangle^{-1}w\|_{C^s_*}.
\end{align*}
Let $k$ denote the smallest integer $\ge s\vee0$. Noting that $\|v\|_{C^s_*}\lesssim\sum_{j=0}^k\|\nabla^jv\|_{\Ld^\infty}$ holds for all $v$, we deduce
\begin{align*}
\|\nabla\triangle^{-1}w\|_{C^s_*}\lesssim\|w\|_{C^{s-1}_*}+\sum_{j=0}^k\|\nabla^j\chi(\nabla)\nabla\triangle^{-1}w\|_{\Ld^\infty},
\end{align*}
and similarly
\begin{align*}
\|\nabla^2\triangle^{-1}w\|_{C^s_*}\lesssim\|w\|_{C^{s}_*}+\sum_{j=0}^k\|\nabla^j\chi(\nabla)\nabla^2\triangle^{-1}w\|_{\Ld^\infty}.
\end{align*}
Writing $\nabla^j\chi(\nabla)\nabla\triangle^{-1}w=\nabla^j\chi\ast\nabla\triangle^{-1}w$, we find
\begin{align*}
\|\nabla^j\chi(\nabla)\nabla\triangle^{-1}w\|_{\Ld^\infty}\le\|\nabla^j\chi\|_{\Ld^2}\|\nabla\triangle^{-1}w\|_{\Ld^2}=\|\nabla^j\chi\|_{\Ld^2}\|w\|_{\dot H^{-1}},
\end{align*}
and the first two estimates in item~(ii) follow. Rather writing $\nabla^j\chi(\nabla)\nabla\triangle^{-1}w=\nabla\triangle^{-1}(\nabla^j\chi\ast w)$, and using the obvious estimate $|\nabla\triangle^{-1}v(x)|\lesssim\int|x-y|^{1-d}|v(y)|dy$ as in the proof of Lemma~\ref{lem:singint-1}, we find for all $1\le p<d$,
\begin{align*}
\|\nabla^j\chi(\nabla)\nabla\triangle^{-1}w\|_{\Ld^\infty}&\lesssim\sup_x\int_{|x-y|\le1}|x-y|^{1-d}|\nabla^j\chi\ast w(y)|dy+\sup_x\int_{|x-y|>1}|x-y|^{1-d}|\nabla^j\chi\ast w(y)|dy\\
&\lesssim_p\|\nabla^j\chi\ast w\|_{\Ld^p\cap\Ld^\infty}\lesssim\|\nabla^j\chi\|_{\Ld^1}\|w\|_{\Ld^p\cap\Ld^\infty},
\end{align*}
and the third estimate in item~(ii) follows. The last estimate in~(ii) is now easily obtained, arguing similarly as in the proof of Lemma~\ref{lem:singint-1}(iii).
\end{proof}
We now recall some global elliptic regularity results for the operator $-\Div(b\nabla)$ on the whole plane $\R^2$; as no reference was found in the literature, a detailed proof is included.
\begin{lem}[Global elliptic regularity]\label{lem:globellreg}
Let $b\in W^{1,\infty}(\R^2)^{2\times 2}$ be uniformly elliptic, $\Id\le b\le \Lambda\Id$, for some $\Lambda<\infty$.
Given $f:\R^2\to\R$, $g:\R^2\to\R^2$, we consider the following equations in $\R^2$,
\[-\Div (b\nabla u)=f,\qquad\text{and}\qquad -\Div (b\nabla v)=\Div g.\]
The following properties hold.
\begin{enumerate}[(i)]
\item \emph{Meyers type estimates:}
There exist $2< p_0,q_0,r_0<\infty$ (depending only on an upper bound on $\Lambda$) such that for all $2< p\le p_0$, all $q_0\le q<\infty$, and all $r_0'\le r\le r_0$,
\[\|\nabla u\|_{\Ld^p}\le C_p\|f\|_{\Ld^{2p/(p+2)}},\qquad \|v\|_{\Ld^q}\le C_q\|g\|_{\Ld^{2q/(q+2)}},\qquad\text{and}\qquad \|\nabla v\|_{\Ld^r}\le C\|g\|_{\Ld^r},\]
for some constant $C$ depending only on an upper bound on $\Lambda$, and for constants $C_p$ and $C_q$ further depending on an upper bound on $(p-2)^{-1}$ and $q$, respectively.
\item \emph{Sobolev regularity:}
For all $s\ge0$, we have
$\|\nabla u\|_{H^s}\le C_s\|f\|_{\dot H^{-1}\cap H^{s-1}}$ and $\|\nabla v\|_{H^s}\le C_s\|g\|_{H^s}$,
where the constant $C_s$ depends only on an upper bound on $s$ and on $\|b\|_{W^{s,\infty}}$.
\item \emph{Schauder type estimate:} For all $s\in(0,1)$, we have
$|\nabla u|_{C^s}\le C_s\|f\|_{\Ld^{2/(1-s)}}$ and $|v|_{C^s}\le C_s'\|g\|_{\Ld^{2/(1-s)}}$,
where the constant $C_s$ (resp. $C_s'$) depends only on $s$ and on an upper bound on $\|b\|_{W^{s,\infty}}$ (resp. on $s$ and on the modulus of continuity of $b$).
\end{enumerate}
In particular, we have $\|\nabla u\|_{\Ld^\infty}\le C\|f\|_{\Ld^1\cap\Ld^\infty}$ and $\|v\|_{\Ld^\infty}\le C'\|g\|_{\Ld^1\cap\Ld^\infty}$, where the constant $C$ (resp.~$C'$) depends only on an upper bound on $\|b\|_{W^{1,\infty}}$ (resp.~$\Lambda$).
\end{lem}
\begin{proof}
We split the proof into three steps, first proving~(i) as a consequence of Meyers' perturbative argument, then turning to the Sobolev regularity~(ii), and finally to the Schauder type estimate~(iii). The additional $\Ld^\infty$-estimate for $v$ directly follows from item~(i) and the Sobolev embedding, while the corresponding estimate for $\nabla u$ follows from items~(i) and~(iii) by interpolation: for $2<p\le p_0$ and $s\in(0,1)$, we indeed find
\[\|\nabla u\|_{\Ld^\infty}\lesssim \|\nabla u\|_{\Ld^p}+|\nabla u|_{C^s}\le C_p\|f\|_{\Ld^{2p/(p+2)}}+C_s\|f\|_{\Ld^{2/(1-s)}}\le C_{p,s}\|f\|_{\Ld^1\cap\Ld^\infty}.\]
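The interpolation inequality used here is elementary; a sketch: for all $x\in\R^2$, splitting $\nabla u(x)$ into its average over the unit ball $B(x):=B(x,1)$ plus a Hölder correction, we find
\[|\nabla u(x)|\le\frac1{|B(x)|}\int_{B(x)}|\nabla u|+\sup_{y\in B(x)}|\nabla u(x)-\nabla u(y)|\lesssim\|\nabla u\|_{\Ld^p}+|\nabla u|_{C^s}.\]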
In the proof below, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ that depends only on an upper bound on $\Lambda$, and we add subscripts to indicate dependence on further parameters.
\medskip
{\it Step~1: proof of~(i).}
We first consider the norm of $v$. By Meyers' perturbative argument~\cite{Meyers-63}, there exists some $1<r_0<2$ (depending only on $\Lambda$) such that $\|\nabla v\|_{\Ld^r}\lesssim \|g\|_{\Ld^r}$ holds for all $r_0\le r\le r_0'$.
On the other hand, decomposing the equation for $v$ as
\[-\triangle v=\Div (g+(b-1)\nabla v),\]
we deduce from Riesz potential theory that for all $1<r<2$
\[\|v\|_{\Ld^{2r/(2-r)}}\lesssim_r\|g+(b-1)\nabla v\|_{\Ld^{r}}\lesssim\|g\|_{\Ld^{r}}+\|\nabla v\|_{\Ld^{r}},\]
and hence $\|v\|_{\Ld^{2r/(2-r)}}\lesssim_r\|g\|_{\Ld^r}$ for all $r_0\le r<2$, that is, $\|v\|_{\Ld^q}\lesssim_q\|g\|_{\Ld^{2q/(q+2)}}$ for all $\frac{2r_0}{2-r_0}\le q<\infty$.
We now turn to the norm of $\nabla u$. The proof follows from a suitable adaptation of Meyers' perturbative argument~\cite{Meyers-63}, again combined with Riesz potential theory; for the reader's convenience, a complete proof is included.
First recall that the Calderón-Zygmund theory yields $\|\nabla^2\triangle^{-1}w\|_{\Ld^p}\le K_p\|w\|_{\Ld^p}$ for all $1<p<\infty$ and all $w\in C^\infty_c(\R^2)$, where the constants $K_p$ moreover satisfy $\limsup_{p\to2}K_p\le K_2$, while a simple energy estimate allows us to choose $K_2=1$. Now rewriting the equation for $u$ as
\[-\triangle u=\frac2{\Lambda+1}f+\Div\bigg(\frac{2}{\Lambda+1}\Big(b-\frac{\Lambda+1}2\Big)\nabla u\bigg),\]
we deduce from Riesz potential theory and from the Calderón-Zygmund theory (applied to the first and to the second right-hand side term, respectively), for all $2<p<\infty$,
\begin{align*}
\|\nabla u\|_{\Ld^p}&\le \frac2{\Lambda+1}\|\nabla\triangle^{-1}f\|_{\Ld^p}+\bigg\|\nabla\triangle^{-1}\Div\bigg(\frac{2}{\Lambda+1}\Big(b-\frac{\Lambda+1}2\Big)\nabla u\bigg)\bigg\|_{\Ld^p}\\
&\le \frac{2C_p}{\Lambda+1}\|f\|_{\Ld^{2p/(p+2)}}+\frac{2K_p}{\Lambda+1}\Big\|\Big(b-\frac{\Lambda+1}2\Big)\nabla u\Big\|_{\Ld^p}\\
&\le \frac{2C_p}{\Lambda+1}\|f\|_{\Ld^{2p/(p+2)}}+\frac{K_p(\Lambda-1)}{\Lambda+1}\|\nabla u\|_{\Ld^p},
\end{align*}
where the last inequality follows from $\Id\le b\le\Lambda\Id$. Since we have $\frac{\Lambda-1}{\Lambda+1}<1$ and $\limsup_{p\to2}K_p\le K_2=1$, we may choose $p_0>2$ close enough to $2$ such that $\frac{K_p(\Lambda-1)}{\Lambda+1}<1$ holds for all $2\le p\le p_0$. This allows us to absorb the last term of the above right-hand side, and to conclude $\|\nabla u\|_{\Ld^p}\lesssim_p \|f\|_{\Ld^{2p/(p+2)}}$ for all $2<p\le p_0$.
\medskip
{\it Step~2: proof of~(ii).}
We focus on the result for $u$, as the argument for $v$ is very similar. A simple energy estimate yields
\[\int|\nabla u|^2\le\int \nabla u\cdot b\nabla u=\int fu\le\|f\|_{\dot H^{-1}}\|\nabla u\|_{\Ld^2},\]
hence $\|\nabla u\|_{\Ld^2}\le\|f\|_{\dot H^{-1}}$, that is, (ii) with $s=0$. The result~(ii) for any integer $s\ge0$ is then deduced by induction, successively differentiating the equation.
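For instance, for $s=1$, applying $\partial_j$ to the equation yields $-\Div(b\nabla\partial_ju)=\partial_jf+\Div(\partial_jb\,\nabla u)$, and testing this equation against $\partial_ju$ gives (a sketch)
\begin{align*}
\int|\nabla\partial_ju|^2\le\int\nabla\partial_ju\cdot b\nabla\partial_ju=-\int f\,\partial_j^2u-\int\partial_jb\,\nabla u\cdot\nabla\partial_ju\le\big(\|f\|_{\Ld^2}+\|\nabla b\|_{\Ld^\infty}\|\nabla u\|_{\Ld^2}\big)\|\nabla\partial_ju\|_{\Ld^2},
\end{align*}
so that, summing over $j$ and combining with the case $s=0$, we obtain $\|\nabla u\|_{H^1}\lesssim(1+\|b\|_{W^{1,\infty}})\|f\|_{\dot H^{-1}\cap\Ld^2}$.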
It remains to consider the case of fractional values $s\ge0$. We only display the argument for $0<s<1$, while the other cases are similarly obtained after differentiation of the equation. Let $0<s<1$ be fixed. We use the following finite difference characterization of the fractional Sobolev space $H^s(\R^2)$: a function $w\in\Ld^2(\R^2)$ belongs to $H^s(\R^2)$ if and only if it satisfies $\|w-w(\cdot+h)\|_{\Ld^2}\le K|h|^s$ for all $h\in\R^2$, for some $K>0$, and we then have $\|w\|_{\dot H^s}\le K$. This characterization is easily checked, using e.g. the identity $\|w-w(\cdot+h)\|_{\Ld^2}^2\simeq\int|1-e^{i\xi\cdot h}|^2|\hat w(\xi)|^2d\xi$, where $\hat w$ denotes the Fourier transform of $w$, and noting that $|1-e^{ia}|\le 2\wedge |a|$ holds for all $a\in\R$.
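For instance, the forward implication follows from $|1-e^{ia}|\le2\wedge|a|\le2^{1-s}|a|^s$:
\[\|w-w(\cdot+h)\|_{\Ld^2}^2\simeq\int|1-e^{i\xi\cdot h}|^2|\hat w(\xi)|^2d\xi\lesssim|h|^{2s}\int|\xi|^{2s}|\hat w(\xi)|^2d\xi=|h|^{2s}\|w\|_{\dot H^s}^2.\]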
Now applying finite differences to the equation for $u$, we find for all $h\in\R^2$,
\[-\Div( b(\cdot+h)(\nabla u-\nabla u(\cdot+h)))=\Div( (b-b(\cdot+h))\nabla u)+f-f(\cdot+h),\]
and hence, testing against $u-u(\cdot+h)$,
\begin{align*}
\int |\nabla u-\nabla u(\cdot+h)|^2&\le-\int (\nabla u-\nabla u(\cdot+h))\cdot (b-b(\cdot+h))\nabla u+\int (u-u(\cdot+h))(f-f(\cdot+h))\\
&\le |h|^s|b|_{C^s}\|\nabla u\|_{\Ld^2}\|\nabla u-\nabla u(\cdot+h)\|_{\Ld^2}+\|f-f(\cdot+h)\|_{\dot H^{-1}}\|\nabla u-\nabla u(\cdot+h)\|_{\Ld^2},
\end{align*}
where we compute by means of Fourier transforms
\begin{align*}
\|f-f(\cdot+h)\|_{\dot H^{-1}}^2&\simeq\int |\xi|^{-2}|1-e^{i\xi\cdot h}|^2|\hat f(\xi)|^2d\xi\lesssim \int|\xi|^{-2}|\xi\cdot h|^{2s}|\hat f(\xi)|^2d\xi\lesssim |h|^{2s}\|f\|_{\dot H^{-1}\cap H^{s-1}}^2.
\end{align*}
Further combining this with the $\Ld^2$-estimate for $\nabla u$ proven at the beginning of this step, we conclude
\begin{align*}
\|\nabla u-\nabla u(\cdot+h)\|_{\Ld^2}&\lesssim |h|^s(|b|_{C^{s}}\|\nabla u\|_{\Ld^2}+\|f\|_{\dot H^{-1}\cap H^{s-1}})\lesssim |h|^s(1+|b|_{C^{s}})\|f\|_{\dot H^{-1}\cap H^{s-1}},
\end{align*}
and the result follows from the above stated characterization of $H^s(\R^2)$.
\medskip
{\it Step~3: proof of~(iii).}
We focus on the result for $u$, while that for $v$ is easily obtained as an adaptation of~\cite[Theorem~3.8]{Han-Lin-97}.
Let $x_0\in\R^2$ be fixed. The equation for $u$ may be rewritten as
\[-\Div(b(x_0)\nabla u)=f+\Div((b-b(x_0))\nabla u).\]
For all $r>0$, let $w_r\in u+H^1_0(B(x_0,r))$ be the unique solution of $-\Div(b(x_0)\nabla w_r)=0$ in $B(x_0,r)$. The difference $v_r:=u-w_r\in H^1_0(B(x_0,r))$ then satisfies in $B(x_0,r)$
\[-\Div(b(x_0)\nabla v_r)=f+\Div((b-b(x_0))\nabla u).\]
Testing this equation against $v_r$ itself, we obtain
\begin{align*}
\int|\nabla v_r|^2&\le \bigg|\int_{B(x_0,r)}fv_r\bigg|+\int_{B(x_0,r)}|b-b(x_0)||\nabla u||\nabla v_r|\le \bigg|\int_{B(x_0,r)}fv_r\bigg|+r^s|b|_{C^s}\|\nabla u\|_{\Ld^2(B(x_0,r))}\|\nabla v_r\|_{\Ld^2}.
\end{align*}
We estimate the first term as follows
\begin{align*}
\bigg|\int_{B(x_0,r)}fv_r\bigg|=\bigg|\int_{B(x_0,r)}\nabla v_r\cdot\nabla\triangle^{-1}(\mathds1_{B(x_0,r)}f)\bigg|&\le\|\nabla v_r\|_{\Ld^{p'}(B(x_0,r))}\|\nabla\triangle^{-1}(\mathds1_{B(x_0,r)}f)\|_{\Ld^p},
\end{align*}
and hence by Riesz potential theory, for all $2<p<\infty$,
\begin{align*}
\bigg|\int_{B(x_0,r)}fv_r\bigg|&\lesssim_p\|\nabla v_r\|_{\Ld^{p'}(B(x_0,r))}\|f\|_{\Ld^{2p/(p+2)}(B(x_0,r))}.
\end{align*}
The Hölder inequality then yields, choosing $q:=\frac2{1-s}>2$,
\begin{align*}
\bigg|\int_{B(x_0,r)}fv_r\bigg|&\lesssim_pr^{\frac2{p'}-1}\|\nabla v_r\|_{\Ld^{2}}~r^{1+\frac2p-\frac2q}\|f\|_{\Ld^{q}}=r^{2(1-\frac1q)}\|\nabla v_r\|_{\Ld^{2}}\|f\|_{\Ld^{q}}=r^{1+s}\|\nabla v_r\|_{\Ld^{2}}\|f\|_{\Ld^{2/(1-s)}}.
\end{align*}
Combining the above estimates, and using the inequality $2xy\le x^2+y^2$ to absorb the norms $\|\nabla v_r\|_{\Ld^2}$ appearing in the right-hand side, we find
\begin{align*}
\int|\nabla v_r|^2\lesssim r^{2(1+s)}\|f\|_{\Ld^{2/(1-s)}}^2+r^{2s}|b|_{C^s}^2\|\nabla u\|_{\Ld^2(B(x_0,r))}^2.
\end{align*}
We are now in position to conclude exactly as in the classical proof of the Schauder estimates (see e.g.~\cite[Theorem~3.13]{Han-Lin-97}).
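For the reader's convenience, we briefly sketch how the conclusion is reached (see~\cite{Han-Lin-97} for details). Since $w_r$ solves a constant-coefficient equation in $B(x_0,r)$, its gradient satisfies the interior decay estimate
\[\int_{B(x_0,\rho)}|\nabla w_r-(\nabla w_r)_{x_0,\rho}|^2\lesssim\Big(\frac\rho r\Big)^{4}\int_{B(x_0,r)}|\nabla w_r-(\nabla w_r)_{x_0,r}|^2\qquad\text{for all }0<\rho\le r,\]
where $(\cdot)_{x_0,\rho}$ denotes the average over $B(x_0,\rho)$. Combining this with the above bound on $\int|\nabla v_r|^2$ and iterating over dyadic radii yields the Campanato estimate $\int_{B(x_0,\rho)}|\nabla u-(\nabla u)_{x_0,\rho}|^2\lesssim\rho^{2(1+s)}$ uniformly in $x_0$ (with a multiplicative constant controlled by the right-hand side quantities above), which is equivalent to the stated bound on $|\nabla u|_{C^s}$.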
\end{proof}
The interaction force $v$ in equation~\eqref{eq:limeqn1VF} is defined by the values of $\curl v$ and $\Div (av)$. The following result shows how $v$ is controlled by such specifications.
\begin{lem}\label{lem:reconstr}
Let $a,a^{-1}\in \Ld^{\infty}(\R^2)$. For all $\delta\omega,\delta\zeta\in\dot H^{-1}(\R^2)$, there exists a unique $\delta v\in\Ld^2(\R^2)^2$ such that $\curl\delta v=\delta\omega$ and $\Div(a\delta v)=\delta\zeta$. Moreover, for all $s\ge0$, if $a,a^{-1}\in W^{s,\infty}(\R^2)$ and $\delta\omega,\delta\zeta\in\dot H^{-1}\cap H^{s-1}(\R^2)$, we have
\[\|\delta v\|_{H^s}\le C\|\delta\omega\|_{\dot H^{-1}\cap H^{s-1}}+C\|\delta\zeta\|_{\dot H^{-1}\cap H^{s-1}},\]
where the constants $C$'s depend only on an upper bound on $s$ and $\|(a,a^{-1})\|_{W^{s,\infty}}$.
\end{lem}
\begin{proof}
We split the proof into two steps.
\medskip
{\it Step~1: uniqueness.}
We prove that at most one function $\delta v\in\Ld^2(\R^2)^2$ can be associated with a given couple $(\delta\omega,\delta\zeta)$. For that purpose, we assume that $\delta v\in \Ld^2(\R^2)^2$ satisfies $\curl\delta v=0$ and $\Div(a\delta v)=0$, and we deduce $\delta v=0$. By the Hodge decomposition in $\Ld^2(\R^2)^2$, there exist functions $\phi,\psi\in H^1_\loc(\R^2)$ such that $a\delta v=\nabla\phi+\nabla^\bot\psi$, with $\nabla\phi,\nabla\psi\in\Ld^2(\R^2)^2$. Now note that $\triangle \phi=\Div(a\delta v)=0$ and $\Div (a^{-1}\nabla\psi)+\curl(a^{-1}\nabla \phi)=\curl\delta v=0$. As $\nabla\phi\in\Ld^2(\R^2)^2$ is harmonic, the Liouville theorem gives $\nabla\phi=0$; the equation $\Div(a^{-1}\nabla\psi)=0$ with $\nabla\psi\in\Ld^2(\R^2)^2$ then also forces $\nabla\psi=0$ (e.g. by a Caccioppoli type argument, using the $\Ld^2$-integrability of $\nabla\psi$), hence $\delta v=0$.
\medskip
{\it Step~2: existence.}
Given $\delta\omega,\delta\zeta\in\dot H^{-1}(\R^2)$,
we deduce that $\nabla(\Div a^{-1}\nabla)^{-1}\delta\omega$ and $\nabla(\Div a\nabla)^{-1}\delta\zeta$ are well-defined in $\Ld^2(\R^2)^2$. The vector field
\begin{align*}
\delta v:=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\delta \omega+\nabla(\Div a\nabla)^{-1}\delta \zeta
\end{align*}
is thus well-defined in $\Ld^2(\R^2)^2$, and trivially satisfies $\curl \delta v=\delta \omega$, $\Div(a\delta v)=\delta \zeta$. The additional estimate follows from Lemmas~\ref{lem:katoponce-1} and~\ref{lem:globellreg}(ii).
\end{proof}
As emphasized in Remark~\ref{rem:sol}(i), weak solutions of the incompressible model~\eqref{eq:limeqn1} are rather defined via the vorticity formulation~\eqref{eq:limeqn1VF} in order to avoid compactness issues related to the pressure $P$. Although this will not be used in the sequel, we quickly check that under mild regularity assumptions a weak solution $v$ of~\eqref{eq:limeqn1} automatically also satisfies equation~\eqref{eq:limeqn1} in the distributional sense on $[0,T)\times\R^2$ for some pressure~$P:\R^2\to\R$.
\begin{lem}[Control on the pressure]\label{lem:pressure}
Let $\alpha,\beta\in\R$, $T>0$, $h\in W^{1,\infty}(\R^2)$, and $\Psi,\bar v^\circ\in \Ld^\infty(\R^2)^2$. There exists $2<q_0\lesssim1$ (depending only on an upper bound on $\|h\|_{\Ld^\infty}$) such that the following holds: If $v\in\Ld^\infty_\loc([0,T);\bar v^\circ+\Ld^2(\R^2)^2)$ is a weak solution of~\eqref{eq:limeqn1} on $[0,T)\times\R^2$ with $\omega:=\curl v\in\Ld^\infty_\loc([0,T);\Pc\cap\Ld^{q_0}(\R^2))$, then $v$ actually satisfies~\eqref{eq:limeqn1} in the distributional sense on $[0,T)\times\R^2$ for some pressure $P\in\Ld^\infty_\loc([0,T);\Ld^{q_0}(\R^2))$.
\end{lem}
\begin{proof}
In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C$ depending only on an upper bound on $\|(h,\Psi,\bar v^\circ)\|_{\Ld^\infty}$.
Let $2<p_0,q_0\lesssim1$ and $r_0=p_0$ be as in Lemma~\ref{lem:globellreg}(i) (with $b$ replaced by $a$ or $a^{-1}$), and note that $q_0$ can be chosen large enough so that $\frac1{p_0}+\frac1{q_0}\le\frac12$. Assume that $\omega\in \Ld^\infty_\loc([0,T);\Pc\cap\Ld^{q_0}(\R^2))$ holds for this choice of the exponent $q_0$. By Lemma~\ref{lem:globellreg}(i), the function
\[P:=(-\Div a\nabla)^{-1}\Div (a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot))\]
is well-defined in $\Ld^\infty_\loc([0,T);\Ld^{q_0}(\R^2))$ and satisfies for all $t$,
\begin{align*}
\|P^t\|_{\Ld^{q_0}}&\lesssim \|a\omega^t(-\alpha(\Psi+v^t)+\beta(\Psi+v^t)^\bot)\|_{\Ld^{2q_0/(2+q_0)}}\\
&\lesssim\|\Psi+\bar v^\circ\|_{\Ld^\infty}\|\omega^t\|_{\Ld^{2q_0/(2+q_0)}}+\|v^t-\bar v^\circ\|_{\Ld^{2}}\|\omega^t\|_{\Ld^{q_0}}\\
&\lesssim (1+\|v^t-\bar v^\circ\|_{\Ld^2})\|\omega^t\|_{\Ld^1\cap\Ld^{q_0}}.
\end{align*}
Now note that the following Helmholtz-Leray type identity follows from the proof of Lemma~\ref{lem:reconstr}: for any vector field $F\in C^\infty_c(\R^2)^2$,
\begin{align}\label{eq:Helmholtz}
F=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\curl F+\nabla(\Div a\nabla)^{-1}\Div (aF).
\end{align}
This implies in particular, for the choice $F=\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)$,
\begin{align}
&a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\Div(\omega(\alpha(\Psi+v)^\bot+\beta(\Psi+v)))\nonumber\\
=~&a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\curl(\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot))\nonumber\\
=~&\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)+\nabla P.\label{eq:linkpressure}
\end{align}
For $\phi\in C^\infty_c([0,T)\times\R^2)^2$, we know by Lemma~\ref{lem:globellreg}(i) that $(\Div a^{-1}\nabla)^{-1}\curl (a^{-1}\phi)\in C^\infty_c([0,T);\Ld^{q_0}(\R^2))$ and $\nabla(\Div a^{-1}\nabla)^{-1}\curl (a^{-1}\phi)\in C^\infty_c([0,T);\Ld^2\cap\Ld^{p_0}(\R^2))$. With the choice $\frac1{p_0}+\frac1{q_0}\le\frac12$, the $\Ld^{q_0}$-regularity of $\omega$ then allows to test the weak formulation of~\eqref{eq:limeqn1VF} (which defines weak solutions of~\eqref{eq:limeqn1}, cf. Definition~\ref{defin:sol}(b)) against $(\Div a^{-1}\nabla)^{-1}\curl (a^{-1}\phi)$, to the effect of
\begin{multline*}
\int\omega^\circ(\Div a^{-1}\nabla)^{-1}\curl (a^{-1}\phi(0,\cdot))+\iint\omega(\Div a^{-1}\nabla)^{-1}\curl (a^{-1}\partial_t\phi)\\
=\iint \omega(\alpha(\Psi+v)^\bot+\beta(\Psi+v))\cdot\nabla(\Div a^{-1}\nabla)^{-1}\curl (a^{-1}\phi).
\end{multline*}
As by~\eqref{eq:Helmholtz} the constraint $\Div(av)=0$ implies $v=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\omega$ and $v^\circ=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\omega^\circ$, and as by definition $\omega\in\Ld^\infty_\loc([0,T);\Ld^1\cap\Ld^{2}(\R^2))$, we deduce $v\in\Ld^\infty_\loc([0,T);\Ld^{p_0}(\R^2)^2)$ from Lemma~\ref{lem:globellreg}(i). We may then integrate by parts in the weak formulation above, which yields
\begin{align*}
\int \phi(0,\cdot)\cdot v^\circ+\iint \partial_t\phi\cdot v=-\iint a^{-1}\phi\cdot\nabla^\bot(\Div a^{-1}\nabla)^{-1}\Div(\omega(\alpha(\Psi+v)^\bot+\beta(\Psi+v))),
\end{align*}
and the result now directly follows from the decomposition~\eqref{eq:linkpressure}.
\end{proof}
\section{Local-in-time existence of smooth solutions}\label{chap:local}
In this section, we prove the local-in-time existence of smooth solutions of~\eqref{eq:limeqn1} and of~\eqref{eq:limeqn2}, as summarized in Theorem~\ref{th:mainloc}. Note that we choose to work here in the framework of Sobolev spaces, but the results could easily be adapted to Hölder spaces (compare indeed with Lemma~\ref{lem:conservHold}). We first study the non-degenerate case $\lambda>0$.
\begin{prop}[Local existence, non-degenerate case]\label{prop:locexist}
Let $\alpha,\beta\in\R$, $\lambda>0$. Let $s>1$, and let $h,\Psi,\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$.
Let $v^\circ\in\bar v^\circ+H^{s+1}(\R^2)^2$ with $\omega^\circ:=\curl v^\circ$, $\bar\omega^\circ:=\curl \bar v^\circ\in H^s(\R^2)$, and with either $\Div(av^\circ)=\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)$, $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^s(\R^2)$ in the case~\eqref{eq:limeqn2}. Then, there exists $T>0$ and a weak solution $v\in\Ld^\infty([0,T);\bar v^\circ+H^{s+1}(\R^2)^2)$ of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$. Moreover, $T$ depends only on an upper bound on $|\alpha|$, $|\beta|$, $\lambda$, $\lambda^{-1}$, $s$, $(s-1)^{-1}$, $\|(h,\Psi,\bar v^\circ)\|_{W^{s+1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{H^{s+1}}$, $\|(\omega^\circ,\bar\omega^\circ,\zeta^\circ,\bar\zeta^\circ)\|_{H^s}$.
\end{prop}
\begin{proof}
We focus on the compressible case~\eqref{eq:limeqn2}, the situation being similar and simpler in the incompressible case~\eqref{eq:limeqn1}. Let $s>1$. We set up the following iterative scheme: let $v_0:=v^\circ$, $\omega_0:=\omega^\circ=\curl v^\circ$ and $\zeta_0:=\zeta^\circ=\Div(av^\circ)$, and, for all $n\ge0$, given $v_n$, $\omega_n:=\curl v_n$, and $\zeta_n:=\Div(av_n)$, we let $\omega_{n+1}$ and $\zeta_{n+1}$ solve on $\R^+\times\R^2$ the linear PDEs
\begin{gather}\label{eq:iterativescheme-1}
\partial_t\omega_{n+1}=\Div(\omega_{n+1}(\alpha(\Psi+v_n)^\bot+\beta(\Psi+v_n))),\quad \omega_{n+1}|_{t=0}=\omega^\circ,\\
\partial_t\zeta_{n+1}=\lambda\triangle\zeta_{n+1}-\lambda\Div(\zeta_{n+1}\nabla h)+\Div(a\omega_n(-\alpha(\Psi+v_n)+\beta(\Psi+v_n)^\bot)),\quad \zeta_{n+1}|_{t=0}=\zeta^\circ,\label{eq:iterativescheme-2}
\end{gather}
and we let $v_{n+1}$ satisfy $\curl v_{n+1}=\omega_{n+1}$ and $\Div(av_{n+1})=\zeta_{n+1}$. For all $n\ge0$, let also
\[t_n:=\sup\Big\{t\ge0:\|(\omega_n^t,\zeta_n^t)\|_{H^s}+\|v_n^t-\bar v^\circ\|_{H^{s+1}}\le C_0\Big\},\]
for some $C_0\ge1$ to be suitably chosen (depending on the initial data), and let $T_0:=\inf_n t_n$. We show that this iterative scheme is well-defined, that $T_0>0$, and that it converges to a solution of equation~\eqref{eq:limeqn2} on $[0,T_0)\times\R^2$.
We split the proof into two steps. In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ that depends only on an upper bound on $|\alpha|$, $|\beta|$, $\lambda$, $\lambda^{-1}$, $s$, $(s-1)^{-1}$, $\|(h,\Psi,\bar v^\circ)\|_{W^{s+1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{H^{s+1}}$, $\|(\zeta^\circ,\bar\zeta^\circ)\|_{H^s}$, and $\|(\omega^\circ,\bar\omega^\circ)\|_{H^s}$.
\medskip
{\it Step~1: the iterative scheme is well-defined.}
In this step, we show that for all $n\ge0$ the system~\eqref{eq:iterativescheme-1}--\eqref{eq:iterativescheme-2} admits a unique solution $(\omega_{n+1},\zeta_{n+1},v_{n+1})$ with $\omega_{n+1}\in\Ld^\infty_\loc(\R^+; H^s(\R^2))$, $\zeta_{n+1}\in\Ld^\infty_\loc(\R^+;H^s(\R^2))$, and $v_{n+1}\in \Ld^\infty_\loc(\R^+;\bar v^\circ+H^{s+1}(\R^2)^2)$, and that moreover for a suitable choice of $1\le C_0\lesssim1$ we have $T_0\ge C_0^{-4}>0$.
We argue by induction. Let $n\ge0$ be fixed, and assume that $(\omega_{n},\zeta_{n},v_n)$ is well-defined with $\omega_n\in\Ld^\infty_\loc(\R^+; H^s(\R^2))$, $\zeta_n\in\Ld^\infty_\loc(\R^+;H^s(\R^2))$, and $v_n\in \Ld^\infty_\loc(\R^+;\bar v^\circ+H^{s+1}(\R^2)^2)$. (For $n=0$, this is indeed trivial by assumption.)
We first study the equation for $\omega_{n+1}$. By the Sobolev embedding with $s>1$, $v_n$ is Lipschitz-continuous, and by assumption $\Psi$ is also Lipschitz-continuous, hence the transport equation~\eqref{eq:iterativescheme-1} admits a unique continuous solution $\omega_{n+1}$, which automatically belongs to $\Ld^\infty_\loc(\R^+;\omega^\circ+\dot H^{-1}\cap H^s(\R^2))$ by Lemma~\ref{lem:katoponce}. More precisely, for all $t\ge0$, Lemma~\ref{lem:katoponce} together with the Sobolev embedding for $s>1$ yields
\begin{align*}
\partial_t\|\omega_{n+1}^t\|_{H^s}&\le C(1+\|\nabla v_n^t\|_{\Ld^\infty})\|\omega_{n+1}^t\|_{H^s}+C \|\omega_{n+1}^t\|_{\Ld^\infty}\|\nabla(v_n^t-\bar v^\circ)\|_{H^s}\\
&\le C(1+\|v_n^t-\bar v^\circ\|_{H^{s+1}})\|\omega_{n+1}^t\|_{H^s}.
\end{align*}
Hence, for all $t\in[0,t_n]$, we obtain $\partial_t\|\omega_{n+1}^t\|_{H^s}\le CC_0\|\omega_{n+1}^t\|_{H^s}$, which proves
\[\|\omega_{n+1}^t\|_{H^s}\le e^{CC_0t}\|\omega^\circ\|_{H^s}\le Ce^{CC_0t}.\]
Noting that
\[\|\omega^\circ-\bar\omega^\circ\|_{\dot H^{-1}}\le \|v^\circ-\bar v^\circ\|_{\Ld^2}\le C,\]
Lemma~\ref{lem:katoponce} together with the Sobolev embedding for $s>1$ also gives for all $t\ge0$,
\begin{align*}
\|\omega_{n+1}^t-\bar \omega^\circ\|_{\dot H^{-1}}\le C+\|\omega_{n+1}^t-\omega^\circ\|_{\dot H^{-1}}&\le C+Ct\|\omega_{n+1}\|_{\Ld^\infty_t\Ld^2}(1+\|v_n\|_{\Ld^\infty_t\Ld^\infty})\\
&\le C+Ct\|\omega_{n+1}\|_{\Ld^\infty_tH^s}(1+\|v_n-\bar v^\circ\|_{\Ld^\infty_tH^s}),
\end{align*}
and hence, for all $t\in[0,t_n]$,
\[\|\omega_{n+1}^t-\bar\omega^\circ\|_{\dot H^{-1}}\le C(1+tC_0)e^{CC_0t}.\]
We now turn to $\zeta_{n+1}$. Equation~\eqref{eq:iterativescheme-2} (with $\lambda>0$) is a transport-diffusion equation and admits a unique solution $\zeta_{n+1}$, which belongs to $\Ld^\infty_\loc(\R^+;(\zeta^\circ+\dot H^{-1}(\R^2))\cap H^s(\R^2))$ by Lemma~\ref{lem:parreg+tsp}(i)--(ii). More precisely, for all $t\ge0$, Lemma~\ref{lem:parreg+tsp}(i) yields for $s>1$
\begin{align}\label{eq:boundzetan1alm}
\|\zeta_{n+1}^t\|_{H^s}&\le Ce^{Ct}\big(\|\zeta^\circ\|_{H^s}+\|a\omega_n(\alpha(\Psi+v_n)^\bot+\beta(\Psi+v_n))\|_{\Ld^2_tH^s}\big)\nonumber\\
&\le Ce^{Ct}\big(1+t^{1/2}\|\omega_n\|_{\Ld^\infty_tH^s}(1+\|v_n-\bar v^\circ\|_{\Ld^\infty_tH^s})\big),
\end{align}
where we have used Lemma~\ref{lem:katoponce-1} together with the Sobolev embedding to estimate the terms. Noting that
\[\|\zeta^\circ-\bar\zeta^\circ\|_{\dot H^{-1}}\le\|av^\circ-a\bar v^\circ\|_{\Ld^2}\le C,\]
Lemma~\ref{lem:parreg+tsp}(ii) together with the Sobolev embedding for $s>1$ also gives for all $t\ge0$,
\begin{align*}
\|\zeta^t_{n+1}-\bar\zeta^\circ\|_{\dot H^{-1}}\le C+\|\zeta^t_{n+1}-\zeta^\circ\|_{\dot H^{-1}}&\le C+Ce^{Ct}(\|\zeta^\circ\|_{\Ld^2}+\|a\omega_n(\alpha(\Psi+v_n)^\bot+\beta(\Psi+v_n))\|_{\Ld^2_t\Ld^2})\\
&\le Ce^{Ct}\big(1+t^{1/2}\|\omega_n\|_{\Ld^\infty_tH^s}(1+\|v_n-\bar v^\circ\|_{\Ld^\infty_tH^s})\big).
\end{align*}
Combining this with~\eqref{eq:boundzetan1alm} yields, for all $t\in[0,t_n]$,
\[\|\zeta_{n+1}^t\|_{H^s}+\|\zeta_{n+1}^t-\bar\zeta^\circ\|_{\dot H^{-1}}\le Ce^{Ct}\big(1+t^{1/2}C_0(1+C_0)\big)\le C(1+t^{1/2}C_0^2)e^{Ct}.\]
We finally turn to $v_{n+1}$. By the above properties of $\omega_{n+1}$ and $\zeta_{n+1}$, Lemma~\ref{lem:reconstr} ensures that $v_{n+1}$ is uniquely defined in $\Ld^\infty_\loc(\R^+;\bar v^\circ+H^{s+1}(\R^2)^2)$ with $\curl(v_{n+1}^t-\bar v^\circ)=\omega_{n+1}^t-\bar\omega^\circ$ and $\Div(a(v_{n+1}^t-\bar v^\circ))=\zeta_{n+1}^t-\bar\zeta^\circ$ for all $t\ge0$. More precisely, Lemma~\ref{lem:reconstr} gives for all $t\in[0,t_n]$
\begin{align*}
\|v_{n+1}^t-\bar v^\circ\|_{H^{s+1}}&\le C\|\omega_{n+1}^t-\bar \omega^\circ\|_{\dot H^{-1}\cap H^s}+C\|\zeta_{n+1}^t-\bar \zeta^\circ\|_{\dot H^{-1}\cap H^s}\\
&\le C+C\|\omega_{n+1}^t-\bar \omega^\circ\|_{\dot H^{-1}}+C\|\omega_{n+1}^t\|_{H^s}+C\|\zeta_{n+1}^t-\bar\zeta^\circ\|_{\dot H^{-1}}+C\|\zeta_{n+1}^t\|_{H^s}\\
&\le C(1+tC_0+t^{1/2}C_0^2)e^{CC_0t}.
\end{align*}
Hence, we have proven that $(\omega_{n+1},\zeta_{n+1},v_{n+1})$ is well-defined in the correct space, and moreover, combining all the previous estimates, we find for all $t\in[0,t_n]$
\[\|(\omega_{n+1}^t,\zeta_{n+1}^t)\|_{H^s}+\|v_{n+1}^t-\bar v^\circ\|_{H^{s+1}}\le C(1+tC_0+t^{1/2}C_0^2)e^{CC_0t}.\]
Therefore, choosing $C_0=1+3Ce^C\lesssim1$, we obtain for all $t\le t_{n}\wedge C_0^{-4}$
\[\|(\omega_{n+1}^t,\zeta_{n+1}^t)\|_{H^s}+\|v_{n+1}^t-\bar v^\circ\|_{H^{s+1}}\le C_0,\]
and thus $t_{n+1}\ge t_n\wedge C_0^{-4}$. The result follows by induction.
\medskip
{\it Step~2: passing to the limit in the scheme.}
In this step, we show that up to an extraction the iterative scheme $(\omega_n,\zeta_n,v_n)$ converges to a weak solution of equation~\eqref{eq:limeqn2} on $[0,T_0)\times\R^2$.
By Step~1, the sequences $(\omega_n)_n$ and $(\zeta_n)_n$ are bounded in $\Ld^\infty([0,T_0];H^s(\R^2))$, and the sequence $(v_n)_n$ is bounded in $\Ld^\infty([0,T_0];\bar v^\circ+H^{s+1}(\R^2)^2)$. Up to an extraction, we thus have $\omega_n\cvf{*}\omega$, $\zeta_n\cvf{*}\zeta$ in $\Ld^\infty([0,T_0];H^s(\R^2))$, and $v_n\cvf{*}v$ in $\Ld^\infty([0,T_0];\bar v^\circ+H^{s+1}(\R^2)^2)$. Comparing with equation~\eqref{eq:iterativescheme-1}, we deduce that $(\partial_t\omega_n)_n$ is bounded in $\Ld^{\infty}([0,T_0];H^{s-1}(\R^2))$.
Since by the Rellich theorem the space $H^{s}(U)$ is compactly embedded in $H^{s-1}(U)$ for any bounded domain $U\subset\R^2$, the Aubin-Simon lemma ensures that we have $\omega_n\to\omega$ strongly in $C^0([0,T_0];H^{s-1}_\loc(\R^2))$. This implies in particular $\omega_n v_n\to \omega v$ in the distributional sense, and hence we may pass to the limit in the weak formulation of equations~\eqref{eq:iterativescheme-1}--\eqref{eq:iterativescheme-2}, which yields $\curl v=\omega$, $\Div(av)=\zeta$, with $\omega$ and $\zeta$ satisfying, in the distributional sense on $[0,T_0)\times\R^2$,
\begin{gather*}
\partial_t\omega=\Div(\omega(\alpha(\Psi+v)^\bot+\beta(\Psi+v))),\quad \omega|_{t=0}=\omega^\circ,\\
\partial_t\zeta=\lambda\triangle\zeta-\lambda\Div(\zeta\nabla h)+\Div(a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)),\quad \zeta|_{t=0}=\zeta^\circ,
\end{gather*}
that is, the vorticity formulation~\eqref{eq:limeqn1VF}--\eqref{eq:limeqn2VF}. Let us quickly deduce that $v$ is a weak solution of~\eqref{eq:limeqn2}.
From the above equations, we deduce $\partial_t\omega\in \Ld^\infty([0,T_0];\dot H^{-1}\cap H^{s-1}(\R^2))$ and $\partial_t\zeta\in \Ld^\infty([0,T_0];\dot H^{-1}\cap H^{s-2}(\R^2))$. Lemma~\ref{lem:reconstr} then implies $\partial_t v\in\Ld^\infty([0,T_0];H^{s-1}(\R^2)^2)$. We may then deduce that the quantity $V:=\partial_tv-\lambda\nabla(a^{-1}\zeta)+\alpha(\Psi+v)\omega-\beta(\Psi+v)^\bot\omega$ belongs to $\Ld^\infty([0,T_0];\Ld^2(\R^2)^2)$ and satisfies $\curl V=\Div(aV)=0$ in the distributional sense. Using the Hodge decomposition in $\Ld^2(\R^2)^2$ as in Step~1 of the proof of Lemma~\ref{lem:reconstr}, we conclude $V=0$, hence $v\in\Ld^\infty([0,T_0];\bar v^\circ+H^{s+1}(\R^2)^2)$ is indeed a weak solution of~\eqref{eq:limeqn2} on $[0,T_0)\times\R^2$.
\end{proof}
We now turn to the local-in-time existence of smooth solutions of~\eqref{eq:limeqn2} in the degenerate case $\lambda=0$; note that the proof only works in the parabolic regime $\beta=0$.
\begin{prop}[Local existence, degenerate case]\label{prop:locexistdeg}
Let $\alpha\in\R$, $\beta=\lambda=0$. Let $s>2$, and let $h\in W^{s,\infty}(\R^2)$, $\Psi,\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$. Let $v^\circ\in \bar v^\circ+H^{s}(\R^2)^2$ with $\omega^\circ:=\curl v^\circ$, $\bar \omega^\circ:=\curl \bar v^\circ\in H^s(\R^2)$ and $\zeta^\circ:=\Div(av^\circ)$, $\bar \zeta^\circ:=\Div(a\bar v^\circ)\in H^{s-1}(\R^2)$. Then, there exists $T>0$ and a weak solution $v\in \Ld^\infty([0,T);\bar v^\circ+ H^s(\R^2)^2)$ of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$, with initial data $v^\circ$. Moreover, $T$ depends only on an upper bound on $|\alpha|$, $s$, $(s-2)^{-1}$, $\|h\|_{W^{s,\infty}}$, $\|(\Psi,\bar v^\circ)\|_{W^{s+1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{H^s}$, $\|(\omega^\circ,\bar \omega^\circ)\|_{H^s}$, and $\|(\zeta^\circ,\bar \zeta^\circ)\|_{H^{s-1}}$.
\end{prop}
\begin{proof}
We consider the same iterative scheme $(\omega_n,\zeta_n,v_n)$ as in the proof of Proposition~\ref{prop:locexist}, but with $\lambda=\beta=0$. Let $s>2$. For all $n\ge0$, let
\[t_n:=\sup\Big\{t\ge0:\|\omega_n^t\|_{H^s}+\|\zeta_n^t\|_{H^{s-1}}+\|v_n^t-\bar v^\circ\|_{H^{s}}\le C_0\Big\},\]
for some $C_0\ge1$ to be suitably chosen (depending on initial data), and let $T_0:=\inf_n t_n$. In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ that depends only on an upper bound on $|\alpha|$, $s$, $(s-2)^{-1}$, $\|h\|_{W^{s,\infty}}$, $\|(\Psi,\bar v^\circ)\|_{W^{s+1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{H^s}$, $\|(\zeta^\circ,\bar\zeta^\circ)\|_{H^{s-1}}$, and $\|(\omega^\circ,\bar\omega^\circ)\|_{H^s}$.
Just as in the proof of Proposition~\ref{prop:locexist}, we first need to show that this iterative scheme is well-defined and that $T_0>0$. We proceed by induction: let $n\ge0$ be fixed, and assume that $(\omega_n,\zeta_n,v_n)$ is well-defined with $\omega_n\in\Ld^\infty_\loc(\R^+; H^s(\R^2))$, $\zeta_n\in \Ld^\infty_\loc(\R^+;H^{s-1}(\R^2))$, and $v_n\in\Ld^\infty_\loc(\R^+;\bar v^\circ+H^{s}(\R^2)^2)$. (For $n=0$ this is indeed trivial by assumption.)
We first study $\zeta_{n+1}$. As $\lambda=0$, equation~\eqref{eq:iterativescheme-2} takes the form $\partial_t\zeta_{n+1}=-\alpha\Div(a\omega_n(\Psi+v_n))$. Integrating this equation in time then yields
\begin{align*}
\|\zeta^t_{n+1}\|_{H^{s-1}}&\le\|\zeta^\circ\|_{H^{s-1}}+|\alpha|\int_0^t\|\omega^u_n(\Psi+v^u_n)\|_{H^{s}}du\lesssim 1+t(1+\|v_n-\bar v^\circ\|_{\Ld^\infty_tH^{s}})\|\omega_n\|_{\Ld^\infty_{t}H^{s}},
\end{align*}
where we have used Lemma~\ref{lem:katoponce-1} together with the Sobolev embedding to estimate the last term. Similarly, noting that $\|\zeta^\circ-\bar\zeta^\circ\|_{\dot H^{-1}}\le \|av^\circ-a\bar v^\circ\|_{\Ld^2}\le C$, we find for $s>1$
\begin{align*}
\|\zeta^t_{n+1}-\bar\zeta^\circ\|_{\dot H^{-1}}\le C+\|\zeta^t_{n+1}-\zeta^\circ\|_{\dot H^{-1}}&\le
C+|\alpha|\int_0^t\|a\,\omega^u_n(\Psi+v^u_n)\|_{\Ld^2}du\\
&\lesssim 1+t(1+\|v_n-\bar v^\circ\|_{\Ld^\infty_tH^{s}})\|\omega_n\|_{\Ld^\infty_{t}H^{s}}.
\end{align*}
Hence we obtain for all $t\in[0,t_n]$
\begin{align*}
\|\zeta^t_{n+1}\|_{H^{s-1}}+\|\zeta^t_{n+1}-\bar\zeta^\circ\|_{\dot H^{-1}}&\le C+ Ct(1+C_0)C_0\le C(1+tC_0^2).
\end{align*}
We now turn to the study of $\omega_{n+1}$. As $\beta=0$, equation~\eqref{eq:iterativescheme-1} takes the form $\partial_t\omega_{n+1}=\alpha\Div(\omega_{n+1}(\Psi+v_n)^\bot)$. For all $t\ge0$, Lemma~\ref{lem:katoponce} together with the Sobolev embedding then yields (here the choice $\beta=0$ is crucial)
\begin{align*}
\partial_t\|\omega_{n+1}^t\|_{H^s}&\lesssim (1+\|\nabla v_n^t\|_{\Ld^\infty})\|\omega_{n+1}^t\|_{H^s}+\|\omega_{n+1}^t\|_{\Ld^\infty}\|\curl(v_n^t-\bar v^\circ)\|_{H^s}\nonumber\\
&\lesssim (1+\|\omega_n^t\|_{H^s}+\|\nabla (v_n^t-\bar v^\circ)\|_{\Ld^\infty})\|\omega_{n+1}^t\|_{H^s}.
\end{align*}
By the Sobolev embedding for $s>2$, we find, for all $t\in[0,t_n]$,
\begin{align*}
\partial_t\|\omega_{n+1}^t\|_{H^s}&\le C(1+\|\omega_n^t\|_{H^s}+\|v_n^t-\bar v^\circ\|_{H^s})\|\omega_{n+1}^t\|_{H^s}\le C(1+2C_0)\|\omega_{n+1}^t\|_{H^s},
\end{align*}
and thus
\[\|\omega_{n+1}^t\|_{H^s}\le \|\omega^\circ\|_{H^s}e^{C(1+2C_0)t}\le Ce^{CC_0t}.\]
Moreover, noting that $\|\omega^\circ-\bar\omega^\circ\|_{\dot H^{-1}}\le\|v^\circ-\bar v^\circ\|_{\Ld^2}\le C$, and applying Lemma~\ref{lem:katoponce} together with the Sobolev embedding, we obtain
\begin{align*}
\|\omega_{n+1}^t-\bar\omega^\circ\|_{\dot H^{-1}}&\le C+\|\omega_{n+1}^t-\omega^\circ\|_{\dot H^{-1}}\\
&\le C+Ct(1+\|v_n\|_{\Ld^\infty_t\Ld^\infty})\|\omega_{n+1}\|_{\Ld^\infty_t\Ld^2}\\
&\le C+Ct(1+\|v_n-\bar v^\circ\|_{\Ld^\infty_tH^s})\|\omega_{n+1}\|_{\Ld^\infty_t\Ld^2},
\end{align*}
hence for all $t\in[0,t_n]$
\begin{align*}
\|\omega_{n+1}^t-\bar\omega^\circ\|_{\dot H^{-1}}&\le C+Ct(1+C_0)\|\omega_{n+1}\|_{\Ld^\infty_t\Ld^2}\le C+CC_0te^{CC_0t}.
\end{align*}
We finally turn to $v_{n+1}$. By the above properties of $\omega_{n+1}$ and $\zeta_{n+1}$, Lemma~\ref{lem:reconstr} ensures that $v_{n+1}$ is uniquely defined in $\Ld^\infty_\loc(\R^+;\bar v^\circ+H^s(\R^2)^2)$, and for all $t\in[0,t_n]$
\begin{align*}
\|v_{n+1}^t-\bar v^\circ\|_{H^s}&\le C\|\omega_{n+1}^t-\bar\omega^\circ\|_{\dot H^{-1}\cap H^{s-1}}+C\|\zeta_{n+1}^t-\bar \zeta^\circ\|_{\dot H^{-1}\cap H^{s-1}}\\
&\le C+C\|\omega_{n+1}^t-\bar \omega^\circ\|_{\dot H^{-1}}+C\|\omega_{n+1}^t\|_{H^s}+C\|\zeta_{n+1}^t-\bar\zeta^\circ\|_{\dot H^{-1}}+C\|\zeta_{n+1}^t\|_{H^{s-1}}\\
&\le C(1+tC_0^2)e^{CC_0t}.
\end{align*}
Hence, we have proven that $(\omega_{n+1},\zeta_{n+1},v_{n+1})$ is well-defined in the correct space, and moreover, combining all the previous estimates, we find for all $t\in[0,t_n]$
\[\|\omega_{n+1}^t\|_{H^s}+\|\zeta_{n+1}^t\|_{H^{s-1}}+\|v_{n+1}^t-\bar v^\circ\|_{H^s}\le C(1+tC_0^2)e^{CC_0t}.\]
Therefore, choosing $C_0=1+2Ce^C\lesssim1$, we obtain for all $t\le t_n\wedge C_0^{-2}$
\[\|\omega_{n+1}^t\|_{H^s}+\|\zeta_{n+1}^t\|_{H^{s-1}}+\|v_{n+1}^t-\bar v^\circ\|_{H^s}\le C_0,\]
and thus $t_{n+1}\ge t_n\wedge C_0^{-2}$. The conclusion now follows just as in the proof of Proposition~\ref{prop:locexist}.
\end{proof}
\section{Global existence}\label{chap:global}
As local existence is proven in the framework of Sobolev spaces, the strategy for global existence consists in looking for a priori estimates on the Sobolev norms. Since we are also interested in the Hölder regularity of solutions, we study a priori estimates on Hölder-Zygmund norms as well. As we will see, the main key for this is to prove a priori estimates for the vorticity $\omega$ in $\Ld^p(\R^2)$ for some $p>1$.
\subsection{A priori estimates}\label{chap:apriori}
We begin with the following elementary energy estimates. Note that in the degenerate case $\lambda=0$, the a priori estimate for $\zeta$ in $\Ld^2_\loc(\R^+;\Ld^2(\R^2))$ disappears, which is precisely why we are unable to prove any global result in that case.
Although we stick in the sequel to the framework of item~(iii), a priori estimates in slightly more general spaces are obtained in item~(ii) for the compressible model~\eqref{eq:limeqn2}.
\begin{lem}[Energy estimates]\label{lem:aprioriest}
Let $\lambda\ge0$, $\alpha\ge0$, $\beta\in\R$, $T>0$ and $\Psi\in W^{1,\infty}(\R^2)$. Let $v^\circ\in\Ld^2_\loc(\R^2)^2$ be such that $\omega^\circ:=\curl v^\circ\in \Pc\cap\Ld^2_\loc(\R^2)$, and such that either $\Div(av^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)\in\Ld^2_\loc(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v\in\Ld^2_\loc([0,T)\times\R^2)^2$ be a weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$. Set $\zeta:=0$ in the case~\eqref{eq:limeqn1}. Then the following hold:
\begin{enumerate}[~(i)]
\item For all $t\in[0,T)$, we have $\omega^t\in\Pc(\R^2)$.
\item \emph{Localized energy estimate for~\eqref{eq:limeqn2}:} if $v\in\Ld^2_\loc([0,T);\Ld^2_\uloc(\R^2)^2)$ is such that $\omega\in \Ld^\infty_\loc([0,T);\Ld^\infty(\R^2))$ and $\zeta\in\Ld^2_\loc([0,T);\Ld^2_\uloc(\R^2))$, then we have for all $t\in[0,T)$
\begin{multline*}
\|v^t\|_{\Ld^2_\uloc}^2+\alpha\||v|^2\omega\|_{\Ld^1_t\Ld^1_\uloc}+\lambda\|\zeta\|_{\Ld^2_t\Ld^2_\uloc}^2\le \begin{cases}Ce^{C(1+\lambda^{-1})t}\|v^\circ\|_{\Ld^2_\uloc}^2,&\text{if $\alpha=0$, $\lambda>0$};\\
C\alpha^{-1}\lambda^{-1}(e^{\lambda t}-1)+Ce^{\lambda t}\|v^\circ\|_{\Ld^2_\uloc}^2,&\text{if $\alpha>0$, $\lambda>0$;}\\
C\alpha^{-1}t+C\|v^\circ\|_{\Ld^2_\uloc}^2,&\text{if $\alpha>0$, $\lambda=0$;}
\end{cases}
\end{multline*}
where the constants $C$'s depend only on an upper bound on $\alpha$, $|\beta|$, $\lambda$, $\|h\|_{W^{1,\infty}}$, $\|\Psi\|_{\Ld^\infty}$, and additionally on $\|\nabla\Psi\|_{\Ld^{\infty}}$ in the case $\alpha=0$.
\item \emph{Relative energy estimate for~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2}:} if there is some $\bar v^\circ\in W^{1,\infty}(\R^2)^2$ such that $v^\circ\in\bar v^\circ+\Ld^2(\R^2)^2$, $\bar\omega^\circ:=\curl \bar v^\circ\in \Ld^2(\R^2)$, and such that either $\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in \Ld^2(\R^2)$ in the case~\eqref{eq:limeqn2}, and if $v\in\Ld^\infty_\loc([0,T);\bar v^\circ+\Ld^2(\R^2))$, $\omega\in\Ld^\infty_\loc([0,T);\Ld^\infty(\R^2))$, $\zeta\in\Ld^2_\loc([0,T);\Ld^2(\R^2))$, then we have for all $t\in[0,T)$
\begin{multline*}
\int_{\R^2}a|v^t-\bar v^\circ|^2+\alpha\int_0^tdu\int_{\R^2} a|v^u-\bar v^\circ|^2\omega^u+\lambda\int_0^tdu\int_{\R^2} a^{-1}|\zeta^u|^2\\
\le \begin{cases}Ct(1+\alpha^{-1})+\int_{\R^2} a|v^\circ-\bar v^\circ|^2,&\text{in both cases~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2}, with $\alpha>0$;}\\
e^{Ct}\big(1+\int_{\R^2} a|v^\circ-\bar v^\circ|^2\big),&\text{in the case~\eqref{eq:limeqn1}, with $\alpha=0$}\\
C(e^{C(1+\lambda^{-1})t}-1)+e^{C(1+\lambda^{-1})t}\int_{\R^2} a|v^\circ-\bar v^\circ|^2,&\text{in the case~\eqref{eq:limeqn2}, with $\alpha=0$, $\lambda>0$};\end{cases}
\end{multline*}
where the constants $C$'s depend only on an upper bound on $\alpha$, $|\beta|$, $\lambda$, $\|h\|_{W^{1,\infty}}$, $\|(\Psi, \bar v^\circ)\|_{\Ld^\infty}$, $\|\bar\zeta^\circ\|_{\Ld^2}$, and additionally on $\|\bar\omega^\circ\|_{\Ld^2}$ and $\|(\nabla\Psi,\nabla\bar v^\circ)\|_{\Ld^{\infty}}$ in the case $\alpha=0$.
\end{enumerate}
\end{lem}
\begin{proof}
Item~(i) is a standard consequence of the fact that $\omega$ satisfies a transport equation~\eqref{eq:limeqn1VF}.
It thus remains to check items (ii) and (iii). We split the proof into three steps.
\medskip
{\it Step~1: proof of~(ii).}
Let $v$ be a weak solution of the compressible equation~\eqref{eq:limeqn2} as in the statement, and let also $C>0$ denote any constant as in the statement. We prove more precisely, for all $t\in[0,T)$ and $x_0\in\R^2$,
\begin{align}\label{eq:it2apriori}
&\int ae^{-|x-x_0|}|v^t|^2+\alpha\int_0^tdu\int ae^{-|x-x_0|}|v^u|^2\omega^u+\lambda\int_0^tdu\int a^{-1}e^{-|x-x_0|}|\zeta^u|^2\\&\hspace{5cm}\le \begin{cases}e^{C(1+\lambda^{-1})t}\int ae^{-|x-x_0|}|v^\circ|^2,&\text{if $\alpha=0$, $\lambda>0$};\\
C\alpha^{-1}\lambda^{-1}(e^{\lambda t}-1)+e^{\lambda t}\int ae^{-|x-x_0|}|v^\circ|^2,&\text{if $\alpha>0$, $\lambda>0$;}\\
C\alpha^{-1}t+\int ae^{-|x-x_0|}|v^\circ|^2,&\text{if $\alpha>0$, $\lambda=0$.}
\end{cases}\nonumber
\end{align}
Item~(ii) directly follows from this, noting that
\[\|f\|_{\Ld^p_\uloc}^p\simeq\sup_{x_0\in\R^2}\int e^{-|x-x_0|}|f(x)|^pdx\]
holds for all $1\le p<\infty$. So it suffices to prove~\eqref{eq:it2apriori}.
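The norm equivalence above is itself elementary; a sketch, covering $\R^2$ by the unit balls $\{B(k,1)\}_{k\in\Z^2}$ and using $e^{-|x-x_0|}\le e^{1-|k-x_0|}$ on $B(k,1)$:
\[e^{-1}\int_{B(x_0,1)}|f|^p\le\int e^{-|x-x_0|}|f(x)|^pdx\le\sum_{k\in\Z^2}e^{1-|k-x_0|}\int_{B(k,1)}|f|^p\lesssim\sup_{y\in\R^2}\int_{B(y,1)}|f|^p,\]
and the equivalence follows by taking the supremum over $x_0\in\R^2$.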
Let $x_0\in\R^2$ be fixed, and denote by $\chi(x):=e^{-|x-x_0|}$ the exponential cut-off function centered at $x_0$.
From equation~\eqref{eq:limeqn2}, we compute the following time-derivative
\begin{align*}
\partial_t\int a\chi|v^t|^2&=2\int a\chi (\lambda\nabla(a^{-1}\zeta^t)-\alpha(\Psi+v^t)\omega^t+\beta(\Psi+v^t)^\bot\omega^t)\cdot v^t,
\end{align*}
and hence, by integration by parts with $|\nabla\chi|\le\chi$,
\begin{align}
\partial_t\int a\chi |v^t|^2&=-2\lambda\int a^{-1}\chi |\zeta^t|^2-2\lambda\int \nabla\chi \cdot v^t\zeta^t-2\alpha\int a\chi |v^t|^2\omega^t+2\int a\chi (-\alpha\Psi+\beta\Psi^\bot)\cdot v^t\omega^t\nonumber\\
&\le-2\lambda\int a^{-1}\chi |\zeta^t|^2+2\lambda\int \chi |\zeta^t||v^t|-2\alpha\int a\chi |v^t|^2\omega^t+2\int a\chi (-\alpha\Psi+\beta\Psi^\bot)\cdot v^t\omega^t.\label{eq:identityderenergy}
\end{align}
First consider the case $\alpha>0$. We may then bound the terms as follows, using the inequality $2xy\le x^2+y^2$,
\begin{align*}
\partial_t\int a\chi |v^t|^2&\le-2\lambda\int a^{-1}\chi |\zeta^t|^2+2\lambda \int\chi |\zeta^t| |v^t|-2\alpha\int a\chi |v^t|^2\omega^t+2C\int a\chi |v^t|\omega^t\\
&\le-\lambda\int a^{-1}\chi|\zeta^t|^2+\lambda\int a\chi|v^t|^2-\alpha\int a\chi|v^t|^2\omega^t+C\alpha^{-1}\underbrace{\int a\chi\omega^t}_{\le C}.
\end{align*}
As $\omega^t$ is nonnegative by item~(i), the first and third right-hand side terms are nonpositive, and the Grönwall inequality yields $\int a\chi|v^t|^2\le C\alpha^{-1}\lambda^{-1}(e^{\lambda t}-1)+e^{\lambda t}\int a\chi|v^\circ|^2$ (or $\int a\chi|v^t|^2\le C\alpha^{-1}t+\int a\chi|v^\circ|^2$ if $\lambda=0$). The above estimate may then be rewritten as follows
\begin{align*}
\alpha\int a\chi|v^t|^2\omega^t+\lambda\int a^{-1}\chi|\zeta^t|^2&\le C\alpha^{-1}+\lambda\int a\chi|v^t|^2-\partial_t\int a\chi|v^t|^2\\
&\le C\alpha^{-1}e^{\lambda t}+\lambda e^{\lambda t}\int a\chi|v^\circ|^2-\partial_t\int a\chi|v^t|^2.
\end{align*}
Integrating in time yields
\begin{align*}
\alpha\int_0^Tdt\int a\chi|v^t|^2\omega^t+\lambda\int_0^Tdt\int a^{-1}\chi|\zeta^t|^2\le C\alpha^{-1}\lambda^{-1}(e^{\lambda T}-1)+e^{\lambda T}\int a\chi|v^\circ|^2-\int a\chi|v^T|^2,
\end{align*}
so that~\eqref{eq:it2apriori} is proven for $\alpha>0$.
We now turn to the case $\alpha=0$, $\lambda>0$.
In that case, using the following Delort type identity, which holds here in $\Ld^\infty_\loc([0,T);W^{-1,1}_\loc(\R^2)^2)$,
\begin{gather*}
\omega v=a^{-1}\zeta v^\bot-\frac12|v|^2\nabla^\bot h-a^{-1}(\Div (aS_{v}))^\bot,\qquad S_{v}:=v\otimes v-\frac12 |v|^2\Id,
\end{gather*}
the estimate~\eqref{eq:identityderenergy} becomes, by integration by parts with $|\nabla\chi|\le\chi$,
\begin{multline*}
\partial_t\int a\chi|v^t|^2\le-2\lambda\int a^{-1}\chi|\zeta^t|^2+2\lambda\int\chi|\zeta^t||v^t|-2\alpha\int a\chi|v^t|^2\omega^t+2\int\chi(-\alpha\Psi+\beta\Psi^\bot)\cdot (v^t)^\bot\zeta^t\\
-\int a\chi(-\alpha\Psi+\beta\Psi^\bot)\cdot \nabla^\bot h|v^t|^2+2\int a\chi(\alpha\nabla\Psi^\bot+\beta\nabla\Psi):S_{v^t}+2\int a\chi|\alpha\Psi^\bot+\beta\Psi||S_{v^t}|,
\end{multline*}
and hence, noting that $|S_{v^t}|\le C|v^t|^2$, and using the inequality $2xy\le x^2+y^2$,
\begin{align*}
\partial_t\int a\chi|v^t|^2&\le-2\lambda\int a^{-1}\chi|\zeta^t|^2+2C\int \chi|\zeta^t||v^t|-2\alpha\int a\chi|v^t|^2\omega^t+C\int a\chi|v^t|^2\\
&\le-\lambda\int a^{-1}\chi|\zeta^t|^2+C(1+\lambda^{-1})\int a\chi|v^t|^2.
\end{align*}
The Grönwall inequality yields $\int a\chi|v^t|^2\le e^{C(1+\lambda^{-1})t}\int a\chi|v^\circ|^2$. The above estimate may then be rewritten as follows:
\begin{align*}
\lambda\int a^{-1}\chi|\zeta^t|^2&\le C(1+\lambda^{-1})\int a\chi|v^t|^2-\partial_t\int a\chi|v^t|^2\\
&\le C(1+\lambda^{-1})e^{C(1+\lambda^{-1})t}\int a\chi|v^\circ|^2-\partial_t\int a\chi|v^t|^2.
\end{align*}
Integrating in time, the result~\eqref{eq:it2apriori} is proven for $\alpha=0$. (Note that this proof cannot be adapted to the incompressible case~\eqref{eq:limeqn1}, due to the lack of a sufficiently good control on the pressure $P$ in~\eqref{eq:limeqn1} in general.)
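Let us also include, for the reader's convenience, a sketch of the proof of the above Delort type identity for smooth vector fields (the general case follows by approximation). Using the pointwise identity $(v\cdot\nabla)v=\frac12\nabla|v|^2+\omega v^\bot$ together with the relation $\nabla a=a\nabla h$ (which the form of the identity implicitly encodes), we compute
\begin{align*}
\Div(aS_v)=a\,\Div S_v+S_v\nabla a=a\big(v\Div v+\omega v^\bot\big)+(v\cdot\nabla a)v-\frac12|v|^2\nabla a=\zeta v+a\omega v^\bot-\frac12|v|^2a\nabla h,
\end{align*}
and the identity follows by applying $a^{-1}(\cdot)^\bot$, noting that $(v^\bot)^\bot=-v$. The same computation applied to $v-\bar v^\circ$ yields the variant used in Step~2 below.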
\medskip
{\it Step~2: proof of~(iii) for~\eqref{eq:limeqn2}.}
We denote by $C$ any positive constant as in the statement~(iii).
From equation~\eqref{eq:limeqn2}, we compute the following time-derivative
\begin{align*}
\partial_t\int a|v^t-\bar v^\circ|^2&=2\int a(\lambda\nabla(a^{-1}\zeta^t)-\alpha(\Psi+v^t)\omega^t+\beta(\Psi+v^t)^\bot\omega^t)\cdot(v^t-\bar v^\circ),
\end{align*}
or equivalently, integrating by parts and suitably regrouping the terms,
\begin{align}\label{eq:decompapest}
\partial_t\int a|v^t-\bar v^\circ|^2&=-2\lambda\int a^{-1}|\zeta^t|^2+2\lambda\int a^{-1}\zeta^t\bar\zeta^\circ-2\alpha\int a|v^t-\bar v^\circ|^2\omega^t\\
&\qquad+2\int a(-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\cdot(v^t-\bar v^\circ)\omega^t.\nonumber
\end{align}
First consider the case $\alpha>0$. We may then bound the terms as follows, using the inequality $2xy\le x^2+y^2$,
\begin{align*}
\partial_t\int a|v^t-\bar v^\circ|^2&\le-2\lambda\int a^{-1}|\zeta^t|^2+2\lambda\int a^{-1}\zeta^t\bar\zeta^\circ-2\alpha\int a|v^t-\bar v^\circ|^2\omega^t+2C\int a|v^t-\bar v^\circ|\omega^t\\
&\le-\lambda\int a^{-1}|\zeta^t|^2+\lambda\int a^{-1}|\bar\zeta^\circ|^2-\alpha\int a|v^t-\bar v^\circ|^2\omega^t+C\alpha^{-1}.
\end{align*}
Applying the Grönwall inequality as in Step~1, item~(iii) easily follows from this in the case $\alpha>0$.
We now turn to the case $\alpha=0$, $\lambda>0$. In that case, we rather rewrite~\eqref{eq:decompapest} in the form
\begin{multline*}
\partial_t\int a|v^t-\bar v^\circ|^2=-2\lambda\int a^{-1}|\zeta^t|^2+2\lambda\int a^{-1}\zeta^t\bar\zeta^\circ-2\alpha\int a|v^t-\bar v^\circ|^2\omega^t\\
+2\int a(-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\cdot(v^t-\bar v^\circ)(\omega^t-\bar\omega^\circ)+2\int a(-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\cdot(v^t-\bar v^\circ)\bar\omega^\circ,
\end{multline*}
so that, using the following Delort type identity, which holds here in $\Ld^\infty_\loc([0,T);W^{-1,1}_\loc(\R^2)^2)$,
\begin{gather*}
(\omega-\bar\omega^\circ)(v-\bar v^\circ)=a^{-1}(\zeta-\bar\zeta^\circ)(v-\bar v^\circ)^\bot-\frac12|v-\bar v^\circ|^2\nabla^\bot h-a^{-1}(\Div (aS_{v-\bar v^\circ}))^\bot,
\end{gather*}
we find by integration by parts
\begin{multline*}
\partial_t\int a|v^t-\bar v^\circ|^2=-2\lambda\int a^{-1}|\zeta^t|^2+2\lambda\int a^{-1}\zeta^t\bar\zeta^\circ-2\alpha\int a|v^t-\bar v^\circ|^2\omega^t\\
+2\int (-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\cdot(v^t-\bar v^\circ)^\bot(\zeta^t-\bar\zeta^\circ)-\int a(-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\cdot\nabla^\bot h|v^t-\bar v^\circ|^2\\
+2\int a\nabla(\alpha(\Psi+\bar v^\circ)^\bot+\beta(\Psi+\bar v^\circ)): S_{v^t-\bar v^\circ}+2\int a(-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\cdot(v^t-\bar v^\circ)\bar\omega^\circ.
\end{multline*}
We may then bound the terms as follows, using the inequality $2xy\le x^2+y^2$,
\begin{align*}
\partial_t\int a|v^t-\bar v^\circ|^2&\le-2\lambda\int a^{-1}|\zeta^t|^2+2\lambda\int a^{-1}|\zeta^t||\bar \zeta^\circ|-2\alpha\int a|v^t-\bar v^\circ|^2\omega^t\\
&\qquad+C\int |v^t-\bar v^\circ|\,|\zeta^t|+C\int |v^t-\bar v^\circ|\,|\bar \zeta^\circ|+C\int a|v^t-\bar v^\circ|^2+C\int a|v^t-\bar v^\circ|\bar \omega^\circ\\
&\le-\lambda\int a^{-1}|\zeta^t|^2+C\int a^{-1}|\bar \zeta^\circ|^2+C\int |\bar\omega^\circ|^2+C(1+\lambda^{-1})\int a |v^t-\bar v^\circ|^2.
\end{align*}
Item~(iii) in the case $\alpha=0$ then easily follows from the Grönwall inequality.
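For the reader's convenience, let us briefly indicate how the above Delort type identity is checked. For a smooth vector field $w$, with the notation $S_w:=w\otimes w-\frac12|w|^2\,\mathrm{Id}$ for the usual stress tensor, a direct computation in coordinates gives
\begin{gather*}
\Div S_w=(\Div w)\,w+(\curl w)\,w^\bot,\qquad a^{-1}\Div(aS_w)=\Div S_w+S_w\nabla h,
\end{gather*}
so that, using $(w^\bot)^\bot=-w$ and $(S_w\nabla h)^\bot=(w\cdot\nabla h)\,w^\bot-\frac12|w|^2\nabla^\bot h$, we deduce
\[(\curl w)\,w=a^{-1}\Div(aw)\,w^\bot-\frac12|w|^2\nabla^\bot h-a^{-1}(\Div(aS_w))^\bot.\]
The identity used above corresponds to the choice $w:=v-\bar v^\circ$, the stated weak version following by approximation.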
\medskip
{\it Step~3: proof of~(iii) for~\eqref{eq:limeqn1}.}
We denote by $C$ any positive constant as in the statement~(iii).
As the identity $v-\bar v^\circ=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}(\omega-\bar\omega^\circ)$ follows from~\eqref{eq:Helmholtz} together with the constraint $\Div(a v)=0$, and as by assumption $v-\bar v^\circ\in\Ld^2_\loc([0,T);\Ld^2(\R^2)^2)$, we deduce
$\omega-\bar\omega^\circ\in\Ld^2_\loc([0,T);\dot H^{-1}(\R^2))$ and $(\Div a^{-1}\nabla)^{-1}(\omega-\bar\omega^\circ)\in\Ld^2_\loc([0,T);\dot H^{1}(\R^2))$.
In particular, this implies by integration by parts
\begin{align}\label{eq:ippomegav}
\int a|v-\bar v^\circ|^2=\int a^{-1}|\nabla(\Div a^{-1}\nabla)^{-1}(\omega-\bar\omega^\circ)|^2=\int (\omega-\bar\omega^\circ) (-\Div a^{-1}\nabla)^{-1}(\omega-\bar\omega^\circ).
\end{align}
From equation~\eqref{eq:limeqn1VF}, we compute the following time-derivative
\begin{align*}
&\partial_t\int(\omega-\bar \omega^\circ)(-\Div a^{-1}\nabla)^{-1}(\omega-\bar \omega^\circ)\\
=~&2\int \nabla(\Div a^{-1}\nabla)^{-1}(\omega-\bar \omega^\circ)\cdot(\alpha(\Psi+v)^\bot+\beta(\Psi+v))\omega\\
=~&-2\int a(v-\bar v^\circ)^\bot\cdot\Big(\alpha(v-\bar v^\circ)^\bot+\beta(v-\bar v^\circ)+\alpha(\Psi+\bar v^\circ)^\bot+\beta(\Psi+\bar v^\circ)\Big)\omega\\
=~&-2\alpha\int a|v-\bar v^\circ|^2\omega-2\int a\omega(v-\bar v^\circ)^\bot\cdot(\alpha(\Psi+\bar v^\circ)^\bot+\beta(\Psi+\bar v^\circ)).
\end{align*}
Combining this with identity~\eqref{eq:ippomegav}, we are now in position to conclude exactly as in Step~2 after equation~\eqref{eq:decompapest} (but with $\zeta,\bar\zeta^\circ\equiv0$).
\end{proof}
The energy estimates given by Lemma~\ref{lem:aprioriest} above are not strong enough to prove global existence. The key is to find an additional a priori $\Ld^p$-estimate for the vorticity $\omega$.
We begin with the following $\Ld^p$-estimate. Note that unfortunately the same kind of argument does not work in the mixed-flow compressible case (that is, \eqref{eq:limeqn2} with $\alpha\ge0$, $\beta\ne0$), as it would require too strong an additional control on the norm $\|\zeta^t\|_{\Ld^{p+1}}$. This is why this case is excluded from our global results.
\begin{lem}[$\Ld^p$-estimates for vorticity]\label{lem:Lpvort}
Let $\lambda,\alpha\ge0$, $\beta\in\R$, $T>0$, $h,\Psi\in W^{1,\infty}(\R^2)$, $\bar v^\circ\in \Ld^{\infty}(\R^2)^2$, and $v^\circ\in\bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ:=\curl v^\circ\in\Pc(\R^2)$, $\bar\omega^\circ:=\curl\bar v^\circ\in\Pc\cap\Ld^\infty(\R^2)$. In the case~\eqref{eq:limeqn1}, also assume $\Div(av^\circ)=\Div(a\bar v^\circ)=0$. Let $v\in\Ld^\infty_\loc([0,T);\bar v^\circ+\Ld^2\cap\Ld^\infty(\R^2)^2)$ be a weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in\Ld^\infty_\loc([0,T);\Pc\cap\Ld^{\infty}(\R^2))$. For all $1<p\le\infty$ and $t\in[0,T)$, the following hold:
\begin{enumerate}[(i)]
\item in the case~\eqref{eq:limeqn1} with $\alpha>0$, $\beta\in\R$, we have
\begin{align}\label{eq:boundomegaLp1p}
\bigg(\frac{\alpha(p-1)}2\bigg)^{1/p}\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{1+1/p}+\|\omega^t\|_{\Ld^p}&\le \|\omega^\circ\|_{\Ld^p}+C_p,
\end{align}
where the constant $C_p$ depends only on an upper bound on $(p-1)^{-1}$, $\alpha$, $\alpha^{-1}$, $|\beta|$, $T$, $\|(h,\Psi)\|_{W^{1,\infty}}$, $\|(\bar v^\circ,\bar\omega^\circ)\|_{\Ld^\infty}$, and on $\|v^\circ-\bar v^\circ\|_{\Ld^2}$;
\item in both cases~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2} with $\alpha\ge0$, $\beta=0$, $\lambda\ge0$, the same estimate~\eqref{eq:boundomegaLp1p} holds, where the constant $C_p=C$ depends only on an upper bound on $\alpha$, $T$, and on $\|(\curl\Psi)^-\|_{\Ld^{\infty}}$.
\end{enumerate}
\end{lem}
\begin{proof}
It is sufficient to prove the result for all $1<p<\infty$. In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ as in the statement but independent of $p$.
As explained at the end of Step~1, we may focus on item~(i), the other being much simpler. We split the proof into three steps. Set $\bar\theta^\circ:=\Div\bar v^\circ$, $\theta:=\Div v$.
In the sequel, we repeatedly use the a priori estimate of Lemma~\ref{lem:aprioriest}(i) in the following interpolated form: for all $s\le q$ and $t\in[0,T)$
\begin{align}\label{eq:interpolom}
\|\omega^t\|_{\Ld^s}\le \|\omega^t\|_{\Ld^q}^{q'/s'}\|\omega^t\|_{\Ld^1}^{1-q'/s'}=\|\omega^t\|_{\Ld^q}^{q'/s'}.
\end{align}
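Let us recall that this is the standard $\Ld^p$ interpolation inequality
\[\|f\|_{\Ld^s}\le\|f\|_{\Ld^1}^{1-\theta}\|f\|_{\Ld^q}^{\theta},\qquad \frac1s=(1-\theta)+\frac\theta q,\quad\text{that is,}\quad\theta=\frac{q'}{s'},\]
combined with the fact that $\|\omega^t\|_{\Ld^1}=1$, as $\omega^t\in\Pc(\R^2)$ by Lemma~\ref{lem:aprioriest}(i).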
\medskip
{\it Step~1: preliminary estimate for $\omega$.}
In this step, we prove that for all $1<p<\infty$ and all $t\in[0,T)$
\begin{align}\label{eq:apomegLp}
\alpha(p-1)\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p+1}+\|\omega^t\|_{\Ld^p}^p\le \|\omega^\circ\|_{\Ld^p}^p+C(p-1)(t^{1/p}+\|v\|_{\Ld^p_t\Ld^\infty})\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p-1/p}.
\end{align}
Using equation~\eqref{eq:limeqn1VF} and integrating by parts we may compute
\begin{align*}
\partial_t\int(\omega^t)^p&=p\int(\omega^t)^{p-1}\Div(\omega^t(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t)))\\
&=-p(p-1)\int(\omega^t)^{p-1}\nabla\omega^t\cdot(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t))\\
&=-(p-1)\int\nabla(\omega^t)^p\cdot(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t))\\
&=(p-1)\int(\omega^t)^p\Div(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t)).
\end{align*}
Using the constraint $\Div(av)=0$ to compute $\Div(\alpha v^\bot+\beta v)=-\alpha\omega+\beta\Div v=-\alpha\omega-\beta\nabla h\cdot v$, we find
\begin{align*}
(p-1)^{-1}\partial_t\int(\omega^t)^p&\le -\alpha\int(\omega^t)^{p+1}+C\int(\omega^t)^p(1+|v^t|)\\
&\le -\alpha\int(\omega^t)^{p+1}+C(1+\|v^t\|_{\Ld^\infty})\int(\omega^t)^p.
\end{align*}
By interpolation~\eqref{eq:interpolom}, we obtain
\begin{align*}
\alpha\int(\omega^t)^{p+1}+(p-1)^{-1}\partial_t\int(\omega^t)^p&\le C(1+\|v^t\|_{\Ld^\infty})\|\omega^t\|_{\Ld^{p+1}}^{p-1/p},
\end{align*}
and the result~\eqref{eq:apomegLp} directly follows by integration with respect to $t$ and by the Hölder inequality.
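More precisely, the Hölder inequality is applied here in the time variable with exponents $(p,p')$: since $\big(p-\frac1p\big)p'=p+1$, we have
\[\int_0^t(1+\|v^u\|_{\Ld^\infty})\|\omega^u\|_{\Ld^{p+1}}^{p-1/p}du\le\big(t^{1/p}+\|v\|_{\Ld^p_t\Ld^\infty}\big)\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p-1/p},\]
which accounts for the precise form of the right-hand side in~\eqref{eq:apomegLp}.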
Note that in the case of item~(ii) we rather have
\[\alpha\int(\omega^t)^{p+1}+(p-1)^{-1}\partial_t\int(\omega^t)^p\le \alpha\|(\curl\Psi)^-\|_{\Ld^\infty}\int(\omega^t)^p\le \alpha\|(\curl\Psi)^-\|_{\Ld^\infty}\Big(\int(\omega^t)^{p+1}\Big)^{1-1/p},\]
from which the conclusion~(ii) already follows.
\medskip
{\it Step~2: preliminary estimate for $v$.}
In this step, we prove for all $2<q\le\infty$ and $t\in[0,T)$
\begin{align}\label{eq:boundvincompr}
\|v^t\|_{\Ld^\infty}&\lesssim1+(1-2/q)^{-1/2}\|\omega^t\|_{\Ld^q}^{q'/2}\log^{1/2}(2+\|\omega^t\|_{\Ld^q}).
\end{align}
Let $2<q\le\infty$.
Note that $v^t-\bar v^\circ=\nabla^\bot\triangle^{-1}(\omega^t-\bar\omega^\circ)+\nabla\triangle^{-1}(\theta^t-\bar \theta^\circ)$.
By Lemma~\ref{lem:singint-1}(i) for $w:=\omega^t-\bar\omega^\circ$ and Lemma~\ref{lem:singint-1}(ii) for $w:=\theta^t-\bar\theta^\circ=\Div(v^t-\bar v^\circ)$, we find
\begin{align*}
\|v^t\|_{\Ld^\infty}&\le \|\bar v^\circ\|_{\Ld^\infty}+\|\nabla\triangle^{-1}(\omega^t-\bar\omega^\circ)\|_{\Ld^\infty}+\|\nabla\triangle^{-1}(\theta^t-\bar \theta^\circ)\|_{\Ld^\infty}\\
&\lesssim1+(1-2/q)^{-1/2}\|\omega^t-\bar\omega^\circ\|_{\Ld^2}\log^{1/2}(2+\|\omega^t-\bar\omega^\circ\|_{\Ld^1\cap\Ld^q})\\
&\qquad+\|\theta^t-\bar \theta^\circ\|_{\Ld^2}\log^{1/2}(2+\|\theta^t-\bar \theta^\circ\|_{\Ld^2\cap\Ld^\infty})+\|v^t-\bar v^\circ\|_{\Ld^2}.
\end{align*}
Noting that $\theta^t-\bar\theta^\circ=-\nabla h\cdot (v^t-\bar v^\circ)$, using interpolation~\eqref{eq:interpolom} in the form $\|\omega^t\|_{\Ld^2}\lesssim\|\omega^t\|_{\Ld^q}^{q'/2}$, and using the a priori estimates of Lemma~\ref{lem:aprioriest} in the form $\|v^t-\bar v^\circ\|_{\Ld^2}+\|\omega^t\|_{\Ld^1}\lesssim1$, we obtain
\begin{align*}
\|v^t\|_{\Ld^\infty}&\lesssim(1-2/q)^{-1/2}\|\omega^t\|_{\Ld^q}^{q'/2}\log^{1/2}(2+\|\omega^t\|_{\Ld^q})+\log^{1/2}(2+\|v^t-\bar v^\circ\|_{\Ld^\infty}),
\end{align*}
and the result follows, absorbing in the left-hand side the last norm of $v$.
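Here the absorption argument is elementary: for every $\e>0$ there is a constant $C_\e$ such that $\log^{1/2}(2+x)\le\e x+C_\e$ for all $x\ge0$, so that any estimate of the form $X\le K+C_0\log^{1/2}(2+X)$ with $X<\infty$ (note that $\|v^t\|_{\Ld^\infty}$ is indeed finite by assumption) self-improves to
\[X\le K+C_0\big(\tfrac1{2C_0}X+C_{1/(2C_0)}\big)\le K+\tfrac12X+C_0C_{1/(2C_0)},\qquad\text{hence}\qquad X\le 2K+2C_0C_{1/(2C_0)}.\]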
\medskip
{\it Step~3: conclusion.}
Let $1<p<\infty$. From~\eqref{eq:boundvincompr} with $q=p+1$, we deduce in particular
\[\|v^t\|_{\Ld^\infty}\lesssim1+(1-1/p)^{-1/2}\|\omega^t\|_{\Ld^{p+1}}^{\frac12(1+1/p)}\log^{1/2}(2+\|\omega^t\|_{\Ld^{p+1}})\lesssim (1-1/p)^{-1/2}\big(1+\|\omega^t\|_{\Ld^{p+1}}^{\frac34(1+1/p)}\big),\]
and hence, integrating with respect to $t$ and combining with~\eqref{eq:apomegLp},
\begin{align*}
\alpha(p-1)\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p+1}+\|\omega^t\|_{\Ld^p}^p&\le \|\omega^\circ\|_{\Ld^p}^p+Cp\big(1+\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{\frac34(1+1/p)}\big)\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p-1/p}\\
&\le \|\omega^\circ\|_{\Ld^p}^p+Cp\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p-1/p}+Cp\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p+\frac34}.
\end{align*}
We may now absorb the last two terms into the left-hand side, which yields
\begin{align*}
\frac{\alpha(p-1)}2\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}^{p+1}+\|\omega^t\|_{\Ld^p}^p&\le \|\omega^\circ\|_{\Ld^p}^p+C_p^p,
\end{align*}
where the constant $C_p$ further depends on an upper bound on $(p-1)^{-1}$, and the conclusion follows.
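(The absorption is a standard application of the Young inequality: as the exponents $p-\frac1p$ and $p+\frac34$ are both strictly smaller than $p+1$, for any $\delta>0$ there exists a constant $C_{p,\delta}$ such that, with $X:=\|\omega\|_{\Ld^{p+1}_t\Ld^{p+1}}$,
\[Cp\,X^{p-1/p}+Cp\,X^{p+\frac34}\le\delta X^{p+1}+C_{p,\delta},\]
and it suffices to choose $\delta=\frac{\alpha(p-1)}2$; the dependence of $C_p$ on $(p-1)^{-1}$ precisely originates in this choice.)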
\end{proof}
Inspired by the work of Lin and Zhang~\cite{Lin-Zhang-00} on the simplified equation~\eqref{eq:LZh}, we now exploit the very particular structure of the transport equation~\eqref{eq:limeqn1VF} to deduce in the parabolic case an a priori $\Ld^p$-estimate for the vorticity $\omega$ through its initial $\Ld^1$-norm only. Note that the same estimate holds in the mixed-flow incompressible case with $a$ constant. This strengthening of Lemma~\ref{lem:Lpvort} is the key to the global existence results with vortex-sheet data.
In items~(i)--(ii) below, we further display what can be obtained from this ODE method in other regimes, but the conclusion is then weaker than that of Lemma~\ref{lem:Lpvort}.
\begin{lem}[$\Ld^p$-estimates for vorticity, cont'd]\label{lem:Lpest}
Let $\lambda\ge0$, $\alpha\ge0$, $\beta\in\R$, $T>0$, and $h,\Psi,v^\circ\in W^{1,\infty}(\R^2)^2$, with $\omega^\circ:=\curl v^\circ\in \Pc\cap C^0(\R^2)$. Set $\zeta^\circ:=\Div(av^\circ)$, and in the case~\eqref{eq:limeqn1} assume that $\Div(av^\circ)=0$. Let $v\in W^{1,\infty}_\loc([0,T);W^{1,\infty}(\R^2)^2)$ be a weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$.
For all $1\le p\le \infty$ and $t\in[0,T)$, the following hold:
\begin{enumerate}[\quad(i)]
\item in both cases~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2}, without restriction on the parameters,
\begin{align*}
\|\omega^t\|_{\Ld^p}&\le\|\omega^\circ\|_{\Ld^p}\min\bigg\{\exp\Big[\frac{p-1}p\big(Ct+C|\beta|\|\zeta\|_{\Ld^1_t\Ld^\infty}+C|\beta|\|\nabla h\|_{\Ld^\infty}\|v\|_{\Ld^1_t\Ld^\infty}\big)\Big];\\
&\hspace{4cm}\exp\Big[\frac{p-1}p\big(C+Ct+C|\beta|\|\zeta\|_{\Ld^1_t\Ld^\infty}+C\alpha\|\nabla h\|_{\Ld^\infty}\|v\|_{\Ld^1_t\Ld^\infty}\big)\Big]\bigg\};
\end{align*}
\item in the case~\eqref{eq:limeqn1} with either $\beta=0$ or $\alpha=0$ or $h$ constant, and in the case~\eqref{eq:limeqn2} with $\beta=0$, we have
\[\|\omega^t\|_{\Ld^p}\le Ce^{Ct}\|\omega^\circ\|_{\Ld^p};\]
\item given $\alpha>0$, in the case~\eqref{eq:limeqn1} with either $\beta=0$ or $h$ constant, and in the case~\eqref{eq:limeqn2} with $\beta=0$, we have
\[\|\omega^t\|_{\Ld^p}\le \Big((\alpha t)^{-1}+C\alpha^{-1}e^{Ct}\Big)^{1-1/p};\]
\end{enumerate}
where the constants $C$'s depend only on an upper bound on $\alpha$, $|\beta|$, and on $\|(h,\Psi)\|_{W^{1,\infty}}$.
\end{lem}
\begin{rem}
In the context of item~(iii),
if we further assume $\Psi\equiv0$ (i.e. no forcing),
then the constant~$C$ in Step~2 of the proof below may be set to $0$, so that we simply obtain, for all $1\le p<\infty$ and all $t>0$,
\[\|\omega^t\|_{\Ld^p}\le\bigg(\int|\omega^\circ|^p(1+\alpha t\omega^\circ)^{1-p}\bigg)^{1/p}\le (\alpha t)^{-(1-1/p)},\]
without additional exponential growth.
\end{rem}
\begin{proof}
We split the proof into two steps, and we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ as in the statement.
\medskip
{\it Step~1: general bounds.}
In this step, we prove (i) (from which (ii) directly follows, noting that $a$ constant implies $\nabla h\equiv0$).
Let us consider the flow
\[\partial_t\psi^t(x)=-\alpha(\Psi+v^t)^\bot(\psi^t(x))-\beta(\Psi+v^t)(\psi^t(x)),\qquad \psi^t(x)|_{t=0}=x.\]
The Lipschitz assumptions ensure that $\psi$ is well-defined in $W_\loc^{1,\infty}([0,T);W^{1,\infty}(\R^2)^2)$. As $\omega$ satisfies the transport equation~\eqref{eq:limeqn1VF} with initial data $\omega^\circ\in C^0(\R^2)$, the method of propagation along characteristics yields
\[\omega^t(x)=\omega^\circ((\psi^{t})^{-1}(x))|\det\nabla(\psi^{t})^{-1}(x)|=\omega^\circ((\psi^{t})^{-1}(x))|\det\nabla\psi^{t}((\psi^t)^{-1}(x))|^{-1},\]
and hence, for any $1\le p<\infty$, we have
\begin{align}\label{eq:rewriteLpnormdetpsi}
\int|\omega^t|^p&=\int |\omega^\circ((\psi^{t})^{-1}(x))|^p|\det\nabla\psi^t((\psi^{t})^{-1}(x))|^{-p}dx=\int |\omega^\circ(x)|^p|\det\nabla\psi^t(x)|^{1-p}dx,
\end{align}
while, for $p=\infty$,
\[\|\omega^t\|_{\Ld^\infty}\le\|\omega^\circ\|_{\Ld^\infty}\|(\det\nabla\psi^{t})^{-1}\|_{\Ld^\infty}.\]
Now let us examine this determinant more closely. By the Liouville-Ostrogradski formula,
\begin{align}\label{eq:detdevLiouv}
|\det\nabla \psi^{t}(x)|^{-1}&=\exp\bigg(\int_0^t\Div\Big(\alpha(\Psi+v^u)^\bot+\beta(\Psi+v^u)\Big)(\psi^u(x))du\bigg).
\end{align}
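This is the classical Liouville identity for the Jacobian of a flow: for $\partial_t\psi^t=b^t(\psi^t)$ with $b^t:=-\alpha(\Psi+v^t)^\bot-\beta(\Psi+v^t)$, one has
\[\partial_t\det\nabla\psi^t(x)=(\Div b^t)(\psi^t(x))\,\det\nabla\psi^t(x),\qquad\det\nabla\psi^0(x)=1,\]
so that $\det\nabla\psi^t(x)=\exp\big(\int_0^t\Div b^u(\psi^u(x))du\big)>0$, and the absolute value in~\eqref{eq:detdevLiouv} could in fact be omitted.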
A simple computation gives
\begin{align}\label{eq:rewritedivchpvect}
\Div(\alpha(v^t)^\bot+\beta v^t)&=-\alpha\curl v^t+\beta \Div v^t=-\alpha\omega^t+\beta a^{-1}\zeta^t-\beta \nabla h\cdot v^t,
\end{align}
hence, by non-negativity of $\omega$,
\begin{align*}
\Div(\alpha(v^t)^\bot+\beta v^t)&\le|\beta|\|a^{-1}\|_{\Ld^\infty}\|\zeta^t\|_{\Ld^\infty}+|\beta|\|\nabla h\|_{\Ld^\infty}\|v^t\|_{\Ld^\infty}.
\end{align*}
We then find
\begin{align*}
|\det\nabla \psi^{t}(x)|^{-1}&\le\exp\big(t\alpha\|\curl\Psi\|_{\Ld^\infty}+t|\beta|\|\Div\Psi\|_{\Ld^\infty}+|\beta|\|a^{-1}\|_{\Ld^\infty}\|\zeta\|_{\Ld^1_t\Ld^\infty}+|\beta|\|\nabla h\|_{\Ld^\infty}\|v\|_{\Ld^1_t\Ld^\infty}\big),
\end{align*}
and thus, for all $1\le p\le\infty$,
\begin{align}\label{eq:estomegaLp-1}
\|\omega^t\|_{\Ld^p}&\le\|\omega^\circ\|_{\Ld^p}\exp\Big[\frac{p-1}p\big(t\alpha\|\curl\Psi\|_{\Ld^\infty}+t|\beta|\|\Div\Psi\|_{\Ld^\infty}\\
&\hspace{4cm}+|\beta|\|a^{-1}\|_{\Ld^\infty}\|\zeta\|_{\Ld^1_t\Ld^\infty}+|\beta|\|\nabla h\|_{\Ld^\infty}\|v\|_{\Ld^1_t\Ld^\infty}\big)\Big].\nonumber
\end{align}
On the other hand, noting that
\[\partial_t h(\psi^t(x))=-\nabla h(\psi^t(x))\cdot(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t))(\psi^t(x)),\]
we may alternatively rewrite
\begin{align}\label{eq:rewritedecdiv}
\Div(\alpha(v^t)^\bot+\beta v^t)(\psi^t(x))&=\big(-\alpha\omega^t+\beta a^{-1}\zeta^t-\beta \nabla h\cdot v^t\big)(\psi^t(x))\\
&=\partial_t h(\psi^t(x))+\big(-\alpha\omega^t+\beta a^{-1}\zeta^t-\alpha\nabla^\bot h\cdot v^t+\nabla h\cdot(\alpha\Psi^\bot+\beta\Psi)\big)(\psi^t(x)).\nonumber
\end{align}
Integrating this identity with respect to $t$ and using again the same formula for $|\det\nabla\psi^t|^{-1}$, we obtain
\begin{align}\label{eq:estomegaLp-2}
&\|\omega^t\|_{\Ld^p}\le\|\omega^\circ\|_{\Ld^p}\exp\Big[\frac{p-1}p\big(t\alpha\|\curl\Psi\|_{\Ld^\infty}+t|\beta|\|\Div\Psi\|_{\Ld^\infty}+|\beta|\|a^{-1}\|_{\Ld^\infty}\|\zeta\|_{\Ld^1_t\Ld^\infty}\\
&\hspace{6cm}+2\|h\|_{\Ld^\infty}+t(\alpha+|\beta|)\|\nabla h\|_{\Ld^\infty}\|\Psi\|_{\Ld^\infty}+\alpha\|\nabla h\|_{\Ld^\infty}\|v\|_{\Ld^1_t\Ld^\infty}\big)\Big].\nonumber
\end{align}
Combining~\eqref{eq:estomegaLp-1} and~\eqref{eq:estomegaLp-2}, the conclusion~(i) follows.
\medskip
{\it Step~2: proof of~(iii).}
It suffices to prove the result for any $1<p<\infty$. Let such a $p$ be fixed.
Assuming either $\beta=0$, or $\zeta\equiv0$ and $a$ constant, we deduce from~\eqref{eq:rewriteLpnormdetpsi}, \eqref{eq:detdevLiouv} and~\eqref{eq:rewritedivchpvect}
\begin{align}\label{eq:rewriteLpnormdetpsiLiouv}
\int|\omega^t|^p&=\int |\omega^\circ(x)|^p\exp\Big((p-1)\int_0^t\Div\big(\alpha(\Psi+v^u)^\bot+\beta(\Psi+v^u)\big)(\psi^u(x))du\Big)dx\nonumber\\
&\le e^{C(p-1)t}\int |\omega^\circ(x)|^p\exp\Big(-\alpha(p-1)\int_0^t \omega^u(\psi^u(x))du\Big)dx.
\end{align}
Let $x$ be momentarily fixed, and set $f_x(t):=\omega^t(\psi^t(x))$. We need to estimate the integral $\int_0^tf_x(u)du$. For that purpose, we first compute $\partial_t f_x$: again using~\eqref{eq:rewritedivchpvect} (with either $\beta=0$, or $\zeta\equiv0$ and $a$ constant), we find
\begin{align*}
\partial_t f_x(t)&=\Div(\omega^t(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t)))(\psi^t(x))-\nabla\omega^t(\psi^t(x))\cdot(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t))(\psi^t(x))\\
&=\omega^t(\psi^t(x))\,\Div(\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t))(\psi^t(x))\\
&=-\alpha(\omega^t(\psi^t(x)))^2+(-\alpha\omega^t\curl \Psi+\beta\omega^t\Div\Psi)(\psi^t(x)),
\end{align*}
and hence
\[\partial_tf_x\ge -\alpha f_x^2-Cf_x.\]
We may then deduce $f_x\ge g_x$ pointwise, where $g_x$ satisfies
\[\partial_t g_x= -\alpha g_x^2- C g_x,\qquad g_x(0)=f_x(0)=\omega^\circ(x).\]
A direct computation yields
\[g_x(t)=\frac{Ce^{-Ct}\omega^\circ(x)}{C+\alpha(1-e^{-Ct})\omega^\circ(x)},\]
and hence
\[\int_0^tf_x(u)du\ge\int_0^tg_x(u)du=\alpha^{-1}\log\Big(1+\alpha C^{-1}(1-e^{-Ct})\omega^\circ(x)\Big).\]
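(The explicit formula for $g_x$ follows by separation of variables: by the partial fraction decomposition
\[\frac{1}{g(\alpha g+C)}=\frac1C\Big(\frac1g-\frac{\alpha}{\alpha g+C}\Big),\]
the equation $\partial_tg_x=-g_x(\alpha g_x+C)$ is equivalent to $\partial_t\log\frac{g_x}{\alpha g_x+C}=-C$, which integrates to the claimed expression; the lower bound on $\int_0^tg_x$ then follows from a direct computation with the change of variables $w:=1-e^{-Cu}$.)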
Inserting this into~\eqref{eq:rewriteLpnormdetpsiLiouv}, we obtain for all $t>0$
\begin{align*}
\int|\omega^t|^p&\le e^{C(p-1)t}\int |\omega^\circ(x)|^p\Big(1+\alpha C^{-1}(1-e^{-Ct})\omega^\circ(x)\Big)^{1-p}dx\\
&\le \bigg(\frac{C\alpha^{-1}e^{Ct}}{1-e^{-Ct}}\bigg)^{p-1}\int |\omega^\circ(x)|dx=\bigg(\frac{C\alpha^{-1}e^{Ct}}{1-e^{-Ct}}\bigg)^{p-1}.
\end{align*}
The result~(iii) then follows from the obvious inequality $e^{Ct}(1-e^{-Ct})^{-1}\le e^{Ct}+1+(Ct)^{-1}$ for all $t>0$.
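Indeed, this last inequality follows from the algebraic identity
\[\frac{e^{Ct}}{1-e^{-Ct}}=\frac{e^{2Ct}}{e^{Ct}-1}=e^{Ct}+1+\frac{1}{e^{Ct}-1},\]
combined with the elementary bound $e^{Ct}-1\ge Ct$.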
\end{proof}
In the following result, we examine the regularity of $v$ and of the divergence $\zeta$ that follows from the boundedness of the vorticity $\omega$.
\begin{lem}[Relative $\Ld^p$-estimates]\label{lem:Lpestdiv}
Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, $T>0$, $h,\Psi,\bar v^\circ\in W^{1,\infty}(\R^2)^2$, and $v^\circ\in \bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ:=\curl v^\circ\in\Pc(\R^2)$, $\bar\omega^\circ:=\curl \bar v^\circ\in\Pc\cap\Ld^\infty(\R^2)$, and with either $\Div(av^\circ)=\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)$, $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in\Ld^2\cap\Ld^\infty(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v\in\Ld_\loc^\infty([0,T);\bar v^\circ+\Ld^2(\R^2)^2)$ be a weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in\Ld^\infty([0,T];\Ld^\infty(\R^2))$. Then we have for all $t\in[0,T)$
\[\|\zeta^t\|_{\Ld^2\cap\Ld^\infty}\le C, \qquad\|\Div(v^t-\bar v^\circ)\|_{\Ld^2\cap\Ld^\infty}\le C,\qquad\|v^t\|_{\Ld^\infty}\le C,\]
where the constants $C$'s depend only on an upper bound on $\alpha$, $|\beta|$, $\lambda$, $\lambda^{-1}$, $T$, $\|h\|_{W^{1,\infty}}$, $\|(\Psi,\bar v^\circ)\|_{\Ld^\infty}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, $\|\bar\omega^\circ\|_{\Ld^1\cap\Ld^\infty}$, $\|(\zeta^\circ,\bar\zeta^\circ)\|_{\Ld^2\cap\Ld^\infty}$, $\|\omega\|_{\Ld^\infty_T\Ld^\infty}$, and additionally on $\|(\nabla\Psi,\nabla\bar v^\circ)\|_{\Ld^\infty}$ (resp. on $\alpha^{-1}$) in the case $\alpha=0$ (resp. $\alpha>0$).
\end{lem}
\begin{proof}
In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ as in the statement, and we also set $\theta:=\Div v$, $\bar\theta^\circ:=\Div\bar v^\circ$. We may focus on the case of the compressible equation~\eqref{eq:limeqn2}, the other case being similar and simpler. We split the proof into three steps.
\medskip
{\it Step~1: preliminary estimate for $v$.}
In this step, we prove for all $t\in[0,T)$,
\begin{align}
\|v^t\|_{\Ld^\infty}&\lesssim1+\|\theta^t-\bar\theta^\circ\|_{\Ld^2}\log^{1/2}(2+\|\theta^t-\bar\theta^\circ\|_{\Ld^2\cap\Ld^\infty}).\label{eq:v^tL^infty0}
\end{align}
Note that $v^t-\bar v^\circ=\nabla^\bot\triangle^{-1}(\omega^t-\bar \omega^\circ)+\nabla\triangle^{-1}(\theta^t-\bar \theta^\circ)$. By Lemma~\ref{lem:singint-1}(i)--(ii),
we may then estimate
\begin{align*}
\|v^t-\bar v^\circ\|_{\Ld^\infty}&\le \|\nabla\triangle^{-1}(\omega^t-\bar \omega^\circ)\|_{\Ld^\infty}+\|\nabla\triangle^{-1}(\theta^t-\bar \theta^\circ)\|_{\Ld^\infty}\\
&\lesssim\|\omega^t-\bar \omega^\circ\|_{\Ld^2}\log^{1/2}(2+\|\omega^t-\bar \omega^\circ\|_{\Ld^1\cap\Ld^\infty})\\
&\qquad+\|\theta^t-\bar \theta^\circ\|_{\Ld^2}\log^{1/2}(2+\|\theta^t-\bar \theta^\circ\|_{\Ld^2\cap\Ld^\infty})+\|v^t-\bar v^\circ\|_{\Ld^2},
\end{align*}
so that~\eqref{eq:v^tL^infty0} follows from the a priori estimates of Lemma~\ref{lem:aprioriest} (in the form $\|v^t-\bar v^\circ\|_{\Ld^2}+\|\omega^t\|_{\Ld^1}\lesssim1$) and the boundedness assumption on $\omega$.
\medskip
{\it Step~2: boundedness of $\theta$.}
In this step, we prove $\|\theta^t-\bar\theta^\circ\|_{\Ld^2\cap\Ld^\infty}\lesssim1$ for all $t\in[0,T)$.
We begin with the $\Ld^2$-estimate. As $\zeta$ satisfies the transport-diffusion equation~\eqref{eq:limeqn2VF}, Lemma~\ref{lem:parreg+tsp}(i) with $s=0$ gives
\begin{align*}
\|\zeta^t\|_{\Ld^2}&\lesssim \|\zeta^\circ\|_{\Ld^2}+\|a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)\|_{\Ld^2_t\Ld^2}\\
&\lesssim 1+\|\omega\|_{\Ld^2_t\Ld^\infty}\|v-\bar v^\circ\|_{\Ld^\infty_t\Ld^2}+\|\omega\|_{\Ld^2_t\Ld^2}\|(\Psi,\bar v^\circ)\|_{\Ld^\infty},
\end{align*}
and hence $\|\zeta^t\|_{\Ld^2}\lesssim1$ follows from the a priori estimates of Lemma~\ref{lem:aprioriest} (in the form $\|v^t-\bar v^\circ\|_{\Ld^2}+\|\omega^t\|_{\Ld^1}\lesssim1$) and the boundedness assumption for $\omega$. Similarly, for $\theta^t=a^{-1}\zeta^t-\nabla h\cdot v^t$, we deduce $\|\theta^t-\bar \theta^\circ\|_{\Ld^2}\lesssim1$. We now turn to the $\Ld^\infty$-estimate.
Lemma~\ref{lem:parreg+tsp}(iii) with $p=q=s=\infty$ gives
\begin{align}
\|\zeta^t\|_{\Ld^\infty}&\lesssim \|\zeta^\circ\|_{\Ld^\infty}+\|a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)\|_{\Ld^\infty_t\Ld^\infty}\nonumber\\
&\lesssim 1+\|\omega\|_{\Ld^\infty_t\Ld^\infty}(1+\|v\|_{\Ld^\infty_t\Ld^\infty}),\label{eq:estzetaLinftyfin}
\end{align}
or alternatively, for $\theta^t=a^{-1}\zeta^t-\nabla h\cdot v^t$,
\begin{align*}
\|\theta^t\|_{\Ld^\infty}&\lesssim 1+\|v^t\|_{\Ld^\infty}+\|\omega\|_{\Ld^\infty_t\Ld^\infty}(1+\|v\|_{\Ld^\infty_t\Ld^\infty}).
\end{align*}
Combining this estimate with the result of Step~1 yields
\begin{align*}
\|\theta^t\|_{\Ld^\infty}&\lesssim 1+\|\theta^t-\bar \theta^\circ\|_{\Ld^2}\log^{1/2}(2+\|\theta^t-\bar \theta^\circ\|_{\Ld^2\cap\Ld^\infty})\\
&\qquad+\|\omega\|_{\Ld^\infty_t\Ld^\infty}(1+\|\theta-\bar \theta^\circ\|_{\Ld^\infty_t\Ld^2}\log^{1/2}(2+\|\theta-\bar \theta^\circ\|_{\Ld^\infty_t(\Ld^2\cap\Ld^\infty)})).
\end{align*}
Now the boundedness assumption on $\omega$ and the $\Ld^2$-estimate for $\theta$ proven above reduce this expression to
\begin{align*}
\|\theta^t\|_{\Ld^\infty}&\lesssim 1+\log^{1/2}(2+\|\theta\|_{\Ld^\infty_t\Ld^\infty}).
\end{align*}
Taking the supremum with respect to $t$, we may then conclude $\|\theta^t\|_{\Ld^\infty}\lesssim1$ for all $t\in[0,T)$.
\medskip
{\it Step~3: conclusion.}
By the result of Step~2, the estimate~\eqref{eq:v^tL^infty0} of Step~1 takes the form $\|v^t\|_{\Ld^\infty}\lesssim1$. The estimate~\eqref{eq:estzetaLinftyfin} of Step~2 then yields $\|\zeta^t\|_{\Ld^\infty}\lesssim1$, while the $\Ld^2$-estimate for $\zeta$ has already been proven in Step~2.
\end{proof}
\subsection{Propagation of regularity}\label{chap:propagation}
As local existence is established in Proposition~\ref{prop:locexist} only for smooth enough data, it is necessary for the global existence result to first prove propagation of regularity along the flow. We prove it here as a consequence of the boundedness of the vorticity $\omega$.
We begin with Sobolev $H^s$-regularity, and then turn to Hölder regularity.
\begin{lem}[Conservation of Sobolev norms]\label{lem:conservSob}
Let $s>1$. Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, $T>0$, $h,\Psi,\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$, and $v^\circ\in\bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ:=\curl v^\circ,\bar\omega^\circ:=\curl \bar v^\circ\in \Pc\cap H^s(\R^2)$, and with either $\Div(av^\circ)=\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ),\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^s(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v\in\Ld^\infty([0,T];\bar v^\circ+H^{s+1}(\R^2)^2)$ be a weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$. Then we have for all $t\in[0,T)$
\[\|\omega^t\|_{H^s}\le C,\qquad\|\zeta^t\|_{H^s}\le C,
\qquad\|v^t-\bar v^\circ\|_{H^{s+1}}\le C,\qquad\|\nabla v^t\|_{\Ld^\infty}\le C,\]
where the constants $C$'s depend only on an upper bound on $s$, $(s-1)^{-1}$, $\alpha$, $|\beta|$, $\lambda$, $\lambda^{-1}$, $T$, $\|(h,\Psi,\bar v^\circ)\|_{W^{s+1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, $\|(\omega^\circ,\bar\omega^\circ,\zeta^\circ,\bar\zeta^\circ)\|_{H^s}$, $\|\omega\|_{\Ld^\infty_T\Ld^\infty}$, and additionally on $\alpha^{-1}$ in the case $\alpha>0$.
\end{lem}
\begin{proof}
We set $\theta:=\Div v$, $\bar\theta^\circ:=\Div\bar v^\circ$. In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ as in the statement.
We may focus on the compressible case~\eqref{eq:limeqn2}, the other case being similar and simpler. We split the proof into four steps.
\medskip
{\it Step~1: time-derivative of $\|\omega\|_{H^s}$.} In this step, we prove for all $s\ge0$ and $t\in[0,T)$
\begin{align*}
\partial_t\|\omega^t\|_{H^s}&\lesssim(1+\|\nabla v^t\|_{\Ld^\infty})(1+\|\omega^t\|_{H^s})+\|\theta^t-\bar\theta^\circ\|_{H^s}.
\end{align*}
Lemma~\ref{lem:katoponce} with $\rho=\omega$, $w=\alpha(\Psi+v)^\bot+\beta(\Psi+v)$ and $W=\alpha(\Psi+\bar v^\circ)^\bot+\beta(\Psi+\bar v^\circ)$ yields
\begin{align*}
\partial_t\|\omega^t\|_{H^s}&\lesssim(1+\|\nabla v^t\|_{\Ld^\infty})\|\omega^t\|_{H^s}+(\alpha\|\omega^t-\bar \omega^\circ\|_{H^s}+|\beta|\|\theta^t-\bar \theta^\circ\|_{H^s})\|\omega^t\|_{\Ld^\infty}\\
&\qquad+\|\omega^t\|_{\Ld^2}+(1+\alpha\|\omega^t\|_{\Ld^\infty}+|\beta|\|\theta^t\|_{\Ld^\infty})\|\omega^t\|_{H^s},
\end{align*}
and the claim then follows from Lemma~\ref{lem:Lpestdiv} and the boundedness assumption on $\omega$.
\medskip
{\it Step~2: Lipschitz estimate for $v$.}
In this step, we prove for all $s>1$ and $t\in[0,T)$
\begin{align}
\|\nabla v^t\|_{\Ld^\infty}&\lesssim\log(2+\|\omega^t\|_{H^s}+\|\theta^t-\bar\theta^\circ\|_{H^s}).\label{eq:v^tnablaL^infty0}
\end{align}
Since $v^t-\bar v^\circ=\nabla^\bot\triangle^{-1}(\omega^t-\bar\omega^\circ)+\nabla\triangle^{-1}(\theta^t-\bar\theta^\circ)$, Lemma~\ref{lem:singint-1}(iii) yields, together with the Sobolev embedding of $H^s$ into a Hölder space for any $s>1$,
\begin{align*}
\|\nabla (v^t-\bar v^\circ)\|_{\Ld^\infty}&\le\|\nabla^2\triangle^{-1}(\omega^t-\bar\omega^\circ)\|_{\Ld^\infty}+\|\nabla^2\triangle^{-1}(\theta^t-\bar\theta^\circ)\|_{\Ld^\infty}\\
&\lesssim\|\omega^t-\bar\omega^\circ\|_{\Ld^\infty}\log(2+{\|\omega^t-\bar\omega^\circ\|_{H^s}})+\|\omega^t-\bar\omega^\circ\|_{\Ld^1}\\
&\qquad+\|\theta^t-\bar\theta^\circ\|_{\Ld^\infty}\log(2+\|\theta^t-\bar\theta^\circ\|_{H^s})+\|\theta^t-\bar\theta^\circ\|_{\Ld^2},
\end{align*}
and the claim~\eqref{eq:v^tnablaL^infty0} then follows from Lemma~\ref{lem:aprioriest}(i), Lemma~\ref{lem:Lpestdiv}, and the boundedness assumption on $\omega$.
\medskip
{\it Step~3: Sobolev estimate for $\theta$.}
In this step, we prove for all $s\ge0$ and $t\in[0,T)$
\begin{align}\label{eq:v^tnablaL^infty00bis}
\|\theta^t-\bar\theta^\circ\|_{H^{s}}\lesssim1+\|\omega\|_{\Ld^\infty_tH^s}.
\end{align}
As $\zeta$ satisfies the transport-diffusion equation~\eqref{eq:limeqn2VF}, Lemma~\ref{lem:parreg+tsp}(i)--(ii) gives, for all $s\ge0$,
\begin{align*}
\|\zeta^t\|_{H^{s}}&\lesssim \|\zeta^\circ\|_{H^{s}}+\|a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)\|_{\Ld^2_tH^{s}}.
\end{align*}
Using Lemma~\ref{lem:katoponce-1} to estimate the right-hand side, we find, for all $s\ge0$,
\begin{align*}
\|\zeta^t\|_{H^{s}}&\lesssim1+\|a\omega(-\alpha(v-\bar v^\circ)+\beta(v-\bar v^\circ)^\bot)\|_{\Ld^2_tH^{s}}+\|a\omega(-\alpha(\Psi+\bar v^\circ)+\beta(\Psi+\bar v^\circ)^\bot)\|_{\Ld^2_tH^{s}}\\
&\lesssim1+\|\omega\|_{\Ld^\infty_t\Ld^\infty}\|v-\bar v^\circ\|_{\Ld^2_tH^s}+\|\omega\|_{\Ld^2_tH^s}\|v-\bar v^\circ\|_{\Ld^\infty_t\Ld^\infty}\\
&\qquad+\|\omega\|_{\Ld^2_t\Ld^2}(1+\|\bar v^\circ\|_{W^{s,\infty}})+\|\omega\|_{\Ld^2_tH^s}(1+\|\bar v^\circ\|_{\Ld^\infty}),
\end{align*}
and hence, by Lemma~\ref{lem:Lpestdiv} and the boundedness assumption on $\omega$,
\begin{align}\label{eq:v^tnablaL^infty0ter}
\|\zeta^t\|_{H^s}&\lesssim1+\|\omega\|_{\Ld^\infty_tH^s}+\|v-\bar v^\circ\|_{\Ld^\infty_tH^s}.
\end{align}
Lemma~\ref{lem:reconstr} then yields for all $s\ge0$,
\begin{align*}
\|\zeta^t\|_{H^s}&\lesssim1+\|\omega\|_{\Ld^\infty_tH^s}+\|\omega-\bar\omega^\circ\|_{\Ld^\infty_t(\dot H^{-1}\cap H^{s-1})}+\|\zeta-\bar\zeta^\circ\|_{\Ld^\infty_t(\dot H^{-1}\cap H^{s-1})}.
\end{align*}
Noting that $\|(\omega-\bar\omega^\circ,\zeta-\bar\zeta^\circ)\|_{\dot H^{-1}}\lesssim\|v-\bar v^\circ\|_{\Ld^2}$, and using Lemma~\ref{lem:aprioriest}(iii) in the form $\|v-\bar v^\circ\|_{\Ld^2}\lesssim1$, we deduce
\begin{align*}
\|\zeta^t\|_{H^s}&\lesssim1+\|\omega\|_{\Ld^\infty_tH^s}+\|\zeta\|_{\Ld^\infty_tH^{s-1}}.
\end{align*}
Taking the supremum in time, we find by induction $\|\zeta\|_{\Ld^\infty_tH^s}\lesssim1+\|\omega\|_{\Ld^\infty_tH^s}+\|\zeta\|_{\Ld^\infty_t\Ld^2}$ for all $s\ge0$.
Recalling that Lemma~\ref{lem:Lpestdiv} gives $\|\theta^t-\bar \theta^\circ\|_{\Ld^2}\lesssim1$, and using the identity $\theta^t=a^{-1}\zeta^t-\nabla h\cdot v^t$, the claim~\eqref{eq:v^tnablaL^infty00bis} directly follows.
\medskip
{\it Step~4: conclusion.}
Combining the results of the three previous steps yields, for all $s>1$,
\begin{align*}
\partial_t\|\omega^t\|_{H^s}&\lesssim (1+\|\omega^t\|_{H^s})\log(2+\|\omega^t\|_{H^s}+\|\theta^t-\bar \theta^\circ\|_{H^s})+\|\theta^t-\bar \theta^\circ\|_{H^s}\\
&\lesssim(1+\|\omega^t\|_{H^s}) \log(2+\|\omega\|_{\Ld^\infty_tH^s})+\|\omega\|_{\Ld^\infty_tH^s},
\end{align*}
hence
\begin{align*}
\partial_t\|\omega\|_{\Ld^\infty_tH^s}\le\sup_{[0,t]}\partial_t\|\omega\|_{H^s}\lesssim (1+\|\omega\|_{\Ld^\infty_tH^s})\log(2+\|\omega\|_{\Ld^\infty_tH^s}),
\end{align*}
and the Grönwall inequality then gives $\|\omega\|_{\Ld^\infty_tH^s}\lesssim1$. Combining this with~\eqref{eq:v^tnablaL^infty0}, \eqref{eq:v^tnablaL^infty00bis} and~\eqref{eq:v^tnablaL^infty0ter}, and recalling the identity $v^t-\bar v^\circ=\nabla^\bot\triangle^{-1}(\omega^t-\bar \omega^\circ)+\nabla\triangle^{-1}(\theta^t-\bar \theta^\circ)$, the conclusion follows.
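(Here the Grönwall inequality is used in the following logarithmic form: setting $Y(t):=\|\omega\|_{\Ld^\infty_tH^s}$, the inequality $\partial_tY\le C(1+Y)\log(2+Y)\le C(2+Y)\log(2+Y)$ yields $\partial_t\log\log(2+Y)\le C$, hence
\[2+Y(t)\le(2+Y(0))^{e^{Ct}},\]
which remains bounded on the bounded time interval $[0,T)$.)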
\end{proof}
We now turn to the propagation of Hölder regularity. More precisely, we consider the Besov spaces $C^s_*(\R^2):=B^s_{\infty,\infty}(\R^2)$. Recall that these spaces coincide with the usual Hölder spaces $C^{s}(\R^2)$ only for non-integer $s\ge0$ (for integer $s>0$, they are strictly larger and coincide with the corresponding Zygmund spaces).
\begin{lem}[Conservation of Hölder-Zygmund norms]\label{lem:conservHold}
Let $s>0$. Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, $T>0$, and $h,\Psi,v^\circ\in C^{s+1}_*(\R^2)^2$ with $\omega^\circ:=\curl v^\circ\in \Pc(\R^2)$, and with either $\Div(av^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)\in \Ld^2(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v\in\Ld^\infty([0,T];C^{s+1}_*(\R^2)^2)$ be a weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$.
Then we have for all $t\in[0,T)$
\[\|\omega^t\|_{C_*^s}\le C,\qquad \|\zeta^t\|_{C_*^{s}}\le C,
\qquad\|v^t\|_{C_*^{s+1}}\le C,\]
where the constants $C$'s depend only on an upper bound on
$s$, $s^{-1}$, $\alpha$, $|\beta|$, $\lambda$, $\lambda^{-1}$, $T$, $\|(h,\Psi,v^\circ)\|_{C^{s+1}_*}$, $\|\zeta^\circ\|_{\Ld^2}$, $\|\omega\|_{\Ld^\infty_T\Ld^\infty}$, and additionally on $\alpha^{-1}$ in the case $\alpha>0$.
\end{lem}
\begin{proof}
We set $\theta:=\Div v$. In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ as in the statement.
We may focus on the compressible equation~\eqref{eq:limeqn2}, the other case being similar and simpler.
We split the proof into four steps, and make systematic use of the standard Besov machinery as presented in~\cite{BCD-11}.
\medskip
{\it Step~1: time-derivative of $\|\omega^t\|_{C^s_*}$.} In this step, we prove, for all $s\ge0$ and $t\in[0,T)$,
\begin{align*}
\partial_t\|\omega^t\|_{C^s_*}&\lesssim (1+\|\omega^t\|_{C^s_*})(1+\|\nabla v^t\|_{\Ld^\infty\cap C^{s-1}_*})+\|\theta^t\|_{C^s_*}.
\end{align*}
The transport equation~\eqref{eq:limeqn1VF} has the form $\partial_t\omega^t=\Div(\omega^tw^t)$ with $w^t=\alpha(\Psi+v^t)^\bot+\beta(\Psi+v^t)$. Arguing as in~\cite[Chapter~3.2]{BCD-11}, we obtain, for all $s\ge-1$,
\begin{align*}
\partial_t\|\omega^t\|_{C^s_*}\lesssim \|\omega^t\|_{C^s_*}\|\nabla w^t\|_{\Ld^\infty\cap C^{s-1}_*}+\|\omega^t\Div w^t\|_{C^s_*}.
\end{align*}
Using the usual product rules~\cite[Corollary~2.86]{BCD-11}, we obtain, for all $s>0$,
\begin{align*}
\partial_t\|\omega^t\|_{C^s_*}&\lesssim \|\omega^t\|_{C^s_*}\|\nabla w^t\|_{\Ld^\infty\cap C^{s-1}_*}+\|\omega^t\|_{\Ld^\infty}\|\Div w^t\|_{C^s_*}+\|\omega^t\|_{C^s_*}\|\Div w^t\|_{\Ld^\infty}\\
&\lesssim \|\omega^t\|_{C^s_*}(1+\|\nabla v^t\|_{\Ld^\infty\cap C^{s-1}_*})+\|\omega^t\|_{\Ld^\infty}(1+\|\omega^t\|_{C^s_*}+\|\theta^t\|_{C^s_*}),
\end{align*}
and the result follows from the boundedness assumption on $\omega$.
\medskip
{\it Step~2: estimate for $\nabla v^t$.} In this step, we prove, for all $s>0$ and $t\in[0,T)$,
\[\|\nabla v^t\|_{\Ld^\infty\cap C_*^{s-1}}\lesssim\|\omega^t\|_{C_*^{s-1}}+\|\theta^t\|_{C_*^{s-1}}+\log(2+\|\omega^t\|_{C^s_*}+\|\theta^t\|_{C^s_*}).\]
Since $v^t-v^\circ=\nabla^\bot\triangle^{-1}(\omega^t-\omega^\circ)+\nabla\triangle^{-1}(\theta^t-\theta^\circ)$, Lemma~\ref{lem:pottheoryCsHs}(ii) yields for all $s\in\R$,
\begin{align*}
\|\nabla v^t\|_{C_*^{s-1}}\lesssim 1+\|\omega^t-\omega^\circ\|_{\dot H^{-1}\cap C_*^{s-1}}+\|\theta^t-\theta^\circ\|_{\dot H^{-1}\cap C_*^{s-1}},
\end{align*}
and thus, noting that $\|(\omega-\omega^\circ,\theta-\theta^\circ)\|_{\dot H^{-1}}\lesssim\|v-v^\circ\|_{\Ld^2}$, and using Lemma~\ref{lem:aprioriest}(iii) in the form $\|v-v^\circ\|_{\Ld^2}\lesssim1$,
\begin{align*}
\|\nabla v^t\|_{C_*^{s-1}}\lesssim 1+\|\omega^t\|_{C_*^{s-1}}+\|\theta^t\|_{C_*^{s-1}}.
\end{align*}
Arguing as in Step~2 of the proof of Lemma~\ref{lem:conservSob} further yields, for all $s>0$,
\begin{align*}
\|\nabla v^t\|_{\Ld^\infty}\lesssim \log(2+\|\omega^t\|_{C^s_*}+\|\theta^t-\theta^\circ\|_{C^s_*}),
\end{align*}
so the result follows.
\medskip
{\it Step~3: estimate for $\theta^t$.}
In this step, we prove, for all $s\ge-1$ and $t\in[0,T)$,
\begin{align*}
\|\theta^t\|_{C^{s}_*}\lesssim1+\|\omega\|_{\Ld^\infty_tC^{s-1}_*}.
\end{align*}
As $\zeta$ satisfies the transport-diffusion equation~\eqref{eq:limeqn2VF}, we obtain, for all $s\ge-1$, arguing as in~\cite[Chapter~3.4]{BCD-11},
\begin{align*}
\|\zeta^t\|_{C^s_*}
&\lesssim\|\zeta^\circ\|_{C^s_*}+\|a\omega(-\alpha(\Psi+v)+\beta(\Psi+v)^\bot)\|_{\Ld^\infty_tC^{s-1}_*},
\end{align*}
and thus, by the usual product rules~\cite[Corollary~2.86]{BCD-11}, the boundedness assumption on $\omega$, and Lemma~\ref{lem:Lpestdiv}, we deduce, for all $s\ge-1$,
\begin{align*}
\|\zeta^t\|_{C^s_*}&\lesssim1+\|\omega\|_{\Ld^\infty_t(\Ld^\infty\cap C^{s-1}_*)}(1+\|v\|_{\Ld^\infty_t\Ld^\infty})+\|\omega\|_{\Ld^\infty_t\Ld^\infty}(1+\|v\|_{\Ld^\infty_t(\Ld^\infty\cap C^{s-1}_*)})\\
&\lesssim1+\|\omega\|_{\Ld^\infty_tC^{s-1}_*}+\|v\|_{\Ld^\infty_t C^{s-1}_*},
\end{align*}
or alternatively, in terms of $\theta^t=a^{-1}\zeta^t-\nabla h\cdot v^t$,
\begin{align*}
\|\theta^t\|_{C^s_*}&\lesssim\|\zeta^t\|_{\Ld^\infty\cap C^s_*}+\|v^t\|_{\Ld^\infty\cap C^s_*}\lesssim 1+\|\omega\|_{\Ld^\infty_tC^{s-1}_*}+\|v\|_{\Ld^\infty_t C^s_*}.
\end{align*}
Decomposing $v^t-v^\circ=\nabla^\bot\triangle^{-1}(\omega^t-\omega^\circ)+\nabla\triangle^{-1}(\theta^t-\theta^\circ)$, using Lemma~\ref{lem:pottheoryCsHs}(ii), and again Lemma~\ref{lem:aprioriest}(iii) in the form $\|(\omega-\omega^\circ,\theta-\theta^\circ)\|_{\dot H^{-1}}\lesssim\|v-v^\circ\|_{\Ld^2}\lesssim1$, we find
\begin{align*}
\|v^t\|_{C_*^{s}}\lesssim 1+\|\omega^t-\omega^\circ\|_{\dot H^{-1}\cap C_*^{s-1}}+\|\theta^t-\theta^\circ\|_{\dot H^{-1}\cap C_*^{s-1}}\lesssim1+\|\omega^t\|_{C_*^{s-1}}+\|\theta^t\|_{C_*^{s-1}},
\end{align*}
and hence
\begin{align*}
\|\theta\|_{\Ld^\infty_tC^s_*}&\lesssim1+\|\omega\|_{\Ld^\infty_tC^{s-1}_*}+\|\theta\|_{\Ld^\infty_tC_*^{s-1}}.
\end{align*}
If $s\le1$, then we have $\|\cdot\|_{C^{s-1}_*}\lesssim\|\cdot\|_{\Ld^\infty}$, so that the above estimate, the boundedness assumption on $\omega$, and Lemma~\ref{lem:Lpestdiv} yield $\|\theta\|_{\Ld^\infty_t C^s_*}\lesssim1$. The result for $s>1$ then follows by induction.
\medskip
{\it Step~4: conclusion.}
Combining the results of the three previous steps yields, for all $s>0$,
\begin{align*}
\partial_t\|\omega\|_{\Ld^\infty_t C^s_*}\le\sup_{[0,t]}\partial_t\|\omega\|_{C^s_*}&\lesssim(1+ \|\omega\|_{\Ld^\infty_tC^s_*})(\|\omega\|_{\Ld^\infty_tC_*^{s-1}}+\|\theta\|_{\Ld^\infty_tC_*^{s-1}}+\log(2+\|\omega^t\|_{C^s_*}+\|\theta^t\|_{C^s_*}))+\|\theta\|_{\Ld^\infty_tC^s_*}\\
&\lesssim (1+\|\omega\|_{\Ld^\infty_tC^s_*})(\|\omega\|_{\Ld^\infty_tC_*^{s-1}}+\log(2+\|\omega\|_{\Ld^\infty_tC^s_*})).
\end{align*}
If $s\le1$, then we have $\|\cdot\|_{C^{s-1}_*}\lesssim\|\cdot\|_{\Ld^\infty}$, so that the above estimate and the boundedness assumption on $\omega$ yield $\partial_t\|\omega\|_{\Ld^\infty_t C^s_*}\lesssim (1+\|\omega\|_{\Ld^\infty_tC^s_*})\log(2+\|\omega\|_{\Ld^\infty_tC^s_*})$, hence $\|\omega\|_{\Ld^\infty_tC^s_*}\lesssim1$ by the Grönwall inequality. The conclusion for $s>1$ then follows by induction.
\end{proof}
\subsection{Global existence of solutions}\label{chap:globalpr}
With Lemmas~\ref{lem:conservSob} and~\ref{lem:conservHold} at hand, together with the a priori bounds of Lemmas~\ref{lem:Lpvort} and~\ref{lem:Lpest}, it is now straightforward to deduce the following global existence result from the local existence statement of Proposition~\ref{prop:locexist}.
\begin{cor}[Global existence of smooth solutions]\label{cor:globexist1}
Let $s>1$. Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, $h,\Psi,\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$, and $v^\circ\in\bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ:=\curl v^\circ,\bar\omega^\circ:=\curl \bar v^\circ\in \Pc\cap H^{s}(\R^2)$, and with either $\Div(av^\circ)=\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ),\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^s(\R^2)$ in the case~\eqref{eq:limeqn2}. Then,
\begin{enumerate}[(i)]
\item there exists a global weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+H^{s+1}(\R^2)^2)$ of~\eqref{eq:limeqn1} on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap H^{ s}(\R^2))$;
\item if $\beta=0$, there exists a global weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+H^{ s+1}(\R^2)^2)$ of~\eqref{eq:limeqn2} on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap H^{ s}(\R^2))$ and $\zeta:=\Div(av)\in \Ld^\infty_\loc(\R^+; H^{ s}(\R^2))$.
\end{enumerate}
\end{cor}
\begin{proof}
We may focus on item~(ii), the first item being completely similar.
In this proof we use the notation $\simeq$ and $\lesssim$ for $=$ and $\le$ up to positive constants that depend only on an upper bound on $\alpha$, $\alpha^{-1}$, $|\beta|$, $\lambda$, $\lambda^{-1}$, $s$, $(s-1)^{-1}$, $\|(h,\Psi,\bar v^\circ)\|_{W^{s+1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, $\|(\omega^\circ,\bar\omega^\circ,\zeta^\circ,\bar\zeta^\circ)\|_{H^{s}}$.
Given $\bar v^\circ\in W^{s+1,\infty}(\R^2)^2$ and $v^\circ\in\bar v^\circ+\Ld^2(\R^2)^2$ with $\omega^\circ,\bar\omega^\circ\in \Pc\cap H^{s}(\R^2)$ and $\zeta^\circ,\bar\zeta^\circ\in H^{s}(\R^2)$, Proposition~\ref{prop:locexist} gives a time $T>0$, $T\simeq1$, such that there exists a weak solution $v\in\Ld^\infty([0,T);\bar v^\circ+H^s(\R^2)^2)$ of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$. For all $t\in[0,T)$, Lemma~\ref{lem:Lpest}(ii) (with $\beta=0$) then gives $\|\omega^t\|_{\Ld^\infty}\lesssim1$, which implies by Lemma~\ref{lem:conservSob}
\begin{align*}
\|\omega^t\|_{H^s}+\|\zeta^t\|_{H^s}+\|v^t-\bar v^\circ\|_{H^{s+1}}\lesssim1,
\end{align*}
and moreover by Lemma~\ref{lem:aprioriest}(i) we have $\omega^t\in\Pc(\R^2)$ for all $t\in[0,T)$. These a priori estimates show that the solution $v$ can be extended globally in time.
\end{proof}
We now extend this global existence result beyond the setting of smooth initial data. We start with the following result for $\Ld^2$-data, which is easily deduced by approximation. (Note that, for $a=e^h$ smooth, it is not clear at all whether smooth functions are dense in the set $\{(w,z)\in W^{1,\infty}(\R^2)^2\times\Ld^2(\R^2):z=\Div(aw)\}$, which thus causes trouble when regularizing the initial data; this problem is solved e.g. by assuming that the reference map $\bar v^\circ$ further satisfies $\bar\omega^\circ,\bar\zeta^\circ\in H^s(\R^2)$ for some $s>1$, as we do here.)
\begin{cor}[Global existence for $\Ld^2$-data]\label{cor:globexist2}
Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, $h,\Psi\in W^{1,\infty}(\R^2)^2$. Let $\bar v^\circ\in W^{1,\infty}(\R^2)^2$ be some reference map with $\bar \omega^\circ:=\curl\bar v^\circ\in \Pc\cap H^{s}(\R^2)$ for some $s>1$, and with either $\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^s(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v^\circ\in \bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ:=\curl v^\circ\in\Pc\cap\Ld^2(\R^2)$, and with either $\Div(av^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)\in\Ld^2(\R^2)$ in the case~\eqref{eq:limeqn2}. Then,
\begin{enumerate}[(i)]
\item there exists a global weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$ of~\eqref{eq:limeqn1} on $\R^+\times\R^2$ with initial data $v^\circ$, and with $v\in\Ld^2_\loc(\R^+;\bar v^\circ+H^1(\R^2)^2)$ and $\omega:=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap \Ld^2(\R^2))$;
\item if $\beta=0$, there exists a global weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$ of~\eqref{eq:limeqn2} on $\R^+\times\R^2$ with initial data $v^\circ$, and with $v\in\Ld^2_\loc(\R^+;\bar v^\circ+H^1(\R^2)^2)$, $\omega:=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap \Ld^2(\R^2))$ and $\zeta:=\Div(av)\in \Ld^2_\loc(\R^+;\Ld^2(\R^2))$.
\end{enumerate}
\end{cor}
\begin{proof}
We may focus on the case~(ii) (with $\beta=0$), the other case being exactly similar. In this proof we use the notation $\lesssim$ for $\le$ up to a positive constant that depends only on an upper bound on $\alpha$, $\alpha^{-1}$, $\lambda$, $(s-1)^{-1}$, $\|(h,\Psi,\bar v^\circ)\|_{W^{1,\infty}}$, $\|(\bar\omega^\circ,\bar\zeta^\circ)\|_{H^s}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, and $\|(\omega^\circ,\zeta^\circ)\|_{\Ld^2}$. We use the notation $\lesssim_t$ if it further depends on an upper bound on time $t$.
Let $\rho\in C^\infty_c(\R^2)$ with $\rho\ge0$, $\int\rho=1$, and $\rho(0)=1$, define $\rho_\e(x):=\e^{-d}\rho(x/\e)$ for all $\e>0$, and set $\omega^\circ_\e:=\rho_\e\ast \omega^\circ$, $\bar\omega^\circ_\e:=\rho_\e\ast \bar\omega^\circ$, $\zeta^\circ_\e:=\rho_\e\ast \zeta^\circ$, $\bar\zeta^\circ_\e:=\rho_\e\ast \bar\zeta^\circ$, $a_\e:=\rho_\e\ast a$ and $\Psi_\e:=\rho_\e\ast\Psi$. For all $\e>0$, we have $\omega^\circ_\e$, $\bar\omega^\circ_\e\in\Pc\cap H^\infty(\R^2)$, $\zeta_\e^\circ$, $\bar \zeta_\e^\circ\in H^\infty(\R^2)$, and $a_\e$, $a_\e^{-1}$, $\Psi_\e\in C^\infty(\R^2)^2$. By construction, we have $a_\e\to a$, $a_\e^{-1}\to a^{-1}$, $\Psi_\e\to\Psi$ in $W^{1,\infty}(\R^2)$, $\bar\omega^\circ_\e-\bar\omega^\circ$, $\bar\zeta^\circ_\e-\bar\zeta^\circ\to0$ in $\dot H^{-1}\cap H^s(\R^2)$, and $\omega^\circ_\e-\omega^\circ$, $\zeta^\circ_\e-\zeta^\circ\to0$ in $\dot H^{-1}\cap \Ld^2(\R^2)$. The additional convergence in $\dot H^{-1}(\R^2)$ indeed follows from the following computation with Fourier transforms,
\begin{align*}
\|\omega_\e^\circ-\omega^\circ\|_{\dot H^{-1}}^2&=\int|\xi|^{-2}|\hat\rho(\e\xi)-1|^2|\hat\omega^\circ(\xi)|^2d\xi\le\e^2\|\nabla\hat\rho\|_{\Ld^\infty}^2\int|\hat\omega^\circ|^2=\e^2\|\nabla\hat\rho\|_{\Ld^\infty}^2\|\omega^\circ\|_{\Ld^2}^2,
\end{align*}
and similarly for $\bar\omega_\e^\circ$, $\zeta_\e^\circ$, and $\bar\zeta_\e^\circ$.
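Note that the inequality above uses $\hat\rho(0)=\int\rho=1$ together with the mean value theorem, in the form
\[|\hat\rho(\e\xi)-1|=|\hat\rho(\e\xi)-\hat\rho(0)|\le\e|\xi|\,\|\nabla\hat\rho\|_{\Ld^\infty}.\]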
Lemma~\ref{lem:reconstr} then gives a unique $v_\e^\circ\in v^\circ+H^1(\R^2)^2$ and a unique $\bar v_\e^\circ\in \bar v^\circ+H^{s+1}(\R^2)^2$ such that $\curl v_\e^\circ=\omega_\e^\circ$, $\curl \bar v_\e^\circ=\bar\omega_\e^\circ$, $\Div(a_\e v_\e^\circ)=\zeta_\e^\circ$, $\Div(a_\e \bar v_\e^\circ)=\bar\zeta_\e^\circ$, and we have $v_\e^\circ-v^\circ\to0$ in $H^1(\R^2)^2$ and $\bar v_\e^\circ-\bar v^\circ\to0$ in $H^{s+1}(\R^2)^2$. In particular, the assumption $\bar v^\circ\in W^{1,\infty}(\R^2)^2$ yields by the Sobolev embedding with $s>1$,
\[\|\bar v_\e^\circ\|_{W^{1,\infty}}\lesssim \|\bar v_\e^\circ-\bar v^\circ\|_{H^{s+1}}+\|\bar v^\circ\|_{W^{1,\infty}}\lesssim1,\]
and the assumption $v^\circ-\bar v^\circ\in\Ld^2(\R^2)^2$ implies
\[\|v_\e^\circ-\bar v_\e^\circ\|_{\Ld^2}\le \|v_\e^\circ- v^\circ\|_{\Ld^2}+\|v^\circ-\bar v^\circ\|_{\Ld^2}+\|\bar v_\e^\circ-\bar v^\circ\|_{\Ld^2}\lesssim1.\]
Corollary~\ref{cor:globexist1} then gives a solution $v_\e\in \Ld^\infty_\loc(\R^+;\bar v_\e^\circ+H^\infty(\R^2)^2)$ of~\eqref{eq:limeqn2} on $\R^+\times\R^2$ with initial data $v^\circ_\e$, and with $(a,\Psi)$ replaced by $(a_\e,\Psi_\e)$.
Lemma~\ref{lem:aprioriest}(iii) and Lemma~\ref{lem:Lpest}(ii) (with $\beta=0$) give for all $t\ge0$
\[\|v_\e-\bar v_\e^\circ\|_{\Ld^\infty_t\Ld^2}+\|\zeta_\e\|_{\Ld^2_t\Ld^2}+\|\omega_\e\|_{\Ld^\infty_t\Ld^2}\lesssim_t1,\]
hence by Lemma~\ref{lem:reconstr}, together with the obvious estimate $\|(\omega_\e-\bar\omega_\e^\circ,\zeta_\e-\bar\zeta_\e^\circ)\|_{\dot H^{-1}}\lesssim\|v_\e-\bar v_\e^\circ\|_{\Ld^2}$,
\begin{align*}
\|v_\e-\bar v_\e^\circ\|_{\Ld^2_tH^1}&\lesssim \|v_\e-\bar v_\e^\circ\|_{\Ld^2_t\Ld^2}+\|\zeta_\e-\bar\zeta_\e^\circ\|_{\Ld^2_t \Ld^2}+\|\omega_\e-\bar\omega_\e^\circ\|_{\Ld^2_t\Ld^2}
\lesssim_t1.
\end{align*}
As $\bar v_\e^\circ$ is bounded in $H^1_\loc(\R^2)^2$, we deduce up to an extraction $v_\e\cvf{} v$ in $\Ld^2_\loc(\R^+;H^1_\loc(\R^2)^2)$, and also $\omega_\e\cvf{} \omega$, $\zeta_\e\cvf{} \zeta$ in $\Ld^2_\loc(\R^+;\Ld^2(\R^2))$, for some functions $v,\omega,\zeta$.
Comparing equation~\eqref{eq:limeqn1VF} with the above estimates,
we deduce that $(\partial_t\omega_\e)_\e$ is bounded in $\Ld^{1}_\loc(\R^+;W^{-1,1}_\loc(\R^2))$.
Since by the Rellich theorem the space $\Ld^2(U)$ is compactly embedded in $H^{-1}(U)\subset W^{-1,1}(U)$ for any bounded domain $U\subset\R^2$, the Aubin-Simon lemma ensures that we have $\omega_\e\to\omega$ strongly in $\Ld^2_\loc(\R^+;H^{-1}_\loc(\R^2))$.
This implies $\omega_\e v_\e\to \omega v$ in the distributional sense. We may then pass to the limit in the weak formulation of equation~\eqref{eq:limeqn2}, and the result follows.
\end{proof}
We now investigate the case of rougher initial data.
Using the a priori estimates of Lemmas~\ref{lem:Lpvort} and~\ref{lem:Lpest}(ii), we prove global existence for $\Ld^q$-data, $q>1$.
In the parabolic regime $\alpha>0$, $\beta=0$, we have at our disposal the much finer a priori estimates of Lemma~\ref{lem:Lpest}(iii), which then allow us to deduce global existence for vortex-sheet data $\omega^\circ\in\Pc(\R^2)$.
As in~\cite{Lin-Zhang-00}, we make crucial use here of a compactness result due to Lions~\cite{Lions-98} developed in the context of the compressible Navier-Stokes equations. In the conservative regime~(iv) below, however, this result is not enough and compactness is proven by hand.
Theorem~\ref{th:main} directly follows from this proposition, together with the various a priori estimates of Sections~\ref{chap:apriori} and~\ref{chap:propagation}.
\begin{prop}[Global existence for general data]\label{cor:globexist3}
Let $\lambda>0$, $\alpha\ge0$, $\beta\in\R$, and $h,\Psi\in W^{1,\infty}(\R^2)^2$. Let $\bar v^\circ\in W^{1,\infty}(\R^2)^2$ be some reference map with $\bar \omega^\circ:=\curl\bar v^\circ\in \Pc\cap H^{s}(\R^2)$ for some $s>1$, and with either $\Div(a\bar v^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\bar\zeta^\circ:=\Div(a\bar v^\circ)\in H^s(\R^2)$ in the case~\eqref{eq:limeqn2}. Let $v^\circ\in \bar v^\circ+\Ld^2(\R^2)^2$, with $\omega^\circ=\curl v^\circ\in\Pc(\R^2)$, and with either $\Div(av^\circ)=0$ in the case~\eqref{eq:limeqn1}, or $\zeta^\circ:=\Div(av^\circ)\in \Ld^2(\R^2)$ in the case~\eqref{eq:limeqn2}. Then,
\begin{enumerate}[(i)]
\item \emph{Case~\eqref{eq:limeqn2} with $\alpha>0$, $\beta=0$:} There exists a weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega=\curl v\in \Ld^\infty(\R^+;\Pc(\R^2))$ and $\zeta=\Div(av)\in \Ld^2_\loc(\R^+;\Ld^2(\R^2))$.
\item \emph{Case~\eqref{eq:limeqn1} with $\alpha>0$, and either $\beta=0$ or $a$ constant:} There exists a weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^2(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega=\curl v\in \Ld^\infty(\R^+;\Pc(\R^2))$.
\item \emph{Case~\eqref{eq:limeqn1} with $\alpha>0$:} If $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, there exists a weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^{2}(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap\Ld^q(\R^2))$.
\item \emph{Case~\eqref{eq:limeqn1} with $\alpha=0$:} If $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, there exists a very weak solution $v\in\Ld^\infty_\loc(\R^+;\bar v^\circ+\Ld^{2}(\R^2)^2)$ on $\R^+\times\R^2$ with initial data $v^\circ$, and with $\omega=\curl v\in \Ld^\infty_\loc(\R^+;\Pc\cap \Ld^q(\R^2))$. This is a weak solution whenever $q\ge4/3$.
\end{enumerate}
\end{prop}
\begin{proof}
We split the proof into three steps, first proving item~(i), then explaining how the argument has to be adapted to prove items~(ii) and~(iii), and finally turning to item~(iv).
\medskip
{\it Step~1: proof of~(i).}
In this step, we use the notation $\lesssim$ for $\le$ up to a positive constant that depends only on an upper bound on $\alpha$, $\alpha^{-1}$, $\lambda$, $\|(h,\Psi,\bar v^\circ)\|_{W^{1,\infty}}$, $\|(\bar\omega^\circ,\bar \zeta^\circ)\|_{H^s}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, and $\|\zeta^\circ\|_{\Ld^2}$. We use the notation $\lesssim_t$ (resp. $\lesssim_{t,U}$) if it further depends on an upper bound on time $t$ (resp. and on the size of $U\subset\R^2$).
Let $\rho\in C^\infty_c(\R^2)$ with $\rho\ge0$, $\int\rho=1$, $\rho(0)=1$, and $\rho|_{\R^2\setminus B_1}=0$, define $\rho_\e(x):=\e^{-d}\rho(x/\e)$ for all $\e>0$, and set $\omega^\circ_\e:=\rho_\e\ast \omega^\circ$, $\bar\omega^\circ_\e:=\rho_\e\ast \bar\omega^\circ$, $\zeta^\circ_\e:=\rho_\e\ast \zeta^\circ$, $\bar\zeta^\circ_\e:=\rho_\e\ast \bar\zeta^\circ$.
For all $\e>0$, we have $\omega^\circ_\e$, $\bar\omega^\circ_\e\in\Pc\cap H^\infty(\R^2)$, $\zeta_\e^\circ$, $\bar \zeta_\e^\circ\in H^\infty(\R^2)$. As in the proof of Corollary~\ref{cor:globexist2}, we have by construction $\bar\omega^\circ_\e-\bar\omega^\circ$, $\bar\zeta^\circ_\e-\bar\zeta^\circ\to0$ in $\dot H^{-1}\cap H^s(\R^2)$, and $\zeta^\circ_\e-\zeta^\circ\to0$ in $\dot H^{-1}\cap \Ld^2(\R^2)$. The assumption $v^\circ-\bar v^\circ\in\Ld^2(\R^2)^2$ further yields $\omega^\circ-\bar\omega^\circ\in\dot H^{-1}(\R^2)$, which implies
$\omega_\e^\circ-\bar\omega_\e^\circ\to\omega^\circ-\bar\omega^\circ$, hence $\omega_\e^\circ-\omega^\circ\to0$, in $\dot H^{-1}(\R^2)$.
Lemma~\ref{lem:reconstr} then gives a unique $v_\e^\circ\in v^\circ+\Ld^2(\R^2)^2$ and a unique $\bar v_\e^\circ\in \bar v^\circ+H^{s+1}(\R^2)^2$ such that $\curl v_\e^\circ=\omega_\e^\circ$, $\curl \bar v_\e^\circ=\bar\omega_\e^\circ$, $\Div(a v_\e^\circ)=\zeta_\e^\circ$, $\Div(a \bar v_\e^\circ)=\bar\zeta_\e^\circ$, and we have $v_\e^\circ-v^\circ\to0$ in $\Ld^2(\R^2)^2$ and $\bar v_\e^\circ-\bar v^\circ\to0$ in $H^{s+1}(\R^2)^2$. In particular, arguing as in the proof of Corollary~\ref{cor:globexist2}, the assumption $\bar v^\circ\in W^{1,\infty}(\R^2)^2$ yields $\|\bar v_\e^\circ\|_{W^{1,\infty}}\lesssim1$ by the Sobolev embedding with $s>1$, and the assumption $v^\circ-\bar v^\circ\in\Ld^2(\R^2)^2$ implies $\|v_\e^\circ-\bar v_\e^\circ\|_{\Ld^2}\lesssim1$.
Corollary~\ref{cor:globexist2} then gives a global weak solution $v_\e\in \Ld^\infty_\loc(\R^+;\bar v_\e^\circ+\Ld^2(\R^2)^2)$ of~\eqref{eq:limeqn2} on $\R^+\times\R^2$ with initial data $v^\circ_\e$, and
Lemma~\ref{lem:aprioriest}(iii) yields for all $t\ge0$
\begin{align}\label{eq:boundaprioriSt1}
\|v_\e-\bar v_\e^\circ\|_{\Ld^\infty_t\Ld^2}+\|\zeta_\e\|_{\Ld^2_t\Ld^2}\lesssim_t1,
\end{align}
while Lemma~\ref{lem:Lpest}(iii) with $\beta=0$ yields after integration in time, for all $1\le p<2$,
\[\|\omega_\e\|_{\Ld^p_t\Ld^p}\lesssim \bigg(\int_0^t(u^{1-p}+e^{Cu})du\bigg)^{1/p}\lesssim_t(2-p)^{-1/p}.\]
Using this last estimate for $p=3/2$ and $11/6$, and combining it with Lemma~\ref{lem:aprioriest}(i) in the form $\|\omega_\e\|_{\Ld^\infty_t\Ld^1}\le1$, we deduce by interpolation
\[\|\omega_\e\|_{\Ld^{2}_t(\Ld^{4/3}\cap\Ld^{12/7})}\lesssim_t1.\]
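More precisely, the interpolation reads as follows: since $\|\omega_\e^u\|_{\Ld^1}=1$, we have
\[\|\omega_\e^u\|_{\Ld^{4/3}}\le\|\omega_\e^u\|_{\Ld^{3/2}}^{3/4},\qquad\|\omega_\e^u\|_{\Ld^{12/7}}\le\|\omega_\e^u\|_{\Ld^{11/6}}^{11/12},\]
so that, squaring and integrating in time, $\|\omega_\e\|_{\Ld^2_t\Ld^{4/3}}^2\le\|\omega_\e\|_{\Ld^{3/2}_t\Ld^{3/2}}^{3/2}$ and $\|\omega_\e\|_{\Ld^2_t\Ld^{12/7}}^2\le\|\omega_\e\|_{\Ld^{11/6}_t\Ld^{11/6}}^{11/6}$.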
Now we need to prove more precise estimates on $v_\e$. First recall the identity
\begin{align}\label{eq:decompveps}
v_\e=v_{\e,1}+v_{\e,2},\qquad v_{\e,1}:=\nabla^\bot\triangle^{-1}\omega_\e,\qquad v_{\e,2}:=\nabla\triangle^{-1}\Div v_\e.
\end{align}
On the one hand, as $\omega_\e$ is bounded in $\Ld^{2}_\loc(\R^+;\Ld^{4/3}\cap\Ld^{12/7}(\R^2))$, we deduce from Riesz potential theory that $v_{\e,1}$ is bounded in $\Ld^{2}_\loc(\R^+;\Ld^{4}\cap\Ld^{12}(\R^2)^2)$, and we deduce from the Calderón-Zygmund theory that $\nabla v_{\e,1}$ is bounded in $\Ld^2_\loc(\R^+;\Ld^{4/3}(\R^2))$.
On the other hand, decomposing
\[v_{\e,2}=\nabla\triangle^{-1}\Div(v_\e-\bar v_\e^\circ)+\bar v_\e^\circ-\nabla^\bot\triangle^{-1}\bar\omega_\e^\circ,\]
noting that $v_\e-\bar v_\e^\circ$ is bounded in $\Ld^\infty_\loc(\R^+;\Ld^2(\R^2)^2)$ (cf.~\eqref{eq:boundaprioriSt1}), that $\bar v_\e^\circ$ is bounded in $\Ld^2_\loc(\R^2)^2$, and that $\|\nabla\triangle^{-1}\bar\omega_\e^\circ\|_{\Ld^2}\lesssim\|\bar\omega_\e^\circ\|_{\Ld^1\cap\Ld^\infty}\lesssim1$ (cf. Lemma~\ref{lem:singint-1}), we deduce that $v_{\e,2}$ is bounded in $\Ld^\infty_\loc(\R^+;\Ld^2_\loc(\R^2)^2)$. Further, decomposing
\[v_{\e,2}=\nabla\triangle^{-1}(a^{-1}(\zeta_\e-\bar\zeta_\e^\circ))-\nabla\triangle^{-1}(\nabla h\cdot(v_\e-\bar v_\e^\circ))+\bar v_\e^\circ-\nabla^\bot\triangle^{-1}\bar\omega_\e^\circ,\]
we easily check that $\nabla v_{\e,2}$ is bounded
in $\Ld^2_\loc(\R^+;\Ld^2_\loc(\R^2)^2)$. We then conclude from the Sobolev embedding that $v_{\e,2}$ is bounded in $\Ld^2_\loc(\R^+;\Ld^q_\loc(\R^2)^2)$ for all $q<\infty$.
For our purposes it is enough to choose $q=4$ and $12$. In particular, we have proven that, for every bounded subset $U\subset\R^2$,
\begin{align}\label{eq:boundsmallbigexp}
&\|\omega_\e\|_{\Ld^2_t\Ld^{4/3}}+\|\zeta_\e\|_{\Ld^2_t\Ld^2}+\|v_\e\|_{\Ld^\infty_t\Ld^2(U)}\\
&\hspace{2cm}+\|v_{\e,1}\|_{\Ld^2_t(\Ld^4\cap\Ld^{12})}+\|\nabla v_{\e,1}\|_{\Ld^2_t\Ld^{4/3}}+\|v_{\e,2}\|_{\Ld^2_t(\Ld^4\cap\Ld^{12}(U))}+\|\nabla v_{\e,2}\|_{\Ld^2_t\Ld^2(U)}\lesssim_{t,U}1.\nonumber
\end{align}
Therefore we have up to an extraction $\omega_\e\cvf{}\omega$ in $\Ld^{2}_\loc(\R^+;\Ld^{4/3}(\R^2))$, $\zeta_\e\cvf{}\zeta$ in $\Ld^2_\loc(\R^+;\Ld^2(\R^2))$, $v_{\e,1}\cvf{}v_1$ in $\Ld^{2}_\loc(\R^+;\Ld^{4}(\R^2)^2)$, and $v_{\e,2}\cvf{}v_2$ in $\Ld^2_\loc(\R^+;\Ld^4_\loc(\R^2)^2)$, for some functions $\omega,\zeta,v_1,v_2$. Comparing the above estimates with equation~\eqref{eq:limeqn1VF}, we deduce that $(\partial_t\omega_\e)_\e$ is bounded in $\Ld^{1}_\loc(\R^+;W^{-1,1}_\loc(\R^2))$. Moreover, we find by interpolation, for any $|\xi|<1$ and any bounded domain $U\subset\R^2$, denoting by $U^1:=U+B_1$ its $1$-fattening,
\begin{align*}
&\|v_\e-v_\e(\cdot+\xi)\|_{\Ld^2_t\Ld^4(U)}\le\|v_{\e,1}-v_{\e,1}(\cdot+\xi)\|_{\Ld^2_t\Ld^4(U)}+\|v_{\e,2}-v_{\e,2}(\cdot+\xi)\|_{\Ld^2_t\Ld^4(U)}\\
\le~&\|v_{\e,1}-v_{\e,1}(\cdot+\xi)\|_{\Ld^2_t\Ld^{4/3}(U)}^{1/4}\|v_{\e,1}-v_{\e,1}(\cdot+\xi)\|_{\Ld^2_t\Ld^{12}(U)}^{3/4}\\
&\hspace{4cm}+\|v_{\e,2}-v_{\e,2}(\cdot+\xi)\|_{\Ld^2_t\Ld^{2}(U)}^{2/5}\|v_{\e,2}-v_{\e,2}(\cdot+\xi)\|_{\Ld^2_t\Ld^{12}(U)}^{3/5}\\
\le~&2\|v_{\e,1}-v_{\e,1}(\cdot+\xi)\|_{\Ld^2_t\Ld^{4/3}(U)}^{1/4}\|v_{\e,1}\|_{\Ld^2_t\Ld^{12}(U^1)}^{3/4}+2\|v_{\e,2}-v_{\e,2}(\cdot+\xi)\|_{\Ld^2_t\Ld^{2}(U)}^{2/5}\|v_{\e,2}\|_{\Ld^2_t\Ld^{12}(U^1)}^{3/5}\\
\le~&2|\xi|^{1/4}\|\nabla v_{\e,1}\|_{\Ld^2_t\Ld^{4/3}(U^1)}^{1/4}\|v_{\e,1}\|_{\Ld^2_t\Ld^{12}(U^1)}^{3/4}+2|\xi|^{2/5}\|\nabla v_{\e,2}\|_{\Ld^2_t\Ld^{2}(U^1)}^{2/5}\|v_{\e,2}\|_{\Ld^2_t\Ld^{12}(U^1)}^{3/5},
\end{align*}
and hence by~\eqref{eq:boundsmallbigexp},
\begin{align*}
\|v_\e-v_\e(\cdot+\xi)\|_{\Ld^2_t\Ld^4(U)}\lesssim_{t,U}|\xi|^{1/4}+|\xi|^{2/5}.
\end{align*}
Let us summarize the previous observations: up to an extraction, setting $v:=v_1+v_2$, we have
\begin{gather*}
\text{$\omega_\e\cvf{}\omega$ in $\Ld^{2}_\loc(\R^+;\Ld^{4/3}(\R^2))$, $v_{\e}\cvf{}v$ in $\Ld^{2}_\loc(\R^+;\Ld^{4}_\loc(\R^2)^2)$,}\\
\text{$\partial_t\omega_\e$ bounded in $\Ld^1_\loc(\R^+;W^{-1,1}_\loc(\R^2))$,}\\
\text{$\sup_{\e>0}\|v_\e-v_\e(\cdot+\xi)\|_{\Ld^2_t\Ld^4(U)}\to0$ as $|\xi|\to0$, for any $t\ge0$ and bounded subset $U\subset\R^2$.}
\end{gather*}
We may then apply~\cite[Lemma~5.1]{Lions-98}, which ensures that $\omega_\e v_\e\to\omega v$ holds in the distributional sense. This allows to pass to the limit in the weak formulation of equation~\eqref{eq:limeqn2}, and the result follows.
\medskip
{\it Step~2: proof of~(ii) and~(iii).}
The proof of item~(ii) is also based on Lemma~\ref{lem:Lpest}(iii), and is completely analogous to the proof of item~(i) in Step~1. As far as item~(iii) is concerned, Lemma~\ref{lem:Lpest}(iii) does no longer apply in that case, but, since we further assume $\omega^\circ\in\Ld^q(\R^2)$ for some $q>1$, Lemma~\ref{lem:Lpvort} gives the following a priori estimate: for all $t\ge0$
\begin{align}\label{eq:additionalreg}
\|\omega\|_{\Ld^{q+1}_t\Ld^{q+1}}+\|\omega\|_{\Ld^\infty_t\Ld^q}\lesssim_t1,
\end{align}
hence in particular by interpolation $\|\omega\|_{\Ld^{p}_t\Ld^{p}}\lesssim_t1$ for all $1\le p \le2$ (here we use the notation $\lesssim_t$ for $\le$ up to a constant that depends only on an upper bound on $t$, $(q-1)^{-1}$, $\alpha$, $\alpha^{-1}$, $|\beta|$, $\|(h,\Psi)\|_{W^{1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, and $\|\omega^\circ\|_{\Ld^q}$). The conclusion then follows from a similar argument as in Step~1.
\medskip
{\it Step~3: proof of~(iv).}
We finally turn to the incompressible equation~\eqref{eq:limeqn1} in the conservative regime $\alpha=0$. Let $q>1$ be such that $\omega^\circ\in\Ld^q(\R^2)$.
Lemmas~\ref{lem:Lpvort} and~\ref{lem:Lpest}(ii) ensure that $\omega_\e$ is bounded in $\Ld^\infty_\loc(\R^+;\Ld^1\cap\Ld^q(\R^2))$, and hence, for $q>4/3$, replacing the exponents $4/3$ and $12/7$ of Step~1 by $4/3$ and $q$, the argument of Step~1 can be immediately adapted to this case, for which we thus obtain global existence of a weak solution. In the remaining case $1<q<4/3$, the product $\omega \nabla\Delta^{-1}\omega$ (hence the product $\omega v$, cf.~\eqref{eq:decompveps}) does not make sense any more for $\omega\in\Ld^q(\R^2)$. Since in the conservative regime $\alpha=0$ no additional regularity is available (in particular, \eqref{eq:additionalreg} does not hold), we do not expect the existence of a weak solution, and we need to turn to the notion of very weak solutions as defined in Definition~\ref{defin:sol}(c), where the product $\omega v$ is reinterpreted la Delort. Let us now focus on this case $1<q\le 4/3$, and prove the global existence of a very weak solution.
For the critical exponent $q=4/3$, the integrability of $v$ found below directly implies by Remark~\ref{rem:sol}(ii) that the constructed very weak solution is automatically a weak solution. In this step, we use the notation $\lesssim$ for $\le$ up to a constant $C$ that depends only on an upper bound on $(q-1)^{-1}$, $|\beta|$, $\|(h,\Psi,\bar v^\circ)\|_{W^{1,\infty}}$, $\|v^\circ-\bar v^\circ\|_{\Ld^2}$, $\|\bar\omega^\circ\|_{\Ld^2}$, and $\|\omega^\circ\|_{\Ld^q}$, and we use the notation $\lesssim_t$ (resp. $\lesssim_{t,U}$) if it further depends on an upper bound on time $t$ (resp. on $t$ and on the size of $U\subset\R^2$).
Let $\omega_\e^\circ$, $\bar\omega_\e^\circ$, $v_\e^\circ$, $\bar v_\e^\circ$ be defined as in Step~1 (with of course $\zeta_\e^\circ=\bar\zeta_\e^\circ=0$), and let $v_\e\in\Ld^\infty_\loc(\R^+;\bar v_\e^\circ+\Ld^2(\R^2)^2)$ be a global weak solution of~\eqref{eq:limeqn1} on $\R^+\times\R^2$ with initial data $v_\e^\circ$, as given by Corollary~\ref{cor:globexist2}.
Lemmas~\ref{lem:aprioriest}(iii) and~\ref{lem:Lpest}(ii) then give for all $t\ge0$
\begin{align}\label{eq:apriorieststep3-0}
\|\omega_\e\|_{\Ld^\infty_t(\Ld^1\cap\Ld^q)}+\|v_\e-\bar v_\e^\circ\|_{\Ld^\infty_t\Ld^2}\lesssim_t1.
\end{align}
As $\bar v_\e^\circ$ is bounded in $\Ld^2_\loc(\R^2)^2$, we deduce in particular that $v_\e$ is bounded in $\Ld^\infty_\loc(\R^+;\Ld^2_\loc(\R^2))$.
Moreover, using the Delort type identity
\[\omega_\e v_\e=-\frac12|v_\e|^2\nabla^\bot h-a^{-1}(\Div(aS_{v_\e}))^\bot,\]
we then deduce that $\omega_\e v_\e$ is bounded in $\Ld^\infty_\loc(\R^+;W^{-1,1}_\loc(\R^2)^2)$.
Let us now recall the following useful decomposition:
\begin{align}\label{eq:decompv12veryweak}
v_\e=v_{\e,1}+v_{\e,2},\quad v_{\e,1}:=\nabla^\bot\triangle^{-1}\omega_\e,\quad v_{\e,2}:=\nabla\triangle^{-1}\Div v_\e.
\end{align}
By Riesz potential theory $v_{\e,1}$ is bounded in $\Ld^\infty_\loc(\R^+;\Ld^p(\R^2)^2)$ for all $2< p\le \frac{2q}{2-q}$, while as in Step~1 we check that $v_{\e,2}$ is bounded in $\Ld^\infty_\loc(\R^+;H^1_\loc(\R^2)^2)$. Hence by the Sobolev embedding, for all bounded domain $U\subset\R^2$ and all $t\ge0$,
\begin{align}\label{eq:apriorieststep3}
\|(v_\e,v_{\e,1})\|_{\Ld^\infty_t\Ld^{2q/(2-q)}(U)}\lesssim_{t,U}1.
\end{align}
Up to an extraction we then have $v_\e\cvf*v$ in $\Ld^\infty_\loc(\R^+;\Ld^2_\loc(\R^2)^2)$ and $\omega_\e\cvf*\omega$ in $\Ld^\infty_\loc(\R^+;\Ld^q(\R^2))$, for some functions $v,\omega$, with necessarily $\omega=\curl v$ and $\Div(av)=0$.
We now need to pass to the limit in the nonlinearity $\omega_\e v_\e$. For that purpose, for all $\eta>0$, we set $v_{\e,\eta}:=\rho_\eta\ast v_\e$ and $\omega_{\e,\eta}:=\rho_\eta\ast\omega_\e=\curl v_{\e,\eta}$, where $\rho_\eta(x):=\eta^{-d}\rho(x/\eta)$ is the regularization kernel defined in Step~1, and we then decompose the nonlinearity as follows:
\begin{align*}
\omega_\e v_\e&=(\omega_{\e,\eta}-\omega_\e)(v_{\e,\eta}-v_\e)-\omega_{\e,\eta}v_{\e,\eta}+\omega_{\e,\eta}v_\e+\omega_\e v_{\e,\eta}.
\end{align*}
We consider separately each term in the right-hand side, and split the proof into four substeps.
\medskip
{\it Substep~3.1.}
We prove that $(\omega_{\e,\eta}-\omega_\e)(v_{\e,\eta}-v_\e)\to0$ in the distributional sense (and even strongly in $\Ld^\infty_\loc(\R^+;W^{-1,1}_\loc(\R^2)^2)$) as $\eta\downarrow0$, uniformly in $\e>0$.
For that purpose, we use the Delort type identity
\[(\omega_{\e,\eta}-\omega_\e) (v_{\e,\eta}-v_\e)=a^{-1}(v_{\e,\eta}-v_\e)\Div(a(v_{\e,\eta}-v_\e))-\frac12|v_{\e,\eta}-v_\e|^2\nabla^\bot h-a^{-1}(\Div (aS_{v_{\e,\eta}-v_\e}))^\bot.\]
Noting that the constraint $0=a^{-1}\Div(av_\e)=\nabla h\cdot v_\e+\Div v_\e$ yields
\[a^{-1}\Div(a(v_{\e,\eta}-v_\e))=\nabla h\cdot v_{\e,\eta}+\Div v_{\e,\eta}=\nabla h\cdot (\rho_\eta\ast v_\e)+\rho_\eta\ast\Div v_{\e}=\nabla h\cdot (\rho_\eta\ast v_{\e})-\rho_\eta\ast(\nabla h\cdot v_{\e}),\]
the above identity becomes
\begin{align*}
(\omega_{\e,\eta}-\omega_\e) (v_{\e,\eta}-v_\e)&=(v_{\e,\eta}-v_\e)\big(\nabla h\cdot (\rho_\eta\ast v_{\e})-\rho_\eta\ast(\nabla h\cdot v_{\e})\big)\\
&\hspace{3cm}-\frac12|v_{\e,\eta}-v_\e|^2\nabla^\bot h-a^{-1}(\Div(a S_{v_{\e,\eta}-v_\e}))^\bot.
\end{align*}
First, using the boundedness of $v_\e$ (hence of $v_{\e,\eta}$) in $\Ld^\infty_\loc(\R^+;\Ld^2_\loc(\R^2)^2)$, we may estimate, for any bounded domain $U\subset\R^2$, denoting by $U^\eta:=U+B_\eta$ its $\eta$-fattening,
\begin{align*}
&\int_U\big|(v_{\e,\eta}-v_\e)\big(\nabla h\cdot (\rho_\eta\ast v_{\e})-\rho_\eta\ast(\nabla h\cdot v_{\e})\big)\big|\\
\le~&\|(v_\e,v_{\e,\eta})\|_{\Ld^2(U)}\bigg(\int_U \Big(\int \rho_\eta(y)|\nabla h(x)-\nabla h(x-y)||v_\e(x-y)|dy\Big)^2dx\bigg)^{1/2}\\
\lesssim~&\|(v_\e,v_{\e,\eta})\|_{\Ld^2(U^\eta)}^2\Big(\int\rho_\eta(y)\int_U|\nabla h(x)-\nabla h(x-y)|^2dxdy\Big)^{1/2},
\end{align*}
where the right-hand side converges to $0$ as $\eta\downarrow0$, uniformly in $\e$.
Second, using the decomposition~\eqref{eq:decompv12veryweak}, and setting $v_{\e,\eta,1}:=\rho_\eta\ast v_{\e,1}$, $v_{\e,\eta,2}:=\rho_\eta\ast v_{\e,2}$, we may estimate by the Hlder inequality, for any bounded domain $U\subset\R^2$,
\begin{align*}
&\int_{U}|(v_\e-v_{\e,\eta})\otimes(v_\e-v_{\e,\eta})|\le\int_{U}|v_\e-v_{\e,\eta}||v_{\e,1}-v_{\e,\eta,1}|+\int_{U}|v_{\e}-v_{\e,\eta}||v_{\e,2}-v_{\e,\eta,2}|\\
&\hspace{2cm}\le \|(v_\e,v_{\e,\eta})\|_{\Ld^{2q/(2-q)}(U)}\|v_{\e,1}-v_{\e,\eta,1}\|_{\Ld^{2q/(3q-2)}(U)}+\|(v_\e,v_{\e,\eta})\|_{\Ld^{2}(U)}\|v_{\e,2}-v_{\e,\eta,2}\|_{\Ld^{2}(U)}.
\end{align*}
Recalling the choice $1<q\le 4/3$, we find by interpolation
\begin{align*}
\|v_{\e,1}-v_{\e,\eta,1}\|_{\Ld^{2q/(3q-2)}(U)}&\le \|v_{\e,1}-v_{\e,\eta,1}\|_{\Ld^{2}(U)}^{\frac{4-3q}{2-q}}\|v_{\e,1}-v_{\e,\eta,1}\|_{\Ld^{q}(U)}^{2\frac{q-1}{2-q}}\\
&\le\eta^{2\frac{q-1}{2-q}} \|(v_{\e,1},v_{\e,\eta,1})\|_{\Ld^{2}(U)}^{\frac{4-3q}{2-q}}\|\nabla v_{\e,1}\|_{\Ld^{q}}^{2\frac{q-1}{2-q}},
\end{align*}
and hence by the Caldern-Zygmund theory
\begin{align*}
\|v_{\e,1}-v_{\e,\eta,1}\|_{\Ld^{2q/(3q-2)}(U)}&\lesssim\eta^{2\frac{q-1}{2-q}} \|(v_{\e,1},v_{\e,\eta,1})\|_{\Ld^{2}(U)}^{\frac{4-3q}{2-q}}\|\omega_\e\|_{\Ld^{q}}^{2\frac{q-1}{2-q}},
\end{align*}
while as in Step~1 we may estimate
\begin{align*}
\|v_{\e,2}-v_{\e,\eta,2}\|_{\Ld^2_t\Ld^{2}(U)}&\le\eta\|\nabla v_{\e,2}\|_{\Ld^2_t\Ld^{2}(U^\eta)}\lesssim_U\eta.
\end{align*}
Combining this with the a priori estimates~\eqref{eq:apriorieststep3}, we may conclude
\[\int_0^t\int_{U}|(v_\e^i-v_{\e,\eta}^i)(v_\e^j-v_{\e,\eta}^j)|\lesssim_{t,U}\eta^{2\frac{q-1}{2-q}}+\eta,\]
and the claim follows.
\medskip
{\it Substep~3.2.}
We set $v_\eta:=\rho_\eta\ast v$, $\omega_\eta:=\rho_\eta\ast\omega=\curl v_\eta$, and we prove that $-\omega_{\e,\eta}v_{\e,\eta}+\omega_{\e,\eta}v_\e+\omega_\e v_{\e,\eta}\to-\omega_{\eta}v_{\eta}+\omega_\eta v+\omega v_\eta$ in the distributional sense as $\e\downarrow0$, for any fixed $\eta>0$. As $q<2<q'$, the weak convergences $v_\e\cvf*v$ in $\Ld^\infty_\loc(\R^+;\Ld^2_\loc(\R^2)^2)$ and $\omega_\e\cvf*\omega$ in $\Ld^\infty_\loc(\R^+;\Ld^q(\R^2))$ imply for instance $v_{\e,\eta}\cvf{*} v_\eta$ in $\Ld^\infty_\loc(\R^+;W^{1,q'}_\loc(\R^2)^2)$ and $\omega_{\e,\eta}\cvf{*} \omega_\eta$ in $\Ld^\infty_\loc(\R^+;H^1(\R^2))$ for all $\eta>0$ (note that these are still only weak-* convergences because no regularization occurs with respect to the time-variable $t$). Moreover, examining equation~\eqref{eq:limeqn1VF} together with the a priori estimates obtained at the beginning of this step, we observe that $\partial_t\omega_\e$ is bounded in $\Ld^\infty_\loc(\R^+;W^{-2,1}_\loc(\R^2))$, hence $\partial_t\omega_{\e,\eta}=\rho_\eta\ast\partial_t\omega_\e$ is also bounded in the same space. Since by the Rellich theorem the space $\Ld^q(U)$ is compactly embedded in $W^{-1,q}(U)\subset W^{-2,1}(U)$ for any bounded domain $U\subset\R^2$, the Aubin-Simon lemma ensures that we have $\omega_\e\to\omega$ strongly in $\Ld^\infty_\loc(\R^+;W^{-1,q}_\loc(\R^2))$, and similarly, since $H^1(U)$ is compactly embedded in $\Ld^2(U)\subset W^{-2,1}(U)$, we also deduce $\omega_{\e,\eta}\to\omega_\eta$ strongly in $\Ld^\infty_\loc(\R^+;\Ld^2_\loc(\R^2))$.
This proves the claim.
\medskip
{\it Substep~3.3.}
We prove that $-\omega_{\eta}v_{\eta}+\omega_\eta v+\omega v_\eta\to-\frac12|v|^2\nabla^\bot h-a^{-1}(\Div (aS_{v}))^\bot$ in the distributional sense as $\eta\downarrow0$. For that purpose, we use the following Delort type identity:
\begin{align*}
-\omega_{\eta}v_{\eta}+\omega_\eta v+\omega v_\eta&=-a^{-1}(v_\eta-v)\Div(a(v_{\eta}-v))+\frac12|v_\eta-v|^2\nabla^\bot h+a^{-1}(\Div (aS_{v_\eta-v}))^\bot\\
&\qquad+a^{-1}v\Div(av)-\frac12|v|^2\nabla^\bot h-a^{-1}(\Div (aS_{v}))^\bot.
\end{align*}
Noting that the limiting constraint $0=a^{-1}\Div(av)=\nabla h\cdot v+\Div v$ gives
\[a^{-1}\Div(a(v_{\eta}-v))=\nabla h\cdot v_{\eta}+\Div v_{\eta}=\nabla h\cdot( \rho_{\eta}\ast v)+\rho_\eta\ast\Div v=\nabla h\cdot( \rho_{\eta}\ast v)-\rho_\eta\ast(\nabla h\cdot v),\]
the above identity takes the form
\begin{align*}
-\omega_{\eta}v_{\eta}+\omega_\eta v+\omega v_\eta&=-a^{-1}(v_\eta-v)\big(\nabla h\cdot( \rho_{\eta}\ast v)-\rho_\eta\ast(\nabla h\cdot v)\big)+\frac12|v_\eta-v|^2\nabla^\bot h+a^{-1}(\Div (aS_{v_\eta-v}))^\bot\\
&\qquad-\frac12|v|^2\nabla^\bot h-a^{-1}(\Div (aS_{v}))^\bot,
\end{align*}
and it is thus sufficient to prove that the first three right-hand side terms tend to $0$ in the distributional sense as $\eta\downarrow0$. This is proven just as in Substep~3.1 above, with $v_{\e,\eta},v_\e$ replaced by $v_\eta,v$.
\medskip
{\it Substep~3.4: conclusion.}
Combining the three previous substeps yields $\omega_\e v_\e\to -\frac12|v|^2\nabla^\bot h-a^{-1}(\Div (aS_{v}))^\bot$ in the distributional sense as $\e\downarrow0$. Passing to the limit in the weak formulation of equation~\eqref{eq:limeqn1VF}, the conclusion follows.
\end{proof}
\section{Uniqueness}\label{chap:unique}
We finally turn to the uniqueness results stated in Theorem~\ref{th:mainunique}. On the one hand, using similar energy arguments as in the proof of Lemma~\ref{lem:aprioriest}, in the spirit of~\cite[Appendix~B]{Serfaty-15}, we prove a general weak-strong uniqueness principle (which only holds in a weaker form in the degenerate case $\lambda=0$). In the incompressible case~\eqref{eq:limeqn1}, we further prove uniqueness in the class of bounded vorticity, based on transport type arguments and on the Loeper inequality~\cite{Loeper-06} (see also~\cite{S-Vazquez-14}), while these tools are no longer available in the compressible case.
\begin{prop}[Uniqueness]
Let $\alpha,\beta\in\R$, $\lambda\ge0$, $T>0$, and $h,\Psi\in W^{1,\infty}(\R^2)^2$. Let $v^\circ:\R^2\to\R^2$ with $\omega^\circ:=\curl v^\circ\in\Pc(\R^2)$, and in the incompressible case~\eqref{eq:limeqn1} further assume that $\Div(av^\circ)=0$.
\begin{enumerate}[(i)]
\item \emph{Weak-strong uniqueness principle for~\eqref{eq:limeqn1} and~\eqref{eq:limeqn2} in the non-degenerate case $\lambda>0$, $\alpha\ge0$:}\\
If~\eqref{eq:limeqn1} or~\eqref{eq:limeqn2} admits a weak solution $v\in \Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)\cap\Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2)^2)$ on $[0,T)\times\R^2$ with initial data $v^\circ$, then it is the unique weak solution of~\eqref{eq:limeqn1} or of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ in the class $\Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)$ with initial data $v^\circ$.
\item \emph{Weak-strong uniqueness principle for~\eqref{eq:limeqn2} in the degenerate parabolic case $\lambda=\beta=0$, $\alpha\ge0$:}\\
Let $E^2_{T,v^\circ}$ denote the class of all $w\in \Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)$ with $\curl w\in\Ld^2_\loc([0,T);\Ld^2(\R^2))$. If~\eqref{eq:limeqn2} admits a weak solution $v\in E^2_{T,v^\circ}\cap\Ld^\infty_\loc([0,T);\Ld^{\infty}(\R^2)^2)$ on $[0,T)\times\R^2$ with initial data $v^\circ$, and with $\omega:=\curl v\in\Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2))$, then it is the unique weak solution of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ in the class $E^2_{T,v^\circ}$ with initial data $v^\circ$.
\item \emph{Uniqueness for~\eqref{eq:limeqn1} with bounded vorticity, $\alpha,\beta\in\R$:}\\
There exists at most a unique weak solution $v$ of~\eqref{eq:limeqn1} on $[0,T)\times\R^2$ with initial data $v^\circ$, in the class of all $w$'s such that $\curl w\in\Ld^\infty_\loc([0,T);\Ld^\infty(\R^2))$.
\end{enumerate}
Moreover, in items~(i)--(ii), the condition $\alpha\ge0$ may be dropped if we further restrict to weak solutions $v$ such that $\curl v\in\Ld^\infty_\loc([0,T);\Ld^\infty(\R^2))$.
\end{prop}
\begin{proof}
In this proof, we use the notation $\lesssim$ for $\le$ up to a constant $C>0$ that depends only on an upper bound on $\alpha$, $|\beta|$, $\lambda$, $\lambda^{-1}$, and $\|(h,\Psi)\|_{W^{1,\infty}}$, and we add subscripts to indicate dependence on further parameters.
We split the proof into four steps, first proving item~(i) in the case~\eqref{eq:limeqn1}, then in the case~\eqref{eq:limeqn2}, and finally turning to items~(ii) and~(iii).
\medskip
{\it Step~1: proof of~(i) in the case~\eqref{eq:limeqn1}.}
Let $\alpha\ge0$, $\beta\in\R$, and let $v_1,v_2\in\Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)$ be two weak solutions of~\eqref{eq:limeqn1} on $[0,T)\times\R^2$ with initial data $v^\circ$,
and assume $v_2\in\Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2)^2)$.
Set $\delta v:=v_1-v_2$ and $\delta\omega:=\omega_1-\omega_2$.
As the constraint $\Div(a\delta v)=0$ yields $\delta v=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\delta\omega$, and as by assumption $\delta v\in\Ld^2_\loc([0,T);\Ld^2(\R^2)^2)$, we deduce
$\delta\omega\in\Ld^2_\loc([0,T);\dot H^{-1}(\R^2))$ and $(\Div a^{-1}\nabla)^{-1}\delta\omega\in\Ld^2_\loc([0,T);\dot H^{1}(\R^2))$. Moreover, the definition of a weak solution ensures that $\omega_i:=\curl v_i\in\Ld^\infty([0,T);\Pc(\R^2))$ (cf. Lemma~\ref{lem:aprioriest}(i)), and $|v_i|^2\omega_i\in\Ld^1_\loc([0,T);\Ld^1(\R^2))$, for $i=1,2$, so that all the integrations by parts below are directly justified.
From equation~\eqref{eq:limeqn1VF}, we compute the following time-derivative
\begin{align}
\partial_t\int\delta\omega(-\Div a^{-1}\nabla)^{-1}\delta\omega&=2\int \nabla(\Div a^{-1}\nabla)^{-1}\delta\omega\cdot\Big((\alpha(\Psi+v_1)^\bot+\beta(\Psi+v_1))\omega_1\nonumber\\
&\hspace{5.5cm}-(\alpha(\Psi+v_2)^\bot+\beta(\Psi+v_2))\omega_2\Big)\nonumber\\
&=-2\int a\delta v^\bot\cdot\Big((\alpha(\delta v)^\bot+\beta\delta v)\omega_1+(\alpha(\Psi+v_2)^\bot+\beta(\Psi+v_2))\delta\omega\Big)\nonumber\\
&=-2\alpha\int a|\delta v|^2\omega_1-2\int a\delta\omega\delta v^\bot\cdot(\alpha(\Psi+v_2)^\bot+\beta(\Psi+v_2)).\label{eq:decompderivunique}
\end{align}
As $v_2$ is Lipschitz-continuous, and as the definition of a weak solution ensures that $\omega_1v_1\in\Ld^1_\loc([0,T);\Ld^1(\R^2)^2)$, the following Delort type identity holds in $\Ld^1_\loc([0,T);W^{-1,1}_\loc(\R^2)^2)$,
\[\delta \omega\delta v^\bot=\frac12|\delta v|^2\nabla h+a^{-1}\Div(aS_{\delta v}).\]
Combining this with~\eqref{eq:decompderivunique} and the non-negativity of $\alpha\omega_1$ yields
\begin{align*}
\partial_t\int\delta\omega(-\Div a^{-1}\nabla)^{-1}\delta\omega&\le-\int a|\delta v|^2\nabla h\cdot(\alpha(\Psi+v_2)^\bot+\beta(\Psi+v_2))+2\int a S_{\delta v}:\nabla(\alpha(\Psi+v_2)^\bot+\beta(\Psi+v_2))\\
&\le C(1+\|v_2\|_{W^{1,\infty}})\int a|\delta v|^2.
\end{align*}
The uniqueness result $\delta v=0$ then follows from the Grnwall inequality, since by integration by parts
\[\int a|\delta v|^2=\int a^{-1}|\nabla(\Div a^{-1}\nabla)^{-1}\delta\omega|^2=\int \delta \omega (-\Div a^{-1}\nabla)^{-1}\delta \omega.\]
Note that if we further assume $\omega_1\in\Ld^\infty([0,T);\Ld^\infty(\R^2))$, then the non-negativity of $\alpha$ can be dropped: it indeed suffices to estimate in that case $-2\alpha\int a|\delta v|^2\omega_1\le C\|\omega_1\|_{\Ld^\infty}\int a|\delta v|^2$, and the result then follows as above. A similar observation also holds in the context of item~(ii).
\medskip
{\it Step~2: proof of~(i) in the case~\eqref{eq:limeqn2}.}
Let $\alpha\ge0$, $\beta\in\R$, $\lambda>0$, and let $v_1,v_2\in\Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)$ be two weak solutions of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$, and assume $v_2\in\Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2)^2)$. The definition of a weak solution ensures that $\omega_i:=\curl v_i\in\Ld^\infty([0,T);\Pc(\R^2))$ (cf. Lemma~\ref{lem:aprioriest}(i)), $\zeta_i:=\Div(av_i)\in\Ld^2_\loc([0,T);\Ld^2(\R^2))$, and $|v_i|^2\omega_i\in\Ld^1_\loc([0,T);\Ld^1(\R^2))$, for $i=1,2$, and hence the integrations by parts below are directly justified. Set $\delta v:=v_1-v_2$, $\delta\omega:=\omega_1-\omega_2$, and $\delta \zeta:=\zeta_1-\zeta_2$. From equation~\eqref{eq:limeqn2}, we compute the following time-derivative
\begin{align*}
\partial_t\int a|\delta v|^2&=2\int a\delta v\cdot \Big(\lambda\nabla(a^{-1}\delta \zeta)-\alpha(\Psi+v_1)\omega_1+\beta(\Psi+v_1)^\bot\omega_1+\alpha(\Psi+v_2)\omega_2-\beta(\Psi+v_2)^\bot\omega_2\Big)\\
&=-2\lambda\int a^{-1}|\delta\zeta|^2-2\alpha\int a|\delta v|^2\omega_1+2\int a\delta\omega\delta v\cdot\big(\alpha(\Psi+v_2)-\beta (\Psi+v_2)^\bot\big).
\end{align*}
As $v_2$ is Lipschitz-continuous, and as $\omega_1v_1\in\Ld^1_\loc([0,T)\times\R^2)^2$ follows from the definition of a weak solution, the following Delort type identity holds in $\Ld^1_\loc([0,T);W^{-1,1}_\loc(\R^2)^2)$,
\begin{gather*}
\delta\omega\delta v=a^{-1}\delta\zeta\delta v^\bot-\frac12|\delta v|^2\nabla^\bot h-a^{-1}(\Div (aS_{\delta v}))^\bot.
\end{gather*}
The above may then be estimated as follows, after integration by parts,
\begin{align*}
\partial_t\int a|\delta v|^2&\le-2\lambda\int a^{-1}|\delta \zeta|^2-2\alpha\int a|\delta v|^2\omega_1+C(1+\|v_2\|_{\Ld^{\infty}})\int|\delta\zeta||\delta v|+C(1+\|v_2\|_{W^{1,\infty}})\int a|\delta v|^2,
\end{align*}
and thus, using the choice $\lambda>0$, the inequality $2xy\le x^2+y^2$, and the non-negativity of $\alpha\omega_1$,
\begin{align*}
\partial_t\int a|\delta v|^2&\le C(1+\| v_2\|_{W^{1,\infty}}^2)\int a|\delta v|^2.
\end{align*}
The Grnwall inequality then implies uniqueness, $\delta v=0$.
\medskip
{\it Step~3: proof of~(ii).}
Let $\lambda=\beta=0$, $\alpha=1$, and let $v_1,v_2\in\Ld^2_\loc([0,T);v^\circ+\Ld^2(\R^2)^2)$ be two weak solutions of~\eqref{eq:limeqn2} on $[0,T)\times\R^2$ with initial data $v^\circ$, and with $\omega_i:=\curl v_i\in\Ld^2_\loc([0,T);\Ld^2(\R^2))$ for $i=1,2$, and further assume $v_2\in\Ld^\infty_\loc([0,T);\Ld^\infty(\R^2)^2)$ and $\omega_2\in\Ld^\infty_\loc([0,T);W^{1,\infty}(\R^2))$.
The definition of a weak solution ensures that $\omega_i:=\curl v_i\in\Ld^\infty([0,T);\Pc(\R^2))$ (cf. Lemma~\ref{lem:aprioriest}(i)), $\zeta_i:=\Div(av_i)\in\Ld^2_\loc([0,T);\Ld^2(\R^2))$, and $|v_i|^2\omega_i\in\Ld^1_\loc([0,T);\Ld^1(\R^2))$, for $i=1,2$, and hence the integrations by parts below are directly justified.
Denoting $\delta v:=v_1-v_2$ and $\delta\omega:=\omega_1-\omega_2$, equation~\eqref{eq:limeqn2} yields
\begin{align}
\partial_t\delta v=-(\Psi+v_2)\delta\omega-\omega_1\delta v,\label{eq:lambda0eq1}
\end{align}
while equation~\eqref{eq:limeqn1VF} takes the form
\begin{align}
\partial_t\delta\omega&=\Div((\Psi+v_2)^\bot\delta\omega)+\Div(\omega_1\delta v^\bot)\nonumber\\
&=\Div((\Psi+v_2)^\bot\delta\omega)+\nabla\omega_1\cdot\delta v^\bot-\omega_1\delta \omega\nonumber\\
&=\Div((\Psi+v_2)^\bot\delta\omega)+\nabla\omega_2\cdot\delta v^\bot+\nabla\delta\omega\cdot\delta v^\bot-\omega_1\delta \omega.\label{eq:lambda0eq2}
\end{align}
Testing equation~\eqref{eq:lambda0eq1} against $\delta v$ yields, by non-negativity of $\omega_1$,
\begin{align*}
\partial_t\int|\delta v|^2&=-2\int|\delta v|^2 \omega_1-2\int\delta v\cdot(\Psi+v_2)\delta\omega\le C(1+\|v_2\|_{\Ld^\infty})\int|\delta v||\delta\omega|.
\end{align*}
Testing equation~\eqref{eq:lambda0eq2} against $\delta \omega$ and integrating by parts yields, by non-negativity of $\omega_1$ and $\omega_2$,
\begin{align*}
\partial_t\int|\delta \omega|^2&=-\int \nabla|\delta\omega|^2\cdot (\Psi+v_2)^\bot+2\int \delta\omega\nabla \omega_2\cdot\delta v^\bot+\int\nabla|\delta\omega|^2\cdot\delta v^\bot-2\int |\delta\omega|^2\omega_1\\
&=-\int |\delta\omega|^2(\curl \Psi+\omega_2)+2\int \delta\omega \nabla \omega_2\cdot\delta v^\bot+\int|\delta\omega|^2(\omega_1-\omega_2)-2\int |\delta\omega|^2\omega_1\\
&\le C\int |\delta\omega|^2+2\|\nabla\omega_2\|_{\Ld^\infty}\int |\delta v||\delta\omega|.
\end{align*}
Combining the above two estimates and using the inequality $2xy\le x^2+y^2$, we find
\begin{align*}
\partial_t\int(|\delta v|^2+|\delta \omega|^2)&\le C(1+\|(v_2,\nabla\omega_2)\|_{\Ld^\infty})\int (|\delta v|^2+|\delta\omega|^2),
\end{align*}
and the uniqueness result follows from the Grnwall inequality.
\medskip
{\it Step~4: proof of~(iii).}
Let $\alpha,\beta\in\R$, and let $v_1,v_2$ denote two solutions of~\eqref{eq:limeqn1} on $[0,T)\times\R^2$ with initial data $v^\circ$, and with $\omega_1,\omega_2\in\Ld^\infty_\loc([0,T);\Ld^\infty(\R^2))$.
First we prove that $v_1^t,v_2^t$ are log-Lipschitz for all $t\in[0,T)$ (compare with the easier situation in~\cite[Lemma~4.1]{S-Vazquez-14}). Let $i=1,2$. Using the identity $v_i^t=\nabla^\bot\triangle^{-1}\omega_i^t+\nabla\triangle^{-1}\Div v_i^t$ with $\Div v_i^t=-\nabla h\cdot v_i^t$, we may decompose for all $x,y$,
\[|v_i^t(x)-v_i^t(y)|\le|\nabla\triangle^{-1}\omega_i^t(x)-\nabla\triangle^{-1}\omega_i^t(y)|+|\nabla\triangle^{-1}(\nabla h\cdot v_i^t)(x)-\nabla\triangle^{-1}(\nabla h\cdot v_i^t)(y)|.\]
By the embedding of the Zygmund space $C^1_*(\R^2)=B^1_{\infty,\infty}(\R^2)$ into the space of log-Lipschitz functions (see e.g. \cite[Proposition~2.107]{BCD-11}), we may estimate
\[|v_i^t(x)-v_i^t(y)|\lesssim \big(\|\nabla^2\triangle^{-1}\omega_i^t\|_{C^0_*}+\|\nabla^2\triangle^{-1}(\nabla h\cdot v_i^t)\|_{C^0_*}\big)|x-y|(1+\log^-(|x-y|)),\]
and hence, for any $1\le p<\infty$, by Lemma~\ref{lem:pottheoryCsHs}(ii), and recalling that $\Ld^\infty(\R^2)$ is embedded in $C^0_*(\R^2)=B^0_{\infty,\infty}(\R^2)$,
\begin{align*}
|v_i^t(x)-v_i^t(y)|&\lesssim_p \big(\|\omega_i^t\|_{\Ld^1\cap C^0_*}+\|\nabla h\cdot v_i^t\|_{\Ld^{p}\cap C^0_*}\big)|x-y|(1+\log^-(|x-y|))\\
&\lesssim \big(\|\omega_i^t\|_{\Ld^1\cap \Ld^\infty}+\|v_i^t\|_{\Ld^{p}\cap \Ld^\infty}\big)|x-y|(1+\log^-(|x-y|)).
\end{align*}
Noting that $v_i^t=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\omega_i^t$, the elliptic estimates of Lemma~\ref{lem:globellreg} yield $\|v_i^t\|_{\Ld^{p_0}\cap\Ld^\infty}\lesssim\|\omega_i^t\|_{\Ld^1\cap\Ld^\infty}$ for some exponent $2<p_0\lesssim1$. For the choice $p=p_0$, the above thus takes the following form,
\begin{align}\label{eq:logLipom}
|v_i^t(x)-v_i^t(y)|&\lesssim \|\omega_i^t\|_{\Ld^1\cap \Ld^\infty}|x-y|(1+\log^-(|x-y|))\le(1+\|\omega_i^t\|_{\Ld^\infty})|x-y|(1+\log^-(|x-y|)),
\end{align}
which proves the claim that $v_1^t,v_2^t$ are log-Lipschitz for all $t\in[0,T)$.
For $i=1,2$, as the vector field $\alpha(\Psi+v_i)+\beta(\Psi+v_i)^\bot$ is log-Lipschitz, the associated flow $\psi_i:[0,T)\times\R^2\to\R^2$ is well-defined globally,
\[\partial_t\psi_i(x)=-(\alpha(\Psi+v_i)+\beta(\Psi+v_i)^\bot)(\psi_i(x)).\]
As the transport equation~\eqref{eq:limeqn1VF} ensures that $\omega_i^t=(\psi_i^t)_*\omega^\circ$ for $i=1,2$, the $2$-Wasserstein distance between the solutions $\omega_1^t,\omega_2^t\in\Pc(\R^2)$ is bounded by
\begin{align}\label{eq:boundW2Q}
W_2(\omega_1^t,\omega_2^t)^2\le\int|\psi_1^t(x)-\psi_2^t(x)|^2\omega^\circ(x)dx=:Q^t.
\end{align}
Now the time-derivative of $Q$ is estimated by
\begin{align*}
\partial_t Q^t&=-2\int(\psi_1^t(x)-\psi_2^t(x))\cdot\big((\alpha\Psi+\beta\Psi^\bot)(\psi_1^t(x))-(\alpha\Psi+\beta\Psi^\bot)(\psi_2^t(x))\big)\omega^\circ(x)dx\\
&\qquad-2\int(\psi_1^t(x)-\psi_2^t(x))\cdot\big((\alpha v_1^t+\beta(v_1^t)^\bot)(\psi_1^t(x))-(\alpha v_2^t+\beta(v_2^t)^\bot)(\psi_2^t(x))\big)\omega^\circ(x)dx\\
&\le CQ^t+C(Q^t)^{1/2}\bigg(\int|v_1^t(\psi_1^t(x))-v_2^t(\psi_2^t(x))|^2\omega^\circ(x)dx\bigg)^{1/2}\\
&\le CQ^t+C(Q^t)^{1/2}(T_1^t+T_2^t)^{1/2},
\end{align*}
where we have set
\[T_1^t:=\int|(v_1^t-v_2^t)(\psi_2^t(x))|^2\omega^\circ(x)dx,\qquad T_2^t:=\int|v_1^t(\psi_1^t(x))-v_1^t(\psi_2^t(x))|^2\omega^\circ(x)dx.\]
We first study $T_1$. Using that $v_i=a^{-1}\nabla^\bot(\Div a^{-1}\nabla)^{-1}\omega_i$, we find
\begin{align*}
T_1^t=\int |v_1^t-v_2^t|^2\omega_2^t\le\|\omega_2^t\|_{\Ld^\infty}\int|v_1^t-v_2^t|^2&=\|\omega_2^t\|_{\Ld^\infty}\int|\nabla(\Div a^{-1}\nabla)^{-1}(\omega_1^t-\omega_2^t)|^2\\
&\lesssim\|\omega_2^t\|_{\Ld^\infty}\int|\nabla\triangle^{-1}(\omega_1^t-\omega_2^t)|^2.
\end{align*}
(Here, we use the fact that if $-\Div (a^{-1}\nabla u_1)=-\triangle u_2$ with $u_1,u_2\in H^1(\R^2)$, then $\int a^{-1} |\nabla u_1|^2=\int\nabla u_1\cdot \nabla u_2\le\frac12\int a^{-1}|\nabla u_1|^2+\frac12\int a|\nabla u_2|^2$, hence $\int a^{-1}|\nabla u_1|^2\le\int a|\nabla u_2|^2$.) The Loeper inequality~\cite[Proposition~3.1]{Loeper-06} and the bound~\eqref{eq:boundW2Q} then imply
\begin{align*}
T_1^t\le\|\omega_2^t\|_{\Ld^\infty}(\|\omega_1^t\|_{\Ld^\infty}\vee\|\omega_2^t\|_{\Ld^\infty})W_2(\omega_1^t,\omega_2^t)^2\le\|(\omega_1^t,\omega_2^t)\|_{\Ld^\infty}^2Q^t.
\end{align*}
We finally turn to $T_2$.
Using the log-Lipschitz property~\eqref{eq:logLipom} and the concavity of the function $f(x)=x(1+\log^-x)^2$,
we obtain by Jensen's inequality
\begin{align*}
T_2^t&\lesssim \|\omega^t_1\|_{\Ld^\infty}^2\int(1+\log^-(|\psi_1^t-\psi_2^t|))^2|\psi_1^t-\psi_2^t|^2\omega^\circ\\
&\le\|\omega^t_1\|_{\Ld^\infty}^2\bigg(1+\log^-\int|\psi_1^t-\psi_2^t|^2\omega^\circ\bigg)^2\int|\psi_1^t-\psi_2^t|^2\omega^\circ\\
&\lesssim \|\omega^t_1\|_{\Ld^\infty}^2(1+\log^-Q^t)^2Q^t.
\end{align*}
We may thus conclude $\partial_tQ\lesssim (1+\|(\omega_1,\omega_2)\|_{\Ld^\infty})Q(1+\log^-Q)$,
and the uniqueness result directly follows from a Grnwall argument.
\end{proof}
\subsubsection*{Acknowledgements}
The work of the author is supported by F.R.S.-FNRS (Belgian National Fund for Scientific Research) through a Research Fellowship.
The author would also like to thank his PhD advisor Sylvia Serfaty for useful comments on this work.
\bigskip
\bibliographystyle{plain}
\bibliography{biblio}
\bigskip
{\small (Mitia Duerinckx) {\sc Universit Libre de Bruxelles (ULB), Brussels, Belgium, \& MEPHYSTO team, Inria Lille--Nord Europe, Villeneuve d'Ascq, France, \& Laboratoire Jacques-Louis-Lions, Universit Pierre et Marie Curie (UPMC), Paris, France}
{\it E-mail address:} mduerinc@ulb.ac.be
}
\end{document} | {"config": "arxiv", "file": "1607.00268/MF-glob-ex_4.tex"} |
\section{One-dimensional PCP Lagrangian schemes}\label{section:1D-PCP}
This section considers the Lagrangian finite volume schemes for the special relativistic hydrodynamics (RHD) equations \eqref{eq1} or \eqref{eq4} with $d=1$, here
the notations $\vec{ F}_1$, $u_1$ and $x_1$ are replaced with $\vec{ F}$, $u$ and $x$, respectively,
and $\vec{\mathcal F}_{\vec{n}}(\vec{U})=\left(0,p,pu\right)^T=:\vec{\mathcal F}(\vec{U})$.
Assume that the time interval $\{t>0\}$ is discretized as:
$t_{n+1}=t_n+\Delta t^n$, $n=0,1,2,\cdots$,
where $\Delta t^n$ is the time step size at $t=t_n$ and will be determined
by the CFL type condition.
The (dynamic) computational domain at $t=t_n$ is divided into $N$ cells: $I_i^n =[x^n_{i-\frac{1}{2}},x^n_{i+\frac{1}{2}}]$ with the sizes $\Delta x_i^n =x^n_{i+\frac{1}{2}}-x^n_{i-\frac{1}{2}}$
for $i=1,\cdots,N$.
The mesh node $x^n_{i+\frac{1}{2}}$ is moved with
the fluid velocity $u^n_{i+\frac{1}{2}}$, that is,
$$
x^{n+1}_{i+\frac{1}{2}}=x^n_{i+\frac{1}{2}}+\Delta t^n u^n_{i+\frac{1}{2}},\ i=1,\cdots,N.
$$
For the RHD system \eqref{eq1} or \eqref{eq4} with $d=1$,
the Lagrangian finite volume scheme with the forward Euler time discretization reads
\begin{equation}\label{eq8}
\overline{\vec{U}}_{i}^{n+1}\Delta x_i^{n+1}=\overline{\vec{U}}_{i}^{n}\Delta x_i^{n}-\Delta t^n
\left(\widehat{\vec{\mathcal F}}(\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i+\frac{1}{2}}^{+})-\widehat{\vec{\mathcal F}}(\vec{U}_{i-\frac{1}{2}}^{-},
\vec{U}_{i-\frac{1}{2}}^{+})\right),
\end{equation}
where $\overline{\vec{U}}_i^{n}$ and $\overline{\vec{U}}_i^{n+1}$ are approximate
cell average values of the conservative vectors at $t_n$ and $t_{n+1}$ over the cells $I_i^n$ and $I_i^{n+1}$,
respectively, the numerical flux
$\widehat{\vec{\mathcal F}}$ is a function of two variables and satisfies the consistency
$\widehat{\vec{\mathcal F}}(\vec U,\vec U)= \left(0,p,pu\right)^T$,
and $\vec{U}_{i+\frac{1}{2}}^{\mp}$ are the left- and right-limited approximations of $\vec{U}$
at $x_{i+\frac{1}{2}}$, respectively, obtained from $\{\overline{\vec{U}}_i^{n}\}$ by the initial reconstruction technique.
In this paper, the numerical flux $\widehat{\vec{\mathcal F}}(\vec U^-,\vec U^+)$ is computed by means of the HLLC Riemann solver of \eqref{eq1} with $d=1$ \cite{mignone}, but in a coordinate system that is moving with the intermediate wave (i.e. contact discontinuity) speed $s^*$ (see below).
Let us consider the Riemann problem
\begin{equation}\label{eq11}
\vec{U}_t+\widetilde{\vec{F}}(\vec{U})_{\xi}=0,\ \
\vec{U}(\xi,0)=\left\{\begin{array}{ll}
\vec{U}^{-},&\xi<0,\\
\vec{U}^{+},&\xi>0,
\end{array}\right.
\end{equation}
where $\xi=x-s^* t$ and
\begin{equation}\label{eq12}
\vec{U}=(D,m,E)^T,\quad \widetilde{\vec{F}}(\vec{U})=\left(
D(u-s^\ast), m(u-s^\ast)+p, E(u-s^\ast)+pu
\right)^T.
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[width=4.2in]{figures/HLLC.pdf}
\caption{Structure of the exact or HLLC approximate solution of the Riemann problem \eqref{eq11}.}
\label{fig-hllc}
\end{figure}
Figure \ref{fig-hllc} shows the corresponding wave structure of the exact or HLLC approximate solution of the Riemann problem \eqref{eq11}.
There are three wave families associated with the eigenvalues of $\partial \widetilde{\vec{F}}/\partial \vec{U}$,
and the contact discontinuity always coincides with the vertical axis $\xi=0$.
Two constant intermediate states between the left and right acoustic waves are
denoted by $\vec{U}^{\ast,-}$ and $\vec{U}^{\ast,+}$, respectively, corresponding fluxes
are denoted by $\widetilde{\vec F}^{\ast,\pm}$.
The symbols $s^{-}$ and $s^{+}$ denote the smallest and largest speeds of the acoustic waves, respectively,
and are specifically set as
\begin{equation}\label{ws}
s^{-}=\min\big(s_{\min}(\vec{U}^{-}),s_{\min}(\vec{U}^{+})\big),~~s^{+}=\max\big(s_{\max}(\vec{U}^{-}),s_{\max}(\vec{U}^{+})\big),
\end{equation}
where $s_{\max}(\vec{U})$ and $s_{\min}(\vec{U})$ are the maximum and minimum eigenvalues of
the Jacobian matrix $\partial {\vec F}/\partial {\vec U}$, and
can be explicitly given by
\begin{equation}\label{eq31}
s_{\min}(\vec{U})=\frac{u-c_s}{1-c_su},
~~~s_{\max}(\vec{U})=\frac{u+c_s}{1+c_su}.
\end{equation}
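For concreteness, the estimates \eqref{eq31} and \eqref{ws} translate directly into code.
The following sketch (written in Python, for illustration only) assumes that the velocity $u$ and the sound speed $c_s$ have already been recovered from $\vec{U}$; the conservative-to-primitive solver and the equation of state are not specified here.
\begin{verbatim}
# Sketch: eigenvalues (eq31) and HLLC wave-speed estimate (ws); units with c = 1.
def s_min_max(u, cs):
    return (u - cs) / (1.0 - cs * u), (u + cs) / (1.0 + cs * u)

def wave_speeds(uL, csL, uR, csR):
    # outermost acoustic speeds of the local Riemann problem
    sL_min, sL_max = s_min_max(uL, csL)
    sR_min, sR_max = s_min_max(uR, csR)
    return min(sL_min, sR_min), max(sL_max, sR_max)
\end{verbatim}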
Integrating
the conservation law in \eqref{eq11} over the control volume
$[\xi_L,0]\times[t_n,t_{n+1}]$ with $\xi_L\le\Delta t^n(s^{-}-s^{\ast})$ (the left portion of Figure \ref{fig-hllc}) gives
the flux $\widetilde{\vec {F}}^{\ast,-}$ along the $t$-axis as follows
\begin{equation}\label{eq50}
\widetilde{\vec {F}}^{\ast,-}=\widetilde{\vec {F}}(\vec U^-)-(s^{-}-s^{\ast})\vec{U}^{-}
-\frac{1}{\Delta t^n}\int_{\Delta t^n(s^{-}-s^{\ast})}^{0}\vec{U}(\xi,t_{n+1})d\xi.
\end{equation}
Similarly, performing the same operation on the control volume $[0,\xi_R]\times[t_n,t_{n+1}]$ with $\xi_R\ge\Delta t^n(s^{+}-s^{\ast})$ gives
\begin{equation}\label{eq51}
\widetilde{\vec {F}}^{\ast,+}=\widetilde{\vec {F}}(\vec U^+)-(s^{+}-s^{\ast})\vec{U}^{+}
+\frac{1}{\Delta t^n}\int_{0}^{\Delta t^n(s^{+}-s^{\ast})}\vec{U}(\xi,t_{n+1})d\xi.
\end{equation}
By means of the definitions of the intermediate states $\vec{U}^{\ast,\pm}$ \cite{toro}
\begin{equation*}
\vec{U}^{\ast,\pm} =\frac{1}{\Delta t^n(s^{\pm}-s^{\ast})}\int_{0}^{\Delta t^n(s^{\pm}-s^{\ast})}\vec{U}(\xi,t_{n+1})d\xi,
\end{equation*}
one can rewrite \eqref{eq50} and \eqref{eq51} as
\begin{equation}\label{eq53}
\widetilde{\vec {F}}^{\ast,\pm}=\widetilde{\vec {F}}(\vec U^{\pm})-(s^{\pm}-s^{\ast})(\vec{U}^{\pm}-\vec{U}^{\ast,\pm}),
\end{equation}
which are the Rankine-Hugoniot conditions for the right and left waves in Figure \ref{fig-hllc}.
If assuming
$\vec{U}^{\ast,\pm}=(D^{\ast,\pm},m^{\ast,\pm},E^{\ast,\pm})^T$ and
$\widetilde{\vec {F}} ^{\ast,\pm}=\big(D^{\ast,\pm}(u^{\ast,\pm}-s^{\ast}),m^{\ast,\pm}(u^{\ast,\pm}-s^{\ast})+p^{\ast,\pm},
E^{\ast,\pm}(u^{\ast,\pm}-s^{\ast})+p^{\ast,\pm}u^{\ast,\pm}\big)^T$,
\eqref{eq53} can explicitly give
\begin{equation}\label{add7}
\begin{aligned}
&D^{\ast,\pm}(s^{\pm}-u^{\ast,\pm})=D^{\pm}(s^{\pm}-u^{\pm}),\\
&m^{\ast,\pm}(s^{\pm}-u^{\ast,\pm})=m^{\pm}(s^{\pm}-u^{\pm})+p^{\ast,\pm}-p^{\pm},\\
&E^{\ast,\pm}(s^{\pm}-u^{\ast,\pm})=E^{\pm}(s^{\pm}-u^{\pm})+p^{\ast,\pm}u^{\ast,\pm}-p^{\pm}u^{\pm}.\\
\end{aligned}
\end{equation}
From those, one obtains
\begin{equation}\label{eq15}
(A^{\pm}+s^{\pm}p^{\ast,\pm})u^{\ast,\pm}=B^{\pm}+p^{\ast,\pm},
\end{equation}
where
\begin{equation}\label{eq67}
A^{\pm}=s^{\pm}E^{\pm}-m^{\pm},~~~B^{\pm}=m^{\pm}(s^{\pm}-u^{\pm})-p^{\pm}.
\end{equation}
Because the system \eqref{add7} is underdetermined,
one can impose the conditions on the continuity of the pressure and velocity across the contact discontinuity,
i.e.
\begin{equation}\label{add8}
p^{\ast,-}=p^{\ast,+}=:p^{\ast},~~u^{\ast,-}=u^{\ast,+}=:u^{\ast} (=s^{\ast}).
\end{equation}
Combining \eqref{eq15} with \eqref{add8} gives
\begin{equation}\label{eq16}
(1-s^{\pm}s^\ast)p^\ast=s^\ast A^{\pm}-B^{\pm}.
\end{equation}
Eliminating $p^*$ gives a quadratic equation in terms of $s^\ast$ as follows
\begin{equation}\label{eq17}
\mathcal{C}_0+\mathcal{C}_1s^\ast+\mathcal{C}_2(s^\ast)^2=0,
\end{equation}
where
\begin{equation*}
\mathcal{C}_0=B ^{+}-B^{-},~~\mathcal{C}_1=A^{-}+s^{+}B^{-}-A^{+}-s^{-}B^{+},~~
\mathcal{C}_2=s^{-}A^{+}-s^{+}A^{-}.
\end{equation*}
From \eqref{eq17}, one has
\begin{equation}\label{eq19}
s^{\ast}=u^{\ast}=\frac{-\mathcal{C}_1-\sqrt{\mathcal{C}_1^2-4\mathcal{C}_0\mathcal{C}_2}}{2\mathcal{C}_2},
\end{equation}
since the wave speed $s^\ast$ should be less than
the speed of light $c=1$ \cite{mignone}.
Then \eqref{eq16} further gives
\begin{equation}\label{eq66}
p^{\ast}=\frac{s^\ast A^{-}-B^{-}}{1-s^{-}s^\ast}=\frac{s^\ast A^{+}-B^{+}}{1-s^{+}s^\ast}.
\end{equation}
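The computation of $s^{\ast}$ and $p^{\ast}$ from \eqref{eq67}, \eqref{eq17}, \eqref{eq19} and \eqref{eq66} may then be sketched as follows (Python, illustrative only; as in \eqref{eq19}, the discriminant is taken to be nonnegative, and $\mathcal{C}_2\neq0$ is assumed).
\begin{verbatim}
import math

def contact_speed_pressure(DL, mL, EL, uL, pL, DR, mR, ER, uR, pR, sm, sp):
    # sm, sp are s^-, s^+ from (ws); A, B as in (eq67)
    Am, Ap = sm * EL - mL, sp * ER - mR
    Bm, Bp = mL * (sm - uL) - pL, mR * (sp - uR) - pR
    C0 = Bp - Bm                              # coefficients of (eq17)
    C1 = Am + sp * Bm - Ap - sm * Bp
    C2 = sm * Ap - sp * Am
    s_star = (-C1 - math.sqrt(C1 * C1 - 4.0 * C0 * C2)) / (2.0 * C2)  # (eq19)
    p_star = (s_star * Am - Bm) / (1.0 - sm * s_star)                 # (eq66)
    return s_star, p_star
\end{verbatim}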
From \eqref{add7}, one can obtain $\vec U^{\ast,\pm}$ and then
substitute $\vec U^{\ast,\pm}$ into
\eqref{eq53} to give the HLLC flux (approximating $\widetilde{\vec{F}}(\vec{U})$) in the Lagrangian framework
$\widehat{\vec{\mathcal{F}}}=\widetilde{\vec {F}}^{\ast,+}=\widetilde{\vec {F}}^{\ast,-}$.
It is worth noticing that $p^{\ast}$ in \eqref{eq66} and $u^\ast$ in
\eqref{eq19}
are different from the pressure and the velocity calculated from the resulting $\vec U^{\ast,\pm}$ via \eqref{pre} and \eqref{add1},
so in general
$p^{\ast}\neq p(\vec{U}^{\ast,\pm})$, $u^{\ast}\neq u(\vec{U}^{\ast,\pm})$ and
$\widetilde{\vec {F}}^{\ast,\pm}\neq \widetilde{\vec{F}}(\vec{U}^{\ast,\pm})$.
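Given $s^{\ast}$ and $p^{\ast}$, the intermediate states follow from \eqref{add7} with $u^{\ast,\pm}=s^{\ast}$ and $p^{\ast,\pm}=p^{\ast}$; moreover, one can check by inserting \eqref{add7} into \eqref{eq53} that both $\widetilde{\vec {F}}^{\ast,-}$ and $\widetilde{\vec {F}}^{\ast,+}$ collapse to the single moving-frame flux $(0,p^{\ast},p^{\ast}s^{\ast})^T$, in agreement with the consistency condition. A sketch, reusing the routine above:
\begin{verbatim}
def hllc_states_and_flux(DL, mL, EL, uL, pL, DR, mR, ER, uR, pR, sm, sp):
    ss, ps = contact_speed_pressure(DL, mL, EL, uL, pL,
                                    DR, mR, ER, uR, pR, sm, sp)
    def star_state(D, m, E, u, p, s):
        # the three equalities of (add7), solved for (D*, m*, E*)
        Ds = D * (s - u) / (s - ss)
        ms = (m * (s - u) + ps - p) / (s - ss)
        Es = (E * (s - u) + ps * ss - p * u) / (s - ss)
        return Ds, ms, Es
    UsL = star_state(DL, mL, EL, uL, pL, sm)
    UsR = star_state(DR, mR, ER, uR, pR, sp)
    flux = (0.0, ps, ps * ss)  # F*^- = F*^+ after inserting (add7) into (eq53)
    return flux, ss, UsL, UsR
\end{verbatim}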
For the above HLLC solver, the following conclusion holds.
\begin{lemma}\label{lem3}
For the given $\vec{U}^{\pm}=(D^{\pm},m^{\pm},E^{\pm})^{T}$, if the wave speeds $s^{\pm}$ are estimated by \eqref{ws} and $s^{\ast}$ is the speed of the contact discontinuity,
then $A^{\pm}$ and $B^{\pm}$ defined in \eqref{eq67} satisfy
the following
inequalities
\begin{enumerate}[(\romannumeral1)]
\item $A^{-}<0,~~A^{+}>0$;
\item $A^{-}-B^{-}<0,~~~A^{+}+B^{+}>0$;
\item $s^{-}A^{-}-B^{-}>0,~~s^{+}A^{+}-B^{+}>0$;
\item $s^{+}A^{-}-B^{-}<0,~~s^{-}A^{+}-B^{+}<0$;
\item $s^{-}<u^{\pm}<s^{+},~~s^{-}<s^{\ast}<s^{+}$;
\item $4(A^{\pm})^2-(s^{\pm}A^{\pm}+B^{\pm})^2>0$;
\item $(A^{\pm})^2-(B^{\pm})^2-(D^{\pm})^2(s^{\pm}-u^{\pm})^2>0$.
\end{enumerate}
\end{lemma}
\noindent
The proof is presented in Appendix \ref{appendix-A}.
\subsection{First-order accurate scheme}
For the first-order accurate Lagrangian scheme, $\vec{U}_{i\pm\frac{1}{2}}^{\mp}$ are calculated
from the reconstructed piecewise constant function,
namely, $\vec{U}_{i\pm\frac{1}{2}}^{\mp}$ in the scheme \eqref{eq8} are
defined by
\begin{equation}\label{eq9}
\vec{U}_{i+\frac{1}{2}}^{-}=\overline{\vec{U}}_{i}^{n}, \quad
\ \vec{U}_{i+\frac{1}{2}}^{+}=\overline{\vec{U}}_{i+1}^{n}.
\end{equation}
These naturally form local Riemann problems at the endpoints of each cell; see the diagrammatic sketch in Figure \ref{scheme}.
The endpoints of the cell are evolved by
\begin{equation}\label{eq8-2}
\begin{aligned}
&x_{i+\frac{1}{2}}^{n+1}=x_{i+\frac{1}{2}}^{n}+\Delta t^nu_{i+\frac{1}{2}}^{\ast},\ \
u_{i+\frac{1}{2}}^{\ast}=s_{i+\frac{1}{2}}^{\ast}.
\end{aligned}
\end{equation}
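A complete forward-Euler step of the first-order scheme, i.e. \eqref{eq8} with \eqref{eq9} and \eqref{eq8-2}, may then be sketched as below, reusing the routines sketched above (boundary conditions are omitted, and the conservative-to-primitive recovery, which in RHD requires a root-finding, is assumed given).
\begin{verbatim}
def lagrangian_step(x, Ubar, recover, dt):
    # x: node positions (length N+1); Ubar: cell averages [(D, m, E)];
    # recover(U) -> (u, p, cs).  Interior nodes only; no boundary treatment.
    N = len(Ubar)
    F = [(0.0, 0.0, 0.0)] * (N + 1)       # numerical fluxes at the nodes
    us = [0.0] * (N + 1)                  # node velocities u* = s*
    for j in range(1, N):
        (DL, mL, EL), (DR, mR, ER) = Ubar[j - 1], Ubar[j]   # data (eq9)
        uL, pL, csL = recover(Ubar[j - 1])
        uR, pR, csR = recover(Ubar[j])
        sm, sp = wave_speeds(uL, csL, uR, csR)
        fl, ss, _, _ = hllc_states_and_flux(DL, mL, EL, uL, pL,
                                            DR, mR, ER, uR, pR, sm, sp)
        F[j], us[j] = fl, ss
    xn = [x[j] + dt * us[j] for j in range(N + 1)]          # (eq8-2)
    Un = [tuple((Ubar[i][k] * (x[i+1] - x[i])
                 - dt * (F[i+1][k] - F[i][k])) / (xn[i+1] - xn[i])
                for k in range(3)) for i in range(N)]       # (eq8)
    return xn, Un
\end{verbatim}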
Similar to the Godunov scheme, under a suitable CFL condition (see the coming Theorem \ref{theorem2.3}),
the waves in two neighboring local Riemann problems (e.g. centered at the points $x_{i-\frac{1}{2}}^n$ and $x_{i+\frac{1}{2}}^n$) do not interact with each other within
a time step, so
$\overline{\vec{U}}^{n+1}_i$ in the scheme \eqref{eq8} can be equivalently
derived by exactly integrating the approximate Riemann solutions over
the cell $[x^{n+1}_{i-\frac{1}{2}}, x^{n+1}_{i+\frac{1}{2}}]$, i.e.
\begin{equation}\label{eq21}
\overline{\vec{U}}^{n+1}_i=\frac{1}{\Delta x_i^{n+1}}\left(\int_{x_{i-\frac{1}{2}}^{n+1}}^{x_1}R_h(x/t,\overline{\vec{U}}_{i-1}^{n},
\overline{\vec{U}}_{i}^{n})dx+\int_{x_1}^{x_2}\overline{\vec{U}}_i^ndx+
\int_{x_2}^{x_{i+\frac{1}{2}}^{n+1}}R_h(x/t,\overline{\vec{U}}_i^n,\overline{\vec{U}}_{i+1}^n)dx\right),
\end{equation}
where $R_h(x/t,\overline{\vec{U}}^n_{j-1},\overline{\vec{U}}^n_j)$ is the approximate Riemann
solution related to the states $\overline{\vec{U}}^n_{j-1}$
and $\overline{\vec{U}}^n_j$, $j=i,i+1$. For the above HLLC Riemann solver, the integration of $R_h(x/t,\overline{\vec{U}}^n_{i-1},\overline{\vec{U}}^n_i)$
will be equal to $(x_1-x_{i-\frac{1}{2}}^{n+1})\vec{U}^{\ast,+}$, where $\vec{U}^{\ast,+}$ is
computed from $\overline{\vec{U}}^n_{i-1}$ and
$\overline{\vec{U}}^n_i$,
while the integration of $R_h(x/t,\overline{\vec{U}}^n_i,\overline{\vec{U}}^n_{i+1})$ will be $(x_{i+\frac{1}{2}}^{n+1}-x_2)\vec{U}^{\ast,-}$, where
$\vec{U}^{\ast,-}$ is derived from $\overline{\vec{U}}^n_i$ and
$\overline{\vec{U}}^n_{i+1}$. Thus the first-order HLLC scheme is equivalent to
\begin{equation}\label{eq:hllc1d}
\overline{\vec{U}}^{n+1}_i=\frac{1}{\Delta x_i^{n+1}}\left((x_1-x_{i-\frac{1}{2}}^{n+1})\vec{U}^{\ast,+}
+(x_2-x_1)\overline{\vec{U}}_{i}^{n}+(x_{i+\frac{1}{2}}^{n+1}-x_2)\vec{U}^{\ast,-}\right).
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[width=5.0in]{figures/Lagrangian_scheme.pdf}
\caption{Diagrammatic sketch of the neighboring Riemann problems.}
\label{scheme}
\end{figure}
Therefore in order to prove that the first-order HLLC scheme \eqref{eq:hllc1d} is PCP,
one just needs to show that the intermediate states $\vec{U}^{\ast,-},\vec{U}^{\ast,+}$ belong to $\mathcal G$, due to the convexity of $\mathcal G$.
\begin{theorem}\label{theorem2.2}
For $\vec{U}^{\pm}\in \mathcal G$ and the wave speeds $s^\pm$ defined in \eqref{ws},
the intermediate states $\vec{U}^{\ast,\pm}$ obtained by the HLLC Riemann solver are PCP,
that is to say, $\vec{U}^{\ast,\pm}\in \mathcal G$.
\end{theorem}
\begin{proof}
Following \cite{wu2015}, one has to prove
$D^{\ast,\pm}>0$, $E^{\ast,\pm}>0$, and
$(E^{\ast,\pm})^2-(D^{\ast,\pm})^2-(m^{\ast,\pm})^2>0$.
(\romannumeral1)
Due to (\romannumeral5) of Lemma \ref{lem3} and the first equality in \eqref{add7}, it is easy to get $D^{\ast,\pm}>0$: indeed, $s^{\pm}-u^{\pm}$ and $s^{\pm}-s^{\ast}$ have the same sign, so $D^{\ast,\pm}=D^{\pm}(s^{\pm}-u^{\pm})/(s^{\pm}-s^{\ast})>0$.
(\romannumeral2)
For the left or right state, from (\ref{add7}) and (\ref{eq15}), one has
\begin{align}
\mathcal{A}^{\pm}&=E^{\ast,\pm}(s^{\pm}-s^{\ast})(1-s^{\pm}s^{\ast})\notag\\
&=(E^{\pm}(s^{\pm}-u^{\pm})+p^{\ast}s^{\ast}-p^{\pm}u^{\pm})(1-s^{\pm}s^{\ast})\notag\\
&=A^{\pm}\left(s^{\ast}-\frac{s^{\pm}A^{\pm}+B^{\pm}}{2A^{\pm}}\right)^2+\frac{4(A^{\pm})^2-(s^{\pm}A^{\pm}+B^{\pm})^2}{4A^{\pm}},\label{eq90}
\end{align}
where $A^{\pm}$ and $B^{\pm}$ are defined in \eqref{eq67}.
Using (\romannumeral1) of Lemma \ref{lem3} gives
\begin{equation}\label{eq68}
\begin{aligned}
\mathcal{A}^{-}&=A^{-}\left(s^{\ast}-\frac{s^{-}A^{-}+B^{-}}{2A^{-}}\right)^2+\frac{4(A^{-})^2-(s^{-}A^{-}+B^{-})^2}{4A^{-}}
\le\frac{4(A^{-})^2-(s^{-}A^{-}+B^{-})^2}{4A^{-}},\\
\mathcal{A}^{+}&=A^{+}\left(s^{\ast}-\frac{s^{+}A^{+}+B^{+}}{2A^{+}}\right)^2+\frac{4(A^{+})^2-(s^{+}A^{+}+B^{+})^2}{4A^{+}}
\ge\frac{4(A^{+})^2-(s^{+}A^{+}+B^{+})^2}{4A^{+}}.\\
\end{aligned}
\end{equation}
Combining \eqref{eq90} and \eqref{eq68} with (\romannumeral1), (\romannumeral5) and (\romannumeral6) in Lemma \ref{lem3} further yields
$\mathcal{A}^{-}<0$ and $\mathcal{A}^{+}>0$, which imply
$$E^{\ast,-}>0,~~~E^{\ast,+}>0.$$
(\romannumeral3)
Define
$$\mathcal{B}^{\pm}=\big[(E^{\ast,\pm})^2-(D^{\ast,\pm})^2-(m^{\ast,\pm})^2\big]
(s^{\pm}-s^\ast)^2(A^{\pm}+s^{\pm}p^\ast)^2.$$
Using (\ref{eq15}) gives
\begin{align}
\mathcal{B}^{\pm}&=(A^{\pm}+s^{\pm}p^\ast)^2\big[(A^{\pm}
+p^\ast s^\ast)^2-(D^{\pm})^2(s^{\pm}-u^{\pm})^2-(B^{\pm}+p^\ast)^2\big]\notag\\
&=(A^{\pm}+s^{\pm}p^\ast)^2\bigg((p^\ast)^2(s^\ast)^2+2A^{\pm}p^{\ast}s^{\ast}-(2B^{\pm}+p^{\ast})p^{\ast}+\mathcal{K}^{\pm}\bigg)\notag\\
&=[1-(s^{\pm})^2](p^\ast)^4+2B^{\pm}(1-(s ^{\pm})^2)(p^{\ast})^3
+2s^{\pm}A^{\pm}\mathcal{K}^{\pm}p^\ast+(A^{\pm})^2\mathcal{K}^{\pm}\notag\\
&~~~+\bigg((A^{\pm}-s^{\pm}B^{\pm})^2+(1-(s^{\pm})^2)(B^{\pm})^2+(s^{\pm})^2\mathcal{K}^{\pm}\bigg)(p^\ast)^2\notag\\
&=[1-(s^{\pm})^2](B^{\pm}+p^{\ast})^2+(A^{\pm}-s^{\pm}B^{\pm})^2(p^{\ast})^2
+\mathcal{K}^{\pm}(A^{\pm}+s^{\pm}p^{\ast})^2,
\label{eq25}
\end{align}
where
$$\mathcal{K}^{\pm}=(A^{\pm})^2-(B^{\pm})^2-(D^{\pm})^2(s^{\pm}-u^{\pm})^2.$$
The conclusion (\romannumeral7) in Lemma \ref{lem3} tells us $\mathcal{K}^{\pm}>0$, and then \eqref{eq25} gives
$\mathcal{B}^{\pm}>0$, which imply
$$(E^{\ast,\pm})^2-(D^{\ast,\pm})^2-(m^{\ast,\pm})^2>0.$$
The proof is completed.
\end{proof}
Based on the above discussion, one can draw the following conclusion.
\begin{theorem}\label{theorem2.3}
If $\overline{\vec{U}}_i^{n}\in \mathcal G$ for all $i=1,\cdots,N$ and the wave speeds $s^\pm$ are estimated by \eqref{ws},
then $\overline{\vec{U}}_i^{n+1}$ obtained by
the first-order Lagrangian scheme \eqref{eq8} with
\eqref{eq9} and HLLC solver
belong to the admissible state set $\mathcal G$
under the following time step restriction
\begin{equation}\label{eq:dt1}
\Delta t^n\le\lambda\min\limits_{i}\left\{\Delta x_i^n/\max(|s_{\min}(\overline{\vec{U}}_i^n)|,
|s_{\max}(\overline{\vec{U}}_i^n)|)\right\},
\end{equation}
where the CFL number $\lambda\le\frac{1}{2}$.
\end{theorem}
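In practice, the restriction \eqref{eq:dt1} can be evaluated as in the following sketch, reusing \eqref{eq31} for the eigenvalues of the cell averages.
\begin{verbatim}
def cfl_dt(x, Ubar, recover, lam=0.5):
    # time-step restriction (eq:dt1), CFL number lam <= 1/2
    dt = float("inf")
    for i, U in enumerate(Ubar):
        u, p, cs = recover(U)
        smin, smax = s_min_max(u, cs)
        dt = min(dt, lam * (x[i + 1] - x[i]) / max(abs(smin), abs(smax)))
    return dt
\end{verbatim}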
\subsection{High-order accurate scheme}
This section will develop the one-dimensional high-order accurate PCP Lagrangian finite volume scheme with the HLLC solver. It consists of three parts: the high-order accurate initial reconstruction,
the scaling PCP limiter, and the high-order accurate
time discretization.
For the known cell-average values $\{\overline{\vec{U}}_{i}^{n}\}$ of the solutions of the RHD equations \eqref{eq1} or \eqref{eq4} with $d=1$,
with the aid of the local characteristic decomposition \cite{zhao},
the WENO reconstruction technique is applied to get the high-order approximations of the point values $\vec{U}_{i-\frac{1}{2}}^{+}$ and
$\vec{U}_{i+\frac{1}{2}}^{-}$,
and then they can be used to give the HLLC flux $\widetilde{\vec {F}}^{\ast,\pm}$
and intermediate states $\vec{U}^{\ast,\pm}$.
For a $(K+1)$th-order accurate finite volume WENO scheme, as soon as the point values
$\vec{U}_{i-\frac{1}{2}}^{+}$ and $\vec{U}_{i+\frac{1}{2}}^{-}$ are obtained by the WENO reconstruction at $t=t_n$,
one can also give a polynomial vector $\vec{U}_i^n(x)$ of degree $K$ in principle such that $\vec{U}_i^n(x_{i-\frac{1}{2}})=\vec{U}_{i-\frac{1}{2}}^{+},
\vec{U}_i^n(x_{i+\frac{1}{2}})=\vec{U}_{i+\frac{1}{2}}^{-}$, its cell average value over $I_i^n$ is equal to $\overline{\vec{U}}_i^n$, and $\vec{U}^n_i(x)$ is a $(K+1)$th-order accurate approximation
to $\vec U(x,t_n)$ in $I_i^n$. Such a polynomial vector can be obtained by using the Hermite type reconstruction technique \cite{zhang2} and satisfies exactly
the $L$-point Gauss-Lobatto quadrature rule with $2L-3\ge K$, i.e.
\begin{equation}\label{add100}
\overline{\vec{U}}_i^n
=\frac{1}{\Delta x_i^n}\int_{x_{i-\frac{1}{2}}^n}^{x_{i+\frac{1}{2}}^n}\vec{U}_i^n(x)dx
=\sum\limits_{\alpha=1}^{L}\omega_\alpha\vec{U}_{i}^n(x_i^{\alpha}),
\end{equation}
which gives a split of $\overline{\vec{U}}_i^n$, where $\omega_{1},\cdots, \omega_{L}$ are the quadrature weights
satisfying $\omega_1=\omega_L>0$. In practice, one does not need to obtain such a polynomial vector explicitly, since
the mean value theorem tells us that there exists some $x^{\ast}\in I_i^n$
such that $\vec{U}_i^n(x^{\ast})=\frac{1}{1-2\omega_1}\sum\limits_{\alpha=2}^{L-1}\omega_\alpha\vec{U}_{i}^n(x_i^{\alpha})=:\vec{U}^{\ast\ast}_i$
\cite{zhang1}. At this time, the split \eqref{add100} can be simply replaced with
\begin{equation}\label{eq2.25}
\overline{\vec{U}}_i^n
=\omega_1\vec{U}_{i-\frac{1}{2}}^{+}+\omega_1\vec{U}_{i+\frac{1}{2}}^{-}+
(1-2\omega_1)\vec{U}_i^{\ast\ast},
\end{equation}
and $\vec{U}^{\ast\ast}_i$ can be calculated by
\begin{equation}
\vec{U}^{\ast\ast}_i=\frac{\overline{\vec{U}}_i^n-\omega_1\vec{U}_{i-\frac{1}{2}}^{+}-\omega_1\vec{U}_{i+\frac{1}{2}}^{-}}{1-2\omega_1}.
\end{equation}
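For instance, with the $3$-point Gauss--Lobatto rule ($L=3$, $\omega_1=1/6$; any admissible $L$ with $2L-3\ge K$ works the same way), the computation of $\vec{U}_i^{\ast\ast}$ is a one-liner (sketch):
\begin{verbatim}
def u_double_star(Ubar_i, U_left_plus, U_right_minus, w1=1.0/6.0):
    # split (eq2.25): U** = (Ubar - w1*(U^+_{i-1/2} + U^-_{i+1/2})) / (1 - 2*w1)
    return tuple((Ubar_i[k] - w1 * (U_left_plus[k] + U_right_minus[k]))
                 / (1.0 - 2.0 * w1) for k in range(3))
\end{verbatim}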
By adding and subtracting the term $\Delta t^n\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})$ and using
the split \eqref{eq2.25},
the scheme \eqref{eq8} with high-order WENO reconstruction can be reformulated as follows
\begin{equation}
\begin{aligned}
\overline{\vec{U}}_i^{n+1}\Delta x_i^{n+1}
&=\big(\omega_1\vec{U}_{i-\frac{1}{2}}^{+}+\omega_1\vec{U}_{i+\frac{1}{2}}^{-}+(1-2\omega_1)\vec{U}_i^{\ast\ast}\big)\Delta x_i^n\\
&~~~-\Delta t^n\left(\widehat{\vec{\mathcal{F}}}(\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i+\frac{1}{2}}^{+})
-\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})
+\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})
-\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{-},\vec{U}_{i-\frac{1}{2}}^{+})\right)\\
&=(1-2\omega_1)\vec{U}_i^{\ast\ast}\Delta x_i^n
+\omega_1\left\{\vec{U}_{i-\frac{1}{2}}^{+}\Delta x_i^n-\frac{\Delta t^n}{\omega_1}
\left(\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})
-\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{-},\vec{U}_{i-\frac{1}{2}}^{+})\right)\right\}\\
&~~~+\omega_1\left\{\vec{U}_{i+\frac{1}{2}}^{-}\Delta x_i^n-\frac{\Delta t^n}{\omega_1}
\left(\widehat{\vec{\mathcal{F}}}(\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i+\frac{1}{2}}^{+})
-\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})\right)\right\}\\
&=(1-2\omega_1)\vec{U}_i^{\ast\ast}\Delta x_i^n+\omega_1\mathcal{H}_1+\omega_1\mathcal{H}_L,
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathcal{H}_1&=\vec{U}_{i-\frac{1}{2}}^{+}\Delta x_i^n-\frac{\Delta t^n}{\omega_1}
\left(\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})
-\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{-},\vec{U}_{i-\frac{1}{2}}^{+})\right),\\
\mathcal{H}_L&=\vec{U}_{i+\frac{1}{2}}^{-}\Delta x_i^n-\frac{\Delta t^n}{\omega_1}
\left(\widehat{\vec{\mathcal{F}}}(\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i+\frac{1}{2}}^{+})
-\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-})\right).
\end{aligned}
\end{equation}
Because $\mathcal{H}_1$ and $\mathcal{H}_L$ exactly mimic the first-order scheme
\eqref{eq8} with (\ref{eq9}) and the HLLC solver,
one knows that $\mathcal{H}_1,\mathcal{H}_L\in \mathcal G$
if the two boundary values $\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-}\in \mathcal G$, the wave speeds $s^\pm$ are estimated by \eqref{ws}, and the time stepsize $\Delta t^n$ satisfies the restriction
\begin{equation}\label{eq56}
\Delta t^n\le\lambda~\omega_1\min\limits_{i}
\left\{\Delta x_i^n/\max\limits(|s_{\min}(\vec{U}_{i\pm\frac{1}{2}}^{\mp})|,|s_{\max}(\vec{U}_{i\pm\frac{1}{2}}^{\mp})|)\right\},
\end{equation}
with $\lambda\le\frac{1}{2}$.
Therefore, besides the time step restriction \eqref{eq56},
a sufficient condition for
$\overline{\vec{U}}_i^{n+1}\in \mathcal G$ is
\begin{equation}\label{eq57}
\vec{U}_{i-\frac{1}{2}}^{+},~\vec{U}_{i+\frac{1}{2}}^{-},~\vec{U}_{i}^{\ast\ast}\in \mathcal G, \quad \forall i=1,\dots, N,
\end{equation}
which can be ensured by using a scaling limiter, presented in the next
subsection. Hence the aforementioned results can be summarized as follows.
\begin{theorem}
If $\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i}^{\ast\ast}\in \mathcal G$
and the wave speeds $s^\pm$ are estimated by \eqref{ws},
then $\overline{\vec{U}}_i^{n+1}$ obtained by the high-order
Lagrangian scheme \eqref{eq8} with
the WENO reconstruction and the HLLC solver belongs to the admissible
state set $\mathcal G$ under the time stepsize restriction \eqref{eq56} with $\lambda\le\frac{1}{2}$.
\end{theorem}
\subsubsection{Scaling PCP limiter}\label{subsection2.2.1}
This section uses the scaling PCP limiter, which has been used in \cite{cheng,zhang,wu2017},
to limit $\vec{U}_{i-\frac{1}{2}}^{+},\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i}^{\ast\ast}$
such that the limited values $\widetilde{\vec{U}}_{i-\frac{1}{2}}^{+},\widetilde{\vec{U}}_{i+\frac{1}{2}}^{-},
\widetilde{\vec{U}}_i^{\ast\ast}$ belong to $\mathcal G$ when $\overline{\vec{U}}_i^{n}\in \mathcal{G}$.
For the sake of brevity, the superscript $n$ will be omitted in this section and a small parameter
$\varepsilon$ is taken as $10^{-13}$.
Such a limiting procedure can be implemented as follows.
First, let us enforce the positivity of the mass density.
For each cell $I_i$, define the limiter
\begin{equation}
\theta_i^1=\min\left\{1,\frac{\overline{D}_i-\varepsilon}
{\overline{D}_i-D_{\min}}\right\},~~D_{\min}=\min(D_{i-\frac{1}{2}}^{+},D_i^{\ast\ast},D_{i+\frac{1}{2}}^{-}),
\end{equation}
and then limit the mass density as follows
\begin{equation}\label{eq60}
\widehat{D}_{i\pm\frac{1}{2}}^{\mp}=\overline{D}_i+\theta_{i}^{1}(D_{i\pm\frac{1}{2}}^{\mp}-\overline{D}_i),
~~\widehat{D}_{i}^{\ast\ast}=\overline{D}_i+\theta_{i}^{1}(D_{i}^{\ast\ast}-\overline{D}_i).
\end{equation}
Define
$\widehat{\vec{U}}_{i\pm\frac{1}{2}}^{\mp}=(\widehat{D}_{i\pm\frac{1}{2}}^{\mp},
m_{i\pm\frac{1}{2}}^{\mp},E_{i\pm\frac{1}{2}}^{\mp})^{\mathrm{T}}$
and
$\widehat{\vec{U}}_i^{\ast\ast}=(\widehat{D}_i^{\ast\ast},
m_i^{\ast\ast},E_i^{\ast\ast})^{\mathrm{T}}$.
Then, enforce the positivity of $q(\vec{U})=E-\sqrt{D^2+m^2}$. For each cell $I_i$, define the limiter
\begin{equation}\label{eq61}
\theta_i^2=\min\left\{1,\frac{q(\overline{\vec{U}}_i)
-\varepsilon}{q(\overline{\vec{U}}_i)-q_{\min}}\right\},\quad
q_{\min}=\min\left(q(\widehat{\vec{U}}_{i-\frac{1}{2}}^{+}),q(\widehat{\vec{U}}_i^{\ast\ast}),
q(\widehat{\vec{U}}_{i+\frac{1}{2}}^{-})\right),
\end{equation}
and then limit the conservative vectors as follows
\begin{equation}\label{eq62}
\widetilde{\vec{U}}_{i\pm\frac{1}{2}}^{\mp}=\overline{\vec{U}}_i+\theta_i^2(\widehat{\vec{U}}_{i\pm\frac{1}{2}}^{\mp}-\overline{\vec{U}}_i),
~~\widetilde{\vec{U}}_{i}^{\ast\ast}=\overline{\vec{U}}_i+\theta_i^2(\widehat{\vec{U}}_{i}^{\ast\ast}-\overline{\vec{U}}_i).
\end{equation}
It is easy to check that
$\widetilde{\vec{U}}_{i-\frac{1}{2}}^{+},\widetilde{\vec{U}}_{i+\frac{1}{2}}^{-},
\widetilde{\vec{U}}_i^{\ast\ast}\in \mathcal G$.
Moreover, the above scaling PCP limiter does not destroy the original high order accuracy
in smooth regions, see the detailed discussion in \cite{zhang}.
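A sketch of this two-stage limiter is given below; the guards against division by zero are an implementation detail and not part of the formulas \eqref{eq60}--\eqref{eq62}.
\begin{verbatim}
import math

def pcp_limit(Ubar, points, eps=1.0e-13):
    # Ubar: cell average (D, m, E) in G; points: [U^+_{i-1/2}, U**, U^-_{i+1/2}]
    q = lambda U: U[2] - math.sqrt(U[0] ** 2 + U[1] ** 2)
    # step 1: positivity of the mass density (eq60)
    Dmin = min(U[0] for U in points)
    t1 = min(1.0, (Ubar[0] - eps) / (Ubar[0] - Dmin)) if Dmin < eps else 1.0
    points = [(Ubar[0] + t1 * (U[0] - Ubar[0]), U[1], U[2]) for U in points]
    # step 2: positivity of q(U) = E - sqrt(D^2 + m^2), (eq61)-(eq62)
    qmin = min(q(U) for U in points)
    t2 = min(1.0, (q(Ubar) - eps) / (q(Ubar) - qmin)) if qmin < eps else 1.0
    return [tuple(Ubar[k] + t2 * (U[k] - Ubar[k]) for k in range(3))
            for U in points]
\end{verbatim}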
\subsubsection{High-order time discretization}\label{subsubsection2.2.2}
To get a globally high-order accurate scheme in time and space,
we can further employ the strong stability preserving (SSP) Runge-Kutta (RK) method to replace the explicit Euler time discretization in \eqref{eq8} and \eqref{eq8-2}.
Similar to \cite{cheng},
for instance, to obtain a third-order accurate scheme in time,
a three-stage explicit SSP RK method may be used
for the time discretization as follows.
Stage 1:
\begin{equation}\label{eq63}
\begin{aligned}
&x_{i+\frac{1}{2}}^{(1)}=x_{i+\frac{1}{2}}^n+\Delta t^nu_{i+\frac{1}{2}}^{\ast},
~~~\Delta x_{i}^{(1)}=x_{i+\frac{1}{2}}^{(1)}-x_{i-\frac{1}{2}}^{(1)},\\
&\overline{\vec{U}}_i^{(1)}\Delta x_{i}^{(1)}
=\overline{\vec{U}}_i^{n}\Delta x_{i}^{n}-\Delta t^n\mathcal{L}(\overline{\vec{U}}^n;i);\\
\end{aligned}
\end{equation}
\par
Stage 2:
\begin{equation}\label{eq64}
\begin{aligned}
&x_{i+\frac{1}{2}}^{(2)}=\frac{3}{4}x_{i+\frac{1}{2}}^n+
\frac{1}{4}\big(x_{i+\frac{1}{2}}^{(1)}+\Delta t^nu_{i+\frac{1}{2}}^{(1),\ast}\big),
~~~\Delta x_i^{(2)}=x_{i+\frac{1}{2}}^{(2)}-x_{i-\frac{1}{2}}^{(2)},\\
&\overline{\vec{U}}_i^{(2)}\Delta x_{i}^{(2)}=\frac{3}{4}\overline{\vec{U}}_i^{n}\Delta x_{i}^{n}
+\frac{1}{4}\left(\overline{\vec{U}}_i^{(1)}\Delta x_{i}^{(1)}-\Delta t^n\mathcal{L}(\overline{\vec{U}}^{(1)};i)\right);
\end{aligned}
\end{equation}
\par
Stage 3:
\begin{equation}\label{eq65}
\begin{aligned}
&x_{i+\frac{1}{2}}^{n+1}=\frac{1}{3}x_{i+\frac{1}{2}}^n
+\frac{2}{3}\big(x_{i+\frac{1}{2}}^{(2)}+\Delta t^nu_{i+\frac{1}{2}}^{(2),\ast}\big),
~~~\Delta x_i^{n+1}=x _{i+\frac{1}{2}}^{n+1}-x_{i-\frac{1}{2}}^{n+1},\\
&\overline{\vec{U}}_i^{n+1}\Delta x_{i}^{n+1}=\frac{1}{3}\overline{\vec{U}}_i^{n}\Delta x_{i}^{n}
+\frac{2}{3}\left(\overline{\vec{U}}_i^{(2)}\Delta x_{i}^{(2)}-\Delta t^n\mathcal{L}(\overline{\vec{U}}^{(2)};i)\right),
\end{aligned}
\end{equation}
where $\mathcal{L}(\overline{\vec{U}};i)=\widehat{\vec{\mathcal{F}}}(\vec{U}_{i+\frac{1}{2}}^{-},\vec{U}_{i+\frac{1}{2}}^{+})-
\widehat{\vec{\mathcal{F}}}(\vec{U}_{i-\frac{1}{2}}^{-},\vec{U}_{i-\frac{1}{2}}^{+})$.
The scaling PCP limiter described in Section \ref{subsection2.2.1}
needs to be performed at each stage of the above RK method
to limit the value of ${\vec{U}}_{i-\frac{1}{2}}^{+}, {\vec{U}}_{i+\frac{1}{2}}^{-},
{\vec{U}}_i^{\ast\ast}$. Since each stage of the above SSP RK method is a convex
combination of forward Euler steps and $\mathcal G$ is convex,
the above SSP RK method is also conservative, stable, and PCP whenever the forward Euler method is. | {"config": "arxiv", "file": "1901.10625/agorithm_1D.tex"}
TITLE: Fundamental Representation of $SU(3)$ is a complex representation
QUESTION [16 upvotes]: Suppose that in a $D(R)$-dimensional representation of $SU(N)$ the generators $T^a$ satisfy the following commutation rule:
$\qquad \qquad \qquad [T^a_R, T^b_R]=if^{abc}T^c_R$.
Now if $-(T^a_R)^* = T^a_R $, the representation $R$ is real. Again if we can find a unitary matrix, $V(\neq I)$ such that
$
\qquad \qquad \qquad -(T^a_R)^*=V^{-1} T^a_R V \quad \forall a
$
then the representation $R$ is pseudoreal.
If a representation is neither real nor pseudoreal, the representation $R$ is complex.
Claim:
One way to show that a representation is complex is to show that at least one generator matrix $T^a_R$ has eigenvalues that do not come in plus-minus pairs.
Now let us consider $SU(3)$ group. The generators in the fundamental representation are given by
$T^a =\lambda^a/2; \quad a=1,...8$,
where $\lambda^a$s are the Gell-Mann matrices. We see that $T^8$ has eigenvalues $(1/\sqrt{12}, 1/\sqrt{12}, -1/\sqrt{3} )$.
My doubt is:
According to the claim, is the fundamental representation of $SU(3)$ a complex representation?
REPLY [9 votes]: First of all, we are dealing with unitary representations, so that the $T^a$s are always self-adjoint and the representations have the form
$$U(v) = e^{i \sum_{a=1}^Nv^a T_a}$$
with $v \in \mathbb R^N$. When you say that $U$ is real you just mean that the representation is made of the very real, unitary, $n\times n$ matrices $U$. This way, the condition $(T_a)^* = -T_a$ is equivalent to the reality (in the proper sense) requirement.
Let us come to your pair of questions.
(1). You are right on your point:
PROPOSITION. A unitary finite dimensional representation is complex (i.e. it is neither real nor pseudoreal) if and only if at least one self-adjoint generator $T_a$ has an eigenvalue $\lambda$ such that $-\lambda$ is not an eigenvalue.
PROOF
Suppose that $$(T_a)^* = -V T_a V^{-1}\tag{1}$$ for some unitary matrix $V$ and every $a=1,2,3,\ldots, N$. Since we also know that $T_a$ is self-adjoint, there is an orthogonal basis of eigenvectors $u_j^{(a)}\neq 0$, $j=1,\ldots, n$ and the eigenvalues $\lambda_j^{(a)}$ are real. Therefore:
$$T_au_j^{(a)}= \lambda_j^{(a)} u_j^{(a)}\:.$$
Taking the complex conjugation and using (1)
$$VT_aV^{-1}u_j^{(a)*}= -\lambda_j^{(a)} u_j^{(a)*}$$
so that $V^{-1}u_j^{(a)*}\neq 0$ is an eigenvector of $T_a$ with eigenvalue $-\lambda_j$. We conclude that $\lambda$ is an eigenvalue if and only if $-\lambda$ is (consequently, if the dimension of the space is odd, $0$ must necessarily be an eigenvalue as well).
Suppose, vice versa, that for the self-adjoint matrix $T^a$ the (real) eigenvalues satisfy the constraint that $\lambda$ is an eigenvalue if and only if $-\lambda$ is. As $T^a$ is self-adjoint, there is a unitary matrix $U$ such that:
$$T_a = U diag(\lambda_1, -\lambda_1, \lambda_2, -\lambda_2,..., \lambda_{n/2},-\lambda_{n/2}) U^{-1}$$
when $n$ is even, otherwise there is a further vanishing last element on the diagonal.
Thus
$$T^*_a = U^* diag(\lambda_1, -\lambda_1, \lambda_2, -\lambda_2,..., \lambda_{n/2},-\lambda_{n/2}) U^{-1*}$$
Notice that $U^*$ is unitary if $U$ is such.
Let us indicate by $e_1,e_2, \ldots, e_n$ the orthonormal basis of eigenvectors of $T^a$ where the matrix takes the above diagonal form.
If $W$ is the (real) unitary matrix which swaps $e_1$ with $e_2$, $e_3$ with $e_4$ and so on (leaving $e_n$ fixed if $n$ is odd), we have that
$$W diag(\lambda_1, -\lambda_1, \lambda_2, -\lambda_2,..., \lambda_{n/2},-\lambda_{n/2}) W^{-1} = - diag(\lambda_1, -\lambda_1, \lambda_2, -\lambda_2,..., \lambda_{n/2},-\lambda_{n/2})$$ and thus
$$T^*_a = U^*W^{-1}(- UT_aU^{-1}) WU^{-1*} = -S T_a S^{-1}$$
with $S= U^*W^{-1}U$, which is unitary because it is a composition of unitary matrices.
We can conclude that, as you claim, a way to show that a representation is complex (i.e. neither real nor pseudoreal) is to show that at least one generator matrix $T_a$ has (non-vanishing) eigenvalues that do not come in plus-minus pairs.
QED
(2). In view of (1), if the list of eigenvalues you presented is correct, the considered representation is obviously complex. | {"set_name": "stack_exchange", "score": 16, "question_id": 69443} |
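A quick numerical sanity check of the criterion above for $SU(3)$ (a minimal numpy sketch; the matrix below is just $T^8=\lambda^8/2$ written out explicitly):
import numpy as np

# T^8 = lambda^8 / 2, where lambda^8 = diag(1, 1, -2)/sqrt(3)
T8 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))
print(np.linalg.eigvalsh(T8))
# -> [-0.5774, 0.2887, 0.2887]: 1/sqrt(12) occurs but -1/sqrt(12) does not,
# so the eigenvalues do not come in plus-minus pairs and the fundamental
# representation of SU(3) is complex.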
TITLE: Smallest integer which leaves remainder 3, 2, 1 when divided by 17, 15, 13
QUESTION [1 upvotes]: Find the smallest positive integer so that the remainder when it is divided by $17,15,13$ is $3,2,1$ respectively.
This question can be solved using the Chinese remainder theorem, but the theorem gives some integer, not necessarily the smallest.
For example, we have, letting $n_1, n_2, n_3 = 17, 15, 13$ and $a_1, a_2, a_3 = 3, 2, 1$ and $N_j = \frac{1}{n_j}\cdot\prod n_i$:
$$x = \sum a_i N_i^{\phi(n_i)}
\\x = 3\cdot(15\cdot 13)^{16} + 2\cdot(17\cdot13)^{8} + (17\cdot 15)^{12}$$
Essentially here $ N_1^{15} = (15\cdot 13)^{15}$ is a root of $N_1 x \equiv 1 \pmod{17}$
By choosing different roots, we get different answers, but how can we find the minimum $x$ that satisfies the condition?
Just so in this case the minimum is $1652$
REPLY [3 votes]: CRT becomes trivial if we exploit the innate Arithmetic Progression (A.P.) structure. Note
$$\begin{align} &x\equiv 1\!\!\pmod{\!13}\\ &x\equiv 2\!\!\pmod{\!15}\\ &x\equiv 3\!\!\pmod{\!17}\\[.2em] {\rm i.e.}\ \ \ &x\equiv 1\!+\!k\!\!\!\pmod{\!13\!+\!2k},\ \ k=0,1,2\end{align}$$
with progressions: $\ 1,2,3 = 1\!+\!k,\ $ & $\,\ 13,15,17 = 13\!+\!2k.\,$ Hence
$\!\!\bmod \color{#c00}{13\!+\!2k}\!:\,\ x\equiv 1\!+\!k\iff 2x \equiv 2\!+\!\color{#c00}{2k}\equiv 2\color{#c00}{-13}\equiv -11\iff 2x\!+\!11\equiv 0$
So $\ 13,15,17\mid 2x\!+\!11 \!\iff\! n\!=\!13(15)17\mid 2x\!+\!11,\,$ by lcm = product for pair-coprimes.
So $\bmod n\!:\,\ 2x \equiv -11\equiv n\!-\!11\iff x \equiv (n\!-\!11)/2\equiv \bbox[5px,border:1px solid #c00]{\!\!1652}\:$ since $\,n=3315\,$ is odd.
Hence $\ x = 1652 + 3315\,k,\, $ so $\,x = 1652\,$ is clearly the smallest positive solution.
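If you want to double-check the value by machine, a tiny brute-force sketch (plain Python, no libraries) confirms the smallest positive solution:
# smallest x with x = 1 (mod 13), x = 2 (mod 15), x = 3 (mod 17)
x = next(x for x in range(1, 13 * 15 * 17 + 1)
         if x % 13 == 1 and x % 15 == 2 and x % 17 == 3)
print(x)  # 1652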
Remark $\ $ If modular fractions are known then more clearly we have by CCRT
$$ x\equiv \dfrac{-11}2\!\!\! \pmod{\!13,15,17} \iff x\equiv \dfrac{-11}2\!\!\! \pmod {\!13\cdot 15\cdot 17}\qquad\qquad$$
More generally the same method shows that if $\,(a,b) = 1\,$ then
$\bbox[8px,border:1px solid #c00]{{\bf Theorem} \ \ \left\{\:\!x\equiv d\!+\!ck\pmod{\!b\!+\!ak}\:\!\right\}_{k=0}^{m_{\phantom{|}}}\!\!\iff\! x\equiv \dfrac{ad\!-\!bc}a\pmod{\!{\rm lcm}\{b\!+\!ak\}_{k=0}^{m_{\phantom{|}}}}} $
Proof $ $ by $\,(a,b\!+\!ak)=(a,b)=1,\,$ LHS $\!\overset{\times\ a}\iff\!\bmod \color{#c00}{b\!+\!ak}\!:\ ax\equiv ad\!+\!c(\color{#c00}{ak})\equiv ad\!\color{#c00}{-b}c\!$ $\!\iff\! ax\!-\!(ad\!-\!bc)\,$ is divisible by all moduli $\!\iff\!$ it is divisible by their lcm $n\, $ (since $\,a\,$ is coprime to each modulus $\,n_k\,$ it is invertible $\!\bmod n_k\,$ so it is invertible mod their lcm $ n)$.
OP is special case $\ a,b,c,d = 2,13,1,1\,$ so $\ x\equiv \dfrac{2(1)\!-\!13(1)}2\equiv\dfrac{-11}2\!\pmod{\!13(15)17}$
See this answer for how to choose the residues in A.P. when only the moduli are in A.P. | {"set_name": "stack_exchange", "score": 1, "question_id": 3280460} |
TITLE: Definition of null space
QUESTION [1 upvotes]: I have two definitions of null space. One by Serge Lang
Suppose that for every element $u$ of $V$ we have $\langle u,u\rangle=O$. The
scalar product is then said to be null, and $V$ is called a null space.
and another by David C. Lay
The null space of an $m \times n$ matrix $A$ is the set of all
solutions of the homogeneous equation $Ax=0$.
I cannot find the relationship between these definitions. Can anybody give me a hint?
REPLY [5 votes]: The two things described are very different; the only "relationship" is that the term null refers to something having to do with zero, and both things are some sort of vector space.
Those who use Lang's definition of a null-space would strictly refer to the second idea (what Lay calls a "null space") as the "kernel" of a matrix. | {"set_name": "stack_exchange", "score": 1, "question_id": 863648} |
TITLE: Why does the average z-score for a standardized distribution always equal to zero?
QUESTION [0 upvotes]: My introductory statistics book mentioned this:
"When an entire distribution of scores is standardized, the average (i.e., mean) z score for the standardized distribution will always be 0, and the standard deviation of this distribution will always be 1.0."
Why does the average z-score always equal zero?
REPLY [2 votes]: Standardization is the process of applying a linear transformation to a random variable so that its mean is zero and its variance is one.
Concretely, for scores $x_1,\dots,x_n$ with mean $\bar x$ and standard deviation $s$, each z-score is $z_i=(x_i-\bar x)/s$, so the mean of the z-scores is
$$\bar z=\frac1n\sum_{i=1}^n\frac{x_i-\bar x}{s}=\frac{1}{ns}\left(\sum_{i=1}^n x_i-n\bar x\right)=0.$$
Subtracting the mean makes the new mean zero, and dividing by $s$ does not change that (it just makes the standard deviation one). | {"set_name": "stack_exchange", "score": 0, "question_id": 3217144}
TITLE: Prove that it is NOT true that for every integer $n$, 60 divides $n$ if and only if 6 divides $n$ and 10 divides $n$.
QUESTION [2 upvotes]: This is Velleman's exercise 3.4.26 (b):
Prove that it is NOT true that for every integer $n$, 60 divides $n$ iff 6 divides $n$ and 10 divides $n$.
I do understand that a number will be divisible by 6 and 10 if it is divisible by 60 and that it will not necessarily be divisible by 60 if it is divisible by 6 and by 10. 30 is an example of it.
I still have an issue actually discovering and writing up the proof. To illustrate the issue and to put the question in context, I would like to refer to the first part of the question already asked by another user, Velleman's exercise 3.4.26 (a). Thanks in advance.
REPLY [4 votes]: Consider the number $30$. It is divisible by $6$ and $10$ and not by $60$. We have found a counterexample and therefore the statement is not true.
The statement "$ab$ divides $n$ if and only if $a$ divides $n$ and $b$ divides $n$" is only true if $a$ and $b$ are relatively prime (share no prime factors).
Otherwise we can take the least common multiple of $a$ and $b$: since $\operatorname{lcm}(a,b)=ab/\gcd(a,b)$ and $\gcd(a,b)>1$, this number is smaller than $ab$, yet it is divisible by both $a$ and $b$. For $a=6$ and $b=10$ we get $\gcd(6,10)=2$, so $\operatorname{lcm}(6,10)=30<60$, recovering the counterexample above. The least common multiple is therefore always a counterexample when $a$ and $b$ are not relatively prime. | {"set_name": "stack_exchange", "score": 2, "question_id": 1403337}
TITLE: Finding $\lim_{n \to \infty} \prod_{r=1}^n (1+ \frac{1}{a_r})$
QUESTION [0 upvotes]: Evaluate the following infinite product:
$$\lim_{n\to \infty} \prod_{r=1}^n (1+ \frac{1}{a_r}),\\
a_1=1; \quad a_r=r(1+a_{r-1}).$$
I thought of two ways in which this might be evaluated: (1) By taking log and converting into a Riemann series which I saw isn't possible and (2) by converting into a polynomial equation of nth degree where coefficients become sums as per the theory of equations. Right now, I'm stuck, trying to figure out the way to proceed.
REPLY [4 votes]: As @Gribouillis mentioned, it is generally a good reflex to try to fit the recurrence equation into the product (or the sum).
Here you have
$$\prod_{k=1}^n (1+\frac{1}{a_k}) = \prod_{k=1}^n \frac{1+a_k}{a_k} = \prod_{k=1}^n \frac{a_{k+1}}{(k+1)a_k} = \frac{a_{n+1}}{(n+1)!a_1}$$
Let's see if we can find a limit for this. Write $b_n=\frac{a_n}{n!}$. With the recurrence relation, dividing by $n!$ gives:
$$b_n=\frac{a_n}{n!} = \frac{1}{(n-1)!}(1+a_{n-1}) = \frac{1}{(n-1)!} + \frac{a_{n-1}}{(n-1)!} = \frac{1}{(n-1)!} + b_{n-1}$$
So
$$b_n=b_1+\sum_{k=2}^n\frac{1}{(k-1)!} = \sum_{k=0}^{n-1}\frac{1}{k!}$$
and it is well known that this sum, and hence $(b_n)$, has limit $e$. Since the partial products equal $b_{n+1}$ (recall $a_1=1$), the infinite product is $e$ as well. | {"set_name": "stack_exchange", "score": 0, "question_id": 2392104}
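A quick numerical check of the limit derived above (a small Python sketch; $a_r$ stays an integer under the recurrence, so only the running product uses floating point):
a, prod = 1, 2.0               # a_1 = 1, first factor 1 + 1/a_1 = 2
for r in range(2, 25):
    a = r * (1 + a)            # a_r = r(1 + a_{r-1})
    prod *= 1 + 1 / a
print(prod)                    # 2.7182818..., approaching e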
TITLE: Find matrix rank according to parameter value
QUESTION [0 upvotes]: I've been trying to solve this "simple" problem using the Gaussian elimination method, but I can't find the right reduction steps to simplify the matrix and leave a single parameter term in the last row.
So I can't get the matrix rank according to the parameter value.
The matrix is:
\begin{pmatrix}
1 & 1 & a & a \\
a & a & 1 & 1 \\
1 & a & 1 & a \\
\end{pmatrix}
Any ideas about which reduction steps I should use? I would like to solve it with this method instead of calculating the determinant.
Thank you so much.
REPLY [1 votes]: I assume that the reason you are finding this hard is that the solution does not depend on $a$. Letting $x = -1, y = 1, z = 1$ (and reading the last column as the right-hand side), we see that all three equations hold regardless of the value of $a$.
So if you were after a value of $a$ that would make this solvable, they all do.
If you were looking for the dependence of $x, y, z$ on $a$, they are all constant. | {"set_name": "stack_exchange", "score": 0, "question_id": 1513555} |
TITLE: Let $f\in\mathbb{Q}[X]$ such that $f(1)=-2, f(2)=1$ and $f(-1)=0$. Find the remainder of the division of $f$ by $X^3-2X^2-X+2$.
QUESTION [1 upvotes]: Let $f\in\mathbb{Q}[X]$ such that $f(1)=-2, f(2)=1$ and $f(-1)=0$. Find the remainder of the division of $f$ by $X^3-2X^2-X+2$.
So, I figured: $f=(X+1)q$. Assumming that $f$ has degree 3, I solve $\begin{cases} (2+1)(2-a)(2-b)=1 \\ (1+1)(1-a)(1-b)=-2\end{cases}$ to find that $\begin{cases} a=\frac{5}{6}-\frac{\sqrt{37}}{6}, b=\frac{5}{6}+\frac{\sqrt{37}}{6} \\ a=\frac{5}{6}+\frac{\sqrt{37}}{6}, b=\frac{5}{6}-\frac{\sqrt{37}}{6} \end{cases}$. I divide $X^3-2X^2-X+2$ by $(X+1)(X-\frac{5}{6}-\frac{\sqrt{37}}{6})(X-\frac{5}{6}+\frac{\sqrt{37}}{6})$. The remainder is $\frac{4}{3}(X+1)(X-\frac{7}{4})$. Is it correct to assume that the remainder is always this, no matter the degree of $f$? Since that's what the problem asks for, I'm lead to assume this.
REPLY [1 votes]: You cannot assume that $f$ has degree $3$. The remainder itself is $\frac{1}{3}\,(4x-7)(x+1)$; note that this is exactly your $\frac{4}{3}(X+1)(X-\frac{7}{4})$, so your final expression happens to be right, even though the degree-$3$ assumption that produced it is not justified in general.
For example take $f(x) =
\frac{3}{4} \, x^{4} + \frac{1}{6} \, x^{3} - \frac{11}{4} \, x^{2} - \frac{7}{6} \, x + 1$.Observe that $f$ satisfies all the requirements given in the question. So when you divide (perhaps using long division or some other methods) $f$ by $x^3 - 2x^2 - x + 2$
you get the same remainder.
A big hint:
Let $g(x) = x^3 - 2x^2 - x + 2$ and let $r(x)$ be the remainder when you divide $f$ by $g$. Then there exists a $q(x)$ such that $f(x) = g(x)q(x) + r(x)$. Observe that $r(x)$ is at most a quadratic polynomial. Also see that $g(1) = g(2) = g(-1) = 0$. So $f(1) = r(1) = -2$, $f(2) = r(2) = 1$ and $f(-1) = r(-1) = 0$. Can you find $r(x)$ satisfying these conditions? | {"set_name": "stack_exchange", "score": 1, "question_id": 4189376}
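As a quick check of the remainder stated in the answer above (a small numpy sketch; np.polyfit through three points returns the unique quadratic):
import numpy as np

# the unique quadratic through (1, -2), (2, 1), (-1, 0)
print(np.polyfit([1, 2, -1], [-2, 1, 0], 2))
# -> [ 1.3333 -1.     -2.3333 ], i.e. r(x) = (4x^2 - 3x - 7)/3 = (4x-7)(x+1)/3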
\appendix
\section{Symbols}
\label{app:symbols}
List of symbols used in the paper with their brief description.
\begin{table}[h]
\centering
\small
\begin{tabular}{c|l}
\toprule
\multicolumn{2}{c}{Two-stage stochastic program}\\
\midrule
$x$ & First-stage decision vector \\
$c$ & First-stage objective coefficient vector\\
$K$ & \extform{} scenario set size\\
$k$ & Scenario index\\
$\xi_k$ & $k^{th}$ scenario realization \\
$p_k$ & Probability of scenario $k$ \\
$n$ & Dimension of $x$\\
$Q(x, \xi)$ & Second-stage sub-problem for first-stage decision $x$ and scenario $\xi$\\
$F(x, \xi)$ & Second-stage cost for first-stage decision $x$ and scenario $\xi$ \\
$\mathcal X$ & Constraint set exclusively on the first-stage decision\\
$\mathcal Y(x, \xi)$ & Scenario-specific constraint set for first-stage decision $x$ and scenario $\xi$\\
\midrule
\multicolumn{2}{c}{Neural Network}\\
\midrule
$\ell$ & Number of layers in the network \\
$m$ & Index over the neural network layers\\
$d^{0}$ & Dimensionality of input layer\\
$d^{m}$ & Dimensionality of layer $m$\\
\NNinput & Input to the neural network\\
\NNoutput & Output of the neural network\\
$W$ & Weight matrix \\
$b$ & Bias \\
$\sigma$ & Activation function \\
$h^{m}$ & $m^{th}$ hidden layer\\
$i$ & Index over the column of weight matrix\\
$j$ & Index over the row of weight matrix\\
$\Phi^1$ & Scenario-encoding network\\
$\Phi^2$ & Post scenario-aggregation network\\
$\Psi^{SC}$ & Scenario-embedding network for single-cut\\
$\Psi^{MC}$ & Scenario-embedding network for multi-cut\\
\midrule
\multicolumn{2}{c}{MIP-NN}\\
\midrule
$\hat h$ & Non-negative $ReLU$ input\\
$\check h$ & Negative $ReLU$ input\\
$z$& Indicator variables\\
$\Lambda$ & Number of predictions used in embedding\\
\bottomrule
\end{tabular}
\caption{Symbols summary}
\label{tab:symbols}
\end{table}
\section{Stochastic Programming Problems}
\label{app:spp}
\subsection{Capacitated Facility Location (CFLP)}
\label{app:spp_cflp}
The CFLP is a decision-making problem in which a set of facility opening decisions must be made in order to meet the demand of a set of customers. Typically this is formulated as a minimization problem, where the amount of customer demand satisfied by each facility cannot exceed its capacity. The two-stage stochastic CFLP arises when facility opening decisions must be made prior to knowing the actual demand. For this problem, we generate instances following the procedure described in \citep{cornuejols1991comparison} and create a stochastic variant by simply generating the first-stage costs and capacities, then generating scenarios by sampling $K$ demand vectors from the distributions defined in \citep{cornuejols1991comparison}. To ensure relatively complete recourse, we introduce additional variables with prohibitively expensive objective costs in the case where customers cannot be served. In the experiments a CFLP with $n$ facilities, $m$ customers, and $s$ scenarios is denoted by CFLP\_$n$\_$m$\_$s$.
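As a rough illustration of this construction, a minimal Python sketch is given
below (our illustration only; the uniform integer demand range is a
hypothetical stand-in, not the exact distributions of
\citep{cornuejols1991comparison}):
\begin{verbatim}
import numpy as np

def sample_cflp_scenarios(n_customers, K, low=5, high=35, seed=0):
    # One demand vector per scenario, all scenarios equiprobable.
    rng = np.random.default_rng(seed)
    demands = rng.integers(low, high + 1, size=(K, n_customers))
    probs = np.full(K, 1.0 / K)
    return demands, probs
\end{verbatim}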
\subsection{Investment Problem (INVP)}
\label{app:spp_invp}
The INVP is a 2SP studied in \citep{schultz1998solving}. This 2SP has a set of continuous first-stage decisions which yield an immediate revenue. In the second stage, after a set of random variables is realized, a set of binary decisions can be made to receive further profit. In this work, we specifically consider the instance described in Example 7.3 of \citep{schultz1998solving}. This problem has 2 continuous variables in the first stage with the domain $[0, 5]$, and 4 binary variables in the second stage. The scenarios are given by two discrete random variables defined with equal probability over the range $[5,15]$. Specifically, for $K$ scenarios, each random variable takes equally spaced values in this range. Although the number of variables is quite small, the presence of continuous first-stage decisions has made this problem relevant within the context of other recent work, such as the heuristic approach proposed in \citep{van2021loose}. As a note, we reformulate the INVP as an equivalent minimization problem in the remainder of this work. In the experiments an INVP instance is denoted by INVP\_$v$\_$t$, where $v$ indicates the type of second-stage variable
(B for binary and I for integer) and $t$ indicates the type of technology matrix ($E$ for identity and $H$ for $[[2 / 3, 1 / 3], [1 / 3, 2 / 3]]$).
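Our reading of this scenario construction can be sketched as follows
(illustrative Python only; it assumes $K$ is a perfect square so that each of
the two random variables takes $\sqrt{K}$ equally spaced values in $[5,15]$,
consistent with the instance sizes $4, 9, 36, \ldots, 10000$ used later):
\begin{verbatim}
import numpy as np
from itertools import product

def invp_scenarios(K):
    m = int(round(K ** 0.5))       # values per random variable
    vals = np.linspace(5, 15, m)   # equally spaced over [5, 15]
    scenarios = list(product(vals, vals))
    probs = [1.0 / K] * K          # equal probability per scenario
    return scenarios, probs
\end{verbatim}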
\subsection{Stochastic Server Location Problem (SSLP)}
\label{app:spp_sslp}
The SSLP is a 2SP in which the first-stage decisions select which servers to utilize, and the second-stage decisions assign clients to servers. In this case, the random variables take binary values, each representing whether or not a client places a request in the scenario. A more detailed description of the problem can be found in \citep{ntaimo2005million}. In this work, we directly use the instances provided in SIPLIB \citep{ahmed2015siplib}. In the experiments an SSLP with $n$ servers, $m$ clients, and $s$ scenarios is denoted by SSLP\_$n$\_$m$\_$s$.
\subsection{Pooling Problem (PP)}
\label{app:spp_pp}
The pooling problem is a well-studied problem in the field of mixed-integer nonlinear programming \cite{audet2004pooling, gupte2017relaxations, gounaris2009computational, haverly1978studies}.
It can be formulated as a mixed-integer quadratically constrained quadratic program, making it the hardest problem class in our experiments.
We are given a directed graph, consisting of three disjoint sets of nodes, called the source, pool and terminal nodes.
We need to produce and send some products from the source to the terminal nodes, using the given arcs, such that the product demand and quality constraints on the terminal nodes, along with the arc capacity constraints, are satisfied.
The pool nodes can be used to mix products with different qualities in appropriate quantities to generate a desired quality product.
The goal is to decide the amount of product to send on each arc such that the total profit from the operations is maximized.
We consider a stochastic version of the problem as described in the case study of \cite{li2011stochastic}.
Here, in the first stage, we need to design the network by selecting nodes and arcs from the input graph, without knowing the quality of the product produced on source nodes and the exact demand on the terminal nodes.
Once the uncertainty is revealed, in the second stage, we make the recourse decisions about the amount of product to be sent on each arc, such that demand and quality constraints on the terminal nodes are satisfied.
In our case, we have 16 binary variables in the first stage and 11 continuous variables per scenario in the second stage.
An instance of this problem is referred to as PP\_$s$, where $s$ is the number of scenarios.
\section{Data Generation \& Supervised Learning Times}
\label{app:dg_tr}
In this section, we report details of the data generation and training times for all problem settings in Tables~\ref{tab:dg_times} and \ref{tab:tr_times}, respectively. For training, we split the samples into an 80\%-20\% train/validation split and select the model that performs best on the validation set within the given number of epochs.
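As a minimal sketch of this split (illustrative PyTorch only; best-model
selection simply keeps the parameters with the lowest validation error over
the epochs):
\begin{verbatim}
import torch
from torch.utils.data import random_split

def train_val_split(dataset, train_frac=0.8, seed=0):
    n_train = int(train_frac * len(dataset))
    gen = torch.Generator().manual_seed(seed)
    return random_split(dataset,
                        [n_train, len(dataset) - n_train],
                        generator=gen)
\end{verbatim}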
\begin{table*}[t]\centering\resizebox{0.9\textwidth}{!}{
\begin{tabular}{l|rrr|rrr}
\toprule
Problem & \multicolumn{3}{c}{SC} & \multicolumn{3}{c}{MC} \\
\cmidrule{2-7}
{} & {\# samples} & {Time per sample} & {Total time} & {\# samples} & {Time per sample} & {Total time} \\
\midrule
CFLP\_10\_10 & 5,000 & 0.36 & 1,823.07 & 10,000 & 0.00 & 13.59 \\
CFLP\_25\_25 & 5,000 & 0.83 & 4,148.83 & 10,000 & 0.01 & 112.83 \\
CFLP\_50\_50 & 5,000 & 1.54 & 7,697.91 & 10,000 & 0.01 & 135.57 \\
\midrule
SSLP\_10\_50 & 5,000 & 0.19 & 942.10 & 10,000 & 0.00 & 22.95 \\
SSLP\_15\_45 & 5,000 & 0.19 & 929.27 & 10,000 & 0.00 & 16.35 \\
SSLP\_5\_25 & 5,000 & 0.17 & 860.74 & 10,000 & 0.00 & 13.18 \\
\midrule
INVP\_B\_E & 5,000 & 1.79 & 8,951.27 & 10,000 & 0.00 & 4.17 \\
INVP\_B\_H & 5,000 & 1.84 & 9,207.90 & 10,000 & 0.00 & 4.22 \\
INVP\_I\_E & 5,000 & 1.75 & 8,759.83 & 10,000 & 0.00 & 4.34 \\
INVP\_I\_H & 5,000 & 1.79 & 8,944.65 & 10,000 & 0.00 & 3.32 \\
\midrule
PP & 5,000 & 0.24 & 1,202.11 & 10,000 & 0.00 & 14.86 \\
\bottomrule
\end{tabular}}
\caption{Data generation samples and times. Data was generated in parallel with 43 processes. All times in seconds.}
\label{tab:dg_times}
\end{table*}
\begin{table*}[t]\centering\resizebox{0.4\textwidth}{!}{
\begin{tabular}{l|rrr}
\toprule
{} & {SC} & {MC} & {LR} \\
\midrule
CFLP\_10\_10 & 667.28 & 127.12 & 0.53 \\
CFLP\_25\_25 & 2,205.23 & 840.07 & 0.28 \\
CFLP\_50\_50 & 463.71 & 128.11 & 0.75 \\
\midrule
SSLP\_10\_50 & 708.86 & 116.17 & 0.63 \\
SSLP\_15\_45 & 1,377.21 & 229.42 & 0.57 \\
SSLP\_5\_25 & 734.02 & 147.44 & 0.05 \\
\midrule
INVP\_B\_E & 344.87 & 1,000.14 & 0.02 \\
INVP\_B\_H & 1,214.54 & 607.49 & 0.02 \\
INVP\_I\_E & 2,115.25 & 680.93 & 0.02 \\
INVP\_I\_H & 393.82 & 174.26 & 0.02 \\
\midrule
PP & 576.08 & 367.25 & 0.05 \\
\bottomrule
\end{tabular}}
\caption{Training times. All times in seconds.}
\label{tab:tr_times}
\end{table*}
\section{Objective Results}
\label{app:objective}
In this section, we report the objective for the first-stage solutions obtained by each approximate MIP and the objective of EF (either optimal or at the end of the solving time). In addition, we report the objective of the approximate MIP. See Tables~\ref{tab:obj_CFLP} through \ref{tab:obj_PP} for results. As mentioned in the main paper, the results from the linear regressor (LR) are quite poor, with a significantly worse objective in almost every instance. This is unsurprising, as a linear function is unlikely to have the capacity to estimate the integer and non-linear second-stage objectives. For both SC and MC we can see that the true objective and the approximate-MIP objective are relatively close for all of the problem settings, further indicating that the neural network embedding is a useful approximation to the second-stage expected cost.
\begin{table*}[t]\centering\resizebox{0.9\textwidth}{!}{
\begin{tabular}{l|rrrr|rrr}
\toprule
Problem & \multicolumn{4}{c}{True objective} & \multicolumn{3}{c}{Approximate-MIP objective} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-8}
{} & {SC} & {MC} & {LR} & {EF} & {SC\ } & {MC\ } & {LR\ } \\
\midrule
CFLP\_10\_10\_100 & 7,174.57 & 7,109.62 & 10,418.87 & \textbf{6,994.77} & 7,102.57 & 7,046.37 & 5,631.00 \\
CFLP\_10\_10\_500 & 7,171.79 & 7,068.91 & 10,410.19 & \textbf{7,003.30} & 7,102.57 & 7,084.46 & 5,643.68 \\
CFLP\_10\_10\_1000 & 7,154.60 & \textbf{7,040.70} & 10,406.08 & 7,088.56 & 7,102.57 & 7,064.36 & 5,622.40 \\
CFLP\_25\_25\_100 & \textbf{11,773.01} & \textbf{11,773.01} & 23,309.73 & 11,864.83 & 11,811.39 & 12,100.73 & 10,312.21 \\
CFLP\_25\_25\_500 & \textbf{11,726.34} & \textbf{11,726.34} & 23,310.34 & 12,170.67 & 11,811.39 & 12,051.51 & 10,277.01 \\
CFLP\_25\_25\_1000 & \textbf{11,709.90} & \textbf{11,709.90} & 23,309.85 & 11,868.04 & 11,811.39 & 12,041.12 & 10,263.37 \\
CFLP\_50\_50\_100 & 25,236.33 & \textbf{25,019.64} & 45,788.45 & 25,349.21 & 26,309.43 & 26,004.88 & 18,290.63 \\
CFLP\_50\_50\_500 & 25,281.13 & \textbf{24,964.33} & 45,786.97 & 28,037.66 & 26,287.48 & 25,986.50 & 18,209.77 \\
CFLP\_50\_50\_1000 & 25,247.77 & \textbf{24,981.70} & 45,787.18 & 30,282.41 & 26,309.43 & 26,002.78 & 18,217.14 \\
\bottomrule
\end{tabular}}
\caption{CFLP detailed objective results: each row represents an average over 10 2SP instances with varying scenario sets.
``True objective" for \{SC,MC,LR\} is the cost of the first-stage solution obtained from the approximate MIP evaluated on the second-stage scenarios. For EF it is the objective at the solving limit.
``Approximate-MIP objective" is objective from the approximate MIP for \{SC,MC,LR\}. All times in seconds.
}
\label{tab:obj_CFLP}
\end{table*}
\begin{table*}[t]\centering\resizebox{0.78\textwidth}{!}{
\begin{tabular}{l|rrrr|rrr}
\toprule
Problem & \multicolumn{4}{c}{True objective} & \multicolumn{3}{c}{Approximate-MIP objective} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-8}
{} & {SC} & {MC} & {LR} & {EF} & {SC\ } & {MC\ } & {LR\ } \\
\midrule
SSLP\_10\_50\_50 & -354.96 & -354.96 & -63.00 & \textbf{-354.96} & -350.96 & -339.42 & -294.69 \\
SSLP\_10\_50\_100 & \textbf{-345.86} & \textbf{-345.86} & -49.62 & -345.86 & -350.96 & -328.54 & -283.96 \\
SSLP\_10\_50\_500 & \textbf{-349.54} & \textbf{-349.54} & -54.68 & -349.54 & -350.96 & -332.82 & -288.02 \\
SSLP\_10\_50\_1000 & \textbf{-350.07} & \textbf{-350.07} & -55.45 & -235.22 & -350.96 & -333.46 & -288.55 \\
SSLP\_10\_50\_2000 & \textbf{-350.07} & \textbf{-350.07} & -54.72 & -172.73 & -350.96 & -332.87 & -288.19 \\
SSLP\_15\_45\_5 & -247.27 & -206.83 & -249.51 & \textbf{-255.55} & -238.44 & -259.11 & -58.28 \\
SSLP\_15\_45\_10 & -249.58 & -209.49 & -252.89 & \textbf{-257.41} & -238.44 & -265.92 & -64.01 \\
SSLP\_15\_45\_15 & -251.10 & -208.86 & -254.58 & \textbf{-257.68} & -238.44 & -267.01 & -66.71 \\
SSLP\_5\_25\_50 & -125.22 & -123.15 & 14.50 & \textbf{-125.36} & -121.64 & -110.18 & -119.98 \\
SSLP\_5\_25\_100 & -120.91 & -119.03 & 19.87 & \textbf{-120.94} & -121.64 & -109.59 & -117.79 \\
\bottomrule
\end{tabular}}
\caption{SSLP detailed objective results: each row represents an average over eleven 2SP instances with varying scenario sets. See Table~\ref{tab:obj_CFLP} for a detailed description of the columns.}
\label{tab:obj_SSLP}
\end{table*}
\begin{table*}[t]\centering\resizebox{0.78\textwidth}{!}{
\begin{tabular}{l|rrrr|rrr}
\toprule
Problem & \multicolumn{4}{c}{True objective} & \multicolumn{3}{c}{Approximate-MIP objective} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-8}
{} & {SC} & {MC} & {LR} & {EF} & {SC\ } & {MC\ } & {LR\ } \\
\midrule
INVP\_B\_E\_4 & -51.56 & -55.29 & -46.25 & \textbf{-57.00} & -58.59 & -52.15 & -63.67 \\
INVP\_B\_E\_9 & -54.86 & -58.15 & -53.11 & \textbf{-59.33} & -58.81 & -55.33 & -63.67 \\
INVP\_B\_E\_36 & -59.55 & -58.19 & -58.86 & \textbf{-61.22} & -59.38 & -57.92 & -63.67 \\
INVP\_B\_E\_121 & -61.44 & -60.78 & -61.06 & \textbf{-62.29} & -59.60 & -58.91 & -63.67 \\
INVP\_B\_E\_441 & -59.60 & -59.83 & -59.91 & \textbf{-61.32} & -59.91 & -58.51 & -63.67 \\
INVP\_B\_E\_1681 & -59.81 & - & -59.30 & \textbf{-60.63} & -59.94 & - & -63.67 \\
INVP\_B\_E\_10000 & \textbf{-59.85} & - & -58.68 & -58.98 & -59.95 & - & -63.67 \\
INVP\_B\_H\_4 & -51.75 & -51.36 & -51.75 & \textbf{-56.75} & -58.12 & -52.41 & -61.24 \\
INVP\_B\_H\_9 & -56.56 & -56.56 & -56.56 & \textbf{-59.56} & -61.78 & -56.67 & -61.24 \\
INVP\_B\_H\_36 & -59.31 & -59.31 & -59.31 & \textbf{-60.28} & -59.38 & -59.52 & -61.24 \\
INVP\_B\_H\_121 & -59.93 & -59.93 & -59.93 & \textbf{-61.01} & -60.22 & -60.54 & -61.24 \\
INVP\_B\_H\_441 & -60.14 & -58.07 & -60.14 & \textbf{-61.44} & -60.23 & -58.13 & -61.24 \\
INVP\_B\_H\_1681 & \textbf{-60.47} & - & \textbf{-60.47} & -60.04 & -60.57 & - & -61.24 \\
INVP\_B\_H\_10000 & \textbf{-60.53} & - & \textbf{-60.53} & -58.93 & -60.65 & - & -61.24 \\
INVP\_I\_E\_4 & -55.35 & \textbf{-63.50} & -52.50 & \textbf{-63.50} & -66.79 & -58.96 & -71.57 \\
INVP\_I\_E\_9 & -61.63 & -64.80 & -61.89 & \textbf{-66.56} & -66.70 & -61.70 & -71.57 \\
INVP\_I\_E\_36 & -66.03 & -66.25 & -67.08 & \textbf{-69.86} & -67.39 & -65.18 & -71.57 \\
INVP\_I\_E\_121 & -67.35 & -67.92 & -69.07 & \textbf{-71.12} & -67.39 & -66.70 & -71.57 \\
INVP\_I\_E\_441 & -67.55 & -69.16 & -67.39 & \textbf{-69.64} & -67.63 & -67.43 & -71.57 \\
INVP\_I\_E\_1681 & -67.95 & -66.73 & -66.52 & \textbf{-68.85} & -67.69 & -67.62 & -71.57 \\
INVP\_I\_E\_10000 & \textbf{-67.94} & - & -65.67 & -67.04 & -67.82 & - & -71.57 \\
INVP\_I\_H\_4 & -54.75 & -55.78 & -54.75 & \textbf{-63.50} & -65.31 & -59.99 & -66.07 \\
INVP\_I\_H\_9 & -59.78 & -65.25 & -59.78 & \textbf{-65.78} & -64.15 & -61.08 & -66.07 \\
INVP\_I\_H\_36 & -63.78 & -64.80 & -63.78 & \textbf{-67.11} & -66.79 & -63.76 & -66.07 \\
INVP\_I\_H\_121 & -65.03 & -64.37 & -65.03 & \textbf{-67.75} & -65.38 & -64.64 & -66.07 \\
INVP\_I\_H\_441 & -65.12 & -65.12 & -65.12 & \textbf{-67.24} & -67.13 & -65.16 & -66.07 \\
INVP\_I\_H\_1681 & \textbf{-65.63} & -65.34 & \textbf{-65.63} & -65.41 & -65.87 & -65.03 & -66.07 \\
INVP\_I\_H\_10000 & \textbf{-65.66} & - & \textbf{-65.66} & -64.63 & -66.45 & - & -66.07 \\
\bottomrule
\end{tabular}}
\caption{INVP detailed objective results: each row represents a single instance. See Table~\ref{tab:obj_CFLP} for a detailed description of the columns.}
\label{tab:obj_INVP}
\end{table*}
\begin{table*}[t]\centering\resizebox{0.7\textwidth}{!}{
\begin{tabular}{l|rrrr|rrr}
\toprule
Problem & \multicolumn{4}{c}{True objective} & \multicolumn{3}{c}{Approximate-MIP objective} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-8}
{} & {SC} & {MC} & {LR} & {EF} & {SC\ } & {MC\ } & {LR\ } \\
\midrule
PP\_125 & 264.30 & 173.10 & -10.00 & \textbf{273.19} & 189.75 & 177.12 & 70.75 \\
PP\_216 & 200.29 & 131.83 & -10.00 & \textbf{220.25} & 189.75 & 168.10 & 70.75 \\
PP\_343 & 206.38 & 122.90 & -10.00 & \textbf{207.77} & 189.75 & 172.17 & 70.75 \\
PP\_512 & 204.41 & 134.83 & -10.00 & \textbf{223.86} & 189.75 & 162.54 & 70.75 \\
PP\_729 & 219.42 & 137.97 & -10.00 & \textbf{222.48} & 189.75 & 167.55 & 70.75 \\
PP\_1000 & 202.50 & 126.30 & -10.00 & \textbf{215.25} & 189.75 & 165.59 & 70.75 \\
\bottomrule
\end{tabular}}
\caption{PP detailed objective results: each row represents a single instance. See Table~\ref{tab:obj_CFLP} for a detailed description of the columns.}
\label{tab:obj_PP}
\end{table*}
\section{Model Selection}
\label{app:ms_ds}
For the supervised learning task, we implement linear regression using Scikit-learn 1.0.1 \citep{scikit-learn}. In this case we use the base estimator with no regularization.
The multi-cut and single-cut neural models are all implemented using PyTorch 1.10.0 \citep{NEURIPS2019_9015}.
For model selection, we use random search over 100 configurations for each problem setting. For multi-cut and single-cut we sample configurations from Table~\ref{tab:random_search_space}. For both cases we limit the ReLU layers to a single layer with a varying hidden dimension. In the multi-cut case the choice of the ReLU hidden dimension is limited since a large number of predictions each with a large hidden dimension can lead to MILPs which are prohibitively expensive to solve. For the single-cut specific hidden dimensions, we have 3 layers, with Embed hidden dimension 1 and Embed hidden dimension 2 corresponding to layers before the aggregation and Embed hidden dimension 3 being a final hidden layer after the aggregation.
In Tables~\ref{tab:mc_rs_best} and \ref{tab:sc_rs_best} we report the best parameters for each problem setting for the multi- and single-cut models, respectively. In addition, we report the validation MAE across all 100 configurations for each problem in box plots in Figures~\ref{fig:cflp_rs_box_plots} to \ref{fig:pp_rs_box_plots}. From the box plots we can observe that configurations with low validation MAE are quite common, as the medians are typically not far from the lower tails of the distributions. This indicates that hyperparameter selection can be helpful when attempting to improve the second-stage cost estimates; however, the gains are marginal in most cases.
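Sampling a configuration from this space is straightforward; the following
minimal Python sketch (our illustration of Table~\ref{tab:random_search_space};
only the parameters shared by both models are shown) draws one configuration:
\begin{verbatim}
import random

SPACE = {
    "batch_size": [16, 32, 64, 128],             # discrete: uniform choice
    "lr": (1e-5, 1e-1),                          # continuous: uniform range
    "l1": (1e-5, 1e-1),
    "l2": (1e-5, 1e-1),
    "optimizer": ["Adam", "Adagrad", "RMSprop"],
    "dropout": (0.0, 0.5),
}

def sample_config():
    return {k: random.choice(v) if isinstance(v, list)
               else random.uniform(*v)
            for k, v in SPACE.items()}

configs = [sample_config() for _ in range(100)]  # 100 configurations
\end{verbatim}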
\begin{table*}[t]\centering\resizebox{0.8\textwidth}{!}{
\begin{tabular}{l|cc}
\toprule
{Parameter} & {MC} & {SC } \\
\midrule
Batch size & $\{16, 32, 64, 128\}$ & $\{16, 32, 64, 128\}$ \\
Learning rate & $[1e^{-5},1e^{-1}]$ & $[1e^{-5},1e^{-1}]$ \\
L1 weight penalty & $[1e^{-5}, 1e^{-1}]$ & $[1e^{-5}, 1e^{-1}]$ \\
L2 weight penalty & $[1e^{-5}, 1e^{-1}]$ & $[1e^{-5}, 1e^{-1}]$\\
Optimizer & \{Adam, Adagrad, RMSprop\} & \{Adam, Adagrad, RMSprop\}\\
Dropout & $[0, 0.5]$ & $[0, 0.5]$ \\
\# Epochs & $1000$ & $2000$ \\
ReLU hidden dimension & $\{32, 64\}$ & $\{64, 128, 256, 512\}$ \\
Embed hidden dimension 1 & - & $\{64, 128, 256, 512\}$ \\
Embed hidden dimension 2 & - & $\{16, 32, 64, 128\}$ \\
Embed hidden dimension 3 & - & $\{8, 16, 32, 64\}$ \\
\bottomrule
\end{tabular}}
\caption{Random search parameter space for multi-cut and single-cut models. For values in \{\}, we sample each discrete choice with equal probability. For values in [], we sample from a uniform distribution with the given bounds. For single values, we keep the parameter fixed across all configurations. A value of - means that the parameter is not applicable for the given model type.}
\label{tab:random_search_space}
\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|rrrrrrrrrrr}
\toprule
{Parameter} & {CFLP\_10\_10} & {CFLP\_25\_25} & {CFLP\_50\_50} & {SSLP\_5\_25} & {SSLP\_10\_50} & {SSLP\_15\_45} & {INVP\_B\_E} & {INVP\_B\_H} & {INVP\_I\_E} & {INVP\_I\_H} & {PP}\\
\midrule
Batch size & 128 & 16 & 128 & 128 & 128 & 64 & 16 & 32 & 32 & 128 & 64\\
Learning rate & 0.05029 & 0.05217 & 0.00441 & 0.03385 & 0.07015 & 0.08996 & 0.00435 & 0.00521 & 0.06613 & 0.01614 & 0.0032\\
L1 weight penalty & 0.07512 & 0.00551 & 0.08945 & 0.03232 & 0.07079 & 0.09105 & 0.08321 & 0.05754 & 0.01683 & 0.01859 & 0 \\
L2 weight penalty & 0.08343 & 0.02846 & 0.08602 & 0.0 & 0.01792 & 0.0 & 0.01047 & 0.02728 & 0 & 0& 0.0361\\
Optimizer & Adam & Adam & Adam & RMSprop & RMSprop & RMSprop & RMSProp & RMSProp & Adam & Adam & Adam \\
Dropout & 0.02198 & 0.02259 & 0.05565 & 0.00914 & 0.01944 & 0.02257 & 0.17237 & 0.13698 & 0.04986 & 0.0859 & 0.05945\\
ReLU hidden dimension & 64 & 32 & 64 & 32 & 64 & 32 & 64 & 64 & 64 & 32 & 64\\
\bottomrule
\end{tabular}}
\caption{MC best configurations from random search.}
\label{tab:mc_rs_best}
\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|rrrrrrrrrrr}
\toprule
{Parameter} & {CFLP\_10\_10} & {CFLP\_25\_25} & {CFLP\_50\_50} & {SSLP\_5\_25} & {SSLP\_10\_50} & {SSLP\_15\_45} & {INVP\_B\_E} & {INVP\_B\_H} & {INVP\_I\_E} & {INVP\_I\_H} & {PP}\\
\midrule
Batch size & 32 & 16 & 128 & 64 & 64 & 32 & 128 & 32 & 16 & 128 & 64 \\
Learning rate & 0.0263 & 0.06571 & 0.02906 & 0.08876 & 0.07633 & 0.03115 & 0.01959 & 0.00846 & 0.02841 & 0.02867 & 0.08039 \\
L1 weight penalty & 0.02272 & 0.06841 & 0.05792 & 0.0 & 0.04529 & 0.07182 & 0.0 & 0.0 & 0.00022 & 0 & 0\\
L2 weight penalty & 0.05747 & 0.0 & 0.04176 & 0.03488 & 0.0 & 0.0& 0 & 0.09007 & 0.02272 & 0.01882 & 0\\
Optimizer & RMSprop & Adam & Adam & Adam & RMSprop & Adam & Adagrad & Adam & Adagrad & Adagrad & Adam\\
Dropout & 0.01686 & 0.0028 & 0.03318 & 0.00587 & 0.00018 & 0.0088 & 0.08692 & 0.04096 & 0.01854 & 0.01422 & 0.0072\\
ReLU hidden dimension & 128 & 256 & 256 & 256 & 64 & 256 & 256 & 256 & 256 & 256 & 512\\
Embed hidden dimension 1 & 512 & 128 & 256 & 64 & 128 & 512 & 256 & 512 & 64 & 256 & 512\\
Embed hidden dimension 2 & 16 & 64 & 64 & 16 & 32 & 64 & 16 & 16 & 32 & 32 & 128\\
Embed hidden dimension 3 & 16 & 16 & 8 & 32 & 64 & 16 & 32 & 16 & 8 & 64 & 16\\
\bottomrule
\end{tabular}}
\caption{SC best configurations from random search.}
\label{tab:sc_rs_best}
\end{table*}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.5]{figs/cflp_rs_mae.png}}
\caption{CFLP validation MAE over random search configurations for multi-cut (MC) and single-cut (SC) models. }
\label{fig:cflp_rs_box_plots}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.5]{figs/sslp_rs_mae.png}}
\caption{SSLP validation MAE over random search configurations for multi-cut (MC) and single-cut (SC) models. }
\label{fig:sslp_rs_box_plots}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.5]{figs/ip_rs_mae.png}}
\caption{INVP validation MAE over random search configurations for multi-cut (MC) and single-cut (SC) models. }
\label{fig:ip_rs_box_plots}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.5]{figs/pp_rs_mae.png}}
\caption{PP validation MAE over random search configurations for multi-cut (MC) and single-cut (SC) models. }
\label{fig:pp_rs_box_plots}
\end{center}
\end{figure}
\begin{comment}
\section{Learning Models}
\label{app:training_details}
\jd{Reorganize appendix once we have a better idea of what we want to do. }
For the supervised learning task, we implement linear regression using Scikit-learn 1.0.1 \citep{scikit-learn}.
The neural models are all implemented using Pytorch 1.10.0 \citep{NEURIPS2019_9015}.
For all problem settings we are able to converge to reasonable results with a single, relatively small hidden dimension. In addition, we use the Adam optimizer \cite{kingma2015adam}, a learning rate of $0.001$, mean squared error as the loss, and a batch size of 32. The problem-specific hyperparameters are reported in Table~\ref{tab:nn_params}.
We note that results may be improved through further hyperparameter tuning, however, with the limited amount of parameter tuning done, we are able to converge to reasonable results without overfitting.
\begin{table}[t]
\centering
\begin{tabular}{l|c c c}
\toprule
{Problem} & {Input Dimension} & {Hidden Dimension} & {\# Epochs}\\
\midrule
CFLP\_10\_10\_x & 20 & 128 & 1000 \\
CFLP\_25\_25\_x & 50 & 128 & 1000 \\
CFLP\_50\_50\_x & 100 & 164 & 1000 \\
\midrule
INVP\_x & 4 & 32 & 200 \\
\midrule
SSLP\_5\_25\_x & 30 & 32 & 500 \\
SSLP\_10\_50\_x & 60 & 32 & 500 \\
SSLP\_15\_45\_x & 60 & 64 & 500 \\
\midrule
PP\_x & 19 & 100 & 500 \\
\bottomrule
\end{tabular}
\caption{Neural network parameters. Here x denotes the number of scenarios, which the parameters are fixed across.}
\label{tab:nn_params}
\end{table}
\clearpage
\section{Updated Results}
\subsection{Random Data Generation}
Results for updated CFLP/SSLP with random data generation are reported in Tables~\ref{tab:update_cflp_rdg}-\ref{tab:update_sslp_rdg} and Figures~\ref{fig:cflp_rdg_gap}-\ref{fig:cflp_rdg_time_2}.
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|ll|lll|lll}
\toprule
{} & {Gap to EF (NN)} & {Gap to EF (NN-SC)} & {~\method{} time to best} & {~\method{}-SC time to best} & {EF time to best} & {~\method{} objective} & {~\method{}-SC objective} & {EF objective} \\
\midrule
cflp\_10\_10\_100 & 0.00 & 0.00 & 8.74 & \textbf{0.12} & 179.79 & 6,951.15 & 6,951.15 & \textbf{6,951.15} \\
cflp\_10\_10\_1000 & -1.70 & -1.70 & 179.02 & \textbf{0.04} & 2,749.07 & \textbf{6,986.08} & \textbf{6,986.08} & 7,107.10 \\
cflp\_10\_10\_500 & 0.00 & 2.72 & 47.67 & \textbf{0.06} & 3,522.88 & 7,042.18 & 7,233.54 & \textbf{7,042.18} \\
cflp\_25\_25\_100 & -0.21 & -0.21 & 4.58 & \textbf{0.02} & 3,552.87 & \textbf{11,682.73} & \textbf{11,682.73} & 11,707.43 \\
cflp\_25\_25\_1000 & -2.24 & 0.98 & 93.38 & \textbf{0.90} & 3,489.34 & \textbf{11,692.53} & 12,077.85 & 11,960.42 \\
cflp\_25\_25\_500 & -0.72 & -0.72 & 49.24 & \textbf{0.02} & 3,559.68 & \textbf{11,673.89} & \textbf{11,673.89} & 11,758.69 \\
cflp\_50\_50\_100 & -1.43 & -0.46 & 6.15 & \textbf{0.12} & 2,464.41 & \textbf{25,146.63} & 25,395.48 & 25,512.70 \\
cflp\_50\_50\_1000 & -81.41 & -81.20 & 191.01 & \textbf{0.03} & 0.44 & \textbf{24,964.91} & 25,250.28 & 134,300.00 \\
cflp\_50\_50\_500 & -18.29 & -17.95 & 80.77 & \textbf{0.06} & 1,862.62 & \textbf{24,915.47} & 25,018.97 & 30,491.97 \\
\bottomrule
\end{tabular}}
\caption{In sample CFLP results}
\label{tab:update_cflp_rdg}
\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|ll|lll|lll}
\toprule
{} & {Gap to EF (NN)} & {Gap to EF (NN-SC)} & {~\method{} time to best} & {~\method{}-SC time to best} & {EF time to best} & {~\method{} objective} & {~\method{}-SC objective} & {EF objective} \\
\midrule
sslp\_10\_50\_100 & -0.00 & 0.85 & 0.07 & \textbf{0.04} & 172.11 & \textbf{-354.19} & -351.17 & -354.19 \\
sslp\_10\_50\_1000 & -96.98 & -99.80 & 7.95 & \textbf{0.03} & 58.87 & -346.75 & \textbf{-351.71} & -176.03 \\
sslp\_10\_50\_2000 & -105.07 & -100.68 & 3.29 & \textbf{0.08} & 1,416.76 & \textbf{-347.26} & -339.83 & -169.34 \\
sslp\_10\_50\_50 & 0.00 & -0.00 & \textbf{0.05} & 0.08 & 81.04 & -364.64 & \textbf{-364.64} & -364.64 \\
sslp\_10\_50\_500 & 1.46 & 2.05 & 281.30 & \textbf{0.10} & 3,366.21 & -344.02 & -341.95 & \textbf{-349.12} \\
sslp\_15\_45\_10 & 0.46 & 3.26 & 0.05 & \textbf{0.05} & 0.77 & -259.30 & -252.00 & \textbf{-260.50} \\
sslp\_15\_45\_15 & 0.79 & 26.03 & \textbf{0.04} & 1.88 & 11.05 & -251.60 & -187.60 & \textbf{-253.60} \\
sslp\_15\_45\_5 & 1.22 & 19.97 & \textbf{0.18} & 0.69 & 2.33 & -259.20 & -210.00 & \textbf{-262.40} \\
sslp\_5\_25\_100 & 0.00 & 1.40 & 0.44 & \textbf{0.03} & 7.08 & -127.37 & -125.59 & \textbf{-127.37} \\
sslp\_5\_25\_50 & 0.00 & 0.00 & 0.20 & \textbf{0.02} & 1.99 & -121.60 & -121.60 & \textbf{-121.60} \\
\bottomrule
\end{tabular}}
\caption{In sample SSLP results}
\label{tab:update_sslp_rdg}
\end{table*}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.6]{figs/rdg_gap.png}}
\caption{CFLP generalization results: Gap}
\label{fig:cflp_rdg_gap}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.6]{figs/rdg_time.png}}
\caption{CFLP generalization results: Time}
\label{fig:cflp_rdg_time}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.6]{figs/rdg_time_outliers.png}}
\caption{CFLP generalization results: Time (\method{}-SC removed)}
\label{fig:cflp_rdg_time_2}
\end{center}
\end{figure}
\clearpage
\subsection{Data Generation: Perturbing Optimal}
Results for updated CFLP/SSLP with previous data generation method (i.e. perturbing optimal) are reported in Tables~\ref{tab:update_cflp}-\ref{tab:update_sslp} and Figures~\ref{fig:cflp_gap}-\ref{fig:cflp_time_2}.
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|ll|lll|lll}
\toprule
{} & {Gap to EF (NN)} & {Gap to EF (NN-SC)} & {~\method{} time to best} & {~\method{}-SC time to best} & {EF time to best} & {~\method{} objective} & {~\method{}-SC objective} & {EF objective} \\
\midrule
cflp\_10\_10\_100 & 0.00 & 0.00 & 2.52 & \textbf{0.11} & 177.12 & 6,951.15 & 6,951.15 & \textbf{6,951.15} \\
cflp\_10\_10\_1000 & -1.16 & -1.70 & 3.34 & \textbf{0.05} & 2,634.00 & 7,024.49 & \textbf{6,986.08} & 7,107.10 \\
cflp\_10\_10\_500 & 0.00 & 0.00 & 0.71 & \textbf{0.05} & 2,533.52 & 7,042.18 & 7,042.18 & \textbf{7,042.18} \\
cflp\_25\_25\_100 & -0.21 & -0.21 & 54.93 & \textbf{0.12} & 3,578.46 & \textbf{11,682.73} & \textbf{11,682.73} & 11,707.38 \\
cflp\_25\_25\_1000 & -2.52 & -2.52 & 3,295.27 & \textbf{0.13} & 3,304.37 & \textbf{11,692.53} & \textbf{11,692.53} & 11,994.23 \\
cflp\_25\_25\_500 & -0.72 & -0.72 & 56.76 & \textbf{0.04} & 2,731.94 & \textbf{11,673.89} & \textbf{11,673.89} & 11,758.89 \\
cflp\_50\_50\_100 & -1.56 & -0.54 & 4.59 & \textbf{0.30} & 3,334.03 & \textbf{25,115.30} & 25,374.09 & 25,512.25 \\
cflp\_50\_50\_1000 & -81.41 & -81.28 & 294.95 & \textbf{0.02} & 0.41 & \textbf{24,964.91} & 25,147.42 & 134,300.00 \\
cflp\_50\_50\_500 & -16.39 & -15.82 & 53.41 & \textbf{10.38} & 3,391.55 & \textbf{24,884.55} & 25,054.75 & 29,761.56 \\
\bottomrule
\end{tabular}}
\caption{In sample SSLP results}
\label{tab:update_cflp}
\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|ll|lll|lll}
\toprule
{} & {Gap to EF (NN)} & {Gap to EF (NN-SC)} & {~\method{} time to best} & {~\method{}-SC time to best} & {EF time to best} & {~\method{} objective} & {~\method{}-SC objective} & {EF objective} \\
\midrule
sslp\_10\_50\_100 & -0.00 & -0.00 & 1.44 & \textbf{0.21} & 173.46 & \textbf{-354.19} & \textbf{-354.19} & -354.19 \\
sslp\_10\_50\_1000 & -99.80 & -96.41 & 3.85 & \textbf{0.01} & 59.11 & \textbf{-351.71} & -345.74 & -176.03 \\
sslp\_10\_50\_2000 & -56.68 & -105.07 & 2,831.43 & \textbf{0.02} & 1,542.83 & -265.32 & \textbf{-347.26} & -169.34 \\
sslp\_10\_50\_50 & 0.00 & 0.00 & \textbf{0.07} & 0.11 & 76.91 & -364.64 & -364.64 & \textbf{-364.64} \\
sslp\_10\_50\_500 & -0.01 & -0.01 & 47.18 & \textbf{0.01} & 2,289.62 & \textbf{-349.14} & \textbf{-349.14} & -349.12 \\
sslp\_15\_45\_10 & 0.65 & -0.00 & \textbf{0.06} & 0.70 & 1.04 & -258.80 & \textbf{-260.50} & -260.50 \\
sslp\_15\_45\_15 & 0.79 & 0.16 & \textbf{0.05} & 12.96 & 14.88 & -251.60 & -253.20 & \textbf{-253.60} \\
sslp\_15\_45\_5 & 1.22 & 12,968.22 & 0.03 & \textbf{0.02} & 3.12 & -259.20 & 33,766.20 & \textbf{-262.40} \\
sslp\_5\_25\_100 & 0.00 & 0.00 & 0.53 & \textbf{0.02} & 7.26 & -127.37 & -127.37 & \textbf{-127.37} \\
sslp\_5\_25\_50 & 0.00 & 0.00 & 8.90 & \textbf{0.01} & 2.05 & -121.60 & -121.60 & \textbf{-121.60} \\
\bottomrule
\end{tabular}}
\caption{In sample SSLP results}
\label{tab:update_sslp}
\end{table*}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.6]{figs/gap.png}}
\caption{CFLP generalization results: Gap}
\label{fig:cflp_gap}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.6]{figs/time.png}}
\caption{CFLP generalization results: Time}
\label{fig:cflp_time}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.6]{figs/time_outliers.png}}
\caption{CFLP generalization results: Time (\method{}-SC removed)}
\label{fig:cflp_time_2}
\end{center}
\end{figure}
\clearpage
\subsection{\method{} Single-Cut Model}
Figure~\ref{fig:nn_sc} illustrates the single-cut model, which is then embedded to do a single predict across the scenarios.
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[scale=0.45]{figs/nsp_nn_single_cut_model.drawio.png}}
\caption{\method{}-SC Model Illustration}
\label{fig:nn_sc}
\end{center}
\end{figure}
\end{comment}
\begin{comment}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|lll|lll|lll}
\toprule
& \multicolumn{3}{c|}{GAP} & \multicolumn{3}{c|}{OBJECTIVE} & \multicolumn{3}{c}{TIME TO BEST SOLUTION}\\
\midrule
{Problem} & {EF-SINGLE} & {EF-MULTI} & {MULTI-SINGLE} & {SINGLE} & {EF} & {MULTI} & {SINGLE} & {MULTI} & {EF} \\
\midrule
cflp\_10\_10\_100 & 2.58 & 0.04 & 2.53 & 7,174.57 & 6,994.77 & 6,997.87 & 0.44 & 0.50 & 185.23 \\
cflp\_10\_10\_500 & 2.37 & 0.29 & 2.08 & 7,171.79 & 7,005.47 & 7,025.32 & 0.44 & 9.05 & 2,591.77 \\
cflp\_10\_10\_1000 & 0.55 & -1.39 & 1.97 & 7,154.60 & 7,115.87 & 7,016.73 & 0.43 & 49.57 & 2,316.05 \\
cflp\_25\_25\_100 & -0.13 & -0.76 & 0.65 & 11,850.07 & 11,865.47 & 11,773.01 & 0.07 & 0.26 & 2,936.42 \\
cflp\_25\_25\_500 & -3.64 & -3.64 & 0.00 & 11,726.34 & 12,174.09 & 11,726.34 & 0.07 & 3.62 & 3,074.90 \\
cflp\_25\_25\_1000 & -5.96 & -5.96 & -0.00 & 11,709.90 & 12,459.91 & 11,709.90 & 0.08 & 8.55 & 3,407.92 \\
cflp\_50\_50\_100 & -0.64 & -1.63 & 1.01 & 25,231.40 & 25,394.55 & 24,977.51 & 0.05 & 6.78 & 2,973.66 \\
cflp\_50\_50\_500 & -11.80 & -12.49 & 0.79 & 25,159.77 & 28,620.50 & 24,961.79 & 0.05 & 111.70 & 2,625.14 \\
cflp\_50\_50\_1000 & -81.25 & -81.40 & 0.79 & 25,180.41 & 134,300.00 & 24,983.07 & 0.05 & 383.24 & 0.45 \\
\bottomrule
\end{tabular}}\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|lll|lll|lll}
\toprule
& \multicolumn{3}{c|}{GAP} & \multicolumn{3}{c|}{OBJECTIVE} & \multicolumn{3}{c}{TIME TO BEST SOLUTION}\\
\midrule
{Problem} & {EF-SINGLE} & {EF-MULTI} & {MULTI-SINGLE} & {SINGLE} & {EF} & {MULTI} & {SINGLE} & {MULTI} & {EF} \\
\midrule
sslp\_10\_50\_50 & 1.84 & 0.00 & 1.84 & -348.42 & -354.96 & -354.96 & 0.02 & 0.14 & 192.09 \\
sslp\_10\_50\_100 & 2.16 & -0.00 & 2.16 & -338.39 & -345.86 & -345.86 & 0.03 & 0.36 & 114.38 \\
sslp\_10\_50\_500 & 1.35 & -0.79 & 2.12 & -342.14 & -346.82 & -349.54 & 0.03 & 22.45 & 2,504.47 \\
sslp\_10\_50\_1000 & -97.66 & -101.99 & 2.14 & -342.56 & -173.34 & -350.07 & 0.03 & 60.61 & 567.11 \\
sslp\_10\_50\_2000 & -98.41 & -102.69 & 2.11 & -342.67 & -172.73 & -350.07 & 0.02 & 964.83 & 1,076.98 \\
sslp\_15\_45\_5 & 2.27 & 42.05 & -69.37 & -249.51 & -255.55 & -147.36 & 0.02 & 0.12 & 1.58 \\
sslp\_15\_45\_10 & 1.68 & 43.27 & -73.65 & -252.89 & -257.36 & -145.74 & 0.02 & 0.22 & 10.12 \\
sslp\_15\_45\_15 & 1.18 & 43.01 & -73.64 & -254.58 & -257.68 & -146.73 & 0.02 & 0.36 & 96.10 \\
sslp\_5\_25\_50 & 0.02 & 1.88 & -1.91 & -125.33 & -125.36 & -123.00 & 0.04 & 0.39 & 1.75 \\
sslp\_5\_25\_100 & 0.02 & 1.60 & -1.61 & -120.91 & -120.94 & -119.03 & 0.04 & 1.71 & 6.80 \\
\bottomrule
\end{tabular}}\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|lll|lll|lll}
\toprule
& \multicolumn{3}{c|}{GAP} & \multicolumn{3}{c|}{OBJECTIVE} & \multicolumn{3}{c}{TIME TO BEST SOLUTION}\\
\midrule
{} & {EF-SINGLE} & {EF-MULTI} & {MULTI-SINGLE} & {SINGLE} & {EF} & {MULTI} & {SINGLE} & {MULTI} & {EF} \\
\midrule
ip\_c\_b\_I\_4 & 9.54 & 3.01 & 6.74 & -51.56 & -57.00 & -55.29 & 0.10 & 0.05 & 0.01 \\
ip\_c\_b\_I\_9 & 7.54 & 2.00 & 5.65 & -54.86 & -59.33 & -58.15 & 0.04 & 0.15 & 0.01 \\
ip\_c\_b\_I\_36 & 2.72 & 4.96 & -2.35 & -59.55 & -61.22 & -58.19 & 0.03 & 8.77 & 0.02 \\
ip\_c\_b\_I\_121 & 1.37 & 2.42 & -1.08 & -61.44 & -62.29 & -60.78 & 0.04 & 84.34 & 0.39 \\
ip\_c\_b\_I\_441 & 2.80 & 2.43 & 0.38 & -59.60 & -61.32 & -59.83 & 0.04 & 3,882.41 & 63.98 \\
ip\_c\_b\_I\_1681 & 1.36 & - & - & -59.81 & -60.63 & - & 0.04 & - & 23.49 \\
ip\_c\_b\_I\_10000 & -1.48 & - & - & -59.85 & -58.98 & - & 0.04 & - & 1,088.03 \\
\midrule
ip\_c\_i\_I\_4 & 12.83 & 0.00 & 12.83 & -55.35 & -63.50 & -63.50 & 0.07 & 0.01 & 0.01 \\
ip\_c\_i\_I\_9 & 7.40 & 2.64 & 4.89 & -61.63 & -66.56 & -64.80 & 0.03 & 0.04 & 0.02 \\
ip\_c\_i\_I\_36 & 5.48 & 5.17 & 0.33 & -66.03 & -69.86 & -66.25 & 0.04 & 0.68 & 0.06 \\
ip\_c\_i\_I\_121 & 5.30 & 4.49 & 0.85 & -67.35 & -71.12 & -67.92 & 0.04 & 47.48 & 0.28 \\
ip\_c\_i\_I\_441 & 3.00 & 0.68 & 2.33 & -67.55 & -69.64 & -69.16 & 0.02 & 58.83 & 17.32 \\
ip\_c\_i\_I\_1681 & 1.31 & 3.08 & -1.82 & -67.95 & -68.85 & -66.73 & 0.02 & 1,258.62 & 10.31 \\
ip\_c\_i\_I\_10000 & -1.35 & - & - & -67.94 & -67.04 & - & 0.03 & - & 229.86 \\
\midrule
ip\_c\_b\_T\_4 & 8.81 & 9.50 & -0.76 & -51.75 & -56.75 & -51.36 & 0.02 & 0.02 & 0.02 \\
ip\_c\_b\_T\_9 & 5.04 & 5.04 & 0.00 & -56.56 & -59.56 & -56.56 & 0.01 & 0.17 & 0.01 \\
ip\_c\_b\_T\_36 & 1.61 & 1.61 & 0.00 & -59.31 & -60.28 & -59.31 & 0.01 & 6.03 & 1.45 \\
ip\_c\_b\_T\_121 & 1.77 & 1.77 & 0.00 & -59.93 & -61.01 & -59.93 & 0.01 & 34.65 & 0.17 \\
ip\_c\_b\_T\_441 & 2.13 & 5.50 & -3.57 & -60.14 & -61.44 & -58.07 & 0.02 & 1,816.24 & 11.54 \\
ip\_c\_b\_T\_1681 & -0.71 & - & - & -60.47 & -60.04 & - & 0.01 & - & 37.20 \\
ip\_c\_b\_T\_10000 & -2.72 & - & - & -60.53 & -58.93 & - & 0.01 & - & 280.58 \\
\midrule
ip\_c\_i\_T\_4 & 13.78 & 12.16 & 1.85 & -54.75 & -63.50 & -55.78 & 0.02 & 0.03 & 0.02 \\
ip\_c\_i\_T\_9 & 9.12 & 0.81 & 8.38 & -59.78 & -65.78 & -65.25 & 0.02 & 0.07 & 0.02 \\
ip\_c\_i\_T\_36 & 4.97 & 3.44 & 1.58 & -63.78 & -67.11 & -64.80 & 0.02 & 1.47 & 1.39 \\
ip\_c\_i\_T\_121 & 4.01 & 4.99 & -1.02 & -65.03 & -67.75 & -64.37 & 0.02 & 17.91 & 4.18 \\
ip\_c\_i\_T\_441 & 3.15 & 3.15 & 0.00 & -65.12 & -67.24 & -65.12 & 0.02 & 1,227.96 & 997.57 \\
ip\_c\_i\_T\_1681 & -0.34 & 0.11 & -0.45 & -65.63 & -65.41 & -65.34 & 0.02 & 869.26 & 492.15 \\
ip\_c\_i\_T\_10000 & -1.60 & - & - & -65.66 & -64.63 & - & 0.02 & - & 1,305.63 \\
\bottomrule
\end{tabular}}\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|lll|lll|lll}
\toprule
& \multicolumn{3}{c|}{GAP} & \multicolumn{3}{c|}{OBJECTIVE} & \multicolumn{3}{c}{TIME TO BEST SOLUTION}\\
\midrule
{} & {EF-SINGLE} & {EF-MULTI} & {MULTI-SINGLE} & {SINGLE} & {EF} & {MULTI} & {SINGLE} & {MULTI} & {EF} \\
\midrule
pp\_125 & 0.03 & -34.49 & 52.68 & 264.30 & 264.21 & 173.10 & 0.76 & 134.13 & 21.83 \\
pp\_216 & - & - & 51.93 & 200.29 & - & 131.83 & 0.73 & 229.26 & - \\
pp\_343 & - & - & 67.93 & 206.38 & - & 122.90 & 0.76 & 527.65 & - \\
pp\_512 & - & - & 51.60 & 204.41 & - & 134.83 & 0.81 & 1,065.34 & - \\
pp\_729 & - & - & 59.03 & 219.42 & - & 137.97 & 0.80 & 3,278.38 & - \\
pp\_1000 & - & - & 60.33 & 202.50 & - & 126.30 & 0.75 & 2,477.32 & - \\
\bottomrule
\end{tabular}}\end{table*}
\begin{table*}[t]\centering\resizebox{\textwidth}{!}{
\begin{tabular}{l|lll|lll|lll}
\toprule
Problem & \multicolumn{3}{c}{Data Generation MULTI} & \multicolumn{3}{c}{Data Generation SINGLE} & \multicolumn{3}{c}{Training Times} \\
\cmidrule{2-10}
{} & {\# samples} & {Time per sample} & {Total time} & {\# samples} & {Time per sample} & {Total time} & {LR} & {SINGLE} & {MULTI} \\
\midrule
CFLP\_10\_10 & 10,000 & 0.00 & 13.59 & 5,000 & 0.36 & 1,823.07 & 0.53 & 127.12 & 667.28 \\
CFLP\_25\_25 & 10,000 & 0.01 & 112.83 & 5,000 & 0.83 & 4,148.83 & 0.28 & 840.07 & 2,205.23 \\
CFLP\_50\_50 & 10,000 & 0.01 & 135.57 & 5,000 & 1.54 & 7,697.91 & 0.75 & 128.11 & 463.71 \\
\midrule
SSLP\_10\_50 & 10,000 & 0.00 & 22.95 & 5,000 & 0.19 & 942.10 & 0.63 & 116.17 & 708.86 \\
SSLP\_15\_45 & 10,000 & 0.00 & 16.35 & 5,000 & 0.19 & 929.27 & 0.57 & 229.42 & 1,377.21 \\
SSLP\_5\_25 & 10,000 & 0.00 & 13.18 & 5,000 & 0.17 & 860.74 & 0.05 & 147.44 & 734.02 \\
\midrule
INVP\_B\_E & 10,000 & 0.00 & 4.17 & 5,000 & 1.79 & 8,951.27 & 0.02 & 1,000.14 & 344.87 \\
INVP\_B\_H & 10,000 & 0.00 & 4.22 & 5,000 & 1.84 & 9,207.90 & 0.02 & 607.49 & 1,214.54 \\
INVP\_I\_E & 10,000 & 0.00 & 4.34 & 5,000 & 1.75 & 8,759.83 & 0.02 & 680.93 & 2,115.25 \\
INVP\_I\_H & 10,000 & 0.00 & 3.32 & 5,000 & 1.79 & 8,944.65 & 0.02 & 174.26 & 393.82 \\
\midrule
PP & 10,000 & 0.00 & 14.86 & 5,000 & 0.24 & 1,202.11 & 0.05 & 367.25 & 576.08 \\
\bottomrule
\end{tabular}}
\caption{Data generation and training times. All in seconds.}
\label{tab:times}
\end{table*}
\end{comment} | {"config": "arxiv", "file": "2205.12006/sections/08_Appendix.tex"} |
TITLE: Corners are cut off from an equilateral triangle to produce a regular hexagon. Are the sides trisected?
QUESTION [8 upvotes]: The corners are cut off from an equilateral triangle to produce a regular hexagon. Are the sides of the triangle trisected?
In the actual question, the first line was exactly the same. It then asked for the ratio of the areas of the resulting hexagon and the original triangle. In the several solutions of this problem which I found on the internet, they started by taking the sides of the triangle as trisected by this operation, so that the side length of the hexagon would equal one-third of the side length of the triangle.
I have seen some variations of this problem where they had explicitly mentioned that the side was trisected and then hexagon was formed.
On stackexchange, there are problems in which they started by trisecting the sides (they mentioned it in the title) and getting a regular polygon.
My question is: if we cut off the corners of the equilateral triangle to form a regular hexagon, does this trisect the sides of the triangle or not?
REPLY [2 votes]: In order to handle this question, be it about the size of the segments or the area, don't think of it as "cutting the corners off the equilateral triangle" but as "folding the corners of the equilateral triangle towards the centre": this produces exactly the same result (the regular hexagon), but the questions about segment length and area become trivial. | {"set_name": "stack_exchange", "score": 8, "question_id": 3726183}
/-
Copyright (c) 2018 Johannes Hölzl. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johannes Hölzl
-/
import data.finsupp.order
/-!
# Equivalence between `multiset` and `ℕ`-valued finitely supported functions
This defines `finsupp.to_multiset` the equivalence between `α →₀ ℕ` and `multiset α`, along
with `multiset.to_finsupp` the reverse equivalence and `finsupp.order_iso_multiset` the equivalence
promoted to an order isomorphism.
-/
open finset
open_locale big_operators classical
noncomputable theory
variables {α β ι : Type*}
namespace finsupp
/-- Given `f : α →₀ ℕ`, `f.to_multiset` is the multiset with multiplicities given by the values of
`f` on the elements of `α`. We define this function as an `add_equiv`. -/
def to_multiset : (α →₀ ℕ) ≃+ multiset α :=
{ to_fun := λ f, f.sum (λa n, n • {a}),
inv_fun := λ s, ⟨s.to_finset, λ a, s.count a, λ a, by simp⟩,
left_inv := λ f, ext $ λ a, by
{ simp only [sum, multiset.count_sum', multiset.count_singleton, mul_boole, coe_mk,
mem_support_iff, multiset.count_nsmul, finset.sum_ite_eq, ite_not, ite_eq_right_iff],
exact eq.symm },
right_inv := λ s, by simp only [sum, coe_mk, multiset.to_finset_sum_count_nsmul_eq],
map_add' := λ f g, sum_add_index' (λ a, zero_nsmul _) (λ a, add_nsmul _) }
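-- Illustration (not part of the original file): if `f` sends `a ↦ 2` and `b ↦ 1`
-- (and is zero elsewhere), then `f.to_multiset = {a, a, b}`, since each point `a`
-- contributes `f a • {a}` to the sum.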
lemma to_multiset_zero : (0 : α →₀ ℕ).to_multiset = 0 := rfl
lemma to_multiset_add (m n : α →₀ ℕ) : (m + n).to_multiset = m.to_multiset + n.to_multiset :=
to_multiset.map_add m n
lemma to_multiset_apply (f : α →₀ ℕ) : f.to_multiset = f.sum (λ a n, n • {a}) := rfl
@[simp]
lemma to_multiset_symm_apply [decidable_eq α] (s : multiset α) (x : α) :
finsupp.to_multiset.symm s x = s.count x :=
by convert rfl
@[simp] lemma to_multiset_single (a : α) (n : ℕ) : to_multiset (single a n) = n • {a} :=
by rw [to_multiset_apply, sum_single_index]; apply zero_nsmul
lemma to_multiset_sum {f : ι → α →₀ ℕ} (s : finset ι) :
finsupp.to_multiset (∑ i in s, f i) = ∑ i in s, finsupp.to_multiset (f i) :=
add_equiv.map_sum _ _ _
lemma to_multiset_sum_single (s : finset ι) (n : ℕ) :
finsupp.to_multiset (∑ i in s, single i n) = n • s.val :=
by simp_rw [to_multiset_sum, finsupp.to_multiset_single, sum_nsmul, sum_multiset_singleton]
lemma card_to_multiset (f : α →₀ ℕ) : f.to_multiset.card = f.sum (λ a, id) :=
by simp [to_multiset_apply, add_monoid_hom.map_finsupp_sum, function.id_def]
lemma to_multiset_map (f : α →₀ ℕ) (g : α → β) :
f.to_multiset.map g = (f.map_domain g).to_multiset :=
begin
refine f.induction _ _,
{ rw [to_multiset_zero, multiset.map_zero, map_domain_zero, to_multiset_zero] },
{ assume a n f _ _ ih,
rw [to_multiset_add, multiset.map_add, ih, map_domain_add, map_domain_single,
to_multiset_single, to_multiset_add, to_multiset_single,
← multiset.coe_map_add_monoid_hom, (multiset.map_add_monoid_hom g).map_nsmul],
refl }
end
@[simp] lemma prod_to_multiset [comm_monoid α] (f : α →₀ ℕ) :
f.to_multiset.prod = f.prod (λ a n, a ^ n) :=
begin
refine f.induction _ _,
{ rw [to_multiset_zero, multiset.prod_zero, finsupp.prod_zero_index] },
{ assume a n f _ _ ih,
rw [to_multiset_add, multiset.prod_add, ih, to_multiset_single, multiset.prod_nsmul,
finsupp.prod_add_index' pow_zero pow_add, finsupp.prod_single_index, multiset.prod_singleton],
{ exact pow_zero a } }
end
@[simp] lemma to_finset_to_multiset [decidable_eq α] (f : α →₀ ℕ) :
f.to_multiset.to_finset = f.support :=
begin
refine f.induction _ _,
{ rw [to_multiset_zero, multiset.to_finset_zero, support_zero] },
{ assume a n f ha hn ih,
rw [to_multiset_add, multiset.to_finset_add, ih, to_multiset_single, support_add_eq,
support_single_ne_zero _ hn, multiset.to_finset_nsmul _ _ hn, multiset.to_finset_singleton],
refine disjoint.mono_left support_single_subset _,
rwa [finset.disjoint_singleton_left] }
end
@[simp] lemma count_to_multiset [decidable_eq α] (f : α →₀ ℕ) (a : α) :
f.to_multiset.count a = f a :=
calc f.to_multiset.count a = f.sum (λx n, (n • {x} : multiset α).count a) :
(multiset.count_add_monoid_hom a).map_sum _ f.support
... = f.sum (λx n, n * ({x} : multiset α).count a) : by simp only [multiset.count_nsmul]
... = f a * ({a} : multiset α).count a : sum_eq_single _
(λ a' _ H, by simp only [multiset.count_singleton, if_false, H.symm, mul_zero])
(λ H, by simp only [not_mem_support_iff.1 H, zero_mul])
... = f a : by rw [multiset.count_singleton_self, mul_one]
@[simp] lemma mem_to_multiset (f : α →₀ ℕ) (i : α) : i ∈ f.to_multiset ↔ i ∈ f.support :=
by rw [←multiset.count_ne_zero, finsupp.count_to_multiset, finsupp.mem_support_iff]
end finsupp
namespace multiset
/-- Given a multiset `s`, `s.to_finsupp` returns the finitely supported function on `ℕ` given by
the multiplicities of the elements of `s`. -/
def to_finsupp : multiset α ≃+ (α →₀ ℕ) := finsupp.to_multiset.symm
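-- Illustration (not part of the original file): `to_finsupp {a, a, b}` is the
-- finitely supported function sending `a ↦ 2` and `b ↦ 1`; `to_finsupp_apply`
-- below records the general statement in terms of `multiset.count`.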
@[simp] lemma to_finsupp_support [decidable_eq α] (s : multiset α) :
s.to_finsupp.support = s.to_finset :=
by convert rfl
@[simp] lemma to_finsupp_apply [decidable_eq α] (s : multiset α) (a : α) :
to_finsupp s a = s.count a :=
by convert rfl
lemma to_finsupp_zero : to_finsupp (0 : multiset α) = 0 := add_equiv.map_zero _
lemma to_finsupp_add (s t : multiset α) : to_finsupp (s + t) = to_finsupp s + to_finsupp t :=
to_finsupp.map_add s t
@[simp] lemma to_finsupp_singleton (a : α) : to_finsupp ({a} : multiset α) = finsupp.single a 1 :=
finsupp.to_multiset.symm_apply_eq.2 $ by simp
@[simp] lemma to_finsupp_to_multiset (s : multiset α) : s.to_finsupp.to_multiset = s :=
finsupp.to_multiset.apply_symm_apply s
lemma to_finsupp_eq_iff {s : multiset α} {f : α →₀ ℕ} : s.to_finsupp = f ↔ s = f.to_multiset :=
finsupp.to_multiset.symm_apply_eq
end multiset
@[simp] lemma finsupp.to_multiset_to_finsupp (f : α →₀ ℕ) : f.to_multiset.to_finsupp = f :=
finsupp.to_multiset.symm_apply_apply f
/-! ### As an order isomorphism -/
namespace finsupp
/-- `finsupp.to_multiset` as an order isomorphism. -/
def order_iso_multiset : (ι →₀ ℕ) ≃o multiset ι :=
{ to_equiv := to_multiset.to_equiv,
map_rel_iff' := λ f g, by simp [multiset.le_iff_count, le_def] }
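-- In other words (our gloss): `f ≤ g ↔ f.to_multiset ≤ g.to_multiset`, i.e. the
-- pointwise order on `ι →₀ ℕ` corresponds to the sub-multiset order.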
@[simp] lemma coe_order_iso_multiset : ⇑(@order_iso_multiset ι) = to_multiset := rfl
@[simp] lemma coe_order_iso_multiset_symm : ⇑(@order_iso_multiset ι).symm = multiset.to_finsupp :=
rfl
lemma to_multiset_strict_mono : strict_mono (@to_multiset ι) := (@order_iso_multiset ι).strict_mono
lemma sum_id_lt_of_lt (m n : ι →₀ ℕ) (h : m < n) : m.sum (λ _, id) < n.sum (λ _, id) :=
begin
rw [←card_to_multiset, ←card_to_multiset],
apply multiset.card_lt_of_lt,
exact to_multiset_strict_mono h
end
variable (ι)
/-- The order on `ι →₀ ℕ` is well-founded. -/
lemma lt_wf : well_founded (@has_lt.lt (ι →₀ ℕ) _) :=
subrelation.wf (sum_id_lt_of_lt) $ inv_image.wf _ nat.lt_wf
end finsupp
lemma multiset.to_finsupp_strict_mono : strict_mono (@multiset.to_finsupp ι) :=
(@finsupp.order_iso_multiset ι).symm.strict_mono
| {"subset_name": "curated", "file": "formal/lean/mathlib/data/finsupp/multiset.lean"} |
TITLE: What is dark matter composed of?
QUESTION [1 upvotes]: They say that about 80% of the matter in the universe is dark matter -- some unknown substance that has mass. But what is this matter expected or speculated to be composed of? For example might it exist as neutrinos or protons or atoms of hydrogen or something like it? Or might whole worlds exist out there made solely of dark matter -- worlds that might actually sustain life?
REPLY [3 votes]: The answer to the question "what is dark matter" is that nobody really knows. It is easier to say what it isn't.
There are actually two dark matter problems. One is that the amount of gravitating matter in the universe appears to be much larger (by a factor of roughly 30) than the amount of matter that we can actually see in the form of luminous stars and galaxies. The evidence for this includes the motions of stars and gas in galaxies, the motion of galaxies in clusters and the gravitational lensing of light by clusters of galaxies.
The second problem is that most of this dark matter (maybe 5/6 of it) must be in a form that is not like the stuff that makes up the luminous stars, galaxies and you and me. This is the so-called non-baryonic matter that does not interact (or only weakly interacts) with light and normal matter. The main pieces of evidence for this are: the ratios of lithium, helium and deuterium to hydrogen that were produced in the first few minutes after the big bang, which are still essentially imprinted on the bulk abundances seen in interstellar gas today (this is a bit of a simplification); and a careful analysis of the small irregularities in the cosmic microwave background, which suggests that in order to grow into the kind of structures we see in the universe today, there must be large quantities of gravitating matter that only interacts through gravity.
So whilst things like intergalactic stars, cold gas, planets, faint stars and lost golf balls(!) could contribute to solving the former problem, the bigger second problem could not be solved in this manner because these are all examples of baryonic matter.
There are many theories about what the non-baryonic dark matter might be. Neutrinos were once thought to be a candidate but we now know that the neutrino mass is too low for this to be significant. The favoured models now seem to be that there are some sort of massive, weakly interacting particles (WIMPS) or massive axions. There are a number of detection experiments endeavouring to find strong evidence for such particles, but without any confirmed detection so far.
Because non-baryonic matter does not interact with light - there does not appear to be any obvious way that there could be "dark matter worlds" or "dark matter life". | {"set_name": "stack_exchange", "score": 1, "question_id": 166254} |
\begin{document}
\title[]
{A lower bound of the $L^2$ norm error estimate for the Adini element of the
biharmonic equation}
\author[J. Hu]{Jun Hu}
\address{LMAM and School of Mathematical Sciences,
Peking University, Beijing 100871, P. R. China}
\email{hujun@math.pku.edu.cn}
\author[Z. C. Shi]{Zhongci Shi}
\address{LSEC, ICMSEC, Academy of Mathematics and Systems
Science, Chinese Academy of Sciences, Beijing 100190, China.}
\email{shi@lsec.cc.ac.cn}
\date{\today}
\thanks{The research of the first author was supported
by the NSFC Project 11271035, and in part by the NSFC Key Project 11031006.}
\maketitle
\begin{abstract}
This paper is devoted to the $L^2$ norm error estimate of the
Adini element for the biharmonic equation. Surprisingly, a lower bound is
established which proves that the $ L^2$ norm convergence rate
can not be higher than that in the energy norm.
This proves the conjecture of [Lascaux and Lesaint,
Some nonconforming finite elements for the plate bending problem,
RAIRO Anal. Numer. 9 (1975), pp. 9--53.] that the convergence rates in both $L^2$ and $H^1$ norms
can not be higher than that in the energy norm for this element.
\end{abstract}
\section{Introduction}
For the numerical analysis of finite element methods for
fourth order elliptic problems, one unsolved fundamental problem is
the $L^2$ norm error estimates \cite{BrennerScott,CiaBook,ShiWang10}. In a recent paper \cite{HuShi12},
we analyzed several mostly popular lower order elements:
the Powell-Sabin $C^1-P_2$ macro element \cite{PowellSabin1977}, the nonconforming
Morley element \cite{Morley68,ShiWang10,WX06,WangXu12}, the $C^1-Q_2$ macro element \cite{HuHuangZhang2011},
the nonconforming rectangle Morley element \cite{WangShiXu07}, and the nonconforming incomplete
biquadratic element \cite{Shi86,WuMaoqing1983}. In particular, we proved that the
best $L^2$ norm error estimates for these elements were at most of second order
and could not be two order higher than that in the energy norm.
The Adini element \cite{BrennerScott,CiaBook,Lascaux85,ShiWang10} is one of the earliest finite elements, dating
back over 50 years. It is a nonconforming finite element for the biharmonic equation on rectangular meshes.
The shape function space contains the complete cubic space and two additional monomials on each rectangle.
In 1975, Lascaux and Lesaint analyzed this element and showed that the consistency error
was of second order for uniform meshes, the same as the approximation error, and thus obtained a second order convergence rate
\cite{Lascaux85}; see also \cite{LuoLin04,MaoChen2005} and \cite{HuHuang11,Yang2000}.
This in particular implies at least a second order $H^1$ norm convergence rate.
Lascaux and Lesaint also conjectured that it did not seem possible to improve this estimate; see \cite[Remark 4.5]{Lascaux85}. However, they did not provide a rigorous proof or justification for this remark.
The purpose of this paper is to analyze the $L^2$ norm error estimate for the Adini element \cite{BrennerScott,CiaBook,Lascaux85,ShiWang10}.
There are two main ingredients for the analysis. One is a refined property of the canonical interpolation operator,
which is proved by a new expansion method. The other is an identity for $(-f, e)_{L^2(\Omega)}$, where $f$ is the right-hand side function and $e$ is the error. Such an identity separates the dominant term from the other higher order terms, which is the key to
using the aforementioned refined property of the interpolation operator.
Based on these factors, a lower bound of the $L^2$ norm error estimate is surprisingly
established which proves that the best $L^2$ norm error estimate
is at most of order $\mathcal{O}(h^2)$. Thus, by the usual Poincare inequality, this
indicates that the best $H^1$ norm error estimate is also at most of order $\mathcal{O}(h^2)$.
This gives a rigorous proof of the conjecture from \cite{Lascaux85} that the convergence rates in both $L^2$ and $H^1$ norms
can not be higher than that in the energy norm.
The paper is organized as follows. In the following section, we
present the Adini element and define the canonical interpolation operator.
In Section 3, based on a refined property of the canonical interpolation operator and
an identity for $(-f, e)_{L^2(\Omega)}$, we prove the main result that the $L^2$ norm error estimate has a lower bound which indicates
that the convergence rates in both $L^2$ and $H^1$ norms are at most of order $\mathcal{O}(h^2)$.
In Section 4, we analyze the refined property of the canonical interpolation operator. In Section 5,
we establish the identity for $(-f, e)_{L^2(\Omega)}$. In Section 6, we end this paper with conclusions and some comments.
\section{The Adini element method}
We consider the model fourth order elliptic problem: Given $f\in
L^2(\Omega)$ find $w\in W:=H_0^2(\Om)$, such that
\begin{equation}\label{cont}
\begin{split}
a(w, v):=(\na^2 w, \na^2 v)_{L^2(\Om)} =(f,v)_{L^2(\Om)}\text{ for
any } v\in W.
\end{split}
\end{equation}
where $\nabla^2 w$ denotes the Hessian matrix of the function $w$.
To consider the discretization of \eqref{cont} by the Adini element, let $\mathcal{T}_h$ be a uniform regular rectangular
triangulation of the domain $\Om\subset\R^2$ in two dimensions.
Given $K\in \cT_h$, let $(x_c,y_c)$ be the center of $K$, the
horizontal length $2h_{x,K}$, the vertical length
$2h_{y,K}$, which define the meshsize $h:=\max\limits_{K\in\cT_h}\max(h_{x,K},h_{y,K})$ and
affine mapping:
\begin{equation}\label{mapping}
\xi:=\frac{x-x_c}{h_{x, K}},\quad \eta:=\frac{y-y_c}{h_{y, K}} \text{ for any
} (x,y)\in K.
\end{equation}
Since $\cT_h$ is a uniform mesh, we define $h_x:=h_{x,K}$ and $h_y:=h_{y,K}$ for any $K$.
On element $K$, the shape function space of the Adini element reads
\cite{BrennerScott,CiaBook,Lascaux85,ShiWang10}
\begin{equation}\label{nonconformingnDAd} Q_{Ad}(K):
=P_3(K)+\sspan\{ x^3y, y^3x\}\,,
\end{equation}
here and throughout this paper, $P_\ell(K)$ denotes the space of
polynomials of degree $\leq \ell$ over $K$.
The Adini element space $W_h$ is then defined
by
\begin{equation}\label{AdininD}
W_h:=\begin{array}[t]{l}\big\{v\in L^2(\Om):
v|_{K} \in Q_{Ad}(K) \text{ for each }K\in \mathcal{T}_h,
v \text{ and } \na v \text{ are continuous }\\[0.5ex]
\text{ at the internal nodes, and vanish at the boundary nodes on
}\pa\Om
\big\}\,.
\end{array}\nn
\end{equation}
The finite element approximation of Problem \eqref{cont} reads:
Find $w_h\in W_h$, such that
\begin{equation}\label{disc}
\begin{split}
a_h(w_h, v_h):=(\na_h^2 w_h, \na_h^2 v_h)_{L^2(\Om)}
=(f,v_h)_{L^2(\Om)}\text{ for any } v_h\in W_h\,,
\end{split}
\end{equation}
where the operator $\nabla_h^2$ is the discrete counterpart of
$\nabla^2$, which is defined element by element since the discrete
space $W_h$ is nonconforming.
Given $K\in\cT_h$, define the canonical interpolation operator
$\Pi_K : H^3(K)\rightarrow Q_{Ad}(K)$ by, for any $v\in H^3(K)$,
\begin{equation}
(\Pi_K v)(P)=v(P)\text{ and }\na(\Pi_K v)(P)=\na
v(P)
\end{equation}
for any vertex $P$ of $K$. Note that $\dim Q_{Ad}(K)=12$, which matches the twelve interpolation conditions (three per vertex), so that $\Pi_K$ is well defined. The interpolation $\Pi_K$ has the following estimates \cite{BrennerScott,CiaBook,Lascaux85,ShiWang10}:
\begin{equation}\label{AdinterpolationEstimate}
|v-\Pi _Kv|_{H^{\ell}(K)}\leq C h^{4-\ell}|v|_{H^4(K)}, \ell=1,2,3,4\,,
\end{equation}
provided that $v\in H^4(K)$. Herein and throughout this paper, $C$ denotes a generic positive constant which is independent of the meshsize and
may be different at different places. Then the global version $\Pi_h $ of the interpolation
operator $\Pi_K $ is defined as
\begin{equation}\label{interpolation}
\Pi_h |_K=\Pi_K \text{ for any } K\in \cT_h.
\end{equation}
\section{A lower bound of the $L^2$ norm error estimate}
This section proves the main result of this paper, namely, a lower bound of the $L^2$ norm error estimate.
The main ingredients are a lower bound of $a_h(w-\Pi_hw,\Pi_hw)$ in Lemma \ref{Lemma2.2}
and an identity for $(-f, w-w_h)_{L^2(\Omega)}$ in Lemma \ref{Lemma3.1}.
For the analysis, we list two results from \cite{Lascaux85} and \cite{LuoLin04,MaoChen2005}.
\begin{lemma}\label{Lemma3.2} Let $w\in H_0^2(\Omega)\cap H^4(\Omega)$ be the solution of problem \eqref{cont}. It
holds that
\begin{equation}
|a_h(w,v_h)-(f,v_h)|\leq Ch^2|w|_{H^4(\Omega)}\|\nabla_h^2
v_h\|_{L^2(\Omega)}\text{ for any }v_h\in W_h.
\end{equation}
\end{lemma}
\begin{lemma}\label{Lemma3.3} Let $w$ and $w_h$ be solutions of problems \eqref{cont}
and \eqref{disc}, respectively. Suppose that $w\in H_0^2(\Omega)\cap
H^4(\Omega)$. Then,
\begin{equation}
\|\nabla_h^2(w-w_h)\|_{L^2(\Omega)}\leq Ch^2|w|_{H^4(\Omega)}.
\end{equation}
\end{lemma}
\begin{theorem}\label{main} Let $w\in H_0^2(\Omega)\cap H^4(\Omega)$ and $w_h$ be solutions of problems \eqref{cont}
and \eqref{disc}, respectively. Then, there exists a positive constant $\alpha$
which is independent of the meshsize such that
\begin{equation}
\alpha h^2\leq \|w-w_h\|_{L^2(\Omega)},
\end{equation}
provided that $\|f\|_{L^2(\Omega)}\not=0$ and that the meshsize is small enough.
\end{theorem}
\begin{proof} The main ingredients for the proof are Lemma \ref{Lemma3.1} and Lemma \ref{Lemma2.2}.
Indeed, it follows from Lemma \ref{Lemma3.1} that
\begin{equation}\label{eq3.0a}
\begin{split}
&(-f, w-w_h)_{L^2(\Omega)}\\
&=a_h(w,\Pi_hw-w_h)-(f, \Pi_hw-w_h)_{L^2(\Omega)}\\
&\quad +a_h(w-\Pi_hw,w-\Pi_hw)+a_h(w-\Pi_hw,w_h-\Pi_hw)\\
&\quad +2(f, \Pi_hw-w)_{L^2(\Omega)}+2a_h(w-\Pi_hw,\Pi_hw).
\end{split}
\end{equation}
The first two terms on the right-hand side of \eqref{eq3.0a} can be
bounded by Lemmas \ref{Lemma3.2}-\ref{Lemma3.3}, and the estimates of
\eqref{AdinterpolationEstimate}, which leads to
\begin{equation*}
\begin{split}
&|a_h(w,\Pi_hw-w_h)-(f, \Pi_hw-w_h)_{L^2(\Omega)}|\\
& \leq
Ch^2|w|_{H^4(\Omega)}\|\nabla_h^2(\Pi_hw-w_h)\|_{L^2(\Omega)}\\
&\leq Ch^2 \big(
\|\nabla_h^2(\Pi_hw-w)\|_{L^2(\Omega)}+\|\nabla_h^2(w-w_h)\|_{L^2(\Omega)}\big)\\
&\leq Ch^4 |w|_{H^4(\Omega)}^2.
\end{split}
\end{equation*}
The estimates of the third and fifth terms on the right-hand side of
\eqref{eq3.0a} follow immediately from
\eqref{AdinterpolationEstimate}, which gives
\begin{equation*}
\begin{split}
&|a_h(w-\Pi_hw,w-\Pi_hw)+2(f, \Pi_hw-w)_{L^2(\Omega)}|\\
& \leq
Ch^4\big(|w|_{H^4(\Omega)}+\|f\|_{L^2(\Omega)}\big)|w|_{H^4(\Omega)}.
\end{split}
\end{equation*}
From the Cauchy-Schwarz inequality, the triangle inequality, Lemma
\ref{Lemma3.3}, and \eqref{AdinterpolationEstimate} it follows that
\begin{equation*}
|a_h(w-\Pi_hw,w_h-\Pi_hw)|\leq Ch^4|w|_{H^4(\Omega)}^2.
\end{equation*}
The last term on the right-hand side of \eqref{eq3.0a} has already
been analyzed in Lemma \ref{Lemma2.2}, which reads
\begin{equation*}
\beta h^2 \leq (\nabla_h^2(w-\Pi_hw),
\nabla_h^2\Pi_hw)_{L^2(\Omega)},
\end{equation*}
for some positive constant $\beta$. A combination of these
estimates states
\begin{equation*}
\delta h^2\leq (-f, w-w_h)_{L^2(\Omega)},
\end{equation*}
for some positive constant $\delta$ which is independent of the
meshsize provided that the meshsize is small enough. This plus the
definition of the $L^2$ norm of $w-w_h$ proves
\begin{equation*}
\begin{split}
\|w-w_h\|_{L^2(\Omega)}&=\sup\limits_{0\not=d\in
L^2(\Omega)}\frac{(d, w-w_h)_{L^2(\Omega)}}{\|d\|_{L^2(\Omega)}}\\
&\geq \frac{(-f,
w-w_h)}{\|-f\|_{L^2(\Omega)}}\geq \delta/\|f\|_{L^2(\Omega)}h^2.
\end{split}
\end{equation*}
Setting $\alpha=\delta/\|f\|_{L^2(\Omega)}$ completes the proof.
\end{proof}
\begin{remark} By the Poincare inequality, it follows that
\begin{equation*}
\alpha h^2 \leq \|\nabla(w-w_h)\|_{L^2(\Omega)}.
\end{equation*}
\end{remark}
\section{A refined property of the interpolation operator $\Pi_h$}
This section establishes a lower bound of $a_h(w-\Pi_hw,\Pi_hw)$. To this end,
given any element $K$, we follow \cite{HuHuangLin10,HuShi12} to define $P_Kv\in P_4(K)$ by
\begin{equation}\label{QuasiInterpolation}
\int_{K}\na^{\ell}P_Kv dxdy=\int_K \na^{\ell}v dxdy, \ell=0, 1, 2, 3, 4,
\end{equation}
for any $v\in H^4(K)$. Here and throughout this paper, $\na^{\ell}v$ denotes the $\ell$-th order tensor of all
$\ell$-th order derivatives of $v$, for instance, $\ell=1$ the
gradient, and $\ell=2$ the Hessian matrix, and that $\na_h^{\ell}$ are the
piecewise counterparts of $\na^{\ell}$ defined element by element. Note that the operator $P_K$ is
well-posed. It follows from the definition of $P_K$ in
\eqref{QuasiInterpolation} that
\begin{equation}\label{commuting}
\na^4 P_K v=\Pi_{0,K}\na^4v
\end{equation}
with $\Pi_{0,K}$ the $L^2$ constant projection operator over $K$.
Then the global version $\Pi_0 $ of the interpolation operator
$\Pi_{0,K} $ is defined as
\begin{equation}
\Pi_0 |_K=\Pi_{0,K} \text{ for any } K\in \cT_h.
\end{equation}
\begin{lemma}\label{Lemma2.1} For any $u\in P_4(K)$ and $v\in Q_{Ad}(K)$, there holds that
\begin{equation}\label{expansion}
\begin{split}
(\nabla^2(u-\Pi_Ku), \nabla^2 v)_{L^2(K)}&=
-\frac{h_{y, K}^2}{3}\int_K\frac{\pa^4u}{\pa x^2\pa y^2}\frac{\pa^2 v}{\pa x^2}dxdy\\
&\quad -\frac{h_{x, K}^2}{3}\int_K\frac{\pa^4u}{\pa x^2\pa
y^2}\frac{\pa^2 v}{\pa y^2}dxdy.
\end{split}
\end{equation}
\end{lemma}
\begin{proof} Let $\xi$ and $\eta$ be defined as in \eqref{mapping}.
It follows from the definition of $Q_{Ad}(K)$ that
\begin{equation}\label{eq2.10}
\begin{split}
&\frac{\pa^2 v}{\pa x^2}=a_0+a_1\xi+a_2\eta+a_3\xi\eta,\\
&\frac{\pa^2 v}{\pa y^2}=b_0+b_1\xi+b_2\eta+b_3\xi\eta,\\
&\frac{\pa^2 v}{\pa x\pa
y}=c_0+c_1\xi+c_2\eta+c_3\xi^2+c_4\eta^2,
\end{split}
\end{equation}
for some interpolation parameters $a_i$, $b_i$, $i=0, \cdots, 3$, and $c_i$, $i=0,\cdots, 4$.
Since $u\in P_4(K)$, we have
$$
u=u_1+\frac{h_{x,K}^4}{4!}\frac{\pa^4 u}{\pa x^4}\xi^4
+\frac{h_{y,K}^4}{4!}\frac{\pa^4 u}{\pa y^4}\eta^4
+\frac{h_{x,K}^2h_{y,K}^2}{4}\frac{\pa^4 u}{\pa x^2\pa
y^2}\xi^2\eta^2,
$$
where $u_1\in Q_{Ad}(K)$. Note that $\Pi_K u_1=u_1$, and
$$
\Pi_K\xi^4=2\xi^2-1, \Pi_K\eta^4=2\eta^2-1, \text{ and } \Pi_K\xi^2\eta^2=\xi^2+\eta^2-1.
$$
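Indeed, these identities can be checked directly from the interpolation conditions: at the four vertices $\xi,\eta=\pm1$ one has, for instance, $\xi^4=1=2\xi^2-1$ and $\frac{\pa}{\pa\xi}\xi^4=4\xi^3=4\xi=\frac{\pa}{\pa\xi}(2\xi^2-1)$, while the $\eta$-derivatives of both sides vanish; the other two cases are analogous.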
This implies
\begin{equation*}
\begin{split}
u-\Pi_Ku&=\frac{h_{x,K}^4}{4!}\frac{\pa^4 u}{\pa x^4}\big(\xi^2-1\big)^2
+\frac{h_{y,K}^4}{4!}\frac{\pa^4 u}{\pa y^4}\big(\eta^2-1\big)^2\\
&\quad +\frac{h_{x,K}^2h_{y,K}^2}{4}\frac{\pa^4 u}{\pa x^2\pa
y^2}\big(\xi^2-1\big) \big(\eta^2-1\big).
\end{split}
\end{equation*}
Therefore
\begin{equation}\label{eq2.11}
\begin{split}
&\frac{\pa^2 (u-\Pi_Ku)}{\pa x^2} =\frac{h_{x,K}^2}{4!}\frac{\pa^4
u}{\pa x^4}\big(12\xi^2-4\big)+\frac{h_{y,K}^2}{2}\frac{\pa^4 u}{\pa
x^2\pa
y^2}\big(\eta^2-1\big),\\
&\frac{\pa^2 (u-\Pi_Ku)}{\pa y^2} =\frac{h_{y,K}^2}{4!}\frac{\pa^4
u}{\pa y^4}\big(12\eta^2-4\big)+\frac{h_{x,K}^2}{2}\frac{\pa^4 u}{\pa
x^2\pa y^2}\big(\xi^2-1\big),\\
&\frac{\pa^2 (u-\Pi_Ku)}{\pa x\pa y} =h_{x,K}h_{y,K}\frac{\pa^4
u}{\pa x^2\pa y^2} \xi\eta.
\end{split}
\end{equation}
Since
$$
\int_K \big(12\xi^2-4\big)(a_0+a_1\xi+a_2\eta+a_3\xi\eta)dxdy=0,$$
and
$$
\int_K\big(\eta^2-1\big)(a_1\xi+a_2\eta+a_3\xi\eta)dxdy=0,
$$
a combination of \eqref{eq2.10} and \eqref{eq2.11} plus some
elementary calculation yields
\begin{equation}
\int_K\frac{\pa^2 (u-\Pi_Ku)}{\pa x^2}\frac{\pa^2 v}{\pa x^2}dxdy
=-\frac{h_{y,K}^2}{3}\int_K \frac{\pa^4 u}{\pa x^2\pa y^2}\frac{\pa^2
v}{\pa x^2}dxdy.
\end{equation}
A similar argument proves
\begin{equation}
\begin{split}
&\int_K\frac{\pa^2 (u-\Pi_Ku)}{\pa y^2}\frac{\pa^2 v}{\pa y^2}dxdy
=-\frac{h_{x,K}^2}{3}\int_K \frac{\pa^4 u}{\pa x^2\pa y^2}\frac{\pa^2
v}{\pa y^2}dxdy,\\
&\int_K\frac{\pa^2 (u-\Pi_Ku)}{\pa x\pa y }\frac{\pa^2 v}{\pa x \pa
y}dxdy =0,
\end{split}
\end{equation}
which completes the proof.
\end{proof}
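As a quick illustration of \eqref{expansion} (our own check, not needed in the sequel), take $K=[-1,1]^2$, so that $h_{x,K}=h_{y,K}=1$, and let $u=x^2y^2\in P_4(K)$ and $v=x^2/2\in Q_{Ad}(K)$. Then $u-\Pi_Ku=(x^2-1)(y^2-1)$, $\frac{\pa^4u}{\pa x^2\pa y^2}=4$, $\frac{\pa^2v}{\pa x^2}=1$ and $\frac{\pa^2v}{\pa y^2}=\frac{\pa^2v}{\pa x\pa y}=0$, so that both sides of \eqref{expansion} equal $-16/3$.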
The above lemma can be used to prove the following crucial lower bound.
\begin{lemma}\label{Lemma2.2} Let $w\in H_0^2(\Omega)\cap H^4(\Omega)$ be the solution of Problem \eqref{cont}.
Then,
\begin{equation}
\beta h^2 \leq (\nabla_h^2(w-\Pi_hw),
\nabla_h^2\Pi_hw)_{L^2(\Omega)},
\end{equation}
for some positive constant $\beta$ which is independent of the
meshsize $h$ provided that $\|f\|_{L^2(\Omega)}\not= 0$ and that the meshsize is small enough.
\end{lemma}
\begin{proof} Given $K\in\cT_h$, let the interpolation operator $P_K$
be defined as in \eqref{QuasiInterpolation}, which leads to the
following decomposition
\begin{equation}\label{eq2.14}
\begin{split}
&(\nabla_h^2(w-\Pi_hw),
\nabla_h^2\Pi_hw)_{L^2(\Omega)}\\
&=\sum\limits_{K\in\cT_h} (\nabla ^2(P_K w-\Pi_K P_Kw),
\nabla^2\Pi_Kw)_{L^2(K)}\\
&\quad +\sum\limits_{K\in\cT_h} (\nabla ^2(I-\Pi_K)(I-P_K)w,
\nabla^2\Pi_Kw)_{L^2(K)}\\
&=I_1+I_2.
\end{split}
\end{equation}
Let $u=P_Kw$ and $v=\Pi_Kw$ in Lemma \ref{Lemma2.1}. The first
term $I_1$ on the right-hand side of \eqref{eq2.14} reads
\begin{equation*}
\begin{split}
I_1&=-\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\frac{\pa^4 P_Kw}{\pa x^2\pa y^2}\frac{\pa^2 \Pi_Kw}{\pa x^2}dxdy\\
&\quad -\sum\limits_{K\in\cT_h}\frac{h_{x,K}^2}{3}\int_K\frac{\pa^4
P_K w}{\pa x^2\pa y^2}\frac{\pa^2 \Pi_K w}{\pa y^2}dxdy,
\end{split}
\end{equation*}
which can be rewritten as
\begin{equation*}
\begin{split}
&I_1=-\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\frac{\pa^4 w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa x^2}dxdy-\sum\limits_{K\in\cT_h}\frac{h_{x,K}^2}{3}\int_K\frac{\pa^4
w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa y^2}dxdy\\
&\quad +\sum\limits_{K\in\cT_h}\int_K\frac{\pa^4 (I-P_K)w}{\pa x^2\pa y^2}\bigg(\frac{h_{y,K}^2}{3}\frac{\pa^2 \Pi_Kw}{\pa x^2}+\frac{h_{x,K}^2}{3}\frac{\pa^2 \Pi_K w}{\pa y^2}\bigg)dxdy\\
&\quad+\sum\limits_{K\in\cT_h}\int_K\frac{\pa^4 w}{\pa x^2\pa y^2}\bigg(\frac{h_{y,K}^2}{3}\frac{\pa^2 (I-\Pi_K)w}{\pa x^2}+\frac{h_{x,K}^2}{3}\frac{\pa^2 (I-\Pi_K) w}{\pa y^2}\bigg)dxdy.
\end{split}
\end{equation*}
By the commuting property of \eqref{commuting},
$$
\frac{\pa^4(I-P_K )w}{\pa x^2\pa y^2}=(I-\Pi_{0,K})\frac{\pa^4 w}{\pa x^2\pa y^2}.
$$
Note that
$$
\|\frac{\pa^2 \Pi_K w}{\pa y^2}\|_{L^2(K)}+\|\frac{\pa^2 \Pi_K w}{\pa x^2}\|_{L^2(K)}\leq C \|w\|_{H^3(K)}.
$$
This, together with the error estimates of \eqref{AdinterpolationEstimate}, yields
\begin{equation*}
\begin{split}
I_1&=-\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\frac{\pa^4 w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa x^2}dxdy\\
&\quad -\sum\limits_{K\in\cT_h}\frac{h_{x,K}^2}{3}\int_K\frac{\pa^4
w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa
y^2}dxdy\\
&\quad +\mathcal{O}(h^2)\|(I-\Pi_0)\nabla^4 w\|_{L^2(\Omega)}\|w\|_{H^3(\Omega)}.
\end{split}
\end{equation*}
Since the mesh is uniform, an elementwise integration by parts yields
\begin{equation*}
\begin{split}
&-\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\frac{\pa^4 w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa x^2}dxdy\\
&=\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\bigg(\frac{\pa^3 w}{\pa x^2\pa y}\bigg)^2 dxdy
-\frac{h_y^2}{3}\int_{\Gamma_y}\frac{\pa^3 w}{\pa x^2\pa y}\frac{\pa^2 w}{\pa x^2}\nu_2dx,
\end{split}
\end{equation*}
where $\Gamma_y$ is the boundary of $\Omega$ that parallels to the x-axis, and $\nu_2$ is the second component of the unit normal vector
$\nu=(\nu_1, \nu_2)$ of the boundary. Since $\frac{\pa w}{ \pa y}=0$ on $\Gamma_y$, $\frac{\pa^3 w}{\pa x^2\pa y}=0$ on $\Gamma_y$.
Hence,
\begin{equation*}
\begin{split}
-\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\frac{\pa^4 w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa x^2}dxdy
=\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\int_K\bigg(\frac{\pa^3 w}{\pa x^2\pa y}\bigg)^2 dxdy
\end{split}
\end{equation*}
A similar procedure shows
$$
-\sum\limits_{K\in\cT_h}\frac{h_{x,K}^2}{3}\int_K\frac{\pa^4
w}{\pa x^2\pa y^2}\frac{\pa^2 w}{\pa
y^2}dxdy
=\sum\limits_{K\in\cT_h}\frac{h_{x,K}^2}{3}\int_K\bigg(\frac{\pa^3
w}{\pa x \pa y^2}\bigg)^2dxdy.
$$
Therefore
\begin{equation}\label{eq2.15}
\begin{split}
I_1&=\sum\limits_{K\in\cT_h}\frac{h_{y,K}^2}{3}\|\frac{\pa^3 w}{\pa x^2\pa
y}\|_{L^2(K)}^2 +\sum\limits_{K\in\cT_h} \frac{h_{x,K}^2}{3}\|\frac{\pa^3 w}{\pa x\pa
y^2}\|_{L^2(K)}^2\\
&\quad +\mathcal{O}(h^2)\|(I-\Pi_0)\nabla^4 w\|_{L^2(\Omega)}\|w\|_{H^3(\Omega)}.
\end{split}
\end{equation}
The second term $I_2$ on the right-hand side of \eqref{eq2.14} can
be estimated by the error estimates of \eqref{AdinterpolationEstimate}
and the commuting property of \eqref{commuting}, which reads
\begin{equation}\label{eq2.16}
|I_2|\leq Ch^2 \|(I-\Pi_0)\nabla^4
w\|_{L^2(\Omega)}\|w\|_{H^3(\Omega)}.
\end{equation}
Since the piecewise constant functions are dense
in the space $L^2(\Om)$,
$$
\|(I-\Pi_0)\nabla^4
w\|_{L^2(\Omega)}\rightarrow 0 \text{ when } h\rightarrow 0.
$$
Since $\|f\|_{L^2(\Omega)}\not=0$ implies that $|\frac{\pa^2 w}{\pa x \pa
y}|_{H^1(\Omega)}\not= 0$ (see more details in the following remark), a combination of
\eqref{eq2.14}-\eqref{eq2.16} proves the desired result.
\end{proof}
\begin{remark} For the rectangular domain $\Omega$ under consideration, the condition $|\frac{\pa^2 w}{\pa x \pa
y}|_{H^1(\Omega)}\not= 0$ holds provided that $\|f\|_{L^2(\Omega)}\not=0$. In fact, if $|\frac{\pa^2 w}{\pa x \pa
y}|_{H^1(\Omega)}= 0$, $w$ is of the form
$$
w=c_0xy+h(x)+g(y),
$$
for some function $h(x)$ of $x$ and $g(y)$ of $y$. Then, the boundary conditions force $c_0=0$ and
both $h(x)$ and $g(y)$ to be constant. Hence $w\equiv 0$, which contradicts $w\not\equiv 0.$
\end{remark}
\begin{remark} The expansion \eqref{expansion} was analyzed in \cite{HuHuang11,LuoLin04,Yang2000}. Herein we give
a new and much simpler proof. Moreover, compared with the
regularity $H^5$ needed therein, the analysis herein only needs the regularity $H^4$.
\end{remark}
\begin{remark} The idea herein can be directly extended to the eigenvalue problem investigated in \cite{HuHuang11,Yang2000},
which improves and simplifies the analysis therein and proves that the discrete eigenvalue produced
by the Adini element is smaller than the exact one provided that the meshsize is sufficiently small.
In addition, such a generalization weakens the regularity condition from $u\in H^5(\Omega)$
to $u\in H^4(\Omega)$ where $u$ is the eigenfunction.
\end{remark}
\section{An identity of $(-f, w-w_h)$}
This section establishes the identity of $(-f, w-w_h)$, which is one main ingredient for the proof of Theorem \ref{main}.
\begin{lemma}\label{Lemma3.1} Let $w$ and $w_h$ be solutions of problems \eqref{cont}
and \eqref{disc}, respectively. Then,
\begin{equation}\label{eq3.0}
\begin{split}
&(-f, w-w_h)_{L^2(\Omega)}\\
&=a_h(w,\Pi_hw-w_h)-(f, \Pi_hw-w_h)_{L^2(\Omega)}\\
&\quad +a_h(w-\Pi_hw,w-\Pi_hw)+a_h(w-\Pi_hw,w_h-\Pi_hw)\\
&\quad +2(f, \Pi_hw-w)_{L^2(\Omega)}+2a_h(w-\Pi_hw,\Pi_hw).
\end{split}
\end{equation}
\end{lemma}
\begin{proof} We start with the following decomposition
\begin{equation}\label{eq3.1}
\begin{split}
&(-f, w-w_h)_{L^2(\Omega)}\\
&=(-f, w-w_h)_{L^2(\Omega)}+a_h(w,w-w_h)-a_h(w, w-w_h)\\
&=(-f,w-\Pi_hw)_{L^2(\Omega)}+(-f, \Pi_hw-w_h)_{L^2(\Omega)}\\
&\quad +a_h(w, \Pi_hw-w_h)+a_h(w, w-\Pi_hw)-a_h(w,w-w_h).
\end{split}
\end{equation}
The last two terms on the right-hand side of \eqref{eq3.1} allow for
a further decomposition:
\begin{equation}\label{eq3.2}
\begin{split}
&a_h(w, w-\Pi_hw)-a_h(w,w-w_h)\\
&=a_h(w-\Pi_hw, w-\Pi_hw)
+a_h(\Pi_hw,w-\Pi_hw)\\
&\quad -a_h(w-\Pi_hw,w-w_h)-a_h(\Pi_hw, w-w_h)\\
&=a_h(w-\Pi_hw, w_h-\Pi_hw) +a_h(\Pi_hw,w-\Pi_hw)\\
&\quad -a_h(\Pi_hw, w-w_h).
\end{split}
\end{equation}
It follows from the discrete problem \eqref{disc} and the continuous problem
\eqref{cont} that the last term on the right-hand side of
\eqref{eq3.2} can be divided as
\begin{equation}\label{eq3.3}
\begin{split}
&-a_h(\Pi_hw, w-w_h)\\
&=(f, \Pi_hw-w)_{L^2(\Omega)}-a_h(w, \Pi_hw-w)\\
&=(f, \Pi_hw-w)_{L^2(\Omega)}-a_h(w-\Pi_hw, \Pi_hw-w)\\
&\quad -a_h(\Pi_hw, \Pi_hw-w).
\end{split}
\end{equation}
A summary of \eqref{eq3.1}-\eqref{eq3.3} completes the proof.
\end{proof}
\begin{remark} The importance of the identity \eqref{eq3.0} lies in the fact that such a decomposition separates the dominant term
$2a_h(w-\Pi_hw,\Pi_hw)$ from the other higher order terms, which is the key to employ Lemma \ref{Lemma2.2}.
\end{remark}
\section{The conclusion and comments}
This paper presents the analysis of the $L^2$ norm error estimate of the Adini element.
It is proved that the best $L^2$ norm error estimate is at most of order $\mathcal{O}(h^2)$ which can not be
improved in general. This result in fact indicates that the nonconforming Adini element space can not
contain any conforming space with an appropriate approximation property. This will cause further difficulty
for the a posteriori error analysis. In fact, the reliable and efficient a posteriori error estimate for
this element is still missing in the literature, see \cite{CGH12} for more details. | {"config": "arxiv", "file": "1211.4677.tex"} |
TITLE: Disproving Cantor's diagonal argument
QUESTION [2 upvotes]: I am familiar with Cantor's diagonal argument and how it can be used to prove the uncountability of the set of real numbers. However I have an extremely simple objection to make. Given the following:
Theorem: Every number with a finite number of digits has two representations in the set of rational numbers.
Proof: It follows from the fact that $0.999\ldots = 1$ that any integer $n$ can be represented as $(n-1).999\ldots$
This similarly works for rational numbers with finitely many digits, as follows:
let $n_i$ be the $i$-th digit after the decimal point of a rational number with a finite number of digits $k$. Then $n.n_1n_2\ldots n_k = n.n_1n_2\ldots(n_k-1)999\ldots$.
Now, going through Cantor's diagonalization argument, assume that I built what I claim to be an exhaustive list of all real numbers in binary representation. Then, Cantor would claim that the number created by concatenating the negated $i$-th digit of the $i$-th number is not contained in my list. To which I would reply: "Yes, but can you prove that your number is not the alternative representation of one already present in the list?", following the theorem above.
How does my remark not disprove or at least require further efforts in Cantor's proof? Has this point already been made?
EDIT:
The point is invalid if the base is different from 2, giving Cantor the choice of avoiding the more "offending" decimals 9 and 0. However, in the case of binary representation, which is the most commonly taught in universities, how could he make sure that happens by simply flipping bits as described above? It seems to me that he would have to go through the rather complex process of converting the binary number to another base, flipping its digits avoiding 0s and 9s, and converting back.
REPLY [1 votes]: Most people who think they are familiar with Cantor's Diagonalization argument, are not. They are familiar with a simplification of it for High School students. Chief among the simplifications are the facts that it was never about real numbers, and it isn't a proof by contradiction (it doesn't assume that the entire set it uses can be enumerated).
Following Wikipedia's outline, but correcting the simplifications:
What I call a Cantor String is an infinite-length string of 0's and 1's.
Let T be the set of all possible Cantor Strings.
Let s(n) be a function that produces a Cantor String for all n>=1, and S be the set of all the Cantor Strings it produces.
Let D be the string made by taking the nth character of s(n) and reversing the 0's and 1's.
D is a Cantor String, so it is a member of T.
D can't be a member of S, since it differs from every s(n) in the nth character.
This proves "If S can be enumerated, then S is not all of T."
The two statements "If A, then B" and "If not B, then not A" are logically equivalent. So proving one proves the other.
If S is all of T, then S can not be enumerated. QED.
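For concreteness, here is a tiny Python sketch (my own illustration, on finite truncations only — the actual argument needs the full infinite strings) of the diagonal construction of D:
# Build D from the first n strings, truncated to n characters each:
# D differs from the k-th string in its k-th character, so D equals none of them.
def diagonal(strings):
    return "".join("1" if s[k] == "0" else "0" for k, s in enumerate(strings))

s = ["0000", "0101", "1100", "1111"]   # a hypothetical s(1)..s(4)
d = diagonal(s)                        # -> "1010"
assert all(d[k] != s[k][k] for k in range(len(s)))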
Now, you can interpret each Cantor String as the binary representation of a real number between 0 and 1. Your issue is that two different Cantor Strings can represent the same number, making step 6 invalid. This is only a problem if use that interpretation. So don't. :) | {"set_name": "stack_exchange", "score": 2, "question_id": 2694917} |
TITLE: Fourier transform
QUESTION [0 upvotes]: I found a problem about PDEs, I guess, concerning distributional solutions.
Define $$<T,\phi> = \frac{1}{\pi}\lim_{\epsilon \rightarrow 0} \int_{|x|\geq \epsilon} \frac{\phi(x)}{x} dx.$$ Then $$ \mathcal{F}({T})(t) = -i \ \mbox{sgn} (t),$$ where $\mathcal{F}(T)$ is the Fourier transform of $T$.
Such a pairing $<\cdot, \cdot>$ usually involves something related to the distributional sense of solutions of PDEs. I do not quite understand it. How should I prove this statement?
REPLY [1 votes]: $T$ is called the Hilbert transform. It is a straightforward calculation:
$$\langle \hat{T}, \varphi \rangle = \langle T, \hat{\varphi} \rangle = \frac{1}{\pi} \lim_{\varepsilon \to 0} \int_{\lvert \xi \rvert \geq \varepsilon} \frac{1}{\xi} \int_\mathbb{R} \varphi(x) e^{-ix\xi} \mathrm{d}x \mathrm{d}\xi.$$
There you have to use Fubini and write $e^{-ix\xi} = \cos(-x\xi) + i \sin(-x \xi)$. The integral with $\cos$ vanishes because $\frac{\cos(-x\xi)}{\xi}$ is odd in $\xi$. The rest follows with
$$\int_{-\infty}^\infty \frac{\sin(bx)}{x} \mathrm{d}x = \pi sgn(b).$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 1739556} |
\begin{document}
\title[On $q$-Euler numbers related to the modified $q$-Bernstein polynomials]
{On $q$-Euler numbers related to the modified $q$-Bernstein polynomials}
\author{Min-Soo Kim, Daeyeoul Kim and Taekyun Kim}
\begin{abstract}
We consider $q$-Euler numbers and polynomials and $q$-Stirling numbers of first and second kinds.
Finally, we investigate some interesting properties of the modified $q$-Bernstein polynomials related to $q$-Euler numbers and
$q$-Stirling numbers
by using fermionic $p$-adic integrals on $\mathbb Z_p.$
\end{abstract}
\address{Department of Mathematics, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, S. Korea}
\email{minsookim@kaist.ac.kr}
\address{National Institute for Mathematical Sciences, Doryong-dong, Yuseong-gu, Daejeon 305-340, Republic of KOREA}
\email{daeyeoul@nims.re.kr}
\address{Division of General Education-Mathematics, Kwangwoon University, Seoul, 139-701, S. Korea}
\email{tkkim@kw.ac.kr}
\subjclass[2000]{11B68, 11S80}
\keywords{$q$-Euler numbers and polynomials, $q$-Bernstein polynomials, $q$-Stirling numbers, Fermionic $p$-adic integrals}
\maketitle
\def\ord{\text{ord}_p}
\def\C{\mathbb C_p}
\def\BZ{\mathbb Z}
\def\Z{\mathbb Z_p}
\def\Q{\mathbb Q_p}
\def\wh{\widehat}
\section{Introduction}
\label{Intro}
Let $C[0,1]$ be the set of continuous functions on $[0,1].$ The classical Bernstein polynomials of degree $n$
for $f\in C[0,1]$ are defined by
\begin{equation}\label{cla-def-ber}
\mathbb B_{n}(f)=\sum_{k=0}^nf\left(\frac kn\right)B_{k,n}(x),\quad 0\leq x\leq1
\end{equation}
where $\mathbb B_{n}(f)$ is called the Bernstein operator and
\begin{equation}\label{or-ber-poly}
B_{k,n}(x)=\binom nk x^k(1-x)^{n-k}
\end{equation}
are called the Bernstein basis polynomials (or the Bernstein polynomials of
degree $n$) (see \cite{SA}).
Recently, Acikgoz and Araci have studied the generating function for Bernstein
polynomials (see \cite{AA,AA2}). Their generating function for $B_{k,n}(x)$ is given by
\begin{equation}\label{AA-gen}
F^{(k)}(t,x)=\frac{t^ke^{(1-x)t}x^k}{k!}=\sum_{n=0}^\infty B_{k,n}(x)\frac{t^n}{n!},
\end{equation}
where $k=0,1,\ldots$ and $x\in[0,1].$ Note that
$$B_{k,n}(x)=\begin{cases}
\binom nk x^k(1-x)^{n-k}&\text{if } n\geq k \\
0,&\text{if }n<k
\end{cases}$$
for $n=0,1,\ldots$ (see \cite{AA,AA2}).
Let $p$ be an odd prime number.
Throughout this paper, $\mathbb Z_p, \mathbb Q_p$ and $\mathbb C_p$
will denote the ring of $p$-adic rational
integers, the field of $p$-adic rational numbers and the completion
of the algebraic closure of $\mathbb Q_p,$ respectively.
Let $v_p$ be the normalized exponential valuation of $\mathbb C_p$ with
$|p|_p=p^{-1}.$
Throughout this paper, we use the following notation
$$[x]_q=\frac{1-q^x}{1-q}\quad\text{and}\quad [x]_{-q}=\frac{1-(-q)^x}{1+q}$$
(cf. \cite{KT1,KT2,KT3,KT7}).
Let $\mathbb N$ be the natural numbers and $\mathbb Z_+=\mathbb N\cup\{0\}.$
Let $UD(\mathbb Z_p)$ be the space of uniformly differentiable functions on $\mathbb Z_p.$
Let $q\in\C$ with $|1-q|_p<p^{-1/(p-1)}$ and $x\in\Z.$ Then the $q$-Bernstein type operator for $f\in UD(\Z)$ is defined by
(see \cite{KJY,KCK})
\begin{equation}\label{B-op}
\begin{aligned}
\mathbb B_{n,q}(f)&=\sum_{k=0}^nf\left(\frac kn\right)\binom nk[x]_q^k[1-x]_q^{n-k}\\
&=\sum_{k=0}^nf\left(\frac kn\right)B_{k,n}(x,q),
\end{aligned}
\end{equation}
for $k,n\in \mathbb Z_+,$ where
\begin{equation}\label{Ber-def}
B_{k,n}(x,q)=\binom nk[x]_q^k[1-x]_q^{n-k}
\end{equation}
is called the modified $q$-Bernstein polynomials
of degree $n.$
When we put $q\to1$ in (\ref{Ber-def}), $[x]_q^k\to x^k,[1-x]_q^{n-k}\to(1-x)^{n-k}$
and we obtain the classical Bernstein polynomial, defined by (\ref{or-ber-poly}).
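For example, $B_{1,2}(x,q)=2[x]_q[1-x]_q$, which reduces to the classical $B_{1,2}(x)=2x(1-x)$ as $q\to1$.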
We can deduce very easily from (\ref{Ber-def}) that
\begin{equation}\label{recu-ber}
B_{k,n}(x,q)=[1-x]_qB_{k,n-1}(x,q)+[x]_qB_{k-1,n-1}(x,q)
\end{equation}
(see \cite{KJY}).
For $0 \leq k \leq n,$ derivatives of the $n$th degree modified $q$-Bernstein polynomials are polynomials of
degree $n-1:$
\begin{equation}\label{der-Ber-def}
\frac{\text{d}}{\text{d}x}B_{k,n}(x,q)=n(q^xB_{k-1,n-1}(x,q)-q^{1-x}B_{k,n-1}(x,q))\frac{\ln q}{q-1}
\end{equation}
(see \cite{KJY}).
The Bernstein polynomials can also be defined in many
different ways. Thus, recently, many applications of these polynomials have been sought
by many authors.
In recent years, the $q$-Bernstein polynomials have been investigated and studied by many authors in many
different ways (see \cite{KJY,KCK,SA} and references therein \cite{GG,Ph}).
In \cite{Ph}, Phillips gave many results concerning the $q$-integers, and an account
of the properties of $q$-Bernstein polynomials. He gave many applications of these polynomials
in approximation theory.
In \cite{AA,AA2}, Acikgoz and Araci have introduced several types of Bernstein polynomials. The paper of Acikgoz and Araci, announced at the
conference, is actually the motivation for writing this paper.
In \cite{SA}, Simsek and Acikgoz constructed a new generating function of the $q$-Bernstein type polynomials
and established elementary properties of this function.
In \cite{KJY}, Kim, Jang and Yi proposed the modified $q$-Bernstein polynomials of degree $n,$ which
are different from the $q$-Bernstein polynomials of Phillips.
In \cite{KCK}, Kim, Choi and Kim investigated some interesting properties of the modified $q$-Bernstein polynomials of degree $n$
related to $q$-Stirling numbers and Carlitz's $q$-Bernoulli numbers.
In the present paper,
we consider $q$-Euler numbers, polynomials and $q$-Stirling numbers of first and second kinds.
We also investigate some interesting properties of the modified $q$-Bernstein polynomials of degree $n$
related to $q$-Euler numbers and
$q$-Stirling numbers by using fermionic $p$-adic integrals on $\mathbb Z_p.$
\section{$q$-Euler numbers and polynomials
related to the fermionic $p$-adic integrals on $\mathbb Z_p$}
For $N\geq1,$ the fermionic $q$-extension $\mu_q$
of the $p$-adic Haar distribution $\mu_{\text{Haar}}:$
\begin{equation}\label{mu}
\mu_{-q}(a+p^N\Z)=\frac{(-q)^a}{[p^{N}]_{-q}}
\end{equation}
is known as a measure on $\Z,$ where $a+p^N\Z=\{ x\in\Q\mid |x-a|_p\leq p^{-N}\}$ (cf. \cite{KT1,KT4}).
We shall write $d\mu_{-q}(x)$ to remind ourselves that $x$ is the variable
of integration.
Let $UD(\mathbb Z_p)$ be the space of uniformly differentiable functions on $\mathbb Z_p.$
Then $\mu_{-q}$ yields the fermionic $p$-adic $q$-integral of a function $f\in UD(\mathbb Z_p):$
\begin{equation}\label{Iqf}
I_{-q}(f)=\int_{\Z} f(x)d\mu_{-q}(x)=\lim_{N\rightarrow\infty}\frac{1+q}{1+q^{p^N}}
\sum_{x=0}^{p^N-1}f(x)(-q)^x
\end{equation}
(cf. \cite{KT4,KT5,KT6,KT8}).
Many interesting properties of (\ref{Iqf}) were studied by many authors
(see \cite{KT4,KT5} and the references given there).
Using (\ref{Iqf}), we have the fermionic $p$-adic invariant integral on $\mathbb Z_p$ as
follows:
\begin{equation}\label{de-2}
\lim_{q\to-1}I_q(f)=I_{-1}(f)=\int_{\mathbb Z_p}f(x)d\mu_{-1}(x).
\end{equation}
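For example, taking $f\equiv1$ gives $I_{-1}(1)=\lim_{N\rightarrow\infty}\sum_{x=0}^{p^N-1}(-1)^x=1$ since $p^N$ is odd; this is consistent with $E_{0,q}=1$ below.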
For $n\in\mathbb N,$ write $f_n(x)=f(x+n).$ We have
\begin{equation}\label{de-3}
I_{-1}(f_n)=(-1)^nI_{-1}(f)+2\sum_{l=0}^{n-1}(-1)^{n-l-1}f(l).
\end{equation}
This identity is obtained by Kim in \cite{KT4}
to derive interesting properties and relationships involving $q$-Euler numbers and polynomials.
For $n\in\mathbb Z_+,$ we note that
\begin{equation}\label{q-Euler-numb}
I_{-1}([x]_q^n)=\int_{\Z} [x]_q^n d\mu_{-1}(x)=E_{n,q},
\end{equation}
where $E_{n,q}$ are the $q$-Euler numbers (see \cite{KT09}).
It is easy to see that $E_{0,q}=1.$ For $n\in\mathbb N,$ we have
\begin{equation}\label{q-Eu-rec}
\begin{aligned}
\sum_{l=0}^n\binom nlq^lE_{l,q}&=\sum_{l=0}^n\binom nlq^l \lim_{N\rightarrow\infty}\sum_{x=0}^{p^N-1}[x]_q^l(-1)^x \\
&=\lim_{N\rightarrow\infty}\sum_{x=0}^{p^N-1}(-1)^x(q[x]_q+1)^n \\
&=\lim_{N\rightarrow\infty}\sum_{x=0}^{p^N-1}(-1)^x[x+1]_q^{n} \\
&=-\lim_{N\rightarrow\infty}\sum_{x=0}^{p^N-1}(-1)^x([x]_q^{n}+[p^N]_q^n) \\
&=-E_{n,q}.
\end{aligned}
\end{equation}
From this formula, we have the following recurrence formula
\begin{equation}\label{q-Euler-recur}
E_{0,q}=1,\qquad (qE+1)^n+E_{n,q}=0\quad\text{if }n\in\mathbb N
\end{equation}
with the usual convention of replacing $E^l$ by $E_{l,q}.$
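For example, taking $n=1$ in \eqref{q-Euler-recur} gives $qE_{1,q}+E_{0,q}+E_{1,q}=0,$ that is, $E_{1,q}=-\frac1{[2]_q},$ which recovers the classical value $E_1=-\frac12$ as $q\to1$.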
By the simple calculation of the fermionic $p$-adic invariant integral on $\mathbb Z_p,$
we see that
\begin{equation}\label{q-E-ex}
E_{n,q}=\frac{2}{(1-q)^n}\sum_{l=0}^n\binom nl(-1)^l\frac{1}{1+q^l},
\end{equation}
where $\binom nl=n!/l!(n-l)!=n(n-1)\cdots(n-l+1)/l!.$
Now, by introducing the following equations:
\begin{equation}\label{intr-q-ex}
[x]_{\frac1q}^n=q^{n}q^{-nx}[x]_q^n\quad\text{and}\quad q^{-nx}=\sum_{m=0}^\infty(1-q)^m\binom{n+m-1}{m}[x]_q^m
\end{equation}
into (\ref{q-Euler-numb}), we find that
\begin{equation}\label{q-Eu-inv}
E_{n,\frac1q}=q^{n}\sum_{m=0}^\infty(1-q)^m\binom{n+m-1}{m}E_{n+m,q}.
\end{equation}
This identity is a peculiarity of the $p$-adic $q$-Euler numbers, and the classical Euler numbers do not seem
to have a similar relation.
Let $F_q(t)$ be the generating function of the $q$-Euler numbers. Then we obtain
\begin{equation}\label{e-E-gen}
\begin{aligned}
F_q(t)&=\sum_{n=0}^\infty E_{n,q}\frac{t^n}{n!} \\
&=\sum_{n=0}^\infty\frac{2}{(1-q)^n}\sum_{l=0}^n(-1)^l\binom nl\frac{1}{1+q^l}\frac{t^n}{n!}\\
&=2e^{\frac{t}{1-q}}\sum_{k=0}^\infty\frac{(-1)^k}{(1-q)^k}\frac{1}{1+q^k}\frac{t^k}{k!}.
\end{aligned}
\end{equation}
From (\ref{e-E-gen}) we note that
\begin{equation}\label{e-E-gen-mod}
F_q(t)=2e^{\frac{t}{1-q}}\sum_{n=0}^\infty(-1)^ne^{\left(\frac{-q^n}{1-q}\right)t}
=2\sum_{n=0}^\infty(-1)^ne^{[n]_qt}.
\end{equation}
It is well-known that
\begin{equation}\label{q-Euler}
I_{-1}([x+y]_q^n)=\int_{\Z} [x+y]_q^n d\mu_{-1}(y)=E_{n,q}(x),
\end{equation}
where $E_{n,q}(x)$ are the $q$-Euler polynomials (see \cite{KT09}).
In the special case $x=0,$ the numbers $E_{n,q}(0)=E_{n,q}$ are referred to as the $q$-Euler numbers.
Thus we have
\begin{equation}\label{E-p-n}
\begin{aligned}
\int_{\Z} [x+y]_q^n d\mu_{-1}(y)&=\sum_{k=0}^n\binom nk[x]_q^{n-k}q^{kx}\int_{\Z} [y]_q^k d\mu_{-1}(y) \\
&=\sum_{k=0}^n\binom nk[x]_q^{n-k}q^{kx}E_{k,q} \\
&=(q^xE+[x]_q)^n.
\end{aligned}
\end{equation}
It is easily verified, using (\ref{e-E-gen-mod}) and (\ref{q-Euler}), that
the $q$-Euler polynomials $E_{n,q}(x)$ satisfy the following formula:
\begin{equation}\label{e-p-exp}
\begin{aligned}
\sum_{n=0}^\infty E_{n,q}(x)\frac{t^n}{n!}&=\int_{\Z}e^{[x+y]_qt}d\mu_{-1}(y) \\
&=\sum_{n=0}^\infty\frac{2}{(1-q)^n}\sum_{l=0}^n(-1)^l\binom nl\frac{q^{lx}}{1+q^l}\frac{t^n}{n!}\\
&=2\sum_{n=0}^\infty(-1)^ne^{[n+x]_qt}.
\end{aligned}
\end{equation}
Using formula (\ref{e-p-exp}) when $q$ tends to 1, we can readily derive the Euler polynomials, $E_n(x),$ namely,
$$\int_{\Z}e^{(x+y)t}d\mu_{-1}(y)=\frac{2e^{xt}}{e^t+1}=\sum_{n=0}^\infty E_n(x)\frac{t^n}{n!}$$
(see \cite{KT4}).
Note that $E_n(0)=E_n$ are referred to as the $n$th Euler numbers.
Comparing the coefficients of ${t^n}/{n!}$ on both sides of (\ref{e-p-exp}), we have
\begin{equation}\label{q-Euler-pro}
E_{n,q}(x)=2\sum_{m=0}^\infty(-1)^m[m+x]_q^n=\frac{2}{(1-q)^n}\sum_{l=0}^n(-1)^l\binom nl\frac{q^{lx}}{1+q^l}.
\end{equation}
We refer to $[n]_q$ as a $q$-integer and note that $[n]_q$ is a continuous function of $q.$ In an obvious way we also
define a $q$-factorial,
$$[n]_q!=\begin{cases}[n]_q[n-1]_q\cdots[1]_q&n\in\mathbb N, \\
1,&n=0
\end{cases}$$
and a $q$-analogue of binomial coefficient
\begin{equation}\label{q-binom}
\binom xn_q=\frac{[x]_q!}{[x-n]_q![n]_q!}=\frac{[x]_q[x-1]_q\cdots[x-n+1]_q}{[n]_q!}
\end{equation}
(cf. \cite{KT6,KT09}).
Note that
$$\lim_{q\to1}\binom xn_q=\binom xn=\frac{x(x-1)\cdots(x-n+1)}{n!}.$$
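For instance, $\binom x2_q=\frac{[x]_q[x-1]_q}{[2]_q!}=\frac{[x]_q[x-1]_q}{1+q},$ which tends to $\frac{x(x-1)}2$ as $q\to1$.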
It readily follows from (\ref{q-binom}) that
\begin{equation}\label{q-binom-1}
\binom xn_q=\frac{(1-q)^nq^{-\binom n2}}{[n]_q!}\sum_{i=0}^nq^{\binom i2}\binom ni_q(-1)^{n+i}q^{(n-i)x}
\end{equation}
(cf. \cite{KT09,KT7}).
It can be readily seen that
\begin{equation}\label{q-id-1}
q^{lx}=([x]_q(q-1)+1)^l=\sum_{m=0}^l\binom lm(q-1)^m[x]_q^m.
\end{equation}
Thus by (\ref{q-Euler}) and (\ref{q-id-1}), we have
\begin{equation}\label{q-binom-int}
\int_{\Z}\binom xn_qd\mu_{-1}(x)=\frac{(q-1)^n}{[n]_q!q^{\binom n2}}\sum_{i=0}^nq^{\binom i2}\binom ni_q(-1)^{i}
\sum_{j=0}^{n-i}\binom{n-i}j(q-1)^jE_{j,q}.
\end{equation}
From now on, we use the following notation
\begin{equation}\label{1-st-ca}
\frac{[x]_q!}{[x-k]_{q}!}=q^{-\binom k2}\sum_{l=0}^ks_{1,q}(k,l)[x]_q^l,\quad k\in \mathbb Z_+,
\end{equation}
\begin{equation}\label{2-st-ca}
[x]_q^n=\sum_{k=0}^nq^{\binom k2}s_{2,q}(n,k)\frac{[x]_q!}{[x-k]_{q}!},\quad n\in \mathbb Z_+
\end{equation}
(see \cite{KT7}).
From (\ref{1-st-ca}), (\ref{2-st-ca}) and (\ref{q-id-1}), we calculate the following consequence
\begin{equation}\label{2-st-eq}
\begin{aligned}
\;[x]_q^n&=\sum_{k=0}^nq^{\binom k2}s_{2,q}(n,k)\frac1{(1-q)^k}\sum_{l=0}^k\binom kl_q
q^{\binom l2}(-1)^lq^{l(x-k+1)} \\
&=\sum_{k=0}^nq^{\binom k2}s_{2,q}(n,k)\frac1{(1-q)^k}\sum_{l=0}^k\binom kl_q q^{\binom l2+l(1-k)}(-1)^l \\
&\quad\times\sum_{m=0}^l\binom lm(q-1)^m[x]_q^m
\\
&=\sum_{k=0}^nq^{\binom k2}s_{2,q}(n,k)\frac1{(1-q)^k} \\
&\quad\times\sum_{m=0}^k(q-1)^m\left(\sum_{l=m}^k\binom kl_qq^{\binom l2+l(1-k)}\binom lm(-1)^l\right)[x]_q^m.
\end{aligned}
\end{equation}
Therefore, we obtain the following theorem.
\begin{theorem}
For $n\in \mathbb Z_+,$
$$E_{n,q}=\sum_{k=0}^n\sum_{m=0}^k\sum_{l=m}^k q^{\binom k2}s_{2,q}(n,k)
(q-1)^{m-k}\binom kl_qq^{\binom l2+l(1-k)}\binom lm(-1)^{l+k}E_{m,q}.$$
\end{theorem}
By (\ref{q-id-1}) and simple calculation, we find that
\begin{equation}\label{q-b-int}
\begin{aligned}
\sum_{m=0}^n\binom nm(q-1)^mE_{m,q}&=\int_{\Z}q^{nx}d\mu_{-1}(x)\\
&=\sum_{k=0}^n(q-1)^kq^{\binom k2}\binom nk_q\int_{\Z}\prod_{i=0}^{k-1}[x-i]_{q}d\mu_{-1}(x) \\
&=\sum_{k=0}^n(q-1)^k\binom nk_q\sum_{m=0}^ks_{1,q}(k,m)\int_{\Z}[x]_q^md\mu_{-1}(x) \\
&=\sum_{m=0}^n\left(\sum_{k=m}^n(q-1)^k\binom nk_qs_{1,q}(k,m)\right)E_{m,q}.
\end{aligned}
\end{equation}
Therefore, we deduce the following theorem.
\begin{theorem}
For $n\in \mathbb Z_+,$
$$\sum_{m=0}^n\binom nm(q-1)^mE_{m,q}=\sum_{m=0}^n\sum_{k=m}^n(q-1)^k\binom nk_qs_{1,q}(k,m)E_{m,q}.$$
\end{theorem}
\begin{corollary}\label{bi-qbi}
For $m,n\in \mathbb Z_+$ with $m\leq n,$
$$\binom nm(q-1)^m=\sum_{k=m}^n(q-1)^k\binom nk_qs_{1,q}(k,m).$$
\end{corollary}
By (\ref{q-Euler-pro}) and Corollary \ref{bi-qbi}, we obtain the following corollary.
\begin{corollary}\label{E-s1-bi}
For $n\in \mathbb Z_+,$
$$E_{n,q}(x)=\frac{2}{(1-q)^n}\sum_{l=0}^n\sum_{k=l}^n(-1)^l
(q-1)^{k-l}\binom nk_qs_{1,q}(k,l)
\frac{q^{lx}}{1+q^l}.$$
\end{corollary}
It is easy to see that
\begin{equation}\label{ea-id}
\binom nk_q=\sum_{l_0+\cdots+l_k=n-k}q^{\sum_{i=0}^kil_i}
\end{equation}
(cf. \cite{KT7}). From (\ref{ea-id}) and Corollary \ref{E-s1-bi}, we can also derive the following interesting formula for
$q$-Euler polynomials.
\begin{theorem}
For $n\in \mathbb Z_+,$
$$E_{n,q}(x)=2\sum_{l=0}^n\sum_{k=l}^n\sum_{l_0+\cdots+l_k=n-k}q^{\sum_{i=0}^kil_i}\frac1{(1-q)^{n+l-k}}
s_{1,q}(k,l)(-1)^k
\frac{q^{lx}}{1+q^l}.$$
\end{theorem}
These polynomials are related to many branches of mathematics, for example,
combinatorics, number theory, discrete probability distributions for finding higher-order
moments (cf. \cite{KT6,KT09,KT8}). By substituting $x=0$ into
the above, we have
$$E_{n,q}=2\sum_{l=0}^n\sum_{k=l}^n\sum_{l_0+\cdots+l_k=n-k}q^{\sum_{i=0}^kil_i}\frac1{(1-q)^{n+l-k}}
s_{1,q}(k,l)(-1)^k
\frac{1}{1+q^l},$$
where $E_{n,q}$ are the $q$-Euler numbers.
\section{$q$-Euler numbers, $q$-Stirling numbers and $q$-Bernstein polynomials
related to the fermionic $p$-adic integrals on $\mathbb Z_p$}
First, we consider the $q$-extension of the generating function of Bernstein polynomials in (\ref{AA-gen}).
For $q\in\C$ with $|1-q|_p<p^{-1/(p-1)},$ we obtain
\begin{equation}\label{B-def-ge-ft}
\begin{aligned}
F_q^{(k)}(t,x)&=\frac{t^ke^{[1-x]_qt}[x]_q^k}{k!}\\
&=[x]_q^k\sum_{n=0}^\infty \binom{n+k}{k}[1-x]_q^n\frac{t^{n+k}}{(n+k)!}\\
&=\sum_{n=k}^\infty\binom nk[x]_q^k[1-x]_q^{n-k}\frac{t^n}{n!} \\
&=\sum_{n=0}^\infty B_{k,n}(x,q)\frac{t^n}{n!},
\end{aligned}
\end{equation}
which is the generating function of the modified $q$-Bernstein type polynomials (see \cite{KCK}).
Indeed, this
generating function is also treated by Simsek and Acikgoz (see \cite{SA}).
Note that $\lim_{q\to1}F_q^{(k)}(t,x)=F^{(k)}(t,x).$
It is easy to show that
\begin{equation}\label{id-1}
[1-x]_q^{n-k}=\sum_{m=0}^\infty\sum_{l=0}^{n-k}\binom{l+m-1}{m}\binom{n-k}l(-1)^{l+m}q^l[x]_q^{l+m}(q-1)^m.
\end{equation}
From (\ref{B-op}), (\ref{de-2}), (\ref{e-p-exp}) and (\ref{id-1}), we derive the following theorem.
\begin{theorem}
For $k,n\in\mathbb Z_+$ with $n\geq k,$
$$\begin{aligned}
\begin{aligned}
\int_{\Z}\frac{B_{k,n}(x,q)}{\binom nk}&d\mu_{-1}(x) \\
&=\sum_{m=0}^\infty\sum_{l=0}^{n-k}\binom{l+m-1}{m}\binom{n-k}l(-1)^{l+m}q^l(q-1)^m
E_{l+m+k,q},
\end{aligned}
\end{aligned}$$
where $E_{n,q}$ are the $q$-Euler numbers.
\end{theorem}
It is possible to write $[x]_q^k$ as a linear combination of the modified $q$-Bernstein
polynomials by using the degree evaluation formulae and mathematical induction.
Therefore we obtain the following theorem.
\begin{theorem}[{\cite[Theorem 7]{KJY}}] \label{ber-sum}
For $k,n\in\mathbb Z_{+},i\in\mathbb N$ and $x\in[0,1],$
$$\sum_{k=i-1}^n\frac{\binom ki}{\binom ni}B_{k,n}(x,q)=[x]_q^i([x]_q+[1-x]_q)^{n-i}.$$
\end{theorem}
Let $i-1\leq n.$ Then from (\ref{Ber-def}), (\ref{id-1}) and Theorem \ref{ber-sum}, we have
\begin{equation}\label{B-def-ge}
\begin{aligned}
\,[x]_q^i&=\frac{\sum_{k=i-1}^n\frac{\binom ki\binom nk}{\binom ni}[x]_q^k[1-x]_q^{n-k}}{[x]_q^{n-i}
\left(1+\frac{[1-x]_q}{[x]_q}\right)^{n-i}} \\
&=\sum_{m=0}^\infty\sum_{k=i-1}^n\sum_{l=0}^{m+n-k}\sum_{p=0}^\infty
\frac{\binom ki\binom nk}{\binom ni}\binom{l+p-1}{p}\binom{m+n-k}{l} \\
&\quad\times\binom{n-i+m-1}{m}(-1)^{l+p+m}q^l(q-1)^p[x]_q^{i-n-m+k+p+l}.
\end{aligned}
\end{equation}
Using (\ref{q-Euler}) and (\ref{B-def-ge}), we obtain the following theorem.
\begin{theorem}\label{q-eu-Ber}
For $k,n\in\mathbb Z_{+}$ and $i\in\mathbb N$ with $i-1\leq n,$
$$\begin{aligned}
E_{i,q}
&=\sum_{m=0}^\infty\sum_{k=i-1}^n\sum_{l=0}^{m+n-k}\sum_{p=0}^\infty
\frac{\binom ki\binom nk}{\binom ni}\binom{l+p-1}{p}\binom{m+n-k}{l} \\
&\quad\times\binom{n-i+m-1}{m}(-1)^{l+p+m}q^l(q-1)^pE_{i-n-m+k+p+l,q}.
\end{aligned}$$
\end{theorem}
The $q$-Stirling numbers of the first kind are defined by
\begin{equation}\label{1-str}
\prod_{k=1}^n(1+[k]_qz)=\sum_{k=0}^nS_1(n,k;q)z^k,
\end{equation}
and the $q$-Stirling numbers of the second kind are defined by
\begin{equation}\label{2-str}
\prod_{k=1}^n(1+[k]_qz)^{-1}=\sum_{k=0}^nS_2(n,k;q)z^k
\end{equation}
(see \cite{KCK}). Therefore, we deduce the following theorem.
\begin{theorem}[{\cite[Theorem 4]{KCK}}] \label{ber-str}
For $k,n\in\mathbb Z_{+}$ and $i\in\mathbb N,$
$$\frac{\sum_{k=i-1}^n\frac{\binom ki}{\binom ni}B_{k,n}(x,q)}{([x]_q+[1-x]_q)^{n-i}}
=\sum_{k=0}^i\sum_{l=0}^kS_1(n,l;q)S_2(i,k;q)[x]_q^l.$$
\end{theorem}
By Theorem \ref{ber-sum}, Theorem \ref{ber-str} and the definition of fermionic $p$-adic integrals on $\mathbb Z_p,$
we obtain the following theorem.
\begin{theorem} \label{Eu-str}
For $k,n\in\mathbb Z_{+}$ and $i\in\mathbb N,$
$$\begin{aligned}
E_{i,q}&=\sum_{k=i-1}^n\frac{\binom ki}{\binom ni}\int_{\Z}\frac{B_{k,n}(x,q)}{([x]_q+[1-x]_q)^{n-i}}d\mu_{-1}(x)\\
&=\sum_{k=0}^i\sum_{l=0}^kS_1(n,l;q)S_2(i,k;q)E_{l,q},
\end{aligned}$$
where the $E_{i,q}$ are the $q$-Euler numbers.
\end{theorem}
Let $i-1\leq n.$ It is easy to show that
\begin{equation}\label{Ber-new}
\begin{aligned}
&[x]_q^i([x]_q+[1-x]_q)^{n-i} \\&=\sum_{l=0}^{n-i}\binom{n-i}l[x]_q^{l+i}[1-x]_q^{n-i-l} \\
&=\sum_{l=0}^{n-i}\sum_{m=0}^{n-i-l}\binom{n-i}l\binom{n-i-l}{m}(-1)^mq^m[x]_q^{m+i+l}q^{-mx} \\
&=\sum_{l=0}^{n-i}\sum_{m=0}^{n-i-l}\sum_{s=0}^\infty\binom{n-i}l\binom{n-i-l}{m}\binom{m+s-1}s
\\
&\quad\times(-1)^mq^m(1-q)^s[x]_q^{m+i+l+s}.
\end{aligned}
\end{equation}
From (\ref{Ber-new}) and Theorem \ref{ber-sum}, we have the following theorem.
\begin{theorem} \label{Eu-str-2}
For $k,n\in\mathbb Z_{+}$ and $i\in\mathbb N,$
$$\begin{aligned}
\sum_{k=i-1}^n\frac{\binom ki}{\binom ni}\int_{\Z}B_{k,n}(x,q)d\mu_{-1}(x)
&=\sum_{l=0}^{n-i}\sum_{m=0}^{n-i-l}\sum_{s=0}^\infty\binom{n-i}l\binom{n-i-l}{m}\binom{m+s-1}s
\\
&\quad\times(-1)^mq^m(1-q)^sE_{m+i+l+s,q},
\end{aligned}$$
where $E_{i,q}$ are the $q$-Euler numbers.
\end{theorem}
In the same manner, we can obtain the following theorem.
\begin{theorem} \label{Eu-str-3}
For $k,n\in\mathbb Z_{+}$ and $i\in\mathbb N,$
$$\begin{aligned}
\int_{\Z}B_{k,n}(x,q)d\mu_{-1}(x)
&=\sum_{j=k}^n\sum_{m=0}^\infty\binom jk\binom nj\binom{j-k+m-1}{m}
\\
&\quad\times(-1)^{j-k+m}q^{j-k}(q-1)^mE_{m+j,q},
\end{aligned}$$
where $E_{i,q}$ are the $q$-Euler numbers.
\end{theorem}
\section{Further remarks and observations}
The following $q$-binomial formulas are well known:
\begin{equation}\label{q-bi-f-two}
\begin{aligned}
&(a;q)_n=(1-a)(1-aq)\cdots(1-aq^{n-1})=\sum_{i=0}^n\binom ni_q q^{\binom i2}(-1)^ia^i, \\
&\frac1{(a;q)_n}=\frac1{(1-a)(1-aq)\cdots(1-aq^{n-1})}=\sum_{i=0}^\infty\binom{n+i-1}{i}_qa^i.
\end{aligned}
\end{equation}
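For example, for $n=2$ the first formula in (\ref{q-bi-f-two}) reads
$$(a;q)_2=(1-a)(1-aq)=1-(1+q)a+qa^2=\sum_{i=0}^2\binom 2i_qq^{\binom i2}(-1)^ia^i,$$
since $\binom 21_q=[2]_q=1+q$ and $q^{\binom 22}=q.$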
For $h\in\mathbb Z,n\in\mathbb Z_+$ and $r\in\mathbb N,$
we introduce the extended higher-order $q$-Euler polynomials as follows \cite{KT09}:
\begin{equation}\label{h-o-q-Eu}
E_{n,q}^{(h,r)}(x)=\int_{\Z}\cdots\int_{\Z}
q^{\sum_{j=1}^r(h-j)x_j}[x+x_1+\cdots+x_r]_q^nd\mu_{-1}(x_1)\cdots d\mu_{-1}(x_r).
\end{equation}
Then
\begin{equation}\label{h-o-ex}
\begin{aligned}
E_{n,q}^{(h,r)}(x)&=\frac{2^r}{(1-q)^n}\sum_{l=0}^n\binom nl (-1)^l\frac{q^{lx}}{(-q^{h-1+l};q^{-1})_r} \\
&=\frac{2^r}{(1-q)^n}\sum_{l=0}^n\binom nl (-1)^l\frac{q^{lx}}{(-q^{h-r+l};q)_r}.
\end{aligned}
\end{equation}
Let us now define the extended higher-order N\"orlund type $q$-Euler polynomials as follows \cite{KT09}:
\begin{equation}\label{h-o-ex-2}
\begin{aligned}
E_{n,q}^{(h,-r)}(x)&=\frac1{(1-q)^n}\sum_{l=0}^n\binom nl(-1)^l \\
&\quad\times\frac{q^{lx}}{\int_{\Z}\cdots\int_{\Z}
q^{l(x_1+\cdots+x_r)}q^{\sum_{j=1}^r(h-j)x_j}d\mu_{-1}(x_1)\cdots d\mu_{-1}(x_r)}.
\end{aligned}
\end{equation}
In the special case $x=0,$ $E_{n,q}^{(h,-r)}=E_{n,q}^{(h,-r)}(0)$ are called
the extended higher-order N\"orlund type $q$-Euler numbers.
From (\ref{h-o-ex-2}), we note that
\begin{equation}\label{h-o-ex-3}
\begin{aligned}
E_{n,q}^{(h,-r)}(x)&=\frac{1}{2^r(1-q)^n}\sum_{l=0}^n\binom nl (-1)^lq^{lx}(-q^{h-r+l};q)_r\\
&=\frac1{2^r}\sum_{m=0}^rq^{\binom m2}q^{(h-r)m}\binom rm_q[m+x]_q^n.
\end{aligned}
\end{equation}
A simple manipulation shows that
\begin{equation}\label{st-1}
q^{\binom m2}\binom rm_q =\frac{q^{\binom m2}[r]_q\cdots[r-m+1]_q}{[m]_q!}=\frac1{[m]_q!}
\prod_{k=0}^{m-1}([r]_q-[k]_q)
\end{equation}
and
\begin{equation}\label{st-2}
\prod_{k=0}^{n-1}(z-[k]_q)=z^n\prod_{k=0}^{n-1}\left(1-\frac{[k]_q}{z}\right)
=\sum_{k=0}^n S_1(n-1,k;q)(-1)^kz^{n-k}.
\end{equation}
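As an illustration of (\ref{st-1}), take $m=2$: since $[r]_q-[1]_q=q[r-1]_q$ and $[2]_q!=1+q,$ one indeed finds
$$\frac{1}{[2]_q!}([r]_q-[0]_q)([r]_q-[1]_q)=\frac{q[r]_q[r-1]_q}{1+q}=q\binom r2_q.$$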
Formulas (\ref{h-o-ex-3}), (\ref{st-1}) and (\ref{st-2}) imply the following lemma.
\begin{lemma} \label{Eu-str-pro}
For $h\in\mathbb Z,n\in\mathbb Z_+$ and $r\in\mathbb N,$
$$E_{n,q}^{(h,-r)}(x)=\frac{1}{2^r}\sum_{m=0}^r\sum_{k=0}^m\frac{q^{(h-r)m}}{[m]_q!}S_1(m-1,k;q)(-1)^k[r]_q^{m-k}
[x+m]_q^n.$$
\end{lemma}
From (\ref{q-id-1}), we can easily see that
\begin{equation}\label{le-ber-id}
[x+m]_q^n=\frac1{(1-q)^n}\sum_{j=0}^n\sum_{l=0}^j\binom nj\binom jl(-1)^{j+l}(1-q)^lq^{mj}[x]_q^l.
\end{equation}
Using (\ref{q-Euler}) and (\ref{le-ber-id}), we obtain the following lemma.
\begin{lemma} \label{Eu-str-pro2}
For $m,n\in \mathbb Z_+,$
$$E_{n,q}(m)=\frac1{(1-q)^n}\sum_{j=0}^n\sum_{l=0}^j\binom nj\binom jl(-1)^{j+l}(1-q)^lq^{mj}E_{l,q}.$$
\end{lemma}
By (\ref{le-ber-id}), Lemma \ref{Eu-str-pro2} and the definition of fermionic $p$-adic integrals on $\mathbb Z_p,$
we obtain the following theorem.
\begin{theorem} \label{mul-q-Eu}
For $h\in\mathbb Z,n\in\mathbb Z_+$ and $r\in\mathbb N,$
$$\begin{aligned}
\int_{\Z}E_{n,q}^{(h,-r)}(x)d\mu_{-1}(x)&=\frac{1}{2^r}\sum_{m=0}^r\sum_{k=0}^m\frac{q^{(h-r)m}}{[m]_q!}S_1(m-1,k;q)(-1)^k[r]_q^{m-k}
E_{n,q}(m) \\
&=\frac{1}{2^r}\sum_{m=0}^r\sum_{k=0}^m\frac{q^{(h-r)m}}{[m]_q!}S_1(m-1,k;q)(-1)^k[r]_q^{m-k} \\
&\quad\times\frac1{(1-q)^n}\sum_{j=0}^n\sum_{l=0}^j\binom nj\binom jl(-1)^{j+l}(1-q)^lq^{mj}E_{l,q}.
\end{aligned}
$$
\end{theorem}
Put $h=0$ in (\ref{h-o-ex-2}). We consider the following polynomials $E_{n,q}^{(0,-r)}(x):$
\begin{equation}\label{h-o-ex-h=0}
\begin{aligned}
E_{n,q}^{(0,-r)}(x)&=\sum_{l=0}^n\frac{(1-q)^{-n}\binom nl(-1)^lq^{lx}}{\int_{\Z}\cdots\int_{\Z}
q^{l(x_1+\cdots+x_r)}q^{-\sum_{j=1}^rjx_j}d\mu_{-1}(x_1)\cdots d\mu_{-1}(x_r)}.
\end{aligned}
\end{equation}
Then
$$E_{n,q}^{(0,-r)}(x)=\frac1{2^r}\sum_{m=0}^r\binom rm_qq^{\binom m2-rm}[m+x]_q^n.$$
A simple calculation of the fermionic $p$-adic invariant integral on $\Z$ shows that
$$\int_{\Z}E_{n,q}^{(0,-r)}(x)d\mu_{-1}(x)=\frac1{2^r}\sum_{m=0}^r\binom rm_qq^{\binom m2-rm}E_{n,q}(m).$$
Using Theorem \ref{mul-q-Eu}, we can also prove that
$$\int_{\Z}E_{n,q}^{(0,-r)}(x)d\mu_{-1}(x)=\frac{1}{2^r}\sum_{m=0}^r\sum_{k=0}^m\frac{q^{-rm}}{[m]_q!}S_1(m-1,k;q)(-1)^k[r]_q^{m-k}
E_{n,q}(m).$$
Therefore, we obtain the following theorem.
\begin{theorem}
For $m\in\mathbb Z_+,r\in\mathbb N$ with $m\leq r,$
$$\binom rm_qq^{\binom m2-rm}
=\frac{1}{[m]_q!}\sum_{k=0}^mq^{-rm}S_1(m-1,k;q)(-1)^k[r]_q^{m-k}.$$
\end{theorem}
\bibliography{central} | {"config": "arxiv", "file": "1007.3317.tex"} |
TITLE: Is the image of an injective immersion automatically both closed and open, depending on what topology you use?
QUESTION [1 upvotes]: I know I must be missing something somewhere; at the risk of being laughed at (a lot) I am asking if someone could point out to me exactly where. The question has to do with submanifolds, topology and open maps.
Say I have an open subset of $R^n$, call it $P$, topology is the relative one. Further, I have an $(n-1)$-dimensional embedded submanifold $N$ of $P$, so its topology is the relative one as well. Now suppose there is a smooth map $\lambda : P \rightarrow R^{n-1}$ whose restriction to $N$ is injective and is of rank $n-1$ everywhere. Consequently, as I understand it, $\lambda$ is an open map, taking every open (in the topology of $N$) set to an open (in the topology of $R^{n-1}$) set. This should also (by complementation) mean that $\lambda$ takes closed sets to closed sets. So here's my question: since $N$ is both open and closed in its own (the relative) topology, shouldn't the image $\lambda(N)$ be both open and closed in $R^{n-1}$? For separate reasons I know this result cannot be true without further conditions, but I can't see my mistake. Is it that I'm not being precise enough about the topology on $R^{n-1}$; e.g. it's a tautology that $\lambda(N)$ is both open and closed in $\lambda(N)$? But then what does an "open map" mean? Only that $\lambda(V)$ is open in $\lambda(N)$ whenever $V$ is open in $N$? That really doesn't help if we want to infer that $\lambda(N)$ is open in $R^{n-1}$.
Sorry for the rambling but my head hurts.
Edit: It appears this question is intimately connected to the concepts of relatively open map and strongly open map. In particular, I'm not certain which one of these $\lambda$ turns out to be. I will clearly need to do more research, although there doesn't seem to be much available on these particular topics.
REPLY [2 votes]: An open map $f:X\to Y$ between spaces $X$ and $Y$ is not necessarily closed. Recall that taking images does not commute with taking complements in the codomain: in general $f(X\setminus A)\neq Y\setminus f(A)$. One always has $f(X)\setminus f(A)\subset f(X\setminus A)$, and injectivity upgrades this to the equality $f(X\setminus A)=f(X)\setminus f(A)$; but $f(X)\setminus f(A)$ equals $Y\setminus f(A)$ only when $f$ is surjective. In any case, complementation won't take you anywhere.
And what I said is true: there are a lot of examples of open (even injective) maps which are not closed. A simple one, which is similar to the situation you are facing, is the inclusion map from $(0,1)$ into $\mathbb{R}$:
$$j:(0,1)\to\mathbb R,\ j(x)=x$$
It is open, since any open subset $U$ of $(0,1)$ is open in $\mathbb{R}$, and $j(U)=U$, but it cannot be closed, since $(0,1)$ is closed in $(0,1)$ and $j(0,1)=(0,1)$ is not closed in $\mathbb{R}$. | {"set_name": "stack_exchange", "score": 1, "question_id": 4515791} |
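To tie this back to the question: since $\lambda|_N$ has rank $n-1$ on the $(n-1)$-dimensional manifold $N$, it is a local diffeomorphism, hence an open map into $\mathbb{R}^{n-1}$, and $\lambda(N)$ is indeed open in $\mathbb{R}^{n-1}$. But nothing forces $\lambda(N)$ to be closed: the fact that $N$ is closed in its own relative topology carries no information (every space is closed in itself), and, exactly as with $j$ above, openness of the map gives no control over images of closed sets.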
TITLE: How to solve $x^{x^x}=(x^x)^x$?
QUESTION [2 upvotes]: How can we solve the equation :
$x^{x^x}=(x^x)^x$
with $x \in \mathbb{R}_+^*$
Thanks for helping me :)
REPLY [1 votes]: Applying $\log$ to both sides gives
$$
x^x\log x = x^2\log x\Rightarrow (x^x-x^2)\log x=0,
$$
so either $\log x = 0$, i.e. $x = 1$, or $x^x=x^2$. Taking logs once more in the latter equation, $x\log x = 2\log x\Rightarrow (x-2)\log x = 0$, so $x = 1$ or $x = 2$. Hence the solutions are $x=1$ and $x=2$.
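A quick sanity check of both solutions: for $x=2$ we get $2^{2^2}=2^4=16$ and $(2^2)^2=4^2=16$, while $x=1$ trivially gives $1=1$.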
\begin{document}
\title[Nodal deficiency]{Nodal deficiency of random spherical harmonics in presence of boundary}
\author{Valentina Cammarota\textsuperscript{1}}
\email{valentina.cammarota@uniroma1.it}
\address{\textsuperscript{1}Department of Statistics, Sapienza University of Rome}
\author{Domenico Marinucci\textsuperscript{2}}
\email{marinucc@mat.uniroma2.it}
\address{\textsuperscript{2}Department of Mathematics, Tor Vergata University of Rome}
\author{Igor Wigman\textsuperscript{3}}
\email{igor.wigman@kcl.ac.uk}
\address{\textsuperscript{3}Department of Mathematics, King's College London}
\dedicatory{Dedicated to the memory of Jean Bourgain}
\date{\today}
\begin{abstract}
We consider a random Gaussian model of Laplace eigenfunctions on the hemisphere satisfying the Dirichlet
boundary conditions along the equator. For this model we find a precise asymptotic law for the corresponding
zero density functions, in both short range (around the boundary) and long range (far away from the boundary)
regimes. As a corollary, we were able to find a logarithmic negative bias for the total nodal length of
this ensemble relatively to the rotation invariant model of random spherical harmonics.
Jean Bourgain's research, and his enthusiastic approach to the nodal geometry of Laplace eigenfunctions, has
made a crucial impact on the field and the current trends within it.
His works on the spectral correlations ~\cite[Theorem 2.2]{KKW} and joint with
Bombieri ~\cite{BB} have opened a door for an active ongoing research on the nodal length
of functions defined on surfaces of arithmetic flavour, like the torus or the square.
Further, Bourgain's work ~\cite{Bourgain derandomization}
on toral Laplace eigenfunctions, also appealing to spectral correlations, allowed for inferring deterministic results
from their random Gaussian counterparts.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Nodal length of Laplace eigenfunctions}
The nodal line of a smooth function $f:\M\rightarrow\R$, defined on a smooth compact surface $\M$, with or without a boundary,
is its zero set $f^{-1}(0)$. If $f$ is non-singular, i.e. $f$ has no critical zeros, then its nodal line is a smooth
curve with no self-intersections. An important descriptor of $f$ is its {\em nodal length}, i.e. the length of $f^{-1}(0)$,
receiving much attention in the last couple of decades, in particular, concerning the nodal length of the eigenfunctions
of the Laplacian $\Delta$ on $\M$, in the high energy limit.
Let $(\phi_{j},\lambda_{j})_{j\ge 1}$ be the Laplace eigenfunctions on $\M$, with energies $\lambda_{j}$ in increasing order
counted with multiplicity, i.e.
\begin{equation}
\label{eq:Helmholts eq}
\Delta \phi_{j}+\lambda_{j}\phi_{j}=0,
\end{equation}
endowed with the Dirichlet boundary conditions $\phi|_{\partial \M}\equiv 0$ in presence of nontrivial boundary.
In this context Yau's conjecture asserts that the nodal length
$\Lc(\phi_{j})$ of $\phi_{j}$ is commensurable with $\sqrt{\lambda_{j}}$, in the sense that
$$c_{\M}\cdot \sqrt{\lambda_{j}}\le \Lc(\phi_{j})\le C_{\M}\cdot \sqrt{\lambda_{j}},$$ with some constants $C_{\M}>c_{\M}>0$. Yau's conjecture was resolved for $\M$ analytic ~\cite{Bruning,Bruning-Gromes,DF}, and, more recently, a lower bound ~\cite{Log Lower} and a polynomial upper bound
~\cite{Log Mal,Log Upper} were asserted in {\em full generality} (i.e., for $\M$ smooth).
\subsection{(Boundary-adapted) random wave model}
In his highly influential work ~\cite{Berry 1977} Berry proposed to compare the high-energy Laplace eigenfunctions on generic chaotic surfaces
and their nodal lines to random monochromatic waves and their nodal lines respectively.
The random monochromatic wave (also called Berry's ``Random Wave Model" or RWM) is the
centred isotropic Gaussian random field $u:\R^{2}\rightarrow\R$ prescribed uniquely by the covariance function
\begin{equation}
\label{eq:cov RWM def}
\E[u(x)\cdot u(y)] = J_{0}(\|x-y\|),
\end{equation}
with $x,y\in\R^{2}$ and $J_{0}(\cdot)$ the Bessel $J$ function.
Let
\begin{equation}
\label{eq:K1 density RWM def}
K_{1}^{u}(x)=\phi_{u(x)}(0)\cdot \E[\|\nabla u(x)\|\big| u(x)=0]
\end{equation}
be the zero density, also called the ``first intensity" function of $u$, with $\phi_{u(x)}$ the probability density function of the random variable
$u(x)$.
In this isotropic case, it is easy to directly evaluate
\begin{equation}
\label{eq:K1 dens isotr expl}
K_{1}^{u}(x) \equiv \frac{1}{2\sqrt{2}},
\end{equation}
and then appeal to the Kac-Rice formula, valid under the easily verified non-degeneracy conditions on the random field $u$,
to evaluate the expected nodal length $\Lc(u;R)$ of $u(\cdot)$ restricted to a radius-$R$ disc $\Bc(R)\subseteq\R^{2}$
to be precisely
\begin{equation}
\label{eq:KacRice isotr RWM}
\E[\Lc(u;R)] =\int\limits_{\Bc(R)}K_{1}^{u}(x)dx=
\frac{1}{2\sqrt{2}}\cdot \area(\Bc(R)).
\end{equation}
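The constant in \eqref{eq:K1 dens isotr expl} can be recovered directly from \eqref{eq:cov RWM def}: since $J_{0}(r)=1-r^{2}/4+O(r^{4})$, at every $x\in\R^{2}$ the random vector $\nabla u(x)$ is centred Gaussian with covariance matrix $\frac{1}{2}I_{2}$, independent of $u(x)\sim \mathcal{N}(0,1)$, whence
$$K_{1}^{u}(x)=\frac{1}{\sqrt{2\pi}}\cdot\E[\|\nabla u(x)\|]=\frac{1}{\sqrt{2\pi}}\cdot\frac{1}{\sqrt{2}}\cdot\sqrt{\frac{\pi}{2}}=\frac{1}{2\sqrt{2}},$$
using that the norm of a standard planar Gaussian vector has expectation $\sqrt{\pi/2}$.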
Berry ~\cite{Berry 2002}
found that, as $R\rightarrow\infty$, the variance $\var(\Lc(u;R))$ satisfies the asymptotic law
\begin{equation}
\label{eq:log var RWM}
\var(\Lc(u;R)) = \frac{1}{256}\cdot R^{2}\log{R} + O(R^{2}),
\end{equation}
much smaller than the a priori heuristic prediction $\var(\Lc(u;R))\approx R^{3}$ made based on the natural scaling of the problem,
due to what is now known as ``Berry's cancellation" ~\cite{wig} of the leading non-oscillatory term of the $2$-point correlation function (also known as the ``second zero intensity").
\vspace{2mm}
Further, in the same work ~\cite{Berry 2002}, Berry studied the effect induced on the nodal length
of eigenfunctions satisfying the Dirichlet condition on a nontrivial boundary, both in its vicinity and far away from it.
With the (infinite) horizontal axis $\{(x_{1},x_{2}):\: x_{2}=0\}\subseteq \R^{2}$ serving as a model for the boundary, he introduced a Gaussian random field $v(x_{1},x_{2}):\R\times\R_{>0}\rightarrow\R$ of {\em boundary-adapted} (non-stationary) monochromatic random waves,
forced to vanish at $x_{2}=0$. Formally, $v(x_{1},x_{2})$ is the limit, as $J\rightarrow\infty$, of the superposition
\begin{equation*}
\frac{2}{\sqrt{J}}\sum\limits_{j=1}^{J}\sin(x_{2}\sin(\theta_{j}))\cdot \cos(x_{1}\cos(\theta_{j})+\phi_{j})
\end{equation*}
of $J$ plane waves of wavenumber $1$ forced to vanish at $x_{2}=0$.
Alternatively, $v$ is the centred Gaussian random field prescribed by the covariance function
\begin{equation}
\label{eq:cov BARWM def}
r_{v}(x,y):=\E[v(x)\cdot v(y)] = J_{0}(\|x-y\|)-J_{0}(\|x-\widetilde{y}\|),
\end{equation}
$x=(x_{1},x_{2})$, $y=(y_{1},y_{2})$, and $\widetilde{y}=(y_{1},-y_{2})$ is the mirror symmetry of $y$;
the law of $v$ is invariant w.r.t. horizontal shifts
\begin{equation}
\label{eq:v hor shift invar}
v(\cdot,\cdot)\mapsto v(a+\cdot,\cdot),
\end{equation}
$a\in\R$, but not the vertical shifts.
By comparing \eqref{eq:cov RWM def} to \eqref{eq:cov BARWM def}, we observe that,
far away from the boundary (i.e. $x_{2},y_{2}\rightarrow\infty$),
$r_{v}(x,y) \approx J_{0}(\|x-y\|)$, so that, in that range,
the (covariance of) boundary-adapted waves converge to the (covariance of) isotropic ones \eqref{eq:cov RWM def}, though the decay of the error term in this approximation is slow and of oscillatory nature.
Intuitively, it means that, at infinity, the boundary has a small impact on the random waves, though
it takes its toll on the {\em nodal bias}, as it was demonstrated by Berry, as follows.
\vspace{2mm}
Let $K_{1}^{v}(x)=K_{1}^{v}(x_{2})$ be the zero density of $v$, defined analogously to \eqref{eq:K1 density RWM def}, depending on
the height $x_{2}$ only, independent of $x_{1}$ by the inherent invariance \eqref{eq:v hor shift invar}. Berry
showed\footnote{Though a significant proportion of the details of the computation were omitted,
we validated Berry's assertions for ourselves.}
that, as $x_{2}\rightarrow 0$,
\begin{equation}
\label{eq:K1 Berry local}
K_{1}^{v}(x_{2}) \rightarrow \frac{1}{2\pi},
\end{equation}
and attributed this ``nodal deficiency" $\frac{1}{2\pi} < \frac{1}{2\sqrt{2}}$, relatively to \eqref{eq:K1 dens isotr expl},
to the a.s. orthogonality of the nodal lines touching the boundary ~\cite[Theorem 2.5]{Cheng}.
Further, as $x_{2}\rightarrow\infty$,
\begin{equation}
\label{eq:K1 Berry global}
K_{1}^{v}(x_{2}) = \frac{1}{2\sqrt{2}} \cdot \left( 1 + \frac{\cos(2x_{2}-\pi/4)}{\sqrt{\pi x_{2}}} -\frac{1}{32\pi x_{2}}+E(x_{2}) \right) ,
\end{equation}
with some prescribed error term\footnote{Here $E(x_{2})$ is of order $\frac{1}{|x_{2}|}$, so not smaller by magnitude than $\frac{1}{32\pi x_{2}}$, but of oscillatory nature, and will not contribute to the Kac-Rice integral along expanding domains, and neither will the term $\frac{\cos(2x_{2}-\pi/4)}{\sqrt{\pi x_{2}}}$.} $E(\cdot)$. In this situation a natural choice of expanding domains is given by the rectangles $\Dc_{R}:=[-1,1]\times [0,R]$, $R\rightarrow\infty$ (say).
As an application of the Kac-Rice formula \eqref{eq:KacRice isotr RWM} in this case, it easily follows that
\begin{equation}
\label{eq:log neg imp}
\E[\Lc(v;\Dc_{R})]= \frac{1}{2\sqrt{2}}\cdot\area(\Dc_{R}) -\frac{1}{32\sqrt{2}\pi}\log{R} + O(1)
\end{equation}
i.e., a logarithmic ``nodal deficiency" relatively to \eqref{eq:KacRice isotr RWM}, impacted by the boundary infinitely many wavelengths away from it. The logarithmic fluctuations \eqref{eq:log var RWM} in the isotropic case $u$, {\em possibly} also holding for $v$, give rise to a hope of detecting the said, also logarithmic, negative boundary impact \eqref{eq:log neg imp} via a single sample of the nodal length or, at least, very few ones.
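For the record, the logarithmic term in \eqref{eq:log neg imp} arises as follows: by Kac-Rice, $\E[\Lc(v;\Dc_{R})]=2\int_{0}^{R}K_{1}^{v}(x_{2})dx_{2}$, the range $x_{2}\in [0,1]$ contributes $O(1)$ by \eqref{eq:K1 Berry local}, the oscillatory terms of \eqref{eq:K1 Berry global} integrate to $O(1)$ upon integration by parts, whereas
$$2\int\limits_{1}^{R}\frac{1}{2\sqrt{2}}\cdot\left(-\frac{1}{32\pi x_{2}}\right)dx_{2}=-\frac{1}{32\sqrt{2}\pi}\log{R}.$$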
\subsection{Random spherical harmonics}
The (unit) sphere $\M=\Sc^{2}$ is one of but few surfaces, where the solutions to the Helmholtz equation \eqref{eq:Helmholts eq}
admit an explicit solution. For a number $\ell\in \Z_{\ge 0}$, the space of solutions of \eqref{eq:Helmholts eq} with
$\lambda=\ell(\ell+1)$ is the $(2\ell+1)$-dimensional space of degree-$\ell$ spherical harmonics, and conversely, all solutions
to \eqref{eq:Helmholts eq} are spherical harmonics of some degree $\ell\ge 0$. Given $\ell\ge 0$, let
$\Ec_{\ell}:=\{\eta_{\ell,1},\ldots \eta_{\ell,2\ell+1}\}$
be any $L^{2}$-orthonormal basis of the space of spherical harmonics of degree $\ell$.
The random field
\begin{equation}
\label{eq:Ttild spher harm full}
\widetilde{T_{\ell}}(x) =
\sqrt{\frac{4 \pi}{2\ell+1}}\sum\limits_{k=1}^{2\ell+1} a_{k}\cdot \eta_{\ell,k}(x),
\end{equation}
with $a_{k}$ i.i.d. standard Gaussian
random variables, is the degree-$\ell$ random spherical harmonics.
The law of $\widetilde{T_{\ell}}$
does not depend on the chosen orthonormal basis
$\Ec_{\ell}$, and is uniquely determined via the covariance function
\begin{equation}
\label{eq:covar RSH}
\E[\widetilde{T_{\ell}}(x)\cdot \widetilde{T_{\ell}}(y)] = P_{\ell}(\cos {d(x,y)}),
\end{equation}
with $P_{\ell}(\cdot)$ the Legendre polynomial of degree $\ell$, and $d(\cdot,\cdot)$ the spherical distance between $x,y\in\Sc^{2}$.
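Indeed, \eqref{eq:covar RSH} is an instance of the addition theorem for spherical harmonics: the kernel
$$\sum\limits_{k=1}^{2\ell+1}\eta_{\ell,k}(x)\eta_{\ell,k}(y)=\frac{2\ell+1}{4\pi}P_{\ell}(\cos d(x,y))$$
is the reproducing kernel of the degree-$\ell$ eigenspace, independent of the choice of the orthonormal basis $\Ec_{\ell}$, and substituting it into \eqref{eq:Ttild spher harm full} yields \eqref{eq:covar RSH}.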
The random fields $\{\widetilde{T_{\ell}}\}$ are the Fourier components in the $L^{2}$-expansion of {\em every} isotropic random field
~\cite{MP}, of interest, for instance, in cosmology and the study of Cosmic Microwave Background radiation (CMB).
\vspace{2mm}
Let $\Lc(\widetilde{T_{\ell}})$ be the total nodal length of $\widetilde{T_{\ell}}$, of high interest for various pure and applied disciplines, including the above. Berard
~\cite{Berard} evaluated the expected nodal length to be precisely
\begin{equation}
\label{eq:Berard exp}
\E[\Lc(\widetilde{T_{\ell}})] = \sqrt{2}\pi\cdot \sqrt{\ell(\ell+1)},
\end{equation}
and, as $\ell\rightarrow\infty$ its variance is asymptotic ~\cite{wig} to
\begin{equation}
\label{eq:var log spher}
\var(\Lc(\widetilde{T_{\ell}})) \sim \frac{1}{32}\log{\ell},
\end{equation}
in accordance with Berry's \eqref{eq:log var RWM}, save for the scaling, and the invariance of the nodal lines w.r.t. the symmetry $x\mapsto -x$
of the sphere, resulting in a doubled leading constant in \eqref{eq:var log spher} relatively to \eqref{eq:log var RWM} suitably scaled.
A more recent proof ~\cite{MRWHP} of the Central Limit Theorem for $\Lc(\widetilde{T_{\ell}})$, asserting the asymptotic Gaussianity of
$$ \frac{\Lc(\widetilde{T_{\ell}}) - \E[\Lc(\widetilde{T_{\ell}})]}{\sqrt{\frac{1}{32}\log{\ell}}},$$ is sufficiently robust to also yield the Central Limit Theorem, as $R\rightarrow\infty$
for the nodal length $\Lc(u;R)$ of Berry's random waves, as it was recently demonstrated ~\cite{Vidotto}, also claimed by ~\cite{NPR}.
\subsection{Principal results: nodal bias for the hemisphere, at the boundary, and far away}
Our principal results concern the hemisphere $\Hc^{2}\subseteq \Sc^{2}$, endowed with the {\em Dirichlet} boundary conditions along the equator.
We will widely use the spherical coordinates $$\Hc^{2}=\{(\theta,\phi):\: \theta\in [0,\pi/2],\,\phi\in [0,2\pi)\},$$ with the equator identified
with $\{\theta=\pi/2\}\subseteq \Hc^{2}$. Here all the Laplace eigenfunctions are necessarily spherical harmonics restricted to $\Hc^{2}$, subject to some extra properties. Recall that a concrete (complex-valued) orthonormal basis of degree $\ell$ is given by the Laplace spherical harmonics
$\{Y_{\ell,m}\}_{m=-\ell}^{\ell}$, given in the spherical coordinates by
$$Y_{\ell,m}(\theta,\phi) = e^{im\phi}\cdot P_{\ell}^{m}(\cos{\theta}),$$ with $P_{\ell}^{m}(\cdot)$
the associated Legendre polynomials of degree $\ell$ and order $m$.
For $\ell\ge 0$, $|m|\le \ell$ the spherical harmonic $Y_{\ell,m}$ obeys the Dirichlet boundary condition on the equator, if and only if
$m\not\equiv\ell\mod{2}$, spanning a subspace of dimension $\ell$ inside the $(2\ell+1)$-dimensional space of spherical harmonics of degree $\ell$
~\cite[Example 4]{HT}.
(Its $(\ell+1)$-dimensional orthogonal complement is the subspace satisfying the Neumann boundary condition.) Conversely, every Laplace eigenfunction on $\Hc^{2}$ is necessarily a spherical harmonic of some degree $\ell\ge 0$ that is a linear combination of $Y_{\ell,m}$ with $m\not\equiv\ell\mod{2}$.
\vspace{2mm}
The principal results of this paper concern the following model of {\em boundary-adapted} random spherical
harmonics
\begin{equation}
\label{eq:BARSH def}
T_{\ell}(x) = \sqrt{ \frac{8 \pi }{2\ell+1} }\sum\limits_{\substack{m=-\ell\\m\not\equiv\ell\mod{2}}}^{\ell}a_{\ell,m}Y_{\ell,m}(x),
\end{equation}
where the $a_{\ell,m}$ are the standard (complex-valued) Gaussian random variables subject to the constraint $a_{\ell,-m}=\overline{a_{\ell,m}}$,
so that $T_{\ell}(\cdot)$ is real-valued. Our immediate concern is the law of $T_{\ell}$, which, as for any centred Gaussian random field, is uniquely determined by its covariance function, claimed by the following proposition.
\begin{proposition} \label{16:52}
The covariance function of $T_{\ell}$ as in \eqref{eq:BARSH def} is given by
\begin{equation}
\label{eq:covar BARSH}
r_{\ell}(x,y):= \E[T_{\ell}(x)\cdot T_{\ell}(y)] = P_{\ell}(\cos d(x,y)) - P_{\ell}(\cos d(x,\overline{y})),
\end{equation}
where $\overline{y}$ is the mirror symmetry of $y$ around the equator, i.e. $y=(\theta,\phi)\mapsto \overline{y} = (\pi-\theta,\phi)$
in the spherical coordinates.
\end{proposition}
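Proposition \ref{16:52} follows from the addition theorem; we sketch the verification, with the $Y_{\ell,m}$ normalized to have unit $L^{2}$-norm. Since $P_{\ell}^{m}(-t)=(-1)^{\ell+m}P_{\ell}^{m}(t)$, one has $Y_{\ell,m}(\overline{y})=(-1)^{\ell+m}Y_{\ell,m}(y)$, so that
$$\sum\limits_{\substack{m=-\ell\\m\not\equiv\ell\mod{2}}}^{\ell}Y_{\ell,m}(x)\overline{Y_{\ell,m}(y)}
=\frac12\sum\limits_{m=-\ell}^{\ell}Y_{\ell,m}(x)\left[\overline{Y_{\ell,m}(y)}-\overline{Y_{\ell,m}(\overline{y})}\right]
=\frac{2\ell+1}{8\pi}\left[P_{\ell}(\cos d(x,y))-P_{\ell}(\cos d(x,\overline{y}))\right],$$
and multiplying by $\frac{8\pi}{2\ell+1}$ gives \eqref{eq:covar BARSH}.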
It is evident, either from the definition or the covariance, that the law of $T_{\ell}$ is invariant w.r.t. rotations of $\Hc^{2}$ around
the axis orthogonal to the equator, that is, in the spherical coordinates,
\begin{equation}
\label{eq:Tl rot z axis}
T_{\ell}(\theta,\phi)\mapsto T_{\ell}(\theta,\phi+\phi_{0}),
\end{equation}
$\phi\in [0,2\pi)$.
The boundary impact of \eqref{eq:covar BARSH} relatively to \eqref{eq:covar RSH} is in perfect harmony with the boundary impact of the covariance
\eqref{eq:cov BARWM def} of Berry's boundary-adapted model relatively to the isotropic case \eqref{eq:cov RWM def}, except that the mirror symmetry
$y\mapsto \widetilde{y}$ relatively to the $x$ axis in the Euclidean situation is substituted by mirror symmetry $y\mapsto\overline{y}$ relatively to the equator for the spherical geometry. These generalize to $2$ dimensions the boundary impact on the ensemble of stationary random trigonometric polynomials on the circle ~\cite{Qualls,GW} resulting in the ensemble of non-stationary random trigonometric polynomials vanishing at the endpoints ~\cite{Dunnage,ADL}.
Let
\begin{align} \label{K1}
K_{1,\ell}(x) = \frac{1}{\sqrt{2\pi}\cdot \sqrt{\var(T_{\ell}(x))}}\E\big[\|\nabla T_{\ell}(x)\|\big| T_{\ell}(x)=0\big],
\end{align}
be the zero density of $T_{\ell}$ which, unlike for the rotation invariant spherical harmonics \eqref{eq:Ttild spher harm full}, genuinely depends on $x\in\Hc^{2}$.
More precisely, by the said invariance w.r.t. \eqref{eq:Tl rot z axis}, the zero density $K_{1,\ell}(x)$ depends on the polar angle $\theta$ only.
We rescale by introducing the variable
\begin{equation}
\label{eq:psi rescale}
\psi = \ell (\pi - 2 \theta),
\end{equation}
and, with a slight abuse of notation, write
$$K_{1,\ell}(\psi)=K_{1,\ell}(x).$$
Our principal result deals with the asymptotics of $K_{1,\ell}(\cdot)$, in two different regimes,
in line with \eqref{eq:K1 Berry local} and \eqref{eq:K1 Berry global} respectively.
\begin{theorem}
\label{thm:main asympt}
\begin{enumerate}
\item For $C>0$ sufficiently large, as $\ell \to \infty$, one has
\begin{align}
\label{eq:nod bias hemi far from boundary}
K_{1,\ell}(\psi)&= \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \left[ 1 + \sqrt{\frac 2 \pi} \frac{1}{\sqrt \psi} \cos\{(\ell+1/2)\psi/\ell-\pi/4\} -\frac{1}{16 \pi \psi} \right. \\
& \;\; \left.+ \frac{15}{16 \pi \psi} \cos\{(\ell+1/2)2\psi/\ell-\pi/2\}\right] +O(\psi^{-3/2} \ell^{-2} ), \nonumber
\end{align}
uniformly for $C < \psi < \pi \ell$, with the constant involved in the $`O'$-notation absolute.
\vspace{0.5cm}
\item For $\ell\ge 1$ one has the uniform asymptotics
\begin{align}
\label{eq:nod bias hemi close to boundary}
K_{1,\ell}(\psi)
= \frac{\ell}{2 \pi} \left[ 1 + O(\ell^{-1}) + O(\psi^2) \right],
\end{align}
with the constant involved in the $`O'$-notation absolute.
\end{enumerate}
\end{theorem}
Clearly, the statement \eqref{eq:nod bias hemi close to boundary} is asymptotic for $\psi$ small only, otherwise yielding the mere bound $K_{1,\ell}(\psi)=O(\ell)$, which is easy.
As a corollary to Theorem \ref{thm:main asympt}, one may evaluate the asymptotic law of the total expected nodal length of $T_{\ell}$,
and detect the negative logarithmic bias relatively to \eqref{eq:Berard exp}, in full accordance with Berry's \eqref{eq:log neg imp}.
\begin{corollary} \label{cor}
As $\ell \to \infty$, the expected nodal length has the following asymptotics:
\begin{align*}
\E[\Lc({T_{\ell}})] = 2 \pi \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} - \frac{1}{32 \sqrt 2} \log(\ell) + O(1).
\end{align*}
\end{corollary}
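In a nutshell (a sketch, with the oscillatory terms requiring an extra integration by parts), Corollary \ref{cor} follows upon integrating the asymptotics of Theorem \ref{thm:main asympt} against the area element of $\Hc^{2}$: in the scaled variable \eqref{eq:psi rescale} one has $2\pi\sin\theta\,d\theta=\frac{\pi}{\ell}\left(1+O(\psi^{2}/\ell^{2})\right)d\psi$, the oscillatory terms in \eqref{eq:nod bias hemi far from boundary} integrate to $O(1)$, the boundary range \eqref{eq:nod bias hemi close to boundary} contributes $O(1)$, whereas the monotone term yields the logarithm:
$$\int\limits_{C}^{\pi\ell}\frac{\sqrt{\ell(\ell+1)}}{2\sqrt{2}}\cdot\left(-\frac{1}{16\pi\psi}\right)\cdot\frac{\pi}{\ell}\,d\psi=-\frac{1}{32\sqrt{2}}\log{\ell}+O(1).$$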
\vspace{2mm}
\subsection*{Acknowledgements}
We are grateful to Ze\'{e}v Rudnick for raising the question addressed within this manuscript.
V.C. has received funding from the Istituto Nazionale di Alta Matematica (INdAM)
through the GNAMPA Research Project 2020 ``Geometria stocastica e campi aleatori".
D.M. is supported by the MIUR Departments of Excellence Program
Math@Tov.
\section{Discussion}
\subsection{Toral eigenfunctions and spectral correlations}
Another surface admitting explicit solutions to the Helmholtz equation \eqref{eq:Helmholts eq} is the standard torus $\Tb^{2}=\R^{2}/\Z^{2}$. Here
the Laplace eigenfunctions with eigenvalue $4\pi^{2}n$ all correspond to an integer $n$ expressible as a sum of two squares, and are given by a sum
\begin{equation}
\label{eq:fn eig torus def}
f_{n}(x)= \sum\limits_{\|\mu\|^{2}=n} a_{\mu}e(\langle \mu,x\rangle)
\end{equation}
over all lattice points $\mu=(\mu_{1},\mu_{2})\in \Z^{2}$ lying on the radius-$\sqrt{n}$ centred circle,
$n$ is a sum of two squares with $e(y):=e^{2\pi iy}$,
$\langle\mu,x\rangle= \mu_{1}x_{1}+\mu_{2}x_{2}$, $x=(x_{1},x_{2})\in\Tb^{2}$. Following ~\cite{ORW}, one endows the eigenspace of $\{f_{n}\}$
with a Gaussian probability measure with the coefficients $a_{\mu}$ standard (complex-valued) i.i.d. Gaussian, save for $a_{-\mu}=\overline{a_{\mu}}$, resulting in the ensemble of ``arithmetic random waves".
\vspace{2mm}
The expected nodal length of $f_{n}$ was computed ~\cite{RW08} to be
\begin{equation}
\label{eq:RW nod length tor}
\E[\Lc(f_{n})] = \sqrt{2}\pi^{2}\cdot \sqrt{n},
\end{equation}
and the useful upper bound
\begin{equation*}
\var(\Lc(f_{n})) \ll \frac{n}{\sqrt{r_{2}(n)}}
\end{equation*}
was also asserted, with $r_{2}(n)$ the number of lattice points lying on the radius-$\sqrt{n}$ circle, or, equivalently, the dimension
of the eigenspace $\{f_{n}\}$ as in \eqref{eq:fn eig torus def}. A precise asymptotic law for $\var(\Lc(f_{n}))$ was subsequently established
~\cite{KKW}, shown to fluctuate, depending on the angular distribution of the lattice points. A non-central non-universal limit
theorem was asserted ~\cite{MRW}, also depending on the angular distribution of the lattice points.
An instrumental key input to both the said asymptotic variance and the limit law
was Bourgain's first nontrivial upper bound ~\cite[Theorem 2.2]{KKW} of $o_{r_{2}(n)\rightarrow\infty}\left(r_{2}(n)^{4}\right)$
for the number of length-$6$ {\em spectral correlations}, i.e. $6$-tuples of lattice points $\{\mu:\:\|\mu\|^{2}=n\}$ summing up to $0$.
Bourgain's bound was subsequently improved and generalized to higher order correlations ~\cite{BB}, in various degrees of generality,
conditionally or unconditionally. These results are still actively used within the subsequent and ongoing research, in particular,
~\cite{Bourgain derandomization} and its followers.
\subsection{Boundary impact}
It makes sense to compare the torus to the square with Dirichlet boundary, and test what kind of impact it would have relatively to
\eqref{eq:RW nod length tor} on the expected nodal length, as the ``boundary-adapted arithmetic random waves", that were addressed in ~\cite{CKW}.
It was concluded, building on Bourgain-Bombieri's ~\cite{BB}, and by appealing to a
different notion of spectral correlation,
namely, the spectral {\em semi-correlations}, that, even at the level of expectation, the total nodal bias is fluctuating from nodal deficiency (negative bias) to nodal surplus (positive bias),
depending on the angular distribution of the lattice points and its interaction with the direction of the square boundary, at least,
for generic energy levels.
A similar experiment, conducted by Gnutzmann-Lois for cuboids of arbitrary dimensions, averaging over eigenfunctions admitting separation of variables and belonging to different eigenspaces, revealed consistency with Berry's nodal deficiency ansatz
stemming from \eqref{eq:log neg imp}.
\vspace{2mm}
It would be useful to test whether different Gaussian random fields on the square would result in a different limiting nodal bias around the boundary, corresponding to \eqref{eq:nod bias hemi close to boundary}; this is likely to bring in a different notion of spectral correlation, quite possibly the ``quasi-semi-correlations" ~\cite{BMW,KS}. Another question of interest is to ``de-randomize" any of these results, i.e. to infer the corresponding results on deterministic eigenfunctions following Bourgain ~\cite{Bourgain derandomization}. We leave all of these to be addressed elsewhere.
\section{Joint distribution of $(T_{\ell}(x),\nabla T_{\ell}(x))$}
In the analysis of $K_{1,\ell}(x)$ we naturally encounter the distribution of $T_{\ell}(x)$, determined by $${\rm Var}(T_{\ell}(x))=1-P_{\ell}(\cos d(x, \bar{x}));$$ and the distribution of $\nabla T_{\ell}(x)$ conditioned on $T_{\ell}(x)=0$, determined by its $2 \times 2$ covariance matrix
$${\bf \Omega}_{\ell}(x)=\mathbb{E}[\nabla T_{\ell}(x) \cdot \nabla^t T_{\ell}(x) | T_{\ell}(x)=0].$$
Let $x$ correspond to the spherical coordinates $(\theta,\phi)$. An explicit computation shows that the covariance matrix ${\bf \Omega}_{\ell}(x)$ depends only on $\theta$, and below we will often abuse notation to write ${\bf \Omega}_{\ell}(\theta)$ instead, and also, when convenient,
${\bf \Omega}_{\ell}(\psi)$ with $\psi$ as in \eqref{eq:psi rescale}. A direct computation shows that:
\begin{lemma} \label{13:48}
The $2 \times 2$ covariance matrix of $\nabla T_{\ell}(x)$ conditioned on $T_{\ell}(x)=0$ is the following real symmetric matrix
\begin{align}
\label{eq:Omega eval}
{\bf \Omega}_\ell(x)= \frac{\ell(\ell+1)}{2} \left[ {\bf I}_2 + {\bf S}_\ell(x) \right],
\end{align}
where
\begin{align*}
{\bf S}_\ell(x)&= \left( \begin{array}{cc}
S_{11,\ell}(x)& 0\\
0& S_{22,\ell}(x)
\end{array}\right),
\end{align*}
and for $x=(\theta,\phi)$
\begin{align*}
S_{11,\ell}(x)&=-\frac{2}{\ell (\ell+1)} \Big[\cos(2 \theta) \; P'_{\ell}(\cos(\pi-2 \theta))+ \sin^2(2 \theta) \; P''_{\ell}(\cos(\pi-2 \theta)) \\
&\;\;+ \frac{1}{1- P_{\ell}(\cos(\pi-2 \theta))} \sin^2(2 \theta) \; [P'_{\ell}(\cos(\pi-2 \theta)) ]^2\Big],\\
S_{22,\ell}(x)&= -\frac{2}{\ell (\ell+1)} P'_{\ell}(\cos(\pi-2 \theta)).
\end{align*}
\end{lemma}
In the next two subsections we prove Lemma \ref{13:48}, that is, we evaluate the $2 \times 2$ covariance matrix of $\nabla T_{\ell}(x)$ conditioned upon $T_{\ell}(x)=0$. First, in subsection \ref{uncond}, we evaluate the unconditional $3 \times 3$ covariance matrix ${\bf \Sigma}_{\ell}(x)$ of $(T_{\ell}(x), \nabla T_{\ell}(x))$ and then, in subsection \ref{con_cov_matrix}, we apply the standard procedure for conditioning multivariate Gaussian random variables.
\subsection{The unconditional covariance matrix} \label{uncond}
The covariance matrix of $$(T_{\ell}(x), \nabla T_{\ell}(x))$$ can be expressed as
\begin{align*}
{\bf \Sigma}_{\ell}(x)=\left(
\begin{array}{cc}
{\bf A}_{\ell}(x) & {\bf B}_{\ell}(x) \\
{\bf B}_{\ell}^t(x) & {\bf C}_{\ell}(x)
\end{array}
\right),
\end{align*}
where
\begin{align*}
{\bf A}_{\ell}(x)&={\rm Var}(T_{\ell}(x)),\\%=P_{\ell}(1)- P_{\ell}(\cos(\pi-2 \theta)),\\
{\bf B}_{\ell}(x)&=\mathbb{E}[T_{\ell}(x)\cdot \nabla_{y} T_{\ell}(y)] \big|_{x=y},\\
{\bf C}_{\ell}(x)&=\mathbb{E}[ \nabla_x T_{\ell}(x) \otimes \nabla_{y} T_{\ell}(y)] \big|_{x=y}.
\end{align*}
The $1 \times 2$ matrix ${\bf B}_{\ell}(x)$ is
\begin{align*}
{\bf B}_{\ell}(x)=\left(
\begin{array}{cc}
B_{\ell,1}(x) &
B_{\ell,2}(x)
\end{array}
\right),
\end{align*}
where ${\bf B}_{\ell}(x)$ depends only on $\theta$, and by an abuse of notation we write
\begin{align*}
B_{\ell,1}(x)&=\frac{\partial}{\partial \theta_{y} }r_\ell(x,y) \Big|_{x=y} = - \sin(2 \theta) \cdot P'_{\ell}(\cos(\pi-2 \theta)), \\
B_{\ell,2}(x)&=\frac{1}{\sin \theta_{y}} \cdot\frac{\partial}{\partial \phi_{y}} r_\ell(x,y) \Big|_{x=y} =0.
\end{align*}
The entries of the $2 \times 2$ matrix ${\bf C}_{\ell}(x)$ are
\begin{align*}
{\bf C}_{\ell}(x)=\left(
\begin{array}{cc}
C_{\ell,11}(x) & C_{\ell,12}(x) \\
C_{\ell,21}(x) & C_{\ell,22}(x)
\end{array}
\right),
\end{align*}
where again recalling that $x=(\theta,\phi)$ we write
\begin{align*}
C_{\ell,11}(x)&=\frac{\partial}{\partial \theta_x} \frac{\partial}{\partial \theta_{y}} r_\ell(x,y) \Big|_{x=y}\\
&=P'_{\ell}(1)- \cos(2 \theta) \; P'_{\ell}(\cos(\pi-2 \theta)) - \sin^2(2 \theta) \; P''_{\ell}(\cos(\pi-2 \theta)),\\
C_{\ell,12}(x)&= C_{\ell,21}(x)= \frac{1}{\sin \theta_{y} }\frac{\partial}{\partial \phi_{y}} \frac{\partial}{\partial \theta_x} r_\ell (x,y) \Big|_{x=y} = 0,\\
C_{\ell,22}(x)&= \frac{1}{\sin \theta_{y}}\frac{\partial}{\partial \phi_{y} } \frac{1}{\sin \theta_x}\frac{\partial}{\partial \phi_x} r_\ell(x,y) \Big|_{x=y} = P'_{\ell}(1)-P'_{\ell}(\cos(\pi-2 \theta)).
\end{align*}
\subsection{Conditional covariance matrix} \label{con_cov_matrix}
The conditional covariance matrix of the Gaussian vector $(\nabla T_{\ell}(x)|T_{\ell}(x)=0)$ is given by the standard Gaussian transition formula:
\begin{align}
\label{eq:Omega transition}
{\bf \Omega}_\ell(x)={\bf C}_\ell(x) - \frac{1}{\var(T_{\ell}(x))} {\bf B}^t_\ell(x) {\bf B}_\ell(x).
\end{align}
Again taking $x=(\theta,\phi)$ and observing that
\begin{align*}
\frac{{\bf B}^t_\ell(x) {\bf B}_\ell(x) }{\var(T_{\ell}(x))} = \frac{1}{1- P_{\ell}(\cos(\pi-2 \theta))}
\left(
\begin{array}{cc}
\sin^2(2 \theta) \cdot [P'_{\ell}(\cos(\pi-2 \theta)) ]^2 &0 \\
0 &0
\end{array}
\right),
\end{align*}
and $$P'_{\ell}(1)=\frac{\ell(\ell+1)}{2},$$ we have
\begin{align*}
{\bf \Omega}_\ell(x) &= \frac{\ell (\ell+1)}{2} {\bf I}_2 \\
&\;\;- \left(
\begin{array}{cc}
\cos(2 \theta) \cdot P'_{\ell}(\cos(\pi-2 \theta)) + \sin^2(2 \theta)\cdot P''_{\ell}(\cos(\pi-2 \theta)) & 0 \\
0 & P'_{\ell}(\cos(\pi-2 \theta))
\end{array}
\right) \\
&\;\; - \frac{1}{1- P_{\ell}(\cos(\pi-2 \theta))}
\left(
\begin{array}{cc}
\sin^2(2 \theta) \cdot [P'_{\ell}(\cos(\pi-2 \theta)) ]^2 &0 \\
0 &0
\end{array}
\right),
\end{align*}
which is the statement of Lemma \ref{13:48}.
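As a sanity check, observe that for $\theta$ bounded away from the equator the Legendre terms above are evaluated at an argument bounded away from $1$, so that, by the high-degree asymptotics of the Legendre polynomials (Lemma \ref{hilb0} below), $S_{11,\ell}(x),S_{22,\ell}(x)\to0$ as $\ell\to\infty$; hence ${\bf \Omega}_{\ell}(x)\approx\frac{\ell(\ell+1)}{2}{\bf I}_{2}$, consistently with the conditional covariance matrix of the gradient for the rotation invariant ensemble \eqref{eq:Ttild spher harm full}.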
\vspace{0.5cm}
\section{Proof of Theorem \ref{thm:main asympt}(1): Perturbative analysis away from the boundary}
\subsection{Perturbative analysis}
The asymptotic analysis \eqref{eq:nod bias hemi far from boundary} is in two steps.
First, we evaluate the variance ${\rm Var}(T_{\ell}(x))$ and each entry in ${\bf S}_\ell(x)$ using the high degree asymptotics of the Legendre polynomials and its derivatives (Hilb's asymptotics). In the second step, performed within Proposition \ref{prop:19:31}, we exploit the
analyticity of the
Gaussian expectation \eqref{K1} as a function of the entries of the corresponding {\em non-singular} covariance matrix,
to Taylor expand $K_{1,\ell}(x)$ where both ${\rm Var}(T_{\ell}(x))-1$ and the entries of ${\bf S}_\ell(x)$ are assumed to be small.
\begin{lemma}[Hilb's asymptotics]
\label{hilb0}
\begin{equation*}
P_\ell(\cos \varphi)=\left( \frac{\varphi}{\sin \varphi}\right)^{1/2} J_0((\ell+1/2) \varphi)+\delta_\ell(\varphi),
\end{equation*}
uniformly for $0 \le \varphi \le \pi-\varepsilon$, where $J_0$ is the Bessel function of the first kind. For the error term we have the bounds
\begin{align*}
\delta_\ell(\varphi) \ll
\begin{cases}
\varphi^2 O(1), & 0 < \varphi \le C/\ell, \\
\varphi^{1/2} O(\ell^{-3/2}), & C/ \ell \le \varphi \le \pi - \varepsilon ,
\end{cases}
\end{align*}
where $C$ is a fixed positive constant and the constants involved in the $O$-notation depend on $C$ only.
\end{lemma}
\begin{lemma}
\label{bessel} The following asymptotic representation for the Bessel functions of the first kind holds:
\begin{align*}
J_0(x)&=\left( \frac{2}{\pi x} \right)^{1/2} \cos(x- \pi/4)
\sum_{k=0}^\infty (-1)^k g(2k) \; (2 x)^{-2 k} \\
&\;\;+\left( \frac{2}{\pi x} \right)^{1/2} \cos(x+ \pi/4)
\sum_{k=0}^\infty (-1)^k g(2k+1)\; (2 x)^{-2 k-1},
\end{align*}
where $\varepsilon>0$, $|\arg x|\le \pi-\varepsilon$, $g(0)=1$ and $g(k)=\frac{(-1)(-3^2) \cdots (-(2k-1)^2)}{2^{2k} k!}=(-1)^k \frac{[(2k-1)!!]^2}{2^{2k} k!}$.
\end{lemma}
For a proof of Lemma \ref{hilb0} and Lemma \ref{bessel} we refer to \cite[Theorem 8.21.6]{szego} and \cite[section 5.11]{lebedev} respectively.
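For later use, keeping only the $k=0$ terms in both sums of Lemma \ref{bessel} (so that $g(0)=1$ and $g(1)=-\frac14$), and using $\cos(x+\pi/4)=-\sin(x-\pi/4),$ one recovers the familiar two-term expansion
\begin{align*}
J_0(x)=\left(\frac{2}{\pi x}\right)^{1/2}\left[\cos(x-\pi/4)+\frac{1}{8x}\sin(x-\pi/4)\right]+O(x^{-5/2}),
\end{align*}
which is the shape of the asymptotics exploited below.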
Recall the scaled variable $\psi$ related to $\theta$ via \eqref{eq:psi rescale}, so that an application of Lemmas \ref{hilb0} and \ref{bessel} yields that, for $\ell \ge 1$ and $C< \psi < \ell \pi$,
\begin{align*}
P_{\ell}(\cos(\psi/\ell))
&=\sqrt{\frac{2}{\pi}} \frac{\ell^{-1/2}}{\sin^{1/2} (\psi/\ell)}\Big[ \cos ((\ell+1/2)\psi/\ell-\pi/4) - \frac{1}{2 \ell \psi/\ell} \cos ((\ell+1/2)\psi/\ell+\pi/4) \Big]\\
&\;\;+ O((\psi/\ell)^{1/2} \ell^{-3/2} ).
\end{align*}
Observing that
\begin{align*}
&\frac{\ell^{-1/2}}{\sin^{1/2} (\psi/\ell)} = \ell^{-1/2} \left[\frac{1}{\sqrt{\psi/\ell}} +O((\psi/\ell)^{\frac 3 2}) \right] = \frac{1}{\sqrt \psi} + O(\psi^{3/2} \ell^{-2}),\\
&\frac{\ell^{-1/2}}{\sin^{1/2} (\psi/\ell)} \frac{1}{2 \psi} =O( \psi^{-3/2}),
\end{align*}
we write
\begin{align} \label{22:31}
P_{\ell}(\cos(\psi/\ell))=\sqrt{\frac{2}{\pi}}\frac{1}{\sqrt \psi} \cos ((\ell+1/2)\psi/\ell-\pi/4)+O(\psi^{-3/2})+O(\psi^{3/2} \ell^{-2}).
\end{align}
A repeated application of Lemmas \ref{hilb0} and \ref{bessel} also yields an asymptotic estimate for the first two derivatives of the Legendre polynomials \cite[Lemma 9.3]{CMW}:
\begin{align}
P'_{\ell }(\cos (\psi/\ell) )&=\sqrt{\frac{2}{\pi }}\frac{\ell^{1-1/2}}
{\sin ^{1+1/2} (\psi/\ell) }\left[\sin ((\ell+1/2) \psi/\ell-\pi/4) -\frac{1}{8\ell \psi/\ell }\sin
((\ell+1/2)\psi/\ell+\pi/4) \right] \nonumber \\
&\;\;+O(\ell^{- \frac 1 2} (\psi/\ell)^{-\frac 5 2 }), \nonumber
\end{align}
and
\begin{align}
&P''_{\ell }(\cos (\psi/\ell) ) \nonumber\\
&=\sqrt{\frac{2}{\pi }} \frac{\ell^{2-1/2}}{\sin ^{2+1/2} (\psi/\ell) }\left[-\cos ((\ell+1/2)\psi/\ell-\pi/4) +\frac{1}{8\ell \psi/\ell }\cos ((\ell+1/2)\psi/\ell+\pi/4)\right] \nonumber \\
&\;\; -\sqrt{\frac{2}{\pi }} \frac{\ell ^{1-1/2}}{\sin ^{3+1/2} (\psi/\ell) }\left[\cos
((\ell-1+1/2)\psi/\ell+\pi/4)+\frac{1}{8\ell \psi/\ell }\cos ((\ell-1+1/2)\psi/\ell-\pi/4) \right] \nonumber \\
&\;\; +O(\psi^{-7/2} \ell^{4}). \nonumber
\end{align}
\vspace{2mm}
Since we have that
\begin{align*}
&\frac{\ell^{1-1/2}}{\sin^{1+1/2} (\psi/\ell) }=\ell^{1-1/2} \left[ \frac{1}{(\psi/\ell)^{3/2}}+O((\psi/\ell)^{1/2}) \right]= \frac{\ell^{2}}{\psi^{3/2}} +O(\psi^{1/2}) \\
&\frac{\ell^{1-1/2}}{\sin ^{1+1/2} (\psi/\ell) } \frac{1}{ \psi} =O( \psi^{-5/2} \ell^2),
\end{align*}
we have
\begin{align} \label{22:32}
P'_{\ell }(\cos (\psi/\ell) )=\sqrt{\frac{2}{\pi }} \frac{\ell ^{1-1/2}}{\sin ^{1+1/2} (\psi/\ell) } \sin ((\ell+1/2)\psi/\ell-\pi/4) + O(\psi^{-5/2} \ell^{2}), \end{align}
and observing that
\begin{align*}
& \frac{\ell^{2-1/2}}{\sin ^{2+1/2} (\psi/\ell) }=\ell^{2-1/2} \left[\frac{1}{(\psi/\ell)^{5/2}} + O((\psi/\ell)^{- 1/2} )\right]=\frac{\ell^4}{\psi^{5/2}} +O(\psi^{-1/2} \ell^2) \\
& \frac{\ell^{2-1/2}}{\sin ^{2+1/2} (\psi/\ell) } \frac{1}{\psi} =O(\psi^{-7/2} \ell^{4}) \\
& \frac{\ell^{1-1/2}}{\sin ^{3+1/2} (\psi/\ell) }=\ell^{1-1/2} \left[\frac{1}{(\psi/\ell)^{7/2}} + O((\psi/\ell)^{-3/2}) \right]=\frac{\ell^4}{\psi^{7/2}}+O(\psi^{-3/2} \ell^2)
\end{align*}
we obtain
\begin{equation} \label{22:33}
\begin{split}
P''_{\ell }(\cos (\psi/\ell) ) &=- \sqrt{\frac{2}{\pi }} \frac{\ell^{2-1/2}}{\sin ^{2+1/2} (\psi/\ell) } \cos ((\ell+1/2)\psi/\ell-\pi/4) \\&+O(\psi^{-3/2} \ell^2) +O(\psi^{-7/2} \ell^4).
\end{split}
\end{equation}
The estimates in \eqref{22:31}, \eqref{22:32} and \eqref{22:33}, imply that for $\ell \ge 1$ and uniformly for $C< \psi < \ell \pi$, with $C>0$, we have
\begin{align} \label{sell}
P_{\ell}(\cos(\psi/\ell))&=\sqrt{\frac{2}{\pi}}\frac{1}{\sqrt \psi}\cos ((\ell+1/2)\psi/\ell -\pi/4)+O(\psi^{-3/2})+O(\psi^{3/2} \ell^{-2}),\\
\{P_{\ell}(\cos(\psi/\ell))\}^2&= \frac{2}{\pi} \frac{1}{ \psi} \cos^2 ((\ell+1/2)\psi/\ell -\pi/4)+O(\psi^{-2})+O(\psi \ell^{-2}). \nonumber
\end{align}
With the same abuse of notation as above,
we write ${\bf S}_\ell(\psi):={\bf S}_\ell(x)$ as in Lemma \ref{13:48}, and in an analogous manner for its individual entries
$S_{ij;\ell}(\psi):=S_{ij;\ell}(x)$. We have
\begin{align}
S_{11;\ell}(\psi)&=2 \sqrt{\frac 2 \pi} \frac{1}{\sqrt \psi} \cos((\ell+1/2)\psi/\ell-\pi/4)- \frac{4}{\pi} \frac{1}{\psi} \sin^2((\ell+1/2)\psi/\ell-\pi/4) \label{Sell11} \\&
\;\;+O(\psi^{-3/2})+O(\psi^{3/2} \ell^{-2}), \nonumber \\
S_{22;\ell}(\psi)
&=- 2 \sqrt{\frac{2}{\pi }} \frac{1}{\psi^{3/2}} \sin ((\ell+1/2)\psi/\ell-\pi/4) +O(\psi^{1/2} \ell^{-2})+ O(\psi^{-5/2} ). \label{Sell22}
\end{align}
The next proposition prescribes a precise asymptotic expression for the density function $K_{1,\ell}(\cdot)$ via a Taylor expansion of the relevant Gaussian expectation as a function of the associated covariance matrix entries.
\begin{proposition} \label{prop:19:31}
For $C >0$ sufficiently large we have the following expansion on $C < \psi < \ell \pi$:
\begin{align} \label{taylorx}
K_{1,\ell}(\psi) &=\frac{ \sqrt{\ell(\ell+1)}}{2 \sqrt 2} + L_{\ell}(\psi)+E_{\ell}(\psi),
\end{align}
with the leading term
\begin{align*}
L_{\ell}(\psi)&=\frac{ \sqrt{\ell(\ell+1)}}{4 \sqrt 2} \left[ s_{\ell}(\psi) + \frac{1}{2 } {\rm tr} \,{\bf S}_{\ell}(\psi) + \frac{3}{4 } s^2_{\ell}(\psi) + \frac{1}{4 } s_{\ell}(\psi)\,{\rm tr}\,{\bf S}_{\ell}(\psi)- \frac{1}{16 } {\rm tr} \, {\bf S}^2_{\ell}(\psi) - \frac{1}{32 } ({\rm tr} \, {\bf S}_{\ell}(\psi))^2 \right],
\end{align*}
where $s_{\ell}(\psi)=P_{\ell}(\cos(\psi/\ell))$, and the error term $E_{\ell}(\psi)$ is bounded by
\begin{align*}
|E_{\ell}(\psi)|=O(\ell \cdot (|s_\ell(\psi)|^3 + |{\bf S}_\ell(\psi)|^3 )),
\end{align*}
with constant involved in the $O$-notation absolute.
\end{proposition}
\begin{proof}
To prove Proposition \ref{prop:19:31} we perform a precise Taylor analysis for the density function $K_{1,\ell}(\psi)$, assuming that both $s_{\ell}(\psi)$ and the entries of ${\bf S}_{\ell}(\psi)$ are small.
We introduce the scaled covariance matrix (see \eqref{eq:Omega eval})
\begin{align*}
{\bf \Delta}_\ell(\psi)= \frac{2}{\ell (\ell+1)} {\bf \Omega}_\ell(\psi) &= {\bf I}_2 + {\bf S}_\ell(\psi).
\end{align*}
The density function $K_{1,\ell}(\cdot)$ can be expressed as
\begin{align*}
K_{1,\ell}(\psi)= \frac{1}{\sqrt{2 \pi}} \frac{1}{\sqrt{1-s_\ell(\psi)}} \frac{1}{2 \pi \sqrt{ \text{det} \, {\bf \Delta}_\ell(\psi) }}
\frac{\sqrt{\ell(\ell+1)}}{\sqrt 2} \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac 1 2 z {\bf \Delta}^{-1}_\ell(\psi) z^t \Big\} d z.
\end{align*}
On $(C,\pi \ell)$, with $C$ sufficiently large, we Taylor expand
\begin{align*}
\frac{1}{\sqrt{1-s_\ell(\psi)}}=1+ \frac{1}{2} s_\ell(\psi) + \frac 3 8 s^2_\ell(\psi) +O(s^3_\ell(\psi)),
\end{align*}
since, using the high degree asymptotics of the Legendre polynomials (Hilb's asymptotics), we see that $|P_{\ell}(\cos(\psi/\ell))|$ is bounded away from $1$. Next, we consider the Gaussian integral
\begin{align*}
\mathcal{I}({\bf S}_\ell(\psi))= \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac 1 2 z (I_2 + {\bf S}_\ell(\psi))^{-1} z^t \Big\} d z,
\end{align*}
observing that on $(C,\pi \ell)$, for $C$ sufficiently large, we can Taylor expand
\begin{align*}
(I_2 + {\bf S}_\ell(\psi))^{-1}&=I_2 - {\bf S}_\ell(\psi) + {\bf S}^2_\ell(\psi) +O({\bf S}^3_\ell(\psi)),
\end{align*}
and the exponential as follows
\begin{align*}
&\exp \Big\{ -\frac 1 2 z (I_2 + {\bf S}_\ell(\psi))^{-1} z^t \Big\}\\
&\;\; = \exp \Big\{ -\frac {z z^t} 2 \Big\} \Big[ 1+ \frac 1 2 z \Big( {\bf S}_\ell(\psi) - {\bf S}^2_\ell(\psi) +O({\bf S}^3_\ell(\psi)) \Big) z^t \\
&\hspace{0.5cm} + \frac{1}{2} \Big( \frac{1}{2} z ( {\bf S}_\ell(\psi) - {\bf S}^2_\ell(\psi) +O({\bf S}^3_\ell(\psi))) z^t \Big)^2 + O\Big( z ( {\bf S}_\ell(\psi) - {\bf S}^2_\ell(\psi) +O({\bf S}^3_\ell(\psi))) z^t \Big)^3 \Big],
\end{align*}
so that
\begin{align*}
\mathcal{I}({\bf S}_\ell(\psi))
&= \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} \Big[ 1+ \frac 1 2 z {\bf S}_\ell(\psi) z^t - \frac 1 2 z {\bf S}^2_\ell(\psi) z^t + \frac{1}{8} \Big( z {\bf S}_\ell(\psi) z^t \Big)^2 \Big] d z+ O({\bf S}^3_\ell(\psi)).
\end{align*}
We introduce the following notation:
\begin{align*}
\mathcal{I}_0({\bf S}_\ell(\psi))&= \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac { z z^t} 2 \Big\} d z= 2 \pi \int_{0}^{\infty} \rho \exp\big\{-\frac{1}{2} \rho^2 \big \} \rho \, d \rho = 2 \pi \sqrt{\frac{\pi}{2}} = \sqrt{2} \pi^{3/2},
\end{align*}
\begin{align*}
\mathcal{I}_1({\bf S}_\ell(\psi))&= \frac 1 2 \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} z {\bf S}_\ell(\psi) z^t d z
= \frac{3 }{2^{3/2}} \pi^{3/2} {\rm tr} \, {\bf S}_{\ell}(\psi),
\end{align*}
and
\begin{align*}
\mathcal{I}_2({\bf S}_\ell(\psi))&= - \frac 1 2 \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} z {\bf S}^2_\ell(\psi) z^t d z \\
&= - \frac 1 2 \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} \big( S^2_{11;\ell}(\psi) z_1^2 +S^2_{22;\ell}(\psi) z_2^2 \big) d z \\
&= - \frac{3 }{2^{3/2}} \pi^{3/2} \; {\rm tr} \, {\bf S}^2_{\ell}(\psi).
\end{align*}
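Both $\mathcal{I}_1({\bf S}_\ell(\psi))$ and $\mathcal{I}_2({\bf S}_\ell(\psi))$ reduce to the same radial moment: by the rotational symmetry of $||z|| \exp\{-z z^t/2\}$,
\begin{align*}
\iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} z_1^2 \, d z= \pi \int_{0}^{\infty} \rho^4 \exp\big\{-\frac{1}{2} \rho^2 \big\} d \rho = \frac{3}{\sqrt 2} \pi^{3/2},
\end{align*}
and the same holds with $z_1$ replaced by $z_2$.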
We also define
\begin{align} \label{14:22}
\mathcal{I}_3({\bf S}_\ell(\psi))&= \frac{1}{8} \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac { z z^t} 2 \Big\} \Big( z {\bf S}_\ell(\psi) z^t \Big)^2 d z \nonumber \\
&=\frac{1}{8} \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} \big( S^2_{11;\ell}(\psi) z_1^4+S^2_{22;\ell}(\psi) z_2^4
+2S_{11;\ell}(\psi) S_{22;\ell}(\psi) z_1^2 z_2^2 \big) d z,
\end{align}
and note that
\begin{equation}
\begin{split}
\label{eq:(z1^2+z2^2)^2}
&\iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} (z_1^2 + z_2^2)^2 d z = 2 \pi \int_0^{\infty} \rho \exp \Big\{ - \frac{\rho^2}{2} \Big\} \rho^4 \rho d \rho = 2 \frac{15}{\sqrt 2} \pi^{3/2},\\
&\iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} z_1^4 d z = \frac{15}{ \sqrt 2} \frac{3}{4} \pi^{3/2},
\end{split}
\end{equation}
and that
\begin{equation}
\label{int z1^2z2^2}
\begin{split}
& \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac {z z^t} 2 \Big\} z_1^2 z_2^2 d z \\
&= \frac 1 2 \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac 1 2 z z^t \Big\} (z_1^2 + z_2^2)^2 d z - \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac 1 2 z z^t \Big\} z_1^4 d z
= \frac{15}{\sqrt 2} \frac 1 4 \pi^{3/2}.
\end{split}
\end{equation}
Substituting \eqref{eq:(z1^2+z2^2)^2} and \eqref{int z1^2z2^2} into \eqref{14:22}, we obtain
\begin{align*}
\mathcal{I}_3({\bf S}_\ell(\psi))&=\frac{1}{8} \frac{15}{4 \sqrt 2} \pi^{3/2} \Big( 3 S^2_{11;\ell}(\psi) + 3 S^2_{22;\ell}(\psi) +2S_{11;\ell}(\psi) S_{22;\ell}(\psi) \Big)\\
&= \frac{15 \sqrt 2 }{64} \pi^{3/2} \{2 {\rm tr} \,{\bf S}^2_{\ell}(\psi) + [ {\rm tr} \,{\bf S}_{\ell}(\psi)]^2 \}.
\end{align*}
Write
\begin{align*}
&\mathcal{I}({\bf S}_\ell(\psi))\\
&= \mathcal{I}_0({\bf S}_\ell(\psi)) +\mathcal{I}_1({\bf S}_\ell(\psi))+\mathcal{I}_2({\bf S}_\ell(\psi))+\mathcal{I}_3({\bf S}_\ell(\psi)) + O({\bf S}^3_{\ell}(\psi))\\
&= \sqrt{2} \pi^{3/2} + \frac{3 }{2^{3/2}} \pi^{3/2} \; {\rm tr} \,{\bf S}_{\ell}(\psi) - \frac{9 }{16 \sqrt 2} \pi^{3/2} \; {\rm tr} \,{\bf S}^2_{\ell}(\psi)+ \frac{15 \sqrt 2 }{64} \pi^{3/2} [{\rm tr} \,{\bf S}_{\ell}(\psi)]^2 + O({\bf S}^3_{\ell}(\psi)).
\end{align*}
We finally expand
\begin{align*}
\frac{1}{ \sqrt{ \text{det} \, {\bf \Delta}_\ell(\psi) }} = \frac{1}{\sqrt{ \text{det} ( I_2 + {\bf S}_\ell(\psi))}};
\end{align*}
note that
\begin{align*}
\text{det}(I_2+{\bf S}_{\ell}(\psi))&=[1+S_{11;\ell}(\psi)][1+S_{22;\ell}(\psi)]=1+{\rm tr} \,{\bf S}_{\ell}(\psi) + {\rm det} \,{\bf S}_{\ell}(\psi) ,
\end{align*}
and so,
\begin{align*}
\frac{1}{ \sqrt{ \text{det} \, {\bf \Delta}_\ell(\psi) }} &=1 - \frac{1}{2} \big[ {\rm tr} \,{\bf S}_{\ell}(\psi) + {\rm det} \,{\bf S}_{\ell}(\psi) \big] + \frac 3 8 \big[ {\rm tr} \,{\bf S}_{\ell}(\psi) + {\rm det} \,{\bf S}_{\ell}(\psi) \big]^2 + O({\bf S}^3_{\ell}(\psi))\\
&= 1- \frac{1}{2} {\rm tr} \,{\bf S}_{\ell}(\psi) - \frac{1}{2} {\rm det} \,{\bf S}_{\ell}(\psi) + \frac 3 8 [{\rm tr} \,{\bf S}_{\ell}(\psi)]^2 + O({\bf S}^3_{\ell}(\psi))\\
&=1- \frac{1}{2} {\rm tr} \,{\bf S}_{\ell}(\psi) + \frac{1}{4} {\rm tr} \,{\bf S}^2_{\ell}(\psi) + \frac 1 8 [{\rm tr} \,{\bf S}_{\ell}(\psi)]^2 + O({\bf S}^3_{\ell}(\psi)),
\end{align*}
where we have used the fact that $S^2_{11;\ell}(\psi)$ and $S^2_{22;\ell}(\psi)$ are the eigenvalues of ${\bf S}^2_{\ell}(\psi)$, and we have written ${\rm det} \,{\bf S}_{\ell}(\psi)$ as follows:
\begin{align*}
{\rm det} \,{\bf S}_{\ell}(\psi)
&= \frac{1}{2} \left\{ [S_{11;\ell}(\psi) + S_{22;\ell}(\psi)]^2 - [S^2_{11;\ell}(\psi)+ S^2_{22;\ell}(\psi)] \right\} = \frac 1 2 \left\{ \left[{\rm tr} \,{\bf S}_{\ell}(\psi) \right]^2 - {\rm tr} \,{\bf S}^2_{\ell}(\psi) \right\}.
\end{align*}
In conclusion, we have:
\begin{align*}
&K_{1,\ell}(\psi) \nonumber
=\frac{\sqrt{\ell(\ell+1)} }{2^2 \pi \sqrt{ \pi}} \Big[ 1+ \frac{1}{2} s_\ell(\psi) + \frac 3 8 s^2_\ell(\psi) +O(s^3_\ell(\psi)) \Big]\nonumber \\
& \;\; \times \left[ \sqrt{2} \pi^{3/2} + \frac{3 }{2^{3/2}} \pi^{3/2} \; {\rm tr} \,{\bf S}_{\ell}(\psi) - \frac{9 }{16 \sqrt 2} \pi^{3/2} \; {\rm tr} \,{\bf S}^2_{\ell}(\psi)+ \frac{15 \sqrt 2 }{64} \pi^{3/2} [{\rm tr} \,{\bf S}_{\ell}(\psi)]^2 + O({\bf S}^3_{\ell}(\psi)) \right] \nonumber \\
&\;\; \times \left[ 1- \frac{1}{2} {\rm tr} \,{\bf S}_{\ell}(\psi) + \frac{1}{4} {\rm tr} \,{\bf S}^2_{\ell}(\psi) + \frac 1 8 [{\rm tr} \,{\bf S}_{\ell}(\psi)]^2 + O({\bf S}^3_{\ell}(\psi)) \right] \nonumber \\
&=\frac{ \sqrt{\ell(\ell+1)}}{2^2 \sqrt 2} \left[ 2 + s_{\ell}(\psi) + \frac{1}{2 }{\rm tr} \,{\bf S}_{\ell}(\psi) + \frac{3}{4 } s^2_{\ell}(\psi) + \frac{1}{4 } s_{\ell}(\psi) {\rm tr} \,{\bf S}_{\ell}(\psi) - \frac{1}{16 } {\rm tr} \,{\bf S}^2_{\ell}(\psi)- \frac{1}{32 } [{\rm tr} \,{\bf S}_{\ell}(\psi)]^2 \right] \nonumber \\
&\;\;+O(\ell \cdot s^3_\ell(\psi)) + O(\ell \cdot {\bf S}^3_\ell(\psi)).
\end{align*}
\end{proof}
\subsection{Proof of Theorem \ref{thm:main asympt}(1)}
\begin{proof}
Substituting the estimates \eqref{sell}, \eqref{Sell11} and \eqref{Sell22} into \eqref{taylorx} we obtain
\begin{align*}
K_{1,\ell}(\psi)&= \frac{\sqrt{\ell(\ell+1)}}{2^2 \sqrt 2} \Big[ 2 + 2 \sqrt{\frac 2 \pi} \frac{1}{\sqrt \psi} \cos((\ell+1/2)\psi/\ell-\pi/4) \\
&\;\;+\frac{7}{4 \pi} \frac{1}{\psi} \cos^2((\ell+1/2)\psi/\ell-\pi/4)-\frac{2}{\pi} \frac{1}{\psi} \sin^2((\ell+1/2)\psi/\ell-\pi/4) \Big]+O(\psi^{-3/2} \ell^{-2} ),
\end{align*}
and, since $\cos^2(x)=\frac{1}{2}[1+ \cos(2x)]$ and $\sin^2(x)=\frac{1}{2}[1- \cos(2x)]$, we can write
\begin{align*}
&\frac{7}{4 \pi \psi} \cos^2((\ell+1/2)\psi/\ell-\pi/4)-\frac{2}{\pi \psi} \sin^2((\ell+1/2)\psi/\ell-\pi/4)\\
&=\frac{7}{4 \pi \psi} \frac 1 2 [1+\cos((\ell+1/2)2\psi/\ell-\pi/2)]-\frac{2}{\pi \psi} \frac 1 2 [1-\cos((\ell+1/2)2\psi/\ell-\pi/2) ]\\
&= \frac{7}{4 \pi \psi} \frac 1 2 -\frac{2}{\pi \psi} \frac 1 2 + \Big[\frac{7}{4 \pi \psi} \frac 1 2 + \frac{2}{\pi \psi} \frac 1 2 \Big] \cos((\ell+1/2)2\psi/\ell-\pi/2)\\
&= -\frac{1}{8 \pi \psi} + \frac{15}{8 \pi \psi} \cos((\ell+1/2)2\psi/\ell-\pi/2).
\end{align*}
The above implies
\begin{align*}
K_{1,\ell}(\psi)&= \frac{\sqrt{\ell(\ell+1)}}{2^2 \sqrt 2} \Big[ 2 + 2 \sqrt{\frac 2 \pi} \frac{1}{\sqrt \psi} \cos((\ell+1/2)\psi/\ell-\pi/4) \\
&\;\;-\frac{1}{8 \pi \psi} + \frac{15}{8 \pi \psi} \cos((\ell+1/2)2\psi/\ell-\pi/2)\Big]+O(\psi^{-3/2} \ell^{-2} )\\
&= \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \Big[ 1 + \sqrt{\frac 2 \pi} \frac{1}{\sqrt \psi} \cos((\ell+1/2)\psi/\ell-\pi/4) \\
&\;\;-\frac{1}{16 \pi \psi} + \frac{15}{16 \pi \psi} \cos((\ell+1/2)2\psi/\ell-\pi/2)\Big]+O(\psi^{-3/2} \ell^{-2} ),
\end{align*}
which is precisely the statement \eqref{eq:nod bias hemi far from boundary} of Theorem \ref{thm:main asympt}(1).
\end{proof}
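As an aside, the elementary trigonometric recombination used in the proof above can be confirmed symbolically (illustration only):
\begin{verbatim}
# Check: (7/4)cos(x)^2 - 2 sin(x)^2 = -1/8 + (15/8) cos(2x)
import sympy as sp

x = sp.symbols('x')
lhs = sp.Rational(7, 4)*sp.cos(x)**2 - 2*sp.sin(x)**2
rhs = -sp.Rational(1, 8) + sp.Rational(15, 8)*sp.cos(2*x)
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}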
\section{Proof of Theorem \ref{thm:main asympt}(2): perturbative analysis at the boundary}
The aim of this section is to study the asymptotic behaviour of the density function $K_{1,\ell}(\psi)$ for $0 < \psi < \epsilon_0$ with $\epsilon_0>0$ sufficiently small. We have
\begin{align*}
K_{1,\ell}(\psi)= \frac{1}{\sqrt{2 \pi} \sqrt{1-P_{\ell}(\cos(\psi/\ell))}} \frac{1}{2 \pi \sqrt{ \text{det}\, {{\bf \Delta}}_\ell(\psi)}}
\sqrt{\ell (\ell+1)} \iint_{\mathbb{R}^2} ||z|| \exp \Big\{ -\frac 1 2 z^t {{\bf \Delta}}^{-1}_\ell(\psi) z \Big\} d z,
\end{align*}
where ${{\bf \Delta}}_\ell(\psi)$ is the scaled conditional covariance matrix
\begin{align*}
{{\bf \Delta}}_\ell(\psi)&= {{\bf C}}_{\ell}(\psi)- \frac{{{\bf B}}^t_{\ell}(\psi) {{\bf B}}_{\ell}(\psi)}{1-P_{\ell}(\cos(\psi/\ell ))}.
\end{align*}
We have that
\begin{align} \label{omp}
1-P_{\ell}(\cos(\psi/\ell ))&=\frac{\ell (\ell+1)}{\ell^2} \frac{\psi^2}{2^2} - \frac{(\ell-1) \ell (\ell+1) (\ell+2)}{4\, \ell^4} \frac{\psi^4}{2^4} \\
&\;\;+ \frac{1}{36} \frac{(\ell-2)(\ell-1) \ell (\ell+1) (\ell+2)(\ell+3)}{\ell^6} \frac{\psi^6}{2^6}+ O(\psi^8), \nonumber
\end{align}
with the constant involved in the $O$-notation absolute.
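As a numerical sanity check (illustration only; the values $\ell=10$, $\psi=0.1$ are hypothetical), the expansion \eqref{omp} can be compared with a direct Legendre evaluation in scipy:
\begin{verbatim}
# Numerical sanity check (illustration only) of the small-psi expansion
# of 1 - P_l(cos(psi/l)) against a direct Legendre evaluation.
import numpy as np
from scipy.special import eval_legendre

l, psi = 10, 0.1
exact = 1.0 - eval_legendre(l, np.cos(psi / l))
c2 = l*(l + 1) / l**2
c4 = (l - 1)*l*(l + 1)*(l + 2) / (4.0 * l**4)
c6 = (l - 2)*(l - 1)*l*(l + 1)*(l + 2)*(l + 3) / (36.0 * l**6)
approx = c2*psi**2/4 - c4*psi**4/16 + c6*psi**6/64
print(abs(exact - approx))  # ~1e-13, at the level of the O(psi^8) remainder
\end{verbatim}
We also have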
\begin{align*}
{{\bf B}}^t_{\ell}(\psi)&=\left( \begin{array}{c}
-\sin (\psi/\ell ) P'_{\ell}(\cos(\psi/\ell )) \left( - \frac{1}{\ell} \right)\\ 0
\end{array}\right)\\
&= \left( \begin{array}{c}
\frac{\ell (\ell+1)}{\ell^2} \frac{\psi}{2} - \frac{(\ell-1) \ell (\ell+1) (\ell+2)}{2 \, \ell^4} \frac{\psi^3}{2^3}+\frac{1}{12} \frac{(\ell-2)(\ell-1) \ell (\ell+1) (\ell+2)(\ell+3)}{\ell^6} \frac{\psi^5}{2^5} +O(\psi^7) \\ 0
\end{array}\right),
\end{align*}
and ${{\bf C}}_{\ell}(\psi)$ is the $2 \times 2$ symmetric matrix with entries
\begin{align*}
{C}_{\ell,11}(\psi)&=\left[ P'_{\ell}(1)+ \cos(\psi/\ell ) \; P'_{\ell}(\cos(\psi/\ell )) - \sin^2(\psi/\ell ) \; P''_{\ell}(\cos(\psi/\ell )) \right] \left( - \frac{1}{\ell} \right)^2\\
&=1-\frac{3}{4} \frac{(\ell-1) \ell (\ell+1) (\ell+2)}{ \ell^4} \frac{\psi^2}{2^2} + \frac{5}{24} \frac{(\ell-2)(\ell-1) \ell (\ell+1) (\ell+2)(\ell+3)}{\ell^6 } \frac{\psi^4}{2^4} + O(\psi^6),\\
{C}_{\ell,12}(\psi)&= 0,\\
{C}_{\ell,22}(\psi)& = \left[ P'_{\ell}(1)-P'_{\ell}(\cos(\psi/\ell )) \right] \left( - \frac{1}{\ell} \right)^2\\
&=\frac{(\ell-1) \ell (\ell+1) (\ell+2)}{4\, \ell^4} \frac{\psi^2}{2^2} - \frac{(\ell-2)(\ell-1) \ell (\ell+1) (\ell+2)(\ell+3)}{24 \ell^6 } \frac{\psi^4}{2^4} + O(\psi^6).
\end{align*}
We obtain that
\begin{align*}
{{\bf \Delta}}_\ell(\psi)&=\left( \begin{array}{cc}
{\delta}_{11,\ell}(\psi)& 0\\
0& {\delta}_{22,\ell}(\psi)
\end{array}\right),
\end{align*}
with
\begin{align} \label{19:30}
{\delta}_{11,\ell}(\psi)&=\frac{1}{2^8 3^2} \frac{(\ell-2)(\ell-1) \ell (\ell+1) (\ell+2)(\ell+3)}{\ell^6} \psi^4+ O(\psi^6) \nonumber \\
&= \frac{1}{2^8 3^2} \psi^4+ O( \ell^{-1} \psi^4)+O(\psi^6),
\end{align}
and
\begin{align} \label{eq:19:31}
{\delta}_{22,\ell}(\psi)&= \frac{(\ell-1) \ell (\ell+1) (\ell+2)}{4\, \ell^4} \frac{\psi^2}{2^2}+ O(\psi^4)= \frac{\psi^2}{16} + O( \ell^{-1} \psi^2)+ O(\psi^4).
\end{align}
We introduce the change of variable $\xi={{\bf \Delta}}^{-1/2}_\ell(\psi) z$, and we write
\begin{align*}
K_{1,\ell}(\psi)= \frac{1}{\sqrt{2 \pi} \sqrt{1-P_{\ell}(\cos(\psi/\ell))}} \frac{1}{2 \pi }
\sqrt{\ell (\ell+1)} \iint_{\mathbb{R}^2} \sqrt{{\delta}_{11,\ell}(\psi) \xi_1^2+{\delta}_{22,\ell}(\psi) \xi_2^2} \exp \Big\{ -\frac{\xi^t \xi}2 \Big\} d \xi.
\end{align*}
Using the expansions in \eqref{omp}, \eqref{19:30} and \eqref{eq:19:31}, we write
\begin{align*}
K_{1,\ell}(\psi)&= \frac{1}{\sqrt{2 \pi} \sqrt{\psi^2/4 + O(\ell^{-1}\psi^2) + O(\psi^4)} }
\sqrt{\ell (\ell+1)} \left[ \frac{\psi}{4} + O(\ell^{-1}\psi) + O(\psi^3) \right] \sqrt{\frac{2}{\pi}}\\
&= \sqrt{\ell (\ell+1)} \frac{1}{2\pi} + O(1) + O(\ell \psi^2),
\end{align*}
which is \eqref{eq:nod bias hemi close to boundary}.
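The reduction of the Gaussian integral performed above, namely replacing the mean of $\sqrt{\delta_{11,\ell}\xi_1^2+\delta_{22,\ell}\xi_2^2}$ by $\sqrt{2/\pi}\,\sqrt{\delta_{22,\ell}}$ to leading order when $\delta_{11,\ell}\ll\delta_{22,\ell}$, can be checked by a quick Monte Carlo sketch (illustration only; the values of $\delta_{11}$, $\delta_{22}$ below are hypothetical, mimicking the orders $\psi^4$ and $\psi^2$ at $\psi=0.1$):
\begin{verbatim}
# Monte Carlo check that E sqrt(d11*X1^2 + d22*X2^2) ~ sqrt(2/pi)*sqrt(d22)
# for standard Gaussians X1, X2 when d11 << d22.  Illustration only.
import numpy as np

rng = np.random.default_rng(0)
d11, d22 = 0.1**4 / 2304.0, 0.1**2 / 16.0  # hypothetical, orders psi^4, psi^2
xi = rng.standard_normal((2_000_000, 2))
mc = np.mean(np.sqrt(d11*xi[:, 0]**2 + d22*xi[:, 1]**2))
print(mc, np.sqrt(2.0/np.pi)*np.sqrt(d22))  # agree to several digits
\end{verbatim}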
\section{Proof of Corollary \ref{cor}: expected nodal length}
\subsection{Kac-Rice formula for expected nodal length}
The Kac-Rice formula is a meta-theorem allowing one to evaluate the moments of the zero set of a random field satisfying some smoothness and non-degeneracy conditions. For $F:\R^{d}\rightarrow\R$, a sufficiently smooth centred Gaussian random field, we define
\begin{equation*}
K_{1,F}(x):= \frac{1}{\sqrt{2\pi} \sqrt{\var(F(x))}}\cdot \E[|\nabla F(x)|\big| F(x)=0]
\end{equation*}
to be the zero density (first intensity) of $F$. Then the Kac-Rice formula asserts that, for a suitable class of random fields $F$
and $\overline{\Dpc}\subseteq \R^{d}$ a compact closed subdomain of $\R^{d}$, one has the equality
\begin{equation}
\label{eq:Kac Rice meta}
\E[\vol_{d-1}(F^{-1}(0)\cap \overline{\Dpc})] = \int\limits_{\overline{\Dpc}}K_{1,F}(x)dx.
\end{equation}
We would like to apply \eqref{eq:Kac Rice meta} to the boundary-adapted random spherical harmonics $T_{\ell}$ in order to evaluate the asymptotic law of the total expected nodal length of $T_{\ell}$. Unfortunately, the aforementioned non-degeneracy conditions fail at the equator $$\mathcal{E}=\{(\theta, \phi):\: \theta=\pi/2 \}\subseteq\Hc^{2}.$$ Nevertheless, in a manner inspired by~\cite[Proposition 2.1]{CKW}, we excise a small neighbourhood of this degenerate set and apply the Monotone Convergence Theorem, so as to prove that \eqref{eq:Kac Rice meta} holds precisely, save for the length of the equator, which is bound to be contained in the nodal set of $T_{\ell}$ by the Dirichlet boundary condition.
\begin{proposition} \label{KKRR}
The expected nodal length of $T_{\ell}$ satisfies
\begin{equation} \label{cs}
\E[\Lc({T_{\ell}})] = \int_{\Hc^2} K_{1,\ell} (x) d x + 2 \pi,
\end{equation}
where $K_{1,\ell} (\cdot)$ is the zero density of $T_{\ell}$.
\end{proposition}
\begin{proof}
One way to justify the Kac-Rice formula outside the equator is by using \cite[Theorem 6.8]{AW}, which assumes the non-degeneracy of the $3\times 3$ covariance matrix at all such points, a condition we were able to verify via an explicit, though somewhat long,
computation, omitted here. Alternatively,
to validate the Kac-Rice formula it is sufficient, by \cite[Lemma 3.7]{KW},
that the Gaussian distribution of $T_{\ell}$ is non-degenerate for every $x \in \Hc^2\setminus \mathcal{E}$, which is easily satisfied.
We consider a small neighbourhood of the equator $\mathcal{E}$, namely the set
$$\mathcal{E}_{\varepsilon}=\left\{(\theta, \phi):\: \theta \in \left(\frac \pi 2 - \varepsilon , \frac \pi 2 \right] \right\},$$
and we denote
$$ \Hc_{\varepsilon}= \Hc \setminus \mathcal{E}_{\varepsilon}.$$
Since the Kac-Rice formula holds for $T_{\ell}$ restricted to $\Hc_{\varepsilon}$, the expected nodal length of $T_{\ell}$ restricted to $\Hc_{\varepsilon}$ is
\begin{equation*}
\E[\Lc({T_{\ell}}|_{\Hc_{\varepsilon}})] = \int_{\Hc_{\varepsilon}} K_{1,\ell} (x) d x.
\end{equation*}
Since the restricted nodal lengths $\{ \Lc({T_{\ell}}|_{\Hc_{\varepsilon}}) \}_{\varepsilon>0}$ form, as $\varepsilon \downarrow 0$, an increasing family of nonnegative random variables with a.s. limit
$$\lim_{\varepsilon \to 0} \Lc({T_{\ell}}|_{\Hc_{\varepsilon}}) = \Lc({T_{\ell}}) - 2 \pi,$$
the Monotone Convergence Theorem yields
\begin{equation} \label{lim1}
\lim_{\varepsilon \to 0} \E[\Lc({T_{\ell}}|_{\Hc_{\varepsilon}})] = \E[\Lc({T_{\ell}}) ]- 2 \pi.
\end{equation}
Moreover, by definition,
\begin{equation} \label{lim2}
\lim_{\varepsilon \to 0} \int_{\Hc_{\varepsilon}} K_{1,\ell} (x) d x = \int_{\Hc} K_{1,\ell} (x) d x.
\end{equation}
Comparing the limits in \eqref{lim1} and \eqref{lim2} shows that Proposition \ref{KKRR} holds.
\end{proof}
\subsection{Expected nodal length}
\begin{proof}[Proof of Corollary \ref{cor}]
To analyse the asymptotic behaviour of the expected nodal length, we separate the contributions of the following three subregions of the hemisphere $\Hc$ to the Kac-Rice integral on the r.h.s.\ of \eqref{cs}:
$$ \Hc_C= \{ (\psi, \phi): 0 < \psi < \epsilon_0\}, \hspace{0.5cm} \Hc_I= \{ (\psi, \phi): \epsilon_0 < \psi < C \}, \hspace{0.5cm} \Hc_F= \{ (\psi, \phi): C < \psi < \pi \ell\};$$
note that we express the three subregions of $\Hc$ in terms of the scaled variable $\psi$. In what follows we argue that $\Hc_{F}$ gives
the main contribution. \\
In the (scaled) spherical coordinates we may rewrite the Kac-Rice integral \eqref{cs} as
\begin{equation*}
\E[\Lc({T_{\ell}})]-2\pi=\frac{ \pi}{ \ell} \int_{0}^{\ell \pi } K_{1,\ell}(\psi) \sin\left(\frac \pi 2 - \frac{\psi}{2 \ell} \right) d \psi,
\end{equation*}
and the contribution of the third range $\Hc_{F}$ as
\begin{align}
\label{eq:E[] int far}
\E[\Lc({T_{\ell}}|_{\Hc_{F}})] & = \frac{ \pi}{ \ell} \int_{C}^{\ell \pi } K_{1,\ell}(\psi) \sin\left(\frac \pi 2 - \frac{\psi}{2 \ell} \right) d \psi.
\end{align}
We are now going to invoke the asymptotics of $K_{1,\ell}(\psi)$, prescribed by \eqref{eq:nod bias hemi far from boundary} for this range.
The first term in \eqref{eq:nod bias hemi far from boundary} contributes
\begin{equation}
\label{eq:1st term contr}
\begin{split}
\frac{\pi}{ \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \int_{C}^{\ell \pi } \sin\left(\frac \pi 2 - \frac{\psi}{2 \ell} \right) d \psi &= \frac{\pi}{ \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} 2 \ell \left[1-\sin\left( \frac{C}{2 \ell}\right) \right] \\
&= 2 \pi \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \left[1- \frac{C}{2 \ell} + O\left( \frac{C^3}{\ell^3}\right) \right]
\end{split}
\end{equation}
to the integral \eqref{eq:E[] int far}.
The second term in \eqref{eq:nod bias hemi far from boundary} gives
\begin{equation}
\label{eq:2nd term cont}
\begin{split}
& \frac{\pi}{ \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \int_{C}^{\ell \pi } \sqrt{\frac 2 \pi} \frac{1}{\sqrt \psi} \cos\{(\ell+1/2)\psi/\ell-\pi/4\} \sin\left(\frac \pi 2 - \frac{\psi}{2 \ell} \right) d \psi = O(\ell^{-1/2}),
\end{split}
\end{equation}
since, upon the change of variable $w= \psi/\ell$, this term is bounded by a constant multiple of
\begin{align*}
& \sqrt{\ell} \int_{C/\ell}^{\pi} \frac{1}{\sqrt{w} } \cos\{(\ell+1/2) w -\pi/4\} d w \\
& = \frac{\sqrt{\ell}}{\sqrt 2} \int_{C/\ell}^{\pi} \frac{1}{\sqrt{w} } [ \cos\{(\ell+1/2) w\} + \sin \{(\ell+1/2) w\} ] d w \\
&=\frac{\sqrt{\ell}}{\sqrt 2} \left\{ \left. \frac{1}{\sqrt{w} } \frac{\sin((\ell+1/2) w)}{\ell+1/2} \right|_{C/\ell}^{\pi} + \frac{1}{2} \int_{C/\ell}^{\pi} w^{-3/2} \frac{\sin((\ell+1/2) w)}{\ell+1/2} d w \right\} \\
&+ \frac{\sqrt{\ell}}{\sqrt 2} \left\{ \left. - \frac{1}{\sqrt{w} } \frac{\cos((\ell+1/2) w)}{\ell+1/2} \right|_{C/\ell}^{\pi} - \frac{1}{2} \int_{C/\ell}^{\pi} w^{-3/2} \frac{\cos((\ell+1/2) w)}{\ell+1/2} d w \right\} \\
&= O(1/\sqrt{\ell}).
\end{align*}
The logarithmic bias is an outcome of
\begin{equation}
\label{eq:log bias}
\begin{split}
\frac{\pi}{ \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \int_{C}^{\ell \pi } \left( -\frac{1}{16 \pi \psi} \right) \sin\left(\frac \pi 2 - \frac{\psi}{2 \ell} \right) d \psi &= - \frac{1}{16 \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \left[ - \log \left(\frac{C}{2 \ell} \right) +O(1) \right] \\
&=- \frac{1}{16 \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \log(\ell) +O( 1).
\end{split}
\end{equation}
Consolidating all of the above estimates \eqref{eq:1st term contr},
\eqref{eq:2nd term cont} and \eqref{eq:log bias}, and the contribution of the error term
in \eqref{eq:nod bias hemi far from boundary}, we finally obtain
\begin{align*}
\E[\Lc({T_{\ell}}|_{\Hc_{F}})] = 2 \pi \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} - \frac{1}{16 \ell} \frac{\sqrt{\ell(\ell+1)}}{2 \sqrt 2} \log(\ell) + O(1).
\end{align*}
\vspace{0.3cm}
The contribution of the set $\Hc_{C}$ to the Kac-Rice integral on the r.h.s.\ of \eqref{cs} is bounded via the straightforward estimate
\begin{align*}
\E[\Lc({T_{\ell}}|_{\Hc_{C}})] &= \frac{ \pi}{ \ell} \int_{0}^{\epsilon_0 } K_{1,\ell}(\psi) \sin\left(\frac \pi 2 - \frac{\psi}{2 \ell} \right) d \psi=O(1),
\end{align*}
on recalling the uniform estimate \eqref{eq:nod bias hemi close to boundary}.
Finally, we may bound the contribution of the intermediate range $\Hc_I$ as follows. We first write
\begin{align*}
\E[\Lc({T_{\ell}}|_{\Hc_{I}})] &=\frac{1}{\sqrt{2\pi}} \int_{\Hc_I} \frac{1}{\sqrt{1-P_{\ell}(\cos(\frac{\psi}{\ell}))}}\cdot \E\left[\left \|\nabla T_{\ell}\left({\psi}/{\ell}\right) \right\|\big| T_{\ell}({\psi}/{\ell})=0\right] d \psi
\end{align*}
then we observe that on the intermediate range
$$\Hc_I=\left\{({\psi}/{\ell}, \phi): \epsilon_0< \psi< C \right\},$$
the variance in the denominator, i.e. $1-P_{\ell}(\cos(\psi/\ell))$, is bounded away from $0$; moreover, the diagonal entries of the unconditional covariance matrix ${\bf C}_{\ell}$ of the Gaussian vector $\nabla T_{\ell}$ are $O(\ell^2)$, and so are the diagonal entries of the conditional matrix ${\bf \Omega}_\ell$, since they are bounded by the unconditional ones, as follows directly from \eqref{eq:Omega transition}, or, alternatively, from the very general Gaussian Correlation Inequality~\cite{Royen}.
This easily gives the following upper bound:
$$\mathbb{E}[\|\nabla T_{\ell}({\psi}/{\ell}) \| \big| T_{\ell}({\psi}/{\ell})=0] \le \left(\mathbb{E}[\|\nabla T_{\ell}({\psi}/{\ell}) \|^2 \big| T_{\ell}({\psi}/{\ell})=0]\right)^{1/2} \le \left(\mathbb{E}[\| \nabla T_{\ell}({\psi}/{\ell}) \|^2 ] \right)^{1/2} =O(\ell) .$$
Since the area of $\Hc_I$ is $O(\ell^{-1})$, it follows that the total contribution of this range to the expected nodal length is $O(1)$.
\end{proof}
\appendix
\section{Proof of Proposition \ref{16:52}}
We have that
\begin{align*}
\E[T_{\ell}(x)\cdot T_{\ell}(y)] &=\frac{8 \pi}{2 \ell+1} \sum\limits_{\substack{m=-\ell\\m\not\equiv\ell\mod{2}}}^{\ell} Y_{\ell,m} (x) \; \overline{Y}_{\ell, m} (y)\\
&=\frac 1 2 \frac{8 \pi}{2 \ell+1}\left[ \sum_{m=-\ell}^{\ell} Y_{\ell,m} (x) \; \overline{Y}_{\ell, m} (y)+\sum_{m=-\ell}^{\ell} (-1)^{m+\ell+1} Y_{\ell,m} (x) \; \overline{Y}_{\ell, m} (y) \right]\\
&=\frac 1 2 \frac{8 \pi}{2 \ell+1}\left[ \sum_{m=-\ell}^{\ell} Y_{\ell,m} (x) \; \overline{Y}_{\ell, m} (y) -\sum_{m=-\ell}^{\ell} Y_{\ell,m} (\overline{x}) \; \overline{Y}_{\ell,m} (y) \right],
\end{align*}
where we have used the fact that $Y_{\ell,m} (\theta, \phi)=(-1)^{\ell+m} Y_{\ell,m} (\pi- \theta, \phi)$.
We apply now the Addition Theorem for Spherical Harmonics:
\begin{align*}
P_{\ell}(\cos d(x,y))= \frac{4 \pi}{2 \ell+1} \sum_{m=-\ell}^{\ell} Y_{\ell,m} (x) \; \overline{Y}_{\ell,m} (y),
\end{align*}
so that
\begin{align*}
\E[T_{\ell}(x)\cdot T_{\ell}(y)] =P_{\ell}(\cos d(x,y)) - P_{\ell}(\cos d(\overline{x},y)).
\end{align*}
\begin{remark}
In particular, we note that,
\begin{align*}
\mathbb{E}[ T^2_{\ell}(x)] = P_{\ell}( \langle x,x \rangle) - P_{\ell}(\langle \overline{x} ,x \rangle) = 1- P_{\ell}(\cos(\pi-2 \theta)),
\end{align*}
this implies
\begin{align*}
\text{Var}(T_{\ell}(x))&=
\begin{cases}
1-P_{\ell}(\cos(\pi))= 1-(-1)^{\ell}& \rm{if\;\;} \theta=0,\\
1- P_{\ell}(1)=0& \rm{if\;\;} \theta=\pi/2,\\
\to 1 \text{ as } \ell \to \infty & \rm{if\;\;} \theta\ne 0, \pi/2.
\end{cases}
\end{align*}
Moreover, as $\ell \to \infty$, for $\theta\ne 0, \pi/2$,
\begin{align*}
\frac{\E[T_{\ell}(x)\cdot T_{\ell}(y)] }{P_{\ell}(\cos d(x,y))} \to 1.
\end{align*}
\end{remark}
TITLE: Derivative of angular velocity in a rotating frame
QUESTION [0 upvotes]: Taylor relies on these relations:
(1) $v = \omega \times r$
(2) $\frac{d}{dt}Q = \omega \times Q$
To show that
$a = a' + 2 \omega \times v' + \omega \times (\omega \times r') + \alpha \times r' $
So we apply the product rule to (1) and get:
$a = \dot{\omega} \times r + \omega \times \dot{r}$
The first term is where I run into problems, and it is the subject of my question, because
$\dot{\omega} \times r = \alpha \times r$
but shouldn't we use (2) on $\omega$, since, like any vector, it can be written as
$\omega = \omega \hat{u} = \omega_x \bar{x} + \omega_y \bar{y} + \omega_z \bar{z}$
Hence, we should instead get
$\dot{\omega} \times r = \omega \times \omega \times r$
What makes $\omega$ so special that we avoid using property (2)? Does it have to do with the fact that $\omega = \omega '$? I saw something about Euler angles using $\omega$ written as a position vector.
REPLY [0 votes]: First of all, the correct relation to start from is
$$\frac{dQ}{dt}= \frac{dQ’}{dt}+ \omega \times Q’, $$
where $Q$ is any vector.
Without this you will not be able to derive your formula.
Now, to your question. You have to remember that order matters when taking the cross product. We have $\omega\times(\omega\times r)\not\equiv 0$, but $(\omega\times \omega) \times r \equiv 0$. Keep this in mind in your last equation and use the correct relation for the derivative, and you should be able to derive your formula.
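You can convince yourself of this with a quick sympy check (illustrative only, not part of the derivation):

import sympy as sp

wx, wy, wz, rx, ry, rz = sp.symbols('wx wy wz rx ry rz')
w = sp.Matrix([wx, wy, wz])   # angular velocity
r = sp.Matrix([rx, ry, rz])   # position
print(w.cross(w).cross(r).T)                # (w x w) x r = (0, 0, 0)
print(sp.simplify(w.cross(w.cross(r))).T)   # w x (w x r): nonzero in general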
\begin{document}
\title{Maximal dissipation in Hunter-Saxton equation for bounded energy initial data.}
\author[1]{Tomasz Cie\'slak}
\author[1,2]{Grzegorz Jamr\'oz}
\affil[1]{\small Institute of Mathematics, Polish Academy of Sciences, \'Sniadeckich 8, 00-656 Warszawa, Poland \newline{\small e-mail: T.Cieslak@impan.pl, G.Jamroz@impan.pl}}
\affil[2]{\small Institute of Applied Mathematics and Mechanics, University of Warsaw, Banacha 2, 02-097 Warszawa, Poland}
\maketitle
\begin{abstract}
In \cite{zhang_zheng2005} it was conjectured by Zhang and Zheng that dissipative solutions of the Hunter-Saxton equation, which are known to be unique in the class of weak solutions, dissipate the energy at the highest possible rate.
The conjecture of Zhang and Zheng was proven in \cite{daf_jde} by Dafermos for monotone increasing initial data with bounded energy. In this note we prove the conjecture in \cite{zhang_zheng2005} in full generality. To this end we examine the evolution of the energy of \emph{any} weak solution of the Hunter-Saxton equation. Our proof shows in fact that for every time $t>0$ the energy of the dissipative solution is not greater than the energy of any weak solution with the same initial data.
\noindent
{\bf Keywords:} Hunter-Saxton equation, uniqueness, maximal dissipation of energy, generalized characteristics, Lebesgue-Stieltjes integral. \\
{\bf MSC 2010:} 35L65, 37K10. \\
\end{abstract}
\mysection{Introduction}\label{section1}
The Hunter-Saxton equation was introduced in \cite{hunter_saxton} as a simplified model to describe the evolution of perturbations of constant director fields in nematic liquid crystals. Liquid crystals share the mechanical properties of fluids and the optical properties of crystals. Their description is essentially given by the evolution of two linearly independent vector fields: one describes the fluid flow, while the other governs the dynamics of the so-called director field, which gives the orientation of the rod-like molecules.
When analyzing planar director fields in the neighborhood of a constant field, after a nonlinear change of variables, one arrives at the following problem for $u:\R\times[0,T)\rightarrow \R$
\begin{equation}\label{hs}
\partial _tu(x,t)+\partial_x\left[\frac{1}{2}u^2(x,t)\right]=\frac{1}{2}\int_{-\infty}^xw^2(y,t)dy,\;\;w(x,t)=\partial_xu(x,t).
\end{equation}
\[
u(x,0)=u_0(x), \;\;w(x,0)=w_0(x)=u_0'(x),\;\;-\infty<x<\infty.
\]
Besides its physical meaning, equation \eqref{hs} possesses a few interesting mathematical properties. First, it seems to be a nice toy model for hydrodynamical equations. Next, it is completely integrable and admits infinitely many conservation laws, see \cite{hunter_zhengPhD}.
Due to its physical interpretation it is natural to require that solutions of \eqref{hs} satisfy $\int_{\R}w^2(y,t)dy<\infty$, which means that the energy is finite. By the Sobolev embedding in 1d, this means that for fixed time a solution to \eqref{hs} should be H\"{o}lder continuous in space, so discontinuities do not occur. However, Lipschitz continuity might not be preserved. A well-known example, see \cite{bressan_cons}, \cite{daf_jhde}, of a solution admitting a blowup of the Lipschitz constant is given by
\begin{equation}\label{z_P}
u(x,t) = \left\{\begin{array}{ccl}
0& 0\leq t<1 &-\infty<x\leq 0,\\
\frac{2x}{t-1}& 0\leq t<1 &0<x<(t-1)^2,\\
2(t-1)& 0\leq t<1 &(t-1)^2\leq x<\infty. \end{array}\right.
\end{equation}
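On each region where it is smooth, this solution satisfies \eqref{hs} classically; the following sympy sketch (illustration only, not part of the argument) verifies this on the middle wedge $0<x<(t-1)^2$:
\begin{verbatim}
# Check that u = 2x/(t-1) satisfies u_t + (u^2/2)_x = (1/2) int_{-inf}^x w^2
# on 0 < x < (t-1)^2, where w = u_x = 2/(t-1) and w = 0 for y <= 0.
import sympy as sp

x, y, t = sp.symbols('x y t')
u = 2*x/(t - 1)
w = sp.diff(u, x)
lhs = sp.diff(u, t) + sp.diff(u**2/2, x)
rhs = sp.integrate(w.subs(x, y)**2, (y, 0, x))/2
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}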
The solution \eqref{z_P} develops a cusp singularity at $x=0$, $t=1$. Nevertheless, it is possible to define global-in-time weak solutions. Such solutions have been constructed in \cite{hunter_zheng}. This raises the question of admissibility criteria yielding uniqueness. It was studied in \cite{hunter_zheng1}, \cite{zhang_zheng}, and in the latter paper the notion of dissipative solutions was introduced; it was proven there that such solutions are unique. Dissipative solutions were further studied in \cite{bressan_cons}. Let us now recall the definitions of both weak and dissipative solutions.
\begin{defi}\label{weak}
A continuous function $u$ on $(-\infty,\infty)\times[0, \infty)$ is a weak solution of \eqref{hs} if, for each $t\in [0,\infty)$, $u(\cdot,t)$ is absolutely continuous on $\R$ satisfying
\[
\partial_xu(x,t)=w(x,t)\in L^\infty([0,\infty);L^2(\R)),
\]
moreover, the map $t\rightarrow u(\cdot,t)\in L^2_{loc}(\R)$ is absolutely continuous, \eqref{hs} holds on $\R\times[0, \infty)$ in the sense of distributions, $u(x,0)=u_0(x)$, and $w_0(x) \in L^2(\mathbb{R})$.
\end{defi}
\begin{defi}\label{dissipative}
A weak solution to \eqref{hs} is called dissipative if its derivative $\partial_xu$ is bounded from above on any compact subset of the upper half-plane and $w(\cdot,t)\rightarrow w_0$ in $L^2(\R)$ as $t\downarrow 0$.
\end{defi}
As noticed in \cite{daf_jhde}, for any weak solution the convergence $w(\cdot,t) \rightarrow w_0$ in $L^2(\R)$ as $t \downarrow 0$ is equivalent to the condition
\begin{equation}\label{en}
\limsup_{t\downarrow 0}E(t)\leq E(0),
\end{equation}
$E(t):=\frac{1}{2}\int_{\R}w^2(y,t)dy$.
In \cite{daf_jde} Dafermos proved that dissipative solution is being selected by the criterion of maximal dissipation rate of the energy (or entropy, see \cite{daf_jde73}) among weak solutions for initial data $u_0$ being monotone increasing. He also stated that the same is true for general initial data with finite energy. In the present article we prove with all the details the above claim. It turns out that the proof in the general situation requires more involved reasoning. Our proof is based on the strategy of the proof in \cite{daf_jde}, however in order to handle the general situation we need an essentially more complicated argument. The biggest obstacle is that in order to proceed with the strategy of Dafermos, quite a detailed information on characteristics associated to any weak solution is required. Actually one needs to know how characteristics associated to weak solutions which are not disspative behave and how they push forward the energy. Our studies related to characteristics associated to weak solutions furnish enough information to enable us to execute the strategy. Most of the facts we prove on characteristics of weak solutions which are not dissipative seem to be new in the studies of \eqref{hs}.
Let us now state the main results and recall an important definition from \cite{daf_jhde} which we shall need when dealing with the general case.
\begin{theo}\label{Tw.1}
Let $u_0(x)$ be absolutely continuous with the derivative $u_0'(x)=w_0(x)$ a.e. and such that $w_0(x)\in L^2(\R)$. Then the dissipative solution of \eqref{hs} minimizes, for every $t \in [0,\infty)$, the energy among all weak solutions with the same initial data. This means that if $\tilde{u}$ is the unique dissipative solution of \eqref{hs} and $u$ any weak solution of \eqref{hs} starting from $u_0$, then
\begin{itemize}
\item $E^{\tilde{u}} (t) \le E^u (t)$ for every $t \ge 0$,
\item if $E^{{u}}(t) = E^{\tilde{u}}(t)$ for every $t \ge 0$, then $u = \tilde{u}$.
\end{itemize}
\end{theo}
\begin{cor}
\label{Cor1}
The unique dissipative solution $\tilde{u}$ of \eqref{hs} maximizes the rate of the decay of energy among all weak solutions with the same initial data and consequently it is selected by the maximum dissipation principle. This means that if $u \neq \tilde{u}$ is a weak solution of \eqref{hs} such that $u(s) = \tilde{u}(s)$ for $s \in [0,t]$, then
\begin{itemize}
\item there exists $s > t$ such that $E^u(s) > E^{\tilde{u}}(s)$ and
\item there is no $s>t$ such that $E^u(s) < E^{\tilde{u}}(s)$.
\end{itemize}
\end{cor}
\begin{defi}\label{I_s}
For $s\in (0, \infty]$ we say that $I_s$ is a subset of the set
\[
I=\{\zeta\in \R:u_0'(\zeta) \;\mbox{exists, equal to}\; w_0(\zeta)\}
\]
consisting of such $\zeta\in I$ that $w_0(\zeta)>-\frac{2}{s}$. We denote $T_\zeta:=\infty$ if $w_0(\zeta)\geq 0$, $T_\zeta:=-\frac{2}{w_0(\zeta)}$ otherwise.
\end{defi}
The paper is organized as follows. In the next section we provide a sketch of the strategy of the proof of Theorem \ref{Tw.1}. The third section introduces a collection of facts concerning characteristics. In the fourth section
we study carefully the evolution of the positive part of the energy, in particular the energy contained between some pairs of characteristics. Finally, in the last section we formulate a suitable averaging theory which enables us to complete the proof of Theorem \ref{Tw.1}.
\mysection{The strategy of the proof of the main result}\label{section1.5}
In this section we describe in some detail Dafermos' strategy of proving that dissipative solutions of \eqref{hs} are selected as the unique ones by the maximal energy dissipation criterion. It was successfully applied in the case of nondecreasing data in \cite{daf_jde}. We shall follow this strategy and will frequently use facts obtained in \cite{daf_jde}. Since a clear exposition of our result requires a convenient source of reference for some of the computations done by Dafermos, we decided to recall many details of the latter in the present section. Finally, we will emphasize the main additional difficulties which appear when one wants to execute the strategy in the case of absolutely continuous initial data with finite energy.
We notice that given a dissipative solution $u$, see Definition \ref{dissipative}, by \cite[Theorem 4.1]{daf_jhde}, we know that
\begin{equation}\label{4.1}
\int_{\R}w^2(y,t)dy=\int_{I_t}w_0^2(\zeta) d\zeta.
\end{equation}
Notice that if we prove that $\int_{\R}w^2(y,t)dy \geq \int_{I_t}w_0^2(\zeta) d\zeta$ for any weak solution of \eqref{hs}, and that any weak solution $u$ satisfying \eqref{4.1} is a dissipative solution, then we are done. Hence, in order to complete the proof of Theorem \ref{Tw.1} it is enough to prove the following two propositions.
\begin{prop}\label{prop2.1}
Let $u$ be a weak solution of \eqref{hs} (see Definition \ref{weak}). Moreover, assume \eqref{4.1} holds. Then $u$ is actually a dissipative solution.
\end{prop}
\begin{prop}\label{prop2.2}
Let $u$ be a weak solution of \eqref{hs}. Then
\begin{equation}\label{ostatnie}
\int_{\R}w^2(y,t)dy\geq\int_{I_t}w_0^2(\zeta) d\zeta.
\end{equation}
\end{prop}
Now, we shall recall how the strategy outlined above was executed by Dafermos for nondecreasing initial data. Thus, we will be able to explain to the reader what difficulties appear for more general initial data. Moreover, some formulas which appear in this section will be used by us in a more complicated framework, and it seems to us useful to introduce them in the basic setting.
A characteristic associated to the weak solution $u$ of \eqref{hs} is a Lipschitz continuous function $x:[0,T]\rightarrow \R$ satisfying
\begin{equation}\label{2.0.2}
\dot{x}(t)=u(x(t),t)\;\mbox{for a.e.}\; t\in[0,T],
\end{equation}
\[
x(0)=x_0.
\]
By \cite[Lemma 3.1]{daf_jhde} we know that for every $x_0$ there exists a characteristic $x(t)$ of \eqref{hs} (perhaps not unique) passing through $x_0$. Moreover, every characteristic is actually a $C^1$ function and satisfies
\begin{equation}\label{2.0.01}
\dot{x}=u(x(t),t):=u_{x(t)}(t),\;\;\; \dot{u}_{x(t)}(t)=\frac{1}{2}\int_{-\infty}^{x(t)}w^2(y,t)dy
\end{equation}
pointwise and a.e., respectively. The function $t\rightarrow u(x(t),t)$ is Lipschitz continuous.
Following \cite{daf_jde}, given characteristics $x_1(t), x_2(t)$ emanating from $x_1, x_2\in \R$ we introduce
\begin{eqnarray}\label{hpt}
h(t):=x_2(t)-x_1(t)&,& p(t):=u(x_2(t),t)-u(x_1(t),t), \\
\omega^{x_1, x_2}(t)&:=&\frac{p(t)}{h(t)}.
\end{eqnarray}
One sees that
\begin{equation}\label{hpt1}
\dot{h}(t)=p(t), \; \dot{p}(t)=\frac{1}{2}\int_{x_1(t)}^{x_2(t)}w^2(y,t)dy.
\end{equation}
An immediate consequence of \eqref{hpt1} is that if $p$ is initially positive, then $h$ stays positive during the evolution. In other words, nondecreasing initial data ensure that characteristics do not intersect. Since $h(t)>0$ for $t\geq 0$,
\[
\dot{h}(t)=\omega^{x_1, x_2}(t)h(t)
\]
and so
\begin{equation}\label{wazne}
h(t)=h(0)e^{\int_0^t\omega^{x_1, x_2}(s)ds}.
\end{equation}
Moreover,
\begin{equation}
\label {omegaintegral}
\dot{\omega}^{x_1, x_2}(t)=-\left(\omega^{x_1, x_2}\right)^2(t)+\frac{1}{2h(t)}\int_{x_1(t)}^{x_2(t)}w^2(y,t)dy.
\end{equation}
Next, Dafermos shows that
\begin{equation}\label{hpt2}
\int_{x_1(t)}^{x_2(t)}w^2(y,t)dy\geq \int_{x_1}^{x_2}(w_0)^2(y)dy
\end{equation}
and, in view of the fact that $h(t)>0$ and that for nondecreasing initial data $\R\setminus I_t$ is of measure $0$, concludes the proof of Proposition \ref{prop2.2}. On the other hand, if $E(t)=E(0)$ for any $t>0$, then \eqref{hpt2} must also hold as an equality for any pair of characteristics $x_1(t), x_2(t)$. This leads Dafermos to the fact that $u$ must be a dissipative solution, see \cite[the end of section 3]{daf_jde}. So Proposition \ref{prop2.1} also holds, and thus Theorem \ref{Tw.1} is true for nondecreasing initial data.
Now, let us comment on the difficulties which appear when considering general initial data. First of all, \eqref{hpt1} does not guarantee that characteristics do not intersect; collisions of characteristics are indeed possible. Next, it is also possible that characteristics of weak solutions branch. Moreover, our proof requires treating separately the positive and negative parts of the energy, as well as change of variables formulas for the Lebesgue-Stieltjes integral. It is enough to notice that the solution given in \eqref{z_P} can be continued for times $t>1$ in the following non-unique way.
\begin{equation}\label{z_Q}
u(x,t) = \left\{\begin{array}{ccl}
0& t>1 &-\infty<x\leq 0,\\
\frac{2x}{t-1}& t>1 &0<x<k(t-1)^2,\\
2k(t-1)& t>1 &k(t-1)^2\leq x<\infty, \end{array}\right.
\end{equation}
$k\geq 0$. In order to deal with those obstacles and execute the strategy of Dafermos in the case of absolutely continuous initial data, we need detailed studies of characteristics of weak solutions, which may collide and branch; this is carried out in the next section.
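As a quick quantitative illustration of this non-uniqueness (a sketch, not part of the argument): the energy carried by \eqref{z_Q} for $t>1$ equals $2k$, so the continuation with $k=0$, which coincides with the dissipative solution, loses the whole energy at $t=1$, while $k=1$ conserves it.
\begin{verbatim}
# Energy of the continuation (z_Q) for t > 1 as a function of k:
# w = 2/(t-1) on (0, k(t-1)^2) and w = 0 elsewhere.  Illustration only.
import sympy as sp

x, t, k = sp.symbols('x t k', positive=True)
w = 2/(t - 1)
E = sp.integrate(w**2, (x, 0, k*(t - 1)**2))/2
print(sp.simplify(E))   # 2*k, independent of t
\end{verbatim}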
Finally, we observe that both Proposition \ref{prop2.1} and Proposition \ref{prop2.2} are consequences of the following one.
\begin{prop}\label{prop2.3}
Let $u$ be a weak solution of \eqref{hs}. Let $x_\xi$ and $x_\zeta$ be characteristics starting at $\xi, \zeta \in \R$, respectively, $\xi<\zeta$. Moreover, assume $x_\xi(t)\leq x_\zeta(t)$ for any $t\geq 0$. Then the following formula holds
\begin{equation}\label{ostatnie_po_char}
\int_{x_\xi(t)}^{x_\zeta(t)}w^2(y,t)dy\geq\int_{I_t\cap (\xi, \zeta)}w_0^2(\eta)\, d\eta\;.
\end{equation}
\end{prop}
Proposition \ref{prop2.2} is implied by Proposition \ref{prop2.3} in a straightforward way. To see the implication from
Proposition \ref{prop2.3} to Proposition \ref{prop2.1} one needs to take into account that, as was stated in Section \ref{section2}, to any weak solution a set of $C^1$ characteristics is associated. They however might not be unique. Clearly,
for any characteristic $x_\xi(t)$ emanating from a point $\xi$, we have (see \eqref{2.0.01})
\begin{equation}\label{raz}
\dot{u_\xi}(t)=\frac{1}{2}\int_{-\infty}^{x_\xi(t)}w^2(y,t)dy,
\end{equation}
where $u_\xi(t):=u(x_\xi(t),t)$.
On the other hand, we see that any weak solution to \eqref{hs} satisfying \eqref{4.1} and \eqref{ostatnie_po_char} also satisfies
\begin{equation}\label{dwa}
\int_{x_\xi(t)}^{x_\zeta(t)}w^2(y,t)dy=\int_{I_t\cap (\xi, \zeta)}w_0^2(\eta)\, d\eta.
\end{equation}
Hence, taking into account \eqref{dwa} and integrating \eqref{raz} in time
we arrive at
\[
u(x_\xi(t),t)=u_0(\xi)+\frac{1}{2}\int_0^t\int_{I_s\cap(-\infty,\xi)}w_0^2(\zeta)d\zeta ds.
\]
The above equality tells us that $u$ is actually a dissipative solution to \eqref{hs} according to \cite[Theorem 2.1]{bressan_cons}, see also \cite[Theorem 2.1]{daf_jhde}.
In view of the above, all we have to show, in order to complete the proof of Theorem \ref{Tw.1}, is Proposition \ref{prop2.3}.
\mysection{Some information on characteristics}\label{section2}
In this section we study the behavior of characteristics associated to weak solutions of \eqref{hs}. First we notice the following lemma.
\begin{lem}\label{lemat1}
Consider any weak solution $u$ of \eqref{hs}. Let $\xi_0 \in I_t$. Choose $\xi_1\in \R$ such that $|\xi_1-\xi_0|$ is small enough. Then for any $x_{\xi_0}(t), x_{\xi_1}(t)$, characteristics associated to $u$ emanating from $\xi_0, \xi_1$ respectively, there exists a positive continuous function $\chi$ such that
\begin{equation}\label{trzy}
|x_{\xi_1}(s)-x_{\xi_0}(s)|\geq \chi(s)\;\;\mbox{for}\;s\in(0,t].
\end{equation}
\end{lem}
\proof
We take two characteristics emanating from $\xi_0$ and $\xi_1$. In view of \eqref{wazne},
as long as $\int_0^t\omega(s)ds>-\infty$, $h(t)>0$ and so characteristics do not intersect.
Moreover, \eqref{omegaintegral} is satisfied
as long as $h>0$. It implies, by the Schwarz inequality,
\begin{equation}\label{omega}
\dot{\omega}^{x_{\xi_0}, x_{\xi_1}}(t)\geq -\frac{1}{2}\left(\omega^{x_{\xi_0}, x_{\xi_1}}(t)\right)^2, \;\;\mbox{hence}\;\;\omega^{x_{\xi_0}, x_{\xi_1}}(t)\geq \frac{2\omega^{\xi_0, \xi_1}(0)}{2+t\omega^{\xi_0, \xi_1}(0)}.
\end{equation}
Next, we take $\xi_0\in I_t$ and observe that for $\xi_1$ in a sufficiently close neighborhood of $\xi_0$,
\[
\omega^{\xi_0, \xi_1}(0)>-\frac{2}{t}\;.
\]
Indeed, since $\xi_0\in I_t$ we have on the one hand $w_0(\xi_0)>-\frac{2}{t}+\varepsilon_0$ for some small $\varepsilon_0>0$, and on the other hand,
\[
\frac{u_0(x)-u_0(\xi_0)}{x-\xi_0}=u_0'(\xi_0)+\frac{o(|x-\xi_0|)}{x-\xi_0}\;.
\]
We choose $\xi_1$ such that $\frac{o(|\xi_1-\xi_0|)}{\xi_1-\xi_0}<\frac{\varepsilon_0}{2}$. Then,
\[ \omega ^{{\xi _0, \xi_1}}(0) > -\frac 2 t + \frac {\eps_0}{2}. \]
Hence, in view of \eqref{omega}, \eqref{wazne} leads to
\begin{eqnarray*}
|x_{\xi_1}(s) - x_{\xi_0}(s)| &=& |\xi_1 - \xi_0| \exp\left({\int_0^s \omega^{x_{\xi_0}, x_{\xi_1}}(\tau)d\tau}\right)\\ &\ge& |\xi_1 - \xi_0| \exp \left({\int_0^s \frac{2\omega^{\xi_0, \xi_1}(0)}{2+\tau\omega^{\xi_0, \xi_1}(0)}d\tau }\right) \\&=& |\xi_1 - \xi_0| \frac 1 4 (2+s \omega^{\xi_0,\xi_1}(0))^2 \\&\ge& |\xi_1 - \xi_0| \frac 1 4 (2+t \omega^{\xi_0,\xi_1}(0))^2 \ge \frac 1 {16}|\xi_1 - \xi_0| t^2 \eps_0^2.
\end{eqnarray*}
\qed
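The Riccati comparison \eqref{omega} at the heart of the above proof can be verified symbolically; the following sympy sketch (illustration only; we take $w_0>0$ for simplicity, while the lemma only needs $\omega(0)>-2/t$) checks both the comparison ODE and the exponentiated bound used for $h$:
\begin{verbatim}
# omega(t) = 2*w0/(2 + t*w0) solves omega' = -omega^2/2, and
# exp(int_0^s omega) = (2 + s*w0)^2 / 4.  Illustration only.
import sympy as sp

t, s, w0 = sp.symbols('t s w0', positive=True)
omega = 2*w0/(2 + t*w0)
print(sp.simplify(sp.diff(omega, t) + omega**2/2))   # 0
I = sp.integrate(omega, (t, 0, s))
print(sp.simplify(sp.exp(I) - (2 + s*w0)**2/4))      # 0
\end{verbatim}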
As a corollary we infer the following fact.
\begin{cor}\label{cor2.1}
Consider a characteristic $x(t)$, associated to a weak solution $u$ of \eqref{hs}, emanating from $x_0\in I_t$. This characteristic does
not cross any other characteristic before time $t$.
\end{cor}
\proof
Any characteristic starting from the neighborhood of $x_0$ does not cross $x(t)$ by Lemma \ref{lemat1}. Next, consider a characteristic starting from a point outside of a neighborhood of $x_0$. If it crosses $x(t)$, then in particular it crosses one of the characteristics from the neighborhood of $x_0$. But this way we obtain a characteristic starting from a neighborhood of $x_0$ which intersects $x(t)$, which leads to a contradiction.
\qed
However, as we have seen in \eqref{z_Q}, characteristics associated to a weak solution can branch. We need to find out how often this may happen in order to proceed with the proof. We have the following lemma.
\begin{lem}\label{lemat2}
Let $u$ be a weak solution of \eqref{hs}. For almost every (more precisely, all except a countable number of) $x_0\in I_T$, the
characteristic associated to $u$ starting from $x_0$ does not branch before time $T$.
\end{lem}
Combining Lemma \ref{lemat2} with Lemma \ref{lemat1} we obtain the following claim.
\begin{cor}\label{cor2.2}
Let $u$ be a weak solution to \eqref{hs}. For all but countably many $x_0\in I_t$, the characteristic emanating from $x_0$ is unique forwards and backwards up to time $t>0$.
\end{cor}
The proof of Lemma \ref{lemat2} requires several steps, in particular the introduction of leftmost and rightmost characteristics. To this end we show a few claims.
\begin{prop}\label{prop2.0.1}
Consider a family of Lipschitz continuous functions $f_\alpha:[a,b]\rightarrow \R$, $\alpha\in A\subset \R$ satisfying
\begin{equation}\label{2.0.1}
|f_\alpha(t)-f_\alpha(s)|\leq L|t-s|
\end{equation}
for all $t,s\in [a,b]$ and some $L>0$. Then both $\sup_{\alpha\in A}f_\alpha$ and $\inf_{\alpha\in A}f_\alpha$ are Lipschitz continuous.
\end{prop}
\proof
We shall prove the claim of the proposition only for $\sup f_\alpha$; for $\inf f_\alpha$ the proof is the same.
First we fix $t,s\in[a,b]$, $t>s$. We notice that for any $\varepsilon>0$ there exist $\alpha_0, \alpha_1\in A$ such that
\[
\sup_{\alpha\in A}f_\alpha(t)-\varepsilon< f_{\alpha_0}(t),
\]
\[
\sup_{\alpha\in A}f_\alpha(s)-\varepsilon< f_{\alpha_1}(s).
\]
Hence
\[
f_{\alpha_1}(t)-f_{\alpha_1}(s)< \sup_{\alpha\in A}f_\alpha(t)-\sup_{\alpha\in A}f_\alpha(s)+\varepsilon
\]
and
\[
f_{\alpha_0}(t)-f_{\alpha_0}(s)> \sup_{\alpha\in A}f_\alpha(t)-\sup_{\alpha\in A}f_\alpha(s)-\varepsilon,
\]
which together with \eqref{2.0.1} yields
\[
-L(t-s)-\varepsilon\leq f_{\alpha_1}(t)-f_{\alpha_1}(s)-\varepsilon< \sup_{\alpha\in A}f_\alpha(t)-\sup_{\alpha\in A}f_\alpha(s)< f_{\alpha_0}(t)-f_{\alpha_0}(s)+\varepsilon\leq L(t-s)+\varepsilon.
\]
Letting $\varepsilon$ go to $0$ in the above inequality we obtain the claim of the proposition.
\qed
Let us now state a proposition, which we will use in the sequel, being a consequence of Kneser's theorem, see \cite[Theorem II.4.1]{hartman}, as well as of the fact that any weak solution to \eqref{hs} is continuous.
\begin{prop}\label{kneser}
Let the image of a point under characteristics emanating from $(x_0,t_0)$ be defined as
\[
A(t):=\{(z,t):z=x(t), x(t_0)=x_0, x(t)\;\mbox{is a characteristic of}\;\eqref{hs}\;\mbox{associated to the weak solution}\;u \}.
\]
Then $A(t)$ is a compact and connected set.
\end{prop}
The next lemma contains the proof of existence of rightmost and leftmost characteristics.
\begin{lem}\label{lemat3}
Let $u : \R \times [0, T]\rightarrow \R$ be a bounded continuous function solving \eqref{hs} in the weak sense and let $x_\alpha(t)$ be a family of characteristics associated to $u$, i.e.
$C^1$ functions on $[0,T]$ satisfying \eqref{2.0.2}.
Then the function $y : [0,T]\rightarrow \R$ defined for $t\in [0,T]$ by $y(t):=\sup_{\alpha}x_\alpha(t)$ is also a characteristic of \eqref{hs} associated to the weak solution $u$. The same claim holds for $\inf_{\alpha}x_\alpha(t)$.
\end{lem}
\proof
We shall restrict the proof to the case of rightmost characteristics, the leftmost part being analogous.
To begin the proof let us consider a point $t_0>0$ such that $\dot{x_\alpha}(t_0)=u(x_\alpha(t_0),t_0)$ and that the characteristic $x_\alpha$ branches at this point. The set of values of characteristics emanating from the branching point $(x_\alpha(t_0),t_0)$
\[
\{(z,t):z=x(t),\; x(t_0)=x_\alpha(t_0),\; t_0\leq t\leq T,\; x(\cdot)\;\mbox{a characteristic of}\;\eqref{hs}\}
\]
is compact and connected by Proposition \ref{kneser}. For any fixed $t_0\leq t\leq T$ one can take $y(t)$ to be the maximum of the elements of this set. By Proposition \ref{prop2.0.1}, $y(t)$ is also Lipschitz continuous. To prove the claim it is enough to show that $y(t)$ satisfies \eqref{2.0.2} at its points of differentiability. Indeed, by \cite[Lemma 3.1]{daf_jhde} we then see that $y(t)$ is $C^1$ regular.
Suppose, on the contrary, that $\dot{y}(t)\neq u(y(t),t)$ for some $t\in (t_0, T)$ which is a point of differentiability
of $y$. Without loss of generality, we may assume that
\[
\dot{y}(t)\leq u(y(t),t)-\delta
\]
for some $\delta>0$. Next, we notice that by the continuity of $u$, we can choose $\varepsilon>0$ in such a way that for every $(x,s)\in [y(t)-\varepsilon,y(t)+\varepsilon]\times [t-\varepsilon,t+\varepsilon]$ we have
\begin{equation}\label{2.0.5}
u(x,s)>\dot{y}(t)+\frac{\delta}{2},
\end{equation}
as well as $t+\varepsilon<T$; moreover, for every $s\in [t-\varepsilon, t+\varepsilon]$
\begin{equation}\label{2.0.6}
|y(s)-y(t)-\dot{y}(t)(s-t)|<\frac{\delta|s-t|}{10}.
\end{equation}
By Proposition \ref{kneser} we can choose $\alpha_0$ such that $y(t)=x_{\alpha_0}(t)$. Then for $\tau$ small enough
\[
x_{\alpha_0}(t+\tau)=x_{\alpha_0}(t)+\int_t^{t+\tau}u(x_{\alpha_0}(s),s)ds\stackrel{\eqref{2.0.5}}{\geq} y(t)+\tau \left(\dot{y}(t)+\frac{\delta}{2}\right)\stackrel{\eqref{2.0.6}}{>}y(t+\tau),
\]
a contradiction.
\qed
Based on the above lemma, we define the leftmost and rightmost characteristics. Finally, we can proceed with the proof of Lemma \ref{lemat2}.
\vspace{0.3cm}
\textbf{Proof of Lemma \ref{lemat2}.}
Suppose that a characteristic $x(t)$ starting from $I_T$ branches for the first time at the point $(x(t_0),t_0)$. By Proposition \ref{kneser} and Lemma \ref{lemat3} the rightmost and leftmost characteristics emanating from the point $(x(t_0),t_0)$
together with the line $t=T$ bound a set of positive measure. We name such a set a branching set related to $(x(t_0),t_0)$. Consider now the interval $[x_0, x_1] \subset \mathbb{R}$ and all the characteristics emanating from $[x_0, x_1]\cap I_T$. First, we notice that by Corollary \ref{cor2.1} branching sets related to different points $(x',t'), (x'',t'')$ are disjoint, see Fig. \ref{Fig_branching}. Next, we claim that the set of points of first branching times $(x',t')$ is countable. Indeed, otherwise the measure of the branching sets related to all the branching points $(x',t')$ would be infinite, but this set is a subset of the set bounded by the interval $[x_0, x_1]$ from the bottom, by the curves $x_l[x_0]$ and $x_r[x_1]$-respectively the leftmost characteristic emanating from $x_0$ and the rightmost emanating from $x_1$, and the interval $[x_l[x_0](T),x_r[x_1](T)]$ from the top. The latter set is however of finite measure.
\qed
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=9cm]{./Fig1}
\caption{Schematic presentation of branching sets related to $(x',t')$ and $(x'',t'')$ which are unions of graphs, until time $T$, of all characteristics emanating from $(x',t')$ and $(x'',t'')$, respectively. Since $y', y'' \in I_T$, where $y',y''$ satisfy $x_{y'}(t')=x'$ and $x_{y''}(t'')=x''$, these branching sets are disjoint.}
\label{Fig_branching}
\end{center}
\end{figure}
In view of Corollary \ref{cor2.2} we define the following sets.
\begin{defi}\label{defi2.0.1}
By $I_t^{unique}$ we understand the full-measure subset of $I_t$ consisting of all points $x$ such that every characteristic of \eqref{hs} associated to the weak solution $u$ starting at $x$ stays unique up to time $t,\; t\in(0,\infty]$.
\end{defi}
\mysection{Time-monotonicity of positive part of the energy and consequences}\label{section3}
We first define the positive part of the energy.
\begin{defi}\label{3.01}
Let $w_+(y,t):=\max \{w(y,t),0\}$ be the positive part of $w$. We define the positive part of the energy as
\begin{equation}\label{3.0.1}
E^+_{[\eta,\xi]}(t):=\int_\eta^\xi w_+^2(y,t)dy.
\end{equation}
\end{defi}
Next, we recall that on $I_\infty$ the function $w_0$ satisfies $w_0(y)\geq 0$.
As is seen from the following proposition, the positive part of the energy defined above is nondecreasing.
\begin{prop}\label{prop3.0.1}
Let $u$ be a weak solution to \eqref{hs}, $w=u_x$ a.e. Take $a,b\in I_\infty^{unique}$ such that $a<b$. Then for every $t\in[0,\infty]$ we have
\begin{equation}\label{3.0.2}
E^+_{[x_a(t),x_b(t)]}(t)\geq E^+_{[a,b]}(0),
\end{equation}
where $x_a(t), x_b(t)$ are the unique characteristics emanating from $a$ and $b$, respectively.
\end{prop}
\proof
For every $\zeta\in I$ let $x_r[\zeta](t)$ denote the rightmost characteristic emanating from $\zeta$. Note that
for every $t > 0$ the mapping $M:\zeta\rightarrow x_r[\zeta](t)$ is monotone nondecreasing.
This mapping may be constant on some intervals (if characteristics from some interval meet before time $t$) or have jumps (in the case of branching before time $t$).
Take $\zeta\in I_\infty$. Next, take $\eta$ from a neighborhood of $\zeta$. By \eqref{omega} we have
\[
\omega^{x_\eta, x_\zeta}(t)\geq \frac{2\omega^{\eta, \zeta}(0)}{2+t\omega^{\eta, \zeta}(0)},
\]
where we use the notation from the proof of Lemma \ref{lemat1}. Since $\zeta$ was chosen from $I_\infty$, if $\eta$ is close enough, then $\omega^{\eta, \zeta}(0)>0$, which in turn gives $\omega^{x_\eta, x_\zeta}(t)>0$. Consequently,
\begin{equation}\label{3.0.31}
I_\infty(t)\subset \{y: w(y,t)\;\mbox{exists and is non-negative}\;\},
\end{equation}
where, for a set $A\subset\R$, we denote
\[
A(t):=\{x_r[a](t):\; a\in A\},\]
with $x_r[a](t)$ the rightmost characteristic starting from $a$, and
\begin{equation}\label{3.0.3}
w_+(y,t)\geq \liminf_{\eta\rightarrow\zeta}\omega^{x_\eta, x_\zeta}(t)\geq\liminf_{\eta\rightarrow\zeta}\frac{2\omega^{\eta, \zeta}(0)}{2+t\omega^{\eta, \zeta}(0)}=\frac{2w(\zeta,0)}{2+tw(\zeta,0)}.
\end{equation}
On the other hand, in view of \eqref{3.0.31}, for $a,b\in I_\infty^{unique}$
\begin{equation}\label{3.0.33}
\int_{[x_a(t),x_b(t)]}w_+(y,t)^2dy\geq \int_{\left([a,b]\cap I_\infty\right)(t)}w_+(y,t)^2dy,
\end{equation}
where $x_a(t)$ is the characteristic starting from $a$.
Now observe that for $\zeta\in [a,b]\cap I_\infty$ equality $M(\zeta) = M(\eta)$ implies $\zeta= \eta$. Hence on the set
\begin{equation}\label{3.0.32}
M\left([a,b]\cap I_\infty\right)=\left([a,b]\cap I_\infty\right)(t)
\end{equation}
we can define a unique inverse mapping $M^{-1}$. This mapping can be prolonged to a right-continuous
generalized inverse of $M$ on $[a,b]$, which we call $W$. The definition of $W$, which can be taken for instance from \cite{embrechts}, reads
\[
W(y):=\inf\{x\in \R: M(x)\geq y\}, \;y\in [M(a), M(b)].
\]
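As a toy illustration (not part of the argument) of this generalized inverse, consider a nondecreasing $M$ which is flat on an interval (characteristics meeting) and jumps across a gap (branching):
\begin{verbatim}
# W(y) = inf{x : M(x) >= y} for M flat on [1,2] and jumping from 2 to 3
# at x = 3.  Hypothetical example, illustration only.
import numpy as np

def M(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 1, x,
           np.where(x < 2, 1.0,
           np.where(x < 3, x - 1.0, x)))

grid = np.linspace(0.0, 5.0, 100001)
vals = M(grid)                       # nondecreasing

def W(y):
    # first grid point with M >= y approximates the infimum
    return grid[np.searchsorted(vals, y, side='left')]

print([float(W(y)) for y in (0.5, 1.0, 2.5, 3.5)])  # ~[0.5, 1.0, 3.0, 3.5]
\end{verbatim}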
Next, we are in a position to use the classical change of variable formula for the Lebesgue-Stieltjes integral, see
\cite[(1)]{falk_teschl}. We have
\begin{equation}\label{c_of_v}
\int_a^b f(x)dM(x)=\int_{M(a)}^{M(b)}f(W(y))dy
\end{equation}
for any bounded Borel function $f:[a,b]\rightarrow \R$, any nondecreasing function $M:[a,b]\rightarrow \R$ and
its generalized inverse $W$. Choosing $f$ in \eqref{c_of_v} as
\[
f(x):=\textbf{1}_{I_\infty}(x)\left(\frac{2w(x,0)}{2+tw(x,0)}\right)^2,
\]
which is nonnegative and bounded for fixed $t$, we arrive at
\begin{eqnarray}\label{3.0.4}
\int_{\left([a,b]\cap I_\infty\right)(t)}w_+(y,t)^2dy&\stackrel{\eqref{3.0.3}}{\geq}& \int_{[M(a),M(b)]}\textbf{1}_{I_\infty}(W(y))\left(\frac{2w(W(y),0)}{2+tw(W(y),0)}\right)^2dy\\ \nn
&=&\int_{[a,b]}f(x)dM(x)\geq \int_{[a,b]}f(x)M'(x)dx.
\end{eqnarray}
On the right-hand side of the above inequality we omitted integration over the singular part of the measure $dM$, which is legitimate due to the positivity of $f$; here $M'(x)$ is computed a.e.
observe that for every $\zeta\in I_\infty^{unique}$ we can estimate $M'(\zeta)$.
\begin{eqnarray}\label{M'}\nn
M'(\zeta)&=&\liminf_{\eta\rightarrow \zeta}\frac{x_\eta(t)-x_\zeta(t)}{\eta-\zeta}\stackrel{\eqref{wazne}}{=}\liminf_{\eta\rightarrow \zeta}e^{\int_0^t\omega^{x_\eta, x_\zeta}(s)ds}\stackrel{\eqref{omega}}{\geq} \liminf_{\eta\rightarrow \zeta}e^{\int_0^t\frac{2\omega^{\eta, \zeta}(0)}{2+s\omega^{\eta, \zeta}(0)}ds}
\\
&=&\liminf_{\eta\rightarrow \zeta}\frac{1}{4}\left(2+t\omega^{\eta, \zeta}(0)\right)^2 = \frac{1}{4}\left(2+tw(\zeta,0)\right)^2.
\end{eqnarray}
Applying the above estimate in \eqref{3.0.4}, and using \eqref{3.0.33}
we arrive at
\[
\int_{[x_a(t),x_b(t)]}w_+(y,t)^2dy\geq \int_{[a,b]\cap I_\infty}\left(\frac{2w(x,0)}{2+tw(x,0)}\right)^2\frac{1}{4}\left(2+tw(x,0)\right)^2 dx=\int_{[a,b]}(w_0)_+^2(x)dx.
\]
\qed
We notice that choosing $a, b\notin I_\infty^{unique}$ we arrive at a similar conclusion as in the above proposition. Indeed, we have the following claims with less restrictive assumptions.
\begin{cor}\label{cor3.1.1}
One can relax the assumptions of Proposition \ref{prop3.0.1}, assuming only that $a, b\in \R$. Then
\begin{equation}\label{3.1.0}
E^+_{[x_r[a](t),x_l[b](t)]}(t)\geq E^+_{[a,b]}(0),
\end{equation}
where $x_r, x_l$ stands for rightmost and leftmost characteristics.
\end{cor}
\proof
Indeed, assume first $a\in I_\infty^{unique}, b\in\R$. Then there exists an increasing sequence $\left(b_{n}\right)_ {n\in \N}$ belonging to $I_\infty^{unique}$ such that $b_n\rightarrow \bar{b}$, where $\bar{b}:=\sup\{x<b:\; x\in I_\infty^{unique}\}$. If $\bar{b}<b$ then we notice that
\begin{equation}\label{3.1.01}
w_0 \leq 0\;\mbox{a.e. on}\; [\bar{b}, b].
\end{equation}
Indeed, otherwise we would have a set $B\subset (\bar{b}, b]$ of positive measure such that $w_0(x)>0$ for all $x\in B$. Hence, $B\subset I_\infty$ and so it must have a nonempty intersection with $I_\infty^{unique}$, so there exists $x_0\in B\cap I_\infty^{unique}$ such that
$w_0(x_0)>0$, which contradicts the definition of $\bar{b}$.
We obtain
\[
\int_{[a,b]}(w_0)_+(x)^2dx\stackrel{\eqref{3.1.01}}{=}\lim_{n\rightarrow \infty}\int_{[a,b_n]}(w_0)_+(x)^2dx\stackrel{Prop.\ref{prop3.0.1}}{\leq}\lim_{n\rightarrow \infty}\int_{[x_a(t),x_{b_n}(t)]}w(y,t)_+^2dy
\]
\[
=\int_{[x_a(t),x_l[\bar{b}](t)]}w(y,t)_+^2dy\leq \int_{[x_a(t),x_l[b](t)]}w(y,t)_+^2dy.
\]
If both $a,b\in \R$, $a<b$ and $a,b\notin I_\infty^{unique}$, we find sequences $a_n, b_n\in I_\infty^{unique}$, respectively decreasing and increasing, such that $a_n\rightarrow \bar{a}$, $b_n\rightarrow \bar{b}$, where $\bar{a}:=\inf\{x>a:\; x\in I_\infty^{unique}\}$. The same computation as above yields the claim.
\qed
Moreover, we notice that repeating the relevant part of the proof of Proposition \ref{prop3.0.1} we arrive at the following fact.
\begin{cor}\label{cor3.1.2}
For $0<t<\infty$ and $\zeta\in I_t^{unique}$ it holds
\[
M'(\zeta)\geq\frac{1}{4}\left(2+tw(\zeta,0)\right)^2.
\]
\end{cor}
Finally, we are in a position to define a set $J\subset I$ of full measure by
\[
J:=\{\zeta\in I:\;\mbox{ for a.e.}\; t\in [0,T_\zeta)\; (x_\zeta(t),t)\in \Gamma\},
\]
where $x_\zeta(t)$ is a characteristic, $T_\zeta$ was defined in the Definition \ref{I_s}, and
\[
\Gamma:=\{(x,t):w=\partial_xu(x,t) \;\mbox{exists}\;\}.
\]
By definition, for $t\in (0, \infty]$ we set $J_t^{unique}:=I_t^{unique}\cap J$.
The next lemma is crucial in our proof of Theorem \ref{Tw.1}. It allows us to control the difference quotients $\omega$ on a subset of $J_t^{unique}$ of full measure.
\begin{lem}\label{lem3.1.1}
Let $u$ be any weak solution to \eqref{hs}. Fix $0<t<\infty$ and $0<\tau<t$. For almost every $\zeta\in J_t^{unique}$
there exist $M > 0$ and $\varepsilon>0$ such that for every $\eta \in (\zeta,\zeta+\varepsilon]$ we have $\omega^{x_\zeta,x_l[\eta]}(s)\le M$ for every $0\leq s\leq \tau$.
\end{lem}
\proof
We denote by $J_t^{bad}$ the following set.
\[
J_t^{bad}:=\{\zeta\in J_t^{unique}:\forall \varepsilon>0\; \forall M>0 \;\exists \eta:\eta-\zeta<\varepsilon \;\exists s\in [0, \tau]\;\mbox{such that}\; \omega^{x_\zeta, x_l[\eta]}(s)>M \}.
\]
In order to prove Lemma \ref{lem3.1.1} it is enough to show that the measure of $J_t^{bad}$ is zero. To this end
fix $M>0$ and for $\zeta\in J_t^{bad}$ denote
\[
\Pi_\zeta^{M,\delta}:=[\zeta, \eta],
\]
where $\eta$ is a point that satisfies
\begin{equation}\label{3.0.45}
\eta\in (\zeta, \zeta+\delta)\;\mbox{and there exists}\;0\leq s\leq \tau\;\mbox{such that}\;\omega^{x_\zeta, x_l[\eta]}(s)>M.
\end{equation}
We observe that for fixed $M>0$
\[
{\cal E}^M:=\{\Pi_\zeta^{M,\delta}, \delta>0, \zeta\in J_t^{bad}\}
\]
is a covering of $J_t^{bad}$. Moreover, any point in $J_t^{bad}$ is contained in an element of ${\cal E}^M$ of arbitrarily small length. Indeed, by the definition of $J_t^{bad}$ one sees that given $\zeta\in J_t^{bad}$, for any small $\delta>0$ there exist $\eta \in(\zeta, \zeta+\delta)$ and $s\in[0,\tau]$ for which $\omega^{x_\zeta, x_l[\eta]}(s)>M$. So, ${\cal E}^M$ is a Vitali covering of $J_t^{bad}$. By the Vitali theorem, we obtain an at most countable subfamily ${\cal F}^M\subset {\cal E}^M$ of pairwise disjoint closed intervals such that
\[
J_t^{bad}\subset \bigcup {\cal F}^M
\]
holds up to a set of measure $0$. Denote
\[
{\cal F}^M:=\{[\zeta_i, \eta_i]\}, i\in \N, \zeta_i\in J_t^{bad}, \eta_i \;\mbox{satisfy}\;\eqref{3.0.45}.
\]
Then, for any $i\in \N$ there exists $s_i\in [0, \tau]$ such that $\omega^{x_{\zeta_i}, x_l[\eta_i]}(s_i)>M$. This leads us to
\begin{equation}\label{3.0.451}
\int_{x_{\zeta_i}(s_i)}^{x_l[\eta_i](s_i)} w_+^2(y,s_i)dy\geq (x_l[\eta_i](s_i)-x_{\zeta_i}(s_i))M^2.
\end{equation}
Indeed, \eqref{3.0.451} holds since by the Schwarz inequality and in view of the obvious inequality $w_+\geq w$
\begin{eqnarray*}
\int_{x_{\zeta_i}(s_i)}^{x_l[\eta_i](s_i)} w_+^2(y,s_i)dy&\geq& \frac{1}{x_l[\eta_i](s_i)-x_{\zeta_i}(s_i)}\left(\int_{x_{\zeta_i}(s_i)}^{x_l[\eta_i](s_i)} w_+(y,s_i)dy\right)^2 \\
&\geq &\frac{\left(u(x_l[\eta_i](s_i),s_i)-u(x_{\zeta_i}(s_i),s_i)\right)^2}{x_l[\eta_i](s_i)-x_{\zeta_i}(s_i)}>(x_l[\eta_i](s_i)-x_{\zeta_i}(s_i))M^2.
\end{eqnarray*}
In view of Proposition \ref{prop3.0.1} and Corollary \ref{cor3.1.1}, \eqref{3.0.451} yields
\[
\int_{x_{\zeta_i}(\tau)}^{x_l[\eta_i](\tau)} w_+^2(y,\tau)dy\geq \int_{x_{\zeta_i}(s_i)}^{x_l[\eta_i](s_i)} w_+^2(y,s_i)dy \geq (x_l[\eta_i](s_i)-x_{\zeta_i}(s_i))M^2.
\]
Summing over $i\in \N$ we obtain
\begin{equation}\label{3.0.47}
\int_{\R}w^2(y,\tau)dy\geq \int_{\R}w_+^2(y,\tau)\geq \sum_{i=1}^\infty \int_{x_{\zeta_i}(\tau)}^{x_l[\eta_i](\tau)} w_+^2(y,\tau)dy \geq M^2\sum_{i=1}^\infty (x_l[\eta_i](s_i)-x_{\zeta_i}(s_i)).
\end{equation}
We observe the following estimate
\begin{equation}\label{3.0.475}
(x_l[\eta_i](s_i)-x_{\zeta_i}(s_i))\geq \frac{(\eta_i-\zeta_i)\left(1-\frac{\tau}{t}\right)^2}{2}\;.
\end{equation}
Indeed, for $h(s)=x_l[\eta_i](s)-x_{\zeta_i}(s)$, as long as $h>0$, \eqref{wazne} is satisfied. Hence
\[
h(t)\geq (\eta_i-\zeta_i)e^{\int_0^t\omega^{x_{\zeta_i}, x_l[\eta_i]}(s)ds}.
\]
On the other hand, one can estimate
\[
\omega^{x_{\zeta_i}, x_l[\eta_i]}(s)\geq \frac{2\omega^{\zeta_i, \eta_i}(0)}{2+s\omega^{\zeta_i,\eta_i}(0)}
\]
the same way as in \eqref{omega}. Consequently,
\begin{equation}\label{raz_dwa}
h(s_i)\geq \frac{\eta_i-\zeta_i}{4}\left(2+s_i\omega^{\zeta_i,\eta_i}(0)\right)^2\geq \frac{\eta_i-\zeta_i}{2}\left(1-\frac{\tau}{t}\right)^2,
\end{equation}
where in the last inequality we made use of the inequalities $s_i<\tau$ and $\omega^{\zeta_i, \eta_i}(0)>-\frac{2}{t}$, the latter holding on $J_t$.
Plugging \eqref{3.0.475} in \eqref{3.0.47} we arrive at
\[
\int_{\R}w^2(y,\tau)dy\geq M^2\sum_{i=1}^\infty \frac{(\eta_i-\zeta_i)\left(1-\frac{\tau}{t}\right)^2}{2}\geq \frac{M^2}{2}\left(1-\frac{\tau}{t}\right)^2|J_t^{bad}|.
\]
But the last inequality means
\[
|J_t^{bad}|\leq \frac{2\int_{\R}w^2(y,\tau)dy}{M^2\left(1-\frac{\tau}{t}\right)^2}\;,
\]
so letting $M\to\infty$ we see that $|J_t^{bad}|=0$.
\qed
\mysection{Maximal dissipation selects the unique solution}\label{section4}
The present section consists of two subsections. In the first one we prove an averaging lemma, which will be used in
the second one in order to prove Proposition \ref{prop2.3}.
\subsection{Averaging lemma}
We prove the following proposition.
\begin{prop}\label{prop4.1.1}
For any $g\in L^1(\R)$ the following formula holds
\begin{equation}\label{4.1.1}
\lim_{\varepsilon\rightarrow 0} \int_{\R}\frac{1}{\varepsilon}\int_x^{x+\varepsilon}|g(y)-g(x)|dydx=0.
\end{equation}
\end{prop}
\proof
By the Fubini theorem we obtain
\begin{eqnarray*}
\int_{\R}\frac{1}{\varepsilon}\int_x^{x+\varepsilon}|g(y)-g(x)|dydx&=&\int_{\R}\frac{1}{\varepsilon}\int_0^\varepsilon|g(x+y)-g(x)|dydx\\ \nn
&=&\int_0^\varepsilon\frac{1}{\varepsilon}\left(\int_{\R}|g(x+y)-g(x)|dx\right)dy\\ \nn
&\leq& \sup_{y\in [0,\varepsilon] }\left(\int_{\R}|g(x+y)-g(x)|dx\right).\nn
\end{eqnarray*}
By the continuity of the translation in $L^1$ we infer
\[
\lim_{\varepsilon\rightarrow 0}\sup_{y\in [0,\varepsilon] }\left(\int_{\R}|g(x+y)-g(x)|dx\right)=0
\]
which yields the claim.
\qed
\subsection{Averaging over characteristics}
Following Lemma \ref{lem3.1.1} and the definition of $I_t$, we can represent, up to a set of measure zero, $J_T^{unique}$ as a countable union of sets with $\omega$ bounded on $[0,\tau]$ (this property will be crucial in our proof and is the foundation of the decomposition of $J_T^{unique}$ which we introduce) and with $\omega(0)$ close to $w(0)$. More precisely, there exists a set $Z$ of measure $0$ such that
\[
J_T^{unique} = \left(\bigcup_{N=1}^{\infty} J_{T}^{unique,N}\right) \cup Z,
\]
where
\[
J_T^{unique, N} := \{\zeta \in J_T^{unique}: \omega^{x_\zeta, x_l[\zeta+\eps]}(s) \le N \mbox{ and } \omega^{\zeta, \zeta+\eps}(0) \ge -2 \slash T \mbox{ for } \eps\le \frac {1}{N} \mbox{ and } 0\le s \le \tau\},
\]
and we used Lemma \ref{lem3.1.1} as well as the fact that $w_0(\zeta) > -2 \slash T$.
\begin{prop}\label{Cor_constants}
Let $0 \le t \le \tau <T$. If $\zeta \in J_T^{unique,N}$ then for $M(\zeta):=x_\zeta(t)$ and $\eps\le 1\slash N$ we have:
\begin{enumerate}[i)]
\item $M'(\zeta) \ge c(\tau)$,
\item $M(\zeta+\eps) - M(\zeta) \le \eps C_N(\tau)$,
\item $M(\zeta+\eps) - M(\zeta) \ge \eps {c}(\tau)$,
\end{enumerate}
where $c(\tau) := \frac 1 2 (1 -\tau\slash T)^2 $ and $C_N(\tau):=e^{N\tau}.$
\end{prop}
\proof
The proof of parts (i) and (iii) consists of repeating the argument in \eqref{M'} in the context of the present proposition. Since $\tau<T$ we arrive at the desired claim. As a consequence of \eqref{wazne} we obtain (ii).
\qed
Below we formulate and prove a result which is a slight extension of the Riesz lemma on choosing an a.e. convergent subsequence from a sequence convergent in $L^p$.
\begin{prop}\label{riesz}
Let $(X,\mu)$ be a measure space and let $D_1 \subset D_2 \subset \dots$ be an increasing family of subsets of $X$. Let $D\subset X$ satisfy $\mu(D \backslash \bigcup_{n=1}^{\infty} D_n) = 0$ and $\mu(\bigcup_{n=1}^{\infty} D_n \backslash D) = 0$. Consider a family of functions $d_\eps(\zeta):X\rightarrow \R$, $\eps\in(0, \eps_0)$, such that $d_\eps(\zeta)\stackrel{\eps\rightarrow 0}{\longrightarrow} 0$ in $L^1(D_n)$ for $n=1,2,\dots$. Then there exists a subsequence $\eps_k$ tending to zero such that
\[
d_{\eps_k}(\zeta)\rightarrow 0 \;\;a.e. \;\mbox{in}\; D.
\]
\end{prop}
\proof
We use the diagonal argument. Namely, convergence in $L^1$ implies convergence almost everywhere along a subsequence. Hence, there exists a decreasing sequence
$\eps^1_1, \eps^1_2, \dots$ converging to $0$ such that $d_{\eps^1_k} \stackrel{k \rightarrow \infty}{\longrightarrow} 0$ a.e. on $D_1$. Define inductively the sequence
$\eps^n_1,\eps^n_2, \dots$
as a subsequence of $\eps^{n-1}_2,\eps^{n-1}_3, \dots $ satisfying $d_{\eps^n_k} \stackrel{k \rightarrow \infty}{\longrightarrow} 0$ a.e. on $D_n$.
Finally, take $\eps_k := \eps^k_1$ for $k = 1,2,\dots$. Since $\{\eps_k\}_{k=n}^{\infty}$ is a subsequence of $\{\eps^n_j\}$ for every $n$, we obtain
\[
d_{\eps_k}(\zeta)\rightarrow 0 \;\;a.e. \;\mbox{in}\; D_n
\]
for every $n$. Since almost every $\zeta \in D$ belongs in fact to some $D_n$, we conclude.
\qed
Let us now state and prove a crucial lemma on averaging the energy over characteristics.
\begin{lem}
\label{Lem_42}
Let $u$ be a weak solution to \eqref{hs}, $w=u_x$ a.e. Let $[0,\tau] \subset [0,T)$. Then for almost every $\zeta \in J_T^{unique}$ and every $0\le \sigma \le \tau$ we have
\begin{equation}
\label{Eq_conveps}
\limsup_{k \to \infty} \left| \int_{\sigma}^{\tau} \left( \frac {1}{x_l[\zeta+\eps_k](t) - x_\zeta(t)} \int_{x_\zeta(t)}^{x_l[\zeta+\eps_k](t)} w^2(y,t)dy - w^2(x_\zeta(t),t) \right) dt\right| = 0,
\end{equation}
where $\eps_k$ is some sequence convergent monotonically to $0$ and $x_l[\zeta+\eps_k](t)$ is the leftmost characteristic emanating from $\zeta + \eps_k$, associated to $u$.
\end{lem}
\proof
It is enough to show that for every $N \in \mathbb{N}$ we have
\begin{equation}
\label{Eq_toshow}
\lim_{\eps \to 0^+} \int_{J_T^{unique,N}} \int_{\sigma}^{\tau} \left( \frac {1}{x_l[\zeta+\eps](t) - x_\zeta(t)} \int_0^{x_l[\zeta+\eps](t)-x_\zeta(t)}|w^2(x_\zeta(t) + y,t) - w^2(x_\zeta(t),t)| dy \right) dt d\zeta = 0.
\end{equation}
Indeed, one applies Proposition \ref{riesz} with $X=\R$, $\mu$ being the Lebesgue measure, $D=J_T^{unique}, D_N=J_T^{unique,N}$ and
\[
d_\eps(\zeta):=\int_{\sigma}^{\tau} \left( \frac {1}{x_l[\zeta+\eps](t) - x_\zeta(t)} \int_0^{x_l[\zeta+\eps](t)-x_\zeta(t)}|w^2(x_\zeta(t) + y,t) - w^2(x_\zeta(t),t)| dy \right) dt.
\]
It remains to show \eqref{Eq_toshow}. To this end, first observe that for $\eps \le 1 \slash N$
\begin{eqnarray*}
&&\int_{J_T^{unique, N}} \left( \frac {1}{x_l[\zeta+\eps](t)-x_\zeta(t)} \int_0^{x_l[\zeta+\eps](t)-x_{\zeta}(t)}|w^2(x_\zeta(t) + y,t) - w^2(x_\zeta(t),t)| dy \right) d\zeta \\
&=& \int_{J_T^{unique, N}} \left( \frac {1}{M(\zeta+\eps)-M(\zeta)} \int_0^{M(\zeta+\eps)-M(\zeta)}|w^2(M(\zeta) + y,t) - w^2(M(\zeta),t)| dy \right)d\zeta \\ &\stackrel{\eqref{Cor_constants}\; ii), iii)}{\le}&
\int_{J_T^{unique, N}} \left( \frac {1}{c(\tau)\eps} \int_0^{C_N(\tau)\eps}|w^2(M(\zeta) + y,t) - w^2(M(\zeta),t)| dy \right)d\zeta \\&=&
\int_{J_T^{unique, N}} g^{\eps}(M(\zeta))d\zeta,
\end{eqnarray*}
where
\[
g^{\eps}(z):= \textbf{1}_{M(J_T^{unique,N})}(z)\frac {1}{c(\tau)\eps} \int_0^{C_N(\tau)\eps} |w^2(z+y,t) - w^2(z,t)|dy.
\]
Now, observe that for fixed $\eps \le 1 \slash N$ the function $g^\eps$ is nonnegative, bounded and Borel measurable.
Using \cite[(6)]{falk_teschl}, we obtain
\[
\int_{M\left(J_T^{unique,N}\right)} g^\eps(z)dz = \int_{J_T^{unique,N}} g^\eps(M(\zeta))dM(\zeta).
\]
Next, neglecting the singular part of $dM$ and using the nonnegativity of $g^\eps$, we continue the estimate:
\[
\int_{J_T^{unique,N}} g^\eps(M(\zeta))dM(\zeta) \geq \int_{J_T^{unique,N}} g^\eps(M(\zeta))M'(\zeta)d\zeta \stackrel{\eqref{Cor_constants}\; i)}{\ge} c(\tau) \int_{J_T^{unique,N}} g^\eps(M(\zeta))d\zeta.
\]
Consequently,
\begin{eqnarray*}
S^{\eps}(t) &:=& \int_{J_T^{unique,N}} g^\eps(M(\zeta))d\zeta\\ &\le&
\frac {1}{c(\tau)} \int_{M\left(J_T^{unique,N}\right)} g^\eps (z)dz\\ &=&
\frac {1}{c(\tau)}\int_{M\left(J_T^{unique,N}\right)} \frac {1}{c(\tau)\eps} \int_0^{C_N(\tau)\eps} |w^2(z+y,t) - w^2(z,t)|dydz\\ &=&
\frac {C_N(\tau)}{c^2(\tau)}\int_{M\left(J_T^{unique,N}\right)} \frac {1}{C_N(\tau)\eps} \int_0^{C_N(\tau)\eps} |w^2(z+y,t) - w^2(z,t)|dydz\\ &\le&
\frac {C_N(\tau)}{c^2(\tau)}\int_{\mathbb{R}} \frac {1}{C_N(\tau)\eps} \int_0^{C_N(\tau)\eps} |w^2(z+y,t) - w^2(z,t)|dydz,
\end{eqnarray*}
where the constants $c(\tau)$ and $C_N(\tau)$ are defined in Proposition \ref{Cor_constants}. Using Proposition \ref{prop4.1.1} we see that
\[
S^\eps(t) \to 0
\]
as $\eps \to 0$ for almost every $t\in [0,\tau]$. Note also that, by the Fubini theorem,
\[ S^\eps(t) \le 2 \frac {C_N(\tau)}{c^2(\tau)} \sup_{s \in [0,T)} E(s). \] Using the Fubini theorem once again as well as the Lebesgue dominated convergence theorem, we obtain
\begin{eqnarray*}
\lim_{\eps \to 0^+} \int_{J_T^{unique,N}} \int_{\sigma}^{\tau} \left( \frac {1}{x_l[\zeta+\eps](t) - x_\zeta(t)} \int_0^{x_l[\zeta+\eps](t)-x_\zeta(t)}|w^2(x_\zeta(t) + y,t) - w^2(x_\zeta(t),t)| dy \right) dt d\zeta \\
=\lim_{\eps \to 0^+} \int_\sigma^\tau S^\eps(t)dt = 0.
\end{eqnarray*}
This proves \eqref{Eq_toshow}.
\qed
Thus, we have completed all the preparatory steps and now we proceed with the proof of the main result.
\vspace{0.3cm}
\textbf{Proof of Theorem \ref{Tw.1}}
As it was explained in Section \ref{section1.5} in order to prove Theorem \ref{Tw.1}, it is enough to prove Proposition \ref{prop2.3}. It implies Propositions \ref{prop2.1} and \ref{prop2.2}, which in turn yield the main theorem.
Let the sequence $\eps_k$ be obtained as in Lemma \ref{Lem_42}.
Observe that $J_T^{unique}= \bigcup_{K=1}^{\infty} J_{T + 1 \slash K}^{unique}$ (up to a set of measure $0$). Now fix $K \in \mathbb{N}$. For $\zeta \in J_{T+1\slash K}^{unique}$ and $t\leq T$, by \eqref{omegaintegral}, we have
\[ \frac {d}{dt}{\omega}^{x_\zeta, x_l[\zeta+\eps_k]} (t) = -\omega^{x_\zeta, x_l[\zeta+\eps_k]}(t)^2 + \frac {1}{2 (x_l[\zeta+\eps_k](t) - x_\zeta(t))} \int_{x_\zeta(t)}^{x_l[\zeta+\eps_k](t)} w^2(y,t)dy. \]
Hence, for $0 \le \sigma \le t \le \tau < T + 1\slash K$ we obtain
\begin{equation}
\label{eq_limitomega}
{\omega}^{x_\zeta, x_l[\zeta+\eps_k]} (\tau) - {\omega}^{x_\zeta, x_l[\zeta+\eps_k]} (\sigma) = - \int_\sigma^\tau \omega^{x_\zeta, x_l[\zeta+\eps_k]}(t)^2 dt + \frac {1}{2} \int_\sigma^\tau \frac{1}{x_l[\zeta+\eps_k](t) - x_\zeta(t)} \int_{x_\zeta(t)}^{x_l[\zeta+\eps_k](t)} w^2(y,t)dy dt.
\end{equation}
Passing to the limit $k\to\infty$ in the left-hand side of \eqref{eq_limitomega} we obtain
\begin{equation}
\label{Eq_LHS}
\lim_{k \to \infty} \left({\omega}^{x_\zeta, x_l[\zeta+\eps_k]} (\tau) - {\omega}^{x_\zeta, x_l[\zeta+\eps_k]} (\sigma)\right) = w(x_\zeta(\tau),\tau) - w(x_\zeta(\sigma),\sigma)
\end{equation}
for almost every $\sigma,\tau$ (see the definition of $\Gamma$ above Lemma \ref{lem3.1.1}). To pass to the limit in the right-hand side, we first observe that for almost every $\zeta \in J_{T+ 1 \slash K}^{unique}$ there exists $N_\zeta$ such that $\zeta \in J_{T+ 1 \slash K}^{unique,N_\zeta}$. Hence, for $k$ large enough
\[
\omega^{x_\zeta, x_l[\zeta+\eps_k]}(s) \le N_\zeta
\]
for $s \in [0,\tau]$. On the other hand, by \eqref{omega}
\[
\omega^{x_\zeta, x_l[\zeta+\eps_k]}(s) \ge \frac {-2\slash (T + 1 \slash K)}{1-(\tau\slash (T+ 1 \slash K))}
\]
for $s\in [0,\tau]$ and $k$ large enough.
Hence, $ \omega^{x_\zeta, x_l[\zeta+\eps_k]}(t)^2$ is bounded on $[0,\tau]$ and using the Lebesgue dominated convergence theorem we obtain
\begin{equation}
\lim_{k\to \infty} \int_\sigma^\tau \omega^{x_\zeta, x_l[\zeta+\eps_k]}(t)^2 dt = \int_\sigma^\tau w^2(x_\zeta(t),t)dt.
\end{equation}
Finally, by Lemma \ref{Lem_42}
\begin{equation}
\label{Eq_final}
\lim_{k\to \infty} \frac 1 2 \int_\sigma^\tau \frac{1}{x_l[\zeta+\eps_k](t) - x_\zeta(t)} \int_{x_\zeta(t)}^{x_l[\zeta+\eps_k](t)} w^2(y,t)dy dt = \frac 1 2 \int_\sigma^\tau w^2(x_\zeta(t),t) dt.
\end{equation}
Combining \eqref{eq_limitomega}-\eqref{Eq_final} and summing over $K \in \mathbb{N}$, for almost every $\zeta \in J_T^{unique}$ we have
\[
w(x_\zeta(\tau),\tau) - w(x_\zeta(\sigma),\sigma) = -\frac{1}{2} \int_{\sigma}^{\tau} w^2(x_\zeta(t),t) dt
\]
for almost every $0 \le \sigma \le \tau \le T$ (more precisely, for those $0 \le \sigma \le \tau \le T$ for which $w(x_{\zeta}(\sigma),\sigma)$ and $w(x_{\zeta}(\tau),\tau)$ exist).
Solving the above differential equation for a.e. $\zeta \in J_T^{unique}$, we obtain that
\begin{equation}\label{5.0.1}
w(x_\zeta(t),t) = \frac {2w_0(\zeta)}{2+tw_0(\zeta)}
\end{equation}
for those $t \in [0,T]$ for which $w(x_{\zeta}(t),t)$ exists. In particular, this holds for $t=T$ and almost every $\zeta \in J_T^{unique}$.
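For completeness, \eqref{5.0.1} follows by separation of variables: writing $w(t)=w(x_\zeta(t),t)$, the identity above reads $\frac{d}{dt}w=-\frac{1}{2}w^2$, whence
\[
\frac{1}{w(t)}=\frac{1}{w_0(\zeta)}+\frac{t}{2}, \qquad \mbox{i.e.} \qquad w(t)=\frac{2w_0(\zeta)}{2+tw_0(\zeta)}
\]
(when $w_0(\zeta)=0$ one has $w\equiv 0$, consistently with \eqref{5.0.1}).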
Since $x_a(T)\leq x_r[a](T)$ and $x_b(T)\geq x_l[b](T)$ by the definition of rightmost and leftmost characteristics we arrive at
\begin{eqnarray*}
\int_{[x_a(T),x_b(T)]}w^2(y,T)dy &\ge& \int_{[x_r[a](T),x_l[b](T)]}w^2(y,T)dy \ge \int_{M([a,b] \cap J_T^{unique})} w^2(y,T)dy\\ &\stackrel{\eqref{5.0.1}}{\ge}& \int_{M([a,b] \cap J_T^{unique})} \left( \frac {2 w_0(W(y))}{2+Tw_0(W(y))}\right)^2 dy \\
&\stackrel{\eqref{c_of_v}}{=}& \int_{[a,b] \cap J_{T} ^{unique}} \left( \frac {2 w_0(\zeta)}{2+Tw_0(\zeta)}\right)^2 dM(\zeta) \\
&\stackrel{Cor.\ref{cor3.1.2}}{\ge}& \int_{[a,b] \cap J_{T} ^{unique}} \left( \frac {2 w_0(\zeta)}{2+Tw_0(\zeta)}\right)^2 \times \frac {1}{4} [2+Tw_0(\zeta)]^2 d\zeta \\
&=& \int_{[a,b] \cap J_T^{unique}} w_0(\zeta)^2 d\zeta.
\end{eqnarray*}
The claim of Proposition \ref{prop2.3} follows in view of the fact that $I_T=J_T^{unique}$ up to a set of measure zero. This, in turn, implies Theorem \ref{Tw.1}.
\qed
\noindent
{\bf Acknowledgement.} T.C. was partially supported by the National Centre of Science (NCN) under grant 2013/09/D/ST1/03687. | {"config": "arxiv", "file": "1405.7558/TCGJ9.tex"} |
TITLE: Definition of the Limes Superior
QUESTION [0 upvotes]: I understand what the limes superior is insofar as I know that, for example, for an alternating sequence such as $a_n = (-1)^n$, which does not diverge to $\infty$, we can define the larger accumulation point $1$ as the limes superior.
What I don't understand yet though is the corresponding notation which is:
$\sup_{n \in \mathbb N} \ \inf_{k \geq n} x_k$
I understand this so far as follows:
I pick one concrete $n \in \mathbb N$, say $n = 3$.
Then I pick the infimum of the set $\{x_3,x_4,\dots\}$; let's call this infimum $\inf_3$.
Next I set $n = 4$ and pick the infimum of the set $\{x_4,x_5,\dots\}$, etc.
Naturally this sequence of infima must be monotonically falling (each set we take the infimum of is a subset of the previous set). So, what exactly are we then taking the supremum of? I.e., where is my understanding wrong, since as I understand it, this obviously does not make much sense... thanks
REPLY [1 votes]: The sequence of infima is monotonically non-decreasing, actually, not decreasing. Think of the sequence $(2 - 1/n)$: for $n=1$ the infimum is $1$, for $n=2$ it is $3/2$, and so on. (Note also that $\sup_{n}\inf_{k\geq n}x_k$ is in fact the limes inferior; the limes superior is $\inf_{n}\sup_{k\geq n}x_k$.) | {"set_name": "stack_exchange", "score": 0, "question_id": 344770}
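A quick numerical illustration of this (a Python sketch of mine): the running infima are non-decreasing, and their supremum is the value $\sup_{n}\inf_{k\geq n}x_k$.

x = [2 - 1/k for k in range(1, 101)]                    # the answer's example
infima = [min(x[n:]) for n in range(len(x))]            # inf over each tail
assert all(a <= b for a, b in zip(infima, infima[1:]))  # non-decreasing
print(max(infima))                                      # 1.99, approaching the limit 2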
TITLE: The dual of the dual of a group $G$ is isomorphic to $G$
QUESTION [4 upvotes]: This is exercise 3.3 of Serre's book on Representation theory of finite groups: Let $G$ be a finite abelian group and $G'$ the set of irreducible characters of $G$. It is easy to prove that $G'$ is also a finite abelian group of the same order as $G$ via the usual multiplication of characters. Also, for $x \in G$, it can be established that the mapping $\Psi_x :\chi \to \chi(x)$ is an irreducible character of $G'$, so an element of $G''$, the dual of $G'$. This naturally defines a map $f$ from $G$ to $G''$, sending $x$ to $\Psi_x$, which I want to prove is an isomorphism. My question is:
Why is $f$ injective?
It is also easy to see that $f$ is a group homomorphism, so it is enough to show that its kernel is trivial. If $x$ is such that $\Psi_x = 1_{G''}$, then we know that $\chi(x)=1$ for every irreducible character $\chi$ of $G$, right? Why does this imply that $x=1$? This should be easy to prove, but I guess I am getting confused with the double dual and not seeing how to do it.
REPLY [4 votes]: If $\chi(x)=1$ for every irreducible character of $G$, then $\rho(x)=1$ for every representation $\rho$ of $G$, since every representation is a direct sum of irreducible representations. Taking $\rho$ to be the regular representation, this implies $x=1$.
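A quick computational sanity check of this separation property (a sketch of mine, for the cyclic group $G=\mathbb{Z}/6$, whose irreducible characters are $\chi_j(x)=e^{2\pi i jx/6}$):

import numpy as np

n = 6
for x in range(n):
    values = [np.exp(2j * np.pi * j * x / n) for j in range(n)]
    if all(abs(v - 1) < 1e-12 for v in values):
        print("trivial on all characters:", x)   # prints only x = 0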
Alternatively, you can use the classification of finite abelian groups to explicitly construct $\chi$ such that $\chi(x)\neq 1$ if $x\neq 1$. Identify $G$ with a direct sum of cyclic groups, and choose a coordinate on which $x$ is nontrivial. The projection onto this coordinate is a homomorphism $\varphi:G\to\mathbb{Z}/n$ such that $\varphi(x)\neq 0$, and then composing with a faithful 1-dimensional representation of $\mathbb{Z}/n$ gives a 1-dimensional representation $\chi$ of $G$ such that $\chi(x)\neq 1$. | {"set_name": "stack_exchange", "score": 4, "question_id": 1934915} |
TITLE: Zeros of the derivative of a function are real
QUESTION [0 upvotes]: I'm reviewing Complex Analysis from Ahlfors' book and am stuck on this question.
"Show that if $f(z)$ is of genus $0$ or $1$ with real zeros, and if $f(z)$ is real
for real $z$, then all zeros of $f'(z)$ are real. Hint: Consider $Im (f'(z)/f(z))$."
I feel like this is hard even in the simple case. Say $f$ has genus 0. Then the book proved that it must be of the form
$$Cz^m\prod_1^\infty (1-\frac{z}{a_n})$$
with $\sum 1/|a_n|<\infty$. I have the form already, but I don't see how I can use the other hypotheses to deduce anything about the zeros of $f'$. So I'm stuck even in this case.
REPLY [0 votes]: We have
$$ f(z) = Cz^m e^{\alpha z} \prod_{n=1}^\infty \left(1 - \frac{z}{a_n} \right) e^{z/a_n}. $$
The condition $f(z)$ is real for real $z$ requires that $\alpha$ be real.
Taking the logarithmic derivative, we have
$$\frac{f'(z)}{f(z)}= \frac{m}{z}+ \alpha + \sum_{n=1}^\infty \frac{1}{z-a_n} + \frac{1}{a_n}. $$
Taking the imaginary part, we have
$$\text{Im} \frac{f'(z)}{f(z)}= -\left(\frac{m}{|z|^2}+\sum_{n=1}^\infty \frac{1}{|z-a_n|^2} \right) \text{Im} z.$$
The term inside the parentheses is strictly positive. Thus zeros of the derivative $f'(z)$ can only occur when $z$ is real. | {"set_name": "stack_exchange", "score": 0, "question_id": 3590217} |
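For what it's worth, a numerical sanity check (mine) of the simplest polynomial (genus-0) case, using NumPy:

import numpy as np

p = np.poly1d([-5.0, -3.0, 1.0, 2.0], r=True)   # real zeros -5, -3, 1, 2
crit = p.deriv().roots                           # zeros of f'
print(np.max(np.abs(crit.imag)))                 # ~0: all of them are real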
TITLE: Why is $\sum_{i=1}^n\frac n{n-i+1}=n\sum_{i=1}^n\frac1i$?
QUESTION [2 upvotes]: While reading through my (algorithms and) probability script, I have seen this equality for calculating the first moment of the coupon-collector problem. However, I don't quite see how the sum of the fraction on the left can be split up in such a way, that the sum of the fraction on the right is equal to it.
$$\sum_{i=1}^n\frac n{n-i+1}=n\sum_{i=1}^n\frac1i$$
I assume it is about partial fraction decomposition, but I don't know how to apply it here.
REPLY [2 votes]: Pull $n$ out of the sum and make the change of variable $j=n-i+1$.
REPLY [2 votes]: It's just reverse summation order:
$$\sum_{i=1}^n \frac1i = 1 + \frac12 + \frac13 + ... + \frac1n$$
$$\sum_{i=1}^n \frac1{n-i+1} = \frac1n + \frac1{n-1} + \frac1{n-2} + ... + 1.$$
REPLY [1 votes]: This is just reindexing. As $i$ runs from $1$ to $n$, $n-i+1$ runs from $n$ to $1$. So we may rewrite the LHS as $\sum_{i=1}^n\frac ni$. | {"set_name": "stack_exchange", "score": 2, "question_id": 3610538} |
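For what it's worth, a one-line numerical check of the identity (a sketch of mine, in Python):

n = 10
lhs = sum(n / (n - i + 1) for i in range(1, n + 1))
rhs = n * sum(1 / i for i in range(1, n + 1))
print(abs(lhs - rhs) < 1e-12)   # True: both sides agree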
TITLE: What problems are easier to solve in a higher dimension, i.e. 3D vs 2D?
QUESTION [12 upvotes]: I'd be interested in knowing if there are any problems that are easier to solve in a higher dimension, i.e. using solutions in a higher dimension that don't have an equally optimal counterpart in a lower dimension, particularly common (or uncommon) geometry and discrete math problems.
REPLY [1 votes]: Counting regular convex polytopes
In 2D, there are infinitely many regular polygons.
In 3D, there are five regular convex polyhedra.
In 4D, there are six regular convex polytopes.
In 5D and above, there are only three.
I was quite surprised when I first learned this.
List of regular polytopes and compounds (Wikipedia) | {"set_name": "stack_exchange", "score": 12, "question_id": 693485} |
TITLE: Notation for the set of edges incident to a vertex $v$
QUESTION [1 upvotes]: How can I denote the set of edges incident to a vertex $v$ in a graph? Any suggestion/reference for this?
I know that the set of vertices adjacent to a vertex is denoted $N(v)$ (the neighborhood of the vertex), but I do not know a notation for the edge set!
REPLY [3 votes]: For undirected graphs: if $u$ is the given vertex, the incident edges are the pairs $(u, v) \in E$ over all $v \in V$.
For the directed case:
If $v$ is the starting point of the edge/arc, take all $u \in V$ such that $(v, u) \in E$.
If $v$ is the endpoint, take all $u \in V$ such that $(u, v) \in E$.
You could then simply define a set of your own, like:
Given a vertex $v$, let $S(v) = \{ e \in E : e = (v, u) \text{ for some } u \in V\}$. | {"set_name": "stack_exchange", "score": 1, "question_id": 4385376}
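As a side note, the literature often writes $\delta(v)$ or $E(v)$ for this set, and in code (a small sketch using the networkx library) you can obtain it directly:

import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (2, 3)])
incident = list(G.edges(1))   # edges incident to vertex 1
print(incident)               # [(1, 2), (1, 3)]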
TITLE: Number of permutations of r items from T total items of which n are distinct.
QUESTION [2 upvotes]: Let's say I have $T$ total items and out of them $n$ are distinct. So $n \le T$.
In other words, we can have $m_1$ copies of item 1, $m_2$ copies of item 2, and so on, with $m_1 + m_2 +\cdots +m_n = T$.
What is the number of permutations of $r$ items taken out of those $T$ items, where $r \le T$?
If $r = T$, the answer is: $T!/( m_1! m_2! \cdots m_n!)$.
The question is: what if $r < T$? How many permutations are there?
Note I am not looking for the number of combinations, where order doesn't matter, but the number of permutations, where order does matter.
I know the number of combinations of $r$ items from $n$ distinct items with repetition is $C(n+r-1,r)$.
REPLY [1 votes]: Consider the following expansion
$$
\eqalign{
& \left( {x_{\,1} + x_{\,2} + \cdots + x_{\,n} } \right)^{\,2} = \cr
& \left( {x_{\,1} + x_{\,2} + \cdots + x_{\,n} } \right)\left( {x_{\,1} + x_{\,2} + \cdots + x_{\,n} } \right) = \cr
& = x_{\,1} ^{\,2} + x_{\,1} ^{\,1} x_{\,2} ^{\,1} + x_{\,2} ^{\,1} x_{\,1} ^{\,1} + x_{\,2} ^{\,2} + \cdots = \cr
& = \sum\limits_{\left\{ {\matrix{ {0\, \le \,j_{\,k} \left( { \le \,2} \right)} \cr {j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,n} \, = \,2} \cr } } \right.\;}
{\left( \matrix{ 2 \cr j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \cr} \right)\;x_{\,1} ^{j_{\,1} } \;x_{\,2} ^{j_{\,2} } \; \cdots \;x_{\,n} ^{j_{\,n} } } \cr}
$$
we can see that it counts both $x_1x_2$ and $x_2x_1$, i.e., order is taken into account.
Here, each item can appear from $0$ to $2$ times.
You are instead looking for
$$
\eqalign{
& \left( {x_{\,1} + x_{\,2} + \cdots + x_{\,n} } \right)^{\,r} \quad \left| {\;rep.\,x_{\,k} \le m_{\,k} } \right.\quad = \cr
& = \sum\limits_{\left\{ {\matrix{
{0\, \le \,j_{\,k} \le \,m_{\,k} } \cr
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,n} \, = \,r} \cr
} } \right.\;} {\left( \matrix{
r \cr
j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \cr} \right)\;x_{\,1} ^{j_{\,1} } \;x_{\,2} ^{j_{\,2} } \; \cdots \;x_{\,n} ^{j_{\,n} } } \cr}
$$
where each index is limited to a given max repetition $m_k$., and thus for the corresponding number
$$ \bbox[lightyellow] {
N(r,n,{\bf m}) = \sum\limits_{\left\{ {\matrix{
{0\, \le \,j_{\,k} \le \,m_{\,k} } \cr
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,n} \, = \,r} \cr
} } \right.\;} {\left( \matrix{
r \cr
j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \cr} \right)}
} \tag{1}$$
The multinomial, in fact, counts the number of permutations of $j_1$ objects of type 1, $j_2$ objects of type 2, etc.
In standard combinatorics jargon we are dealing with the number of
words of length $r$, from alphabet $\{1,2, \cdots , n \}$, with allowed repetitions of each character up to $m_k \; |\, 1 \le k \le n$.
Identity (1) can be recast in various ways, which will be more or less appealing depending on the characteristics
of the vector $\bf m$.
Let us note first of all that, w.l.o.g., $\bf m$ can be permuted, so arranged in (e.g.) non-decreasing order.
Let's examine some particular cases.
a) $m_k=1 \quad \to$ words with different characters
Calling $\bf u$ the vector with all components at 1,
$$
\begin{array}{l}
N(r,n,{\bf u}) = \sum\limits_{\left\{ {\begin{array}{*{20}c} {0\, \le \,j_{\,k} \le \,1} \\ {j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,n} \, = \,r} \\
\end{array}} \right.\;} {\left( \begin{array}{c} r \\ j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \\
\end{array} \right)} = \\
= \left( \begin{array}{c} n \\ r \\
\end{array} \right)\sum\limits_{\left\{ {\begin{array}{*{20}c}
{j_{\,1} = j_{\,2} = \cdots = j_{\,r} = \,1} \\
{j_{\,r + 1} = j_{\,r + 2} = \cdots = j_{\,n} = \,0} \\
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,r} \, = \,r} \\
\end{array}} \right.\;}
{\left( \begin{array}{c} r \\ j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \\
\end{array} \right)} = \\
= \left( \begin{array}{c} n \\ r \\
\end{array} \right)\frac{{r!}}{{1! \cdots 1!0! \cdots 0!}} = n^{\,\underline {\,r\,} } \\
\end{array}
$$
which is obvious: here $n^{\,\underline {\,r\,} }$ denotes the falling factorial.
b) $r \le m_k \quad \to$ general words
$$
N(r,\,n,\,r\,{\bf u} \le {\bf m}) = \sum\limits_{\left\{ {\begin{array}{*{20}c}
{0\, \le \,j_{\,k} \left( { \le \,r} \right)} \\
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,n} \, = \,r} \\
\end{array}} \right.\;} {\left( \begin{array}{c} r \\ j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \\
\end{array} \right)} = n^{\,r}
$$
which is obvious as well.
Therefore in general we will have
$$
n^{\,\underline {\,r\,} } \le N(r,\,n,\,{\bf m}) \le n^{\,r}
$$
c) composition of words
Let's split the alphabet into two, one with $q$ characters and the other with $n-q$ characters.
We divide the vector $\bf m$ in two accordingly; we may as well set to $0$ the maximal repetition of the characters not included.
A word of $r$ characters may be formed by $s$ characters of the first alphabet and $r-s$ of the second one, with $0 \le s \le r$.
We may consider the words from the total alphabet as made by words from the first interspersed with words from the second.
The $s$ characters are dispersed into the $r$ places, maintaining their order in $\binom{r}{s}$ ways.
We obtain in fact the interesting relation
$$ \bbox[lightyellow] {
\begin{array}{l}
N(r,\,n,\,{\bf m}_{\,q} \oplus {\bf m}_{\,n - q} ) = \sum\limits_{\left\{ {\begin{array}{*{20}c}
{0\, \le \,j_{\,k} \le \,m_{\,k} \;\left| {\;1\, \le \,k\, \le \,n} \right.} \\
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,n} \, = \,r} \\
\end{array}} \right.\;} {
\left( \begin{array}{c} r \\ j_{\,1} ,\,j_{\,2} ,\, \cdots ,\,j_{\,n} \\ \end{array} \right)} = \\
= \sum\limits_s {\sum\limits_{\left\{ {\begin{array}{*{20}c}
{0\, \le \,j_{\,k} \le \,m_{\,k} \;\left| {\;1\, \le \,k\, \le \,q} \right.} \\
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,q} \, = \,s} \\
{0\, \le \,j_{\,k} \le \,m_{\,k} \;\left| {\;q + 1\, \le \,k\, \le \,n} \right.} \\
{j_{\,q + 1} + \,j_{\,q + 2} + \, \cdots + \,j_{\,n} \, = \,r - s} \\
\end{array}} \right.\;} {\frac{{r!}}{{s!\left( {r - s} \right)!}}\frac{{s!}}{{j_{\,1} !\,\, \cdots j_{\,q} !}}\frac{{\left( {r - s} \right)!}}{{j_{\,q + 1} !\,\, \cdots j_{\,n} !}}} } = \\
= \sum\limits_s {\left( \begin{array}{c} r \\ s \\ \end{array} \right)
\sum\limits_{\left\{ {\begin{array}{*{20}c}
{0\, \le \,j_{\,k} \le \,m_{\,k} \;\left| {\;1\, \le \,k\, \le \,q} \right.} \\
{j_{\,1} + \,j_{\,2} + \, \cdots + \,j_{\,q} \, = \,s} \\
\end{array}} \right.\;} {\frac{{s!}}{{j_{\,1} !\,\, \cdots j_{\,q} !}}} } \;\sum\limits_{\left\{ {\begin{array}{*{20}c}
{0\, \le \,j_{\,k} \le \,m_{\,k} \;\left| {\;q + 1\, \le \,k\, \le \,n} \right.} \\
{j_{\,q + 1} + \,j_{\,q + 2} + \, \cdots + \,j_{\,n} \, = \,r - s} \\
\end{array}} \right.\;} {\frac{{\left( {r - s} \right)!}}{{j_{\,q + 1} !\,\, \cdots j_{\,n} !}}} = \\
= \sum\limits_{0\, \le \,s\left( { \le \,r} \right)} {\left( \begin{array}{c} r \\ s \\
\end{array} \right)N(s,\,q,\,{\bf m}_{\,q} )N(r - s,\,n - q,\,{\bf m}_{\,n - q} )} =\\
= \sum\limits_{0\, \le \,s\left( { \le \,r} \right)} {\left( \begin{array}{c} r \\ s \\
\end{array} \right)N(s,\,n,\,{\bf m}_{\,q} )N(r - s,\,n,\,{\bf m}_{\,n - q} )} =\\
\end{array}
} \tag{2}$$
where the last two lines come from understanding, equivalently:
- the vectors ${\bf m}_{\,q}$ and ${\bf m}_{\,n-q}$ as two vectors with the dimensions indicated
and then $\oplus$ meaning "join";
- ${\bf m}_{\,q}$ and ${\bf m}_{\,n-q}$ as vectors of dimension $n$ with complementary null components
and then $\oplus$ meaning effectively "plus".
This says that $N$ is given by the binomial convolution of its parts, just as holds
for its two limiting expressions
$$
n^{\,r} = \sum\limits_{0\, \le \,s} {\binom{r}{s} q^{\,s} \left( {n - q} \right)^{\,r - s} }
\quad n^{\,\underline {\,r\,} } = \sum\limits_{0\, \le \,s} {\binom{r}{s}q^{\,\underline {\,s\,} } \left( {n - q} \right)^{\,\underline {\,r - s\,} } }
$$
So you are right: a recursive approach is quite viable. | {"set_name": "stack_exchange", "score": 2, "question_id": 3258675}
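A small Python sketch (mine) of the computation behind (1), via the exponential generating function: the count equals $r!\,[x^r]\prod_k \sum_{j=0}^{m_k} x^j/j!$, which also realizes the recursive approach mentioned above.

from fractions import Fraction
from math import factorial

def count_words(r, m):
    # r! * [x^r] of prod_k sum_{j <= m[k]} x^j / j!, computed exactly
    poly = [Fraction(1)]
    for mk in m:
        factor = [Fraction(1, factorial(j)) for j in range(mk + 1)]
        new = [Fraction(0)] * min(len(poly) + mk, r + 1)
        for a, ca in enumerate(poly):
            for b, cb in enumerate(factor):
                if a + b <= r:
                    new[a + b] += ca * cb
        poly = new
    return int(poly[r] * factorial(r)) if r < len(poly) else 0

print(count_words(2, [1, 1, 1]))   # 6 = the falling factorial 3*2 (all m_k = 1)
print(count_words(2, [2, 2, 2]))   # 9 = 3**2 (m_k >= r)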
TITLE: Given $f(x)=-x^2+1$ and $g(x)=\sqrt{x+1}$, find $k(x)=(g\circ f)(x)$?
QUESTION [2 upvotes]: Given $f(x)=-x^2+1$ and $g(x)=\sqrt{x+1}$, find $k(x)=(g \circ f)(x)$?
Following the step's my teach told me to do this type of equation I did this... I feel like I'am not showing enough work and I'am going to college next year and I want to get full mark I'am not even sure If i'am correct. please help me learn how to do this...
My solution :
$$(g \circ f)(x) =
g(f(x)) =
\sqrt{(-x^2 + 1) + 1} =
\sqrt{2 - x^2}$$
My apologies for not formatting this correctly.
REPLY [1 votes]: Just compare: a few moments back, I found a function
$$f(x) = \sqrt x, \qquad g(x) = x^2.$$
The domain of $f(x)=\sqrt x$ is all non-negative real numbers, while the domain of $g(x) = x^2$ is all real numbers. The composed function is
$$(g \circ f)(x) = g(f(x)) = (\sqrt x)^2 = x.$$
Now, $x$ would normally have the domain of all real numbers. But because it is a composed function, we must also consider $f(x)$; so the domain is all non-negative real numbers.
(Recall: the domain of a function is the set of input values for which the function is defined.) | {"set_name": "stack_exchange", "score": 2, "question_id": 425547}
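If you want to double-check a composition like yours (a sketch using sympy), the simplification and the domain remark both show up:

import sympy as sp

x = sp.symbols('x', real=True)
f = -x**2 + 1
g = sp.sqrt(x + 1)
k = g.subs(x, f)        # (g o f)(x) = sqrt((-x**2 + 1) + 1)
print(sp.simplify(k))   # sqrt(2 - x**2), real only when x**2 <= 2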
\begin{document}
\begin{abstract}
We introduce three new nonlinear continuous data assimilation algorithms. These models are compared with the linear continuous data assimilation algorithm introduced by Azouani, Olson, and Titi (AOT). As a proof-of-concept for these models, we computationally investigate these algorithms in the context of the 1D Kuramoto-Sivashinsky equation. We observe that the nonlinear models experience super-exponential convergence in time, and converge to machine precision significantly faster than the linear AOT algorithm in our tests.
\end{abstract}
\maketitle
\thispagestyle{empty}
\noindent
\section{Introduction}\label{secInt}
\noindent
Recently, a promising new approach to data assimilation was pioneered by Azouani, Olson, and Titi \cite{Azouani_Olson_Titi_2014,Azouani_Titi_2014} (see also \cite{Cao_Kevrekidis_Titi_2001,Hayden_Olson_Titi_2011,Olson_Titi_2003} for early ideas in this direction). This new approach, which we call AOT data assimilation or the linear AOT algorithm, is based on feedback control at the partial differential equation (PDE) level, described below. In the present work, we propose several nonlinear data assimilation algorithms based on the AOT algorithm, that exhibit significantly faster convergence in our simulations; indeed, the convergence rate appears to be super-exponential.
Let us describe the general idea of the AOT algorithm. Consider a dynamical system in the form,
\begin{empheq}[left=\empheqlbrace]{align}\label{ODE}
\begin{split}
\tfrac{d}{dt} u &= F(u),\\
u(0)&=u_0.
\end{split}
\end{empheq}
For example, this could represent a system of partial differential equations modeling fluid flow in the atmosphere or the ocean. A central difficulty is that, even if one were able to solve the system exactly, the initial data $u_0$ is largely unknown. For example, in a weather or climate simulation, the initial data may be measured at certain locations by weather stations, but the data at locations in between these stations may be unknown. Therefore, one might not have access to the complete initial data $u_0$, but only to the observational measurements, which we denote by $I_h(u_0)$. (Here, $I_h$ is assumed to be a linear operator that can be taken, for example, to be an interpolation operator between grid points of maximal spacing $h$, or as an orthogonal projection onto Fourier modes no larger than $k\sim 1/h$.)
Moreover, the data from measurements may be streaming in moment by moment, so in fact, one often has the information $I_h(u)=I_h(u(t))$, for a range of times $t$. Data assimilation is an approach that eliminates the need for complete initial data and also incorporates incoming data into simulations. Classical approaches to data assimilation are typically based on the Kalman filter. See, e.g., \cite{Daley_1993_atmospheric_book,Kalnay_2003_DA_book,Law_Stuart_Zygalakis_2015_book} and the references therein for more information about the Kalman filter. In 2014, an entirely new approach to data assimilation---the AOT algorithm---was introduced in \cite{Azouani_Olson_Titi_2014,Azouani_Titi_2014}. This new approach overcomes some of the drawbacks of the Kalman filter approach (see, e.g., \cite{Biswas_Hudson_Larios_Pei_2017} for further discussion). Moreover, it is implemented directly at the PDE level. The approach has been the subject of much recent study in various contexts, see, e.g., \cite{Albanez_Nussenzveig_Lopes_Titi_2016,Altaf_Titi_Knio_Zhao_Mc_Cabe_Hoteit_2015,Bessaih_Olson_Titi_2015,Biswas_Martinez_2017,Farhat_Jolly_Titi_2015,Farhat_Lunasin_Titi_2016abridged,Farhat_Lunasin_Titi_2016benard,Farhat_Lunasin_Titi_2016_Charney,Farhat_Lunasin_Titi_2017_Horizontal,Foias_Mondaini_Titi_2016,Gesho_Olson_Titi_2015,Jolly_Martinez_Titi_2017,Markowich_Titi_Trabelsi_2016_Darcy,Mondaini_Titi_2017}.
The following system was proposed and studied in \cite{Azouani_Olson_Titi_2014,Azouani_Titi_2014}:
\begin{empheq}[left=\empheqlbrace]{align}\label{AOT}
\begin{split}
\dfrac{d}{dt} v &= F(v) + \mu(I_h(u) - I_h(v)),\\
v(0)&=v_0.
\end{split}
\end{empheq}
This system, used in conjunction with \eqref{ODE}, is the AOT algorithm for data assimilation of system \eqref{ODE}. In the case where the dynamical system \eqref{ODE} is the 2D Navier-Stokes equations, it was proven in \cite{Azouani_Olson_Titi_2014,Azouani_Titi_2014} that, for any divergence-free initial data $v_0\in L^2$,
$
\|u(t) - v(t)\|_{L^2} \maps 0,
$
exponentially in time.
In particular, even without knowing the initial data $u_0$, the solution $u$ can be approximately reconstructed for large times. We emphasize that, as noted in \cite{Azouani_Olson_Titi_2014}, the initial data for \eqref{AOT} can be any $L^2$ function, even $v_0=0$. Thus, no information about the initial data is required to reconstruct the solution asymptotically in time.
The principal aim of this article is to develop a new class of nonlinear algorithms for data assimilation. The main idea is to use a nonlinear modification of the AOT algorithm for data assimilation to try to drive the algorithm toward the true solution at a faster rate. In particular, for a given, possibly nonlinear function $\mathcal{N}$, we consider a modification of \eqref{AOT} in the form:
\begin{empheq}[left=\empheqlbrace]{align}\label{AOT_NL}
\begin{split}
\dfrac{d}{dt} v &= F(v) + \mu \mathcal{N}(I_h(u) - I_h(v)),\\
v(0)&=v_0.
\end{split}
\end{empheq}
To begin, we first focus on the following form of the nonlinearity:
\begin{align}\label{NL_power}
\mathcal{N}(x) =\mathcal{N}_1(x) := x|x|^{-\gamma}, \quad x\neq0, \quad 0<\gamma<1,
\end{align}
with $\mathcal{N}_1(0)=0$.
\begin{remark}\label{concave_remark}
Note that by formally setting $\gamma=0$, one recovers the linear AOT algorithm \eqref{AOT}. The main idea behind using such a nonlinearity is that, when $I_h(u)$ is close to $I_h(v)$, the solution $v$ is driven toward the true solution $u$ more strongly than in the linear AOT algorithm. In particular, for any $c>0$, if $x>0$ is small enough, then $\mathcal{N}_1(x)> cx$, so no matter how large $\mu$ is chosen in the linear AOT algorithm, the nonlinear method with $\mathcal{N}=\mathcal{N}_1$ will always penalize small errors more strongly.
\end{remark}
As a preliminary test of the effectiveness of this approach, in this work we demonstrate the nonlinear data assimilation algorithm \eqref{AOT_NL} on a one-dimensional PDE; namely, the Kuramoto-Sivashinky equation (KSE), given in dimensionless units by:
\begin{empheq}[left=\empheqlbrace]{align}\label{KSE}
\begin{split}
u_t + uu_x + \lambda u_{xx}+ u_{xxxx}&=0,
\\
u(x,0) &=u_0(x),
\end{split}
\end{empheq}
in a periodic domain $\Omega = [-L/2,L/2] = \nR/L\nZ$ of length $L$. Here, $\lambda>0$ is a dimensionless parameter. For simplicity, we assume that the initial data is sufficiently smooth (made more precise below) and mean-free, i.e., $\int_{-L/2}^{L/2}u_0(x)\,dx=0$, which implies $\int_{-L/2}^{L/2}u(x,t)\,dx=0$ for all $t\geq0$. This equation has many similarities with the 2D Navier-Stokes equations. It is globally well-posed; it has chaotic large-time behavior; and it has a finite-dimensional global attractor, making it an excellent candidate for studying large-time behavior. It governs various physical phenomena, such as the evolution of flame-fronts, the flow of viscous fluids down inclined planes, and certain types of crystal growth (see, e.g., \cite{Kuramoto_Tsuzuki_1976,Sivashinsky_1977,Sivashinsky_1980_stoichiometry}). Much of the theory of the 1D Kuramoto-Sivashinsky equation was developed in the periodic case in \cite{Collet_Eckmann_Epstein_Stubbe_1993_Attractor, Constantin_Foias_Nicolaenko_Temam_1989_IM_Book, Goodman_1994, Tadmor_1986, Temam_1997_IDDS,Tadmor_1986,Ilyashenko_1992} (see also \cite{Azouani_Olson_Titi_2014,Bronski_Gambill_2006,Cheskidov_Foias_2001,Foias_Kukavica_1995,Foias_Nicolaenko_Sell_1985,Foias_Nicolaenko_Sell_Temam_1988,Goldman_Josien_Otto_2015,Golovin_Davis_Nepomnyashchy_2000,Hyman_Nicolaenko_Zaleski_1986,Jardak_Navon_Zupanski_2010,Jolly_Kevrekidis_Titi_1990,Jolly_Rosa_Temam_2000,Kuramoto_Tsuzuki_1976,Lions_Perthame_Tadmor_1994,Liu_1991_KSE_Gevrey,Molinet_2000,Otto_2009,Sivashinsky_1977,Sivashinsky_1980_stoichiometry,Hyman_Nicolaenko_1986}). For a discussion of other boundary conditions for \eqref{KSE}, see, e.g., \cite{Kuramoto_Tsuzuki_1976,Sivashinsky_1977, Robinson_2001, Pokhozhaev_2008,Larios_Titi_2015_BC_Blowup}.
Discussions about the numerical simulations of the KSE can be found in, e.g., \cite{Foias_Jolly_Kevrekidis_Titi_1991_Nonlinearity,Foias_Titi_1991_Nonlinearity,Foias_Jolly_Kevrekidis_Titi_1994_KSE,Lopez_1994_pseudospectral_KSE,Kalogirou_Keaveny_Papageorgiou_2015}. Data assimilation in several different contexts for the 1D Kuramoto-Sivashinsky equation was investigated in \cite{Jardak_Navon_Zupanski_2010,Lunasin_Titi_2015}, who also recognized its potential as an excellent test-bed for data assimilation.
Using the nonlinear data assimilation algorithm \eqref{AOT_NL} in the setting of the Kuramoto-Sivashinsky equation, with the choice \eqref{NL_power} for the nonlinearity for some $0<\gamma<1$, we arrive at:
\begin{empheq}[left=\empheqlbrace]{align}\label{KSE_DA}
\begin{split}
v_t + vv_x + \lambda v_{xx}+ v_{xxxx}&=\mu\,\,\text{sign}(I_h(u) - I_h(v))|I_h(u) - I_h(v)|^{1-\gamma},
\\
v(x,0) &=v_0(x).
\end{split}
\end{empheq}
We take $I_h$ to be orthogonal projection onto the first $c/h$ Fourier modes, for some constant $c$. Other physically relevant choices of $I_h$, such as a nodal interpolation operator, have been considered in the case of the linear AOT algorithm (see, e.g., \cite{Gesho_Olson_Titi_2015}).
\begin{remark}
In view of the ODE example $y'=y^{1/2}$, $y(0)=0$, which has multiple solutions, one might wonder about the well-posedness of equation \eqref{KSE_DA}. While lack of well-posedness is a possibility, our simulations do not appear to be strongly affected by such a hindrance. In any case, we believe the greatly reduced convergence time we observe makes the equations worth studying. A similar remark can be made for an equation we examine in a later section with power larger than one, in view of the ODE example $z'=z^2$, $z(0)=1$, which develops a singularity in finite time. A study of the well-posedness of \eqref{AOT_NL}, in various settings, will be the subject of a forthcoming work.
\end{remark}
\section{Preliminaries}\label{secPre}
\noindent
In this work, we compute norms of the error given by the difference between the data assimilation solution and the reference solution. We focus on the $L^2$ and $H^1$ norms, defined by
\begin{align*}
\|u\|_{L^2}^2 = \frac{1}{L}\int_{-L/2}^{L/2}|u(x)|^2\,dx,
\qquad
\|u\|_{H^1} = \|\nabla u\|_{L^2}.
\end{align*}
(Note that, by Poincar\'e's inequality, $\|u\|_{L^2}\leq C\|u\|_{H^1}$, which holds on any bounded domain, $\|u\|_{H^1}$ is indeed a norm.)
We briefly mention the scaling arguments used to justify the form \eqref{KSE}. For $a>0$, $b>0$, $L>0$, consider an equation in the form
\begin{align}
u_t + uu_x + au_{xx} + bu_{xxxx}=0,\qquad x\in[-L/2,L/2]
\end{align}
Choose time scale $T=L^4/b$, characteristic velocity $U=L/T=b/L^3$, and dimensionless number $\lambda = aL^2/b$. Write $u' = u/U$, $x'=(x+L/2)/L$, $t' = t/T$, where the prime denotes a dimensionless variable. Then
\begin{align}\label{KSE_lambda}
\frac{U}{T}u'_{t'} + \frac{U^2}{L}u'u'_{x'} + \frac{aU}{L^2}u'_{x'x'} + \frac{bU}{L^4}u'_{x'x'x'x'}=0,\qquad x'\in[0,1]
\end{align}
Multiply by $L^4/(bU)$. The equation in dimensionless form then becomes
\begin{align}
u'_{t'} + u'u'_{x'} + \lambda u'_{x'x'} + u'_{x'x'x'x'}=0,\qquad x'\in[0,1].
\end{align}
Thus, $\lambda$ acts as a parameter which influences the dynamics, in the same way that the Reynolds number influences dynamics in turbulent flows.
Another approach is to set $\ell = (b/a)^{1/2}$, $T=\ell^4/b=b/a^2$, and $U = \ell/T = a^{3/2}/b^{1/2}$. Then define dimensionless quantities (denoted again by primes) $u' = u'/U$, $x'=(x+L/2)/\ell$, $t' = t/T$. The equation now becomes
\begin{align}\label{KSE_lambda_1}
\frac{U}{T}u'_{t'} + \frac{U^2}{\ell}u'u'_{x'} + \frac{aU}{\ell^2}u'_{x'x'} + \frac{bU}{\ell^4}u'_{x'x'x'x'}=0,\qquad x'\in[0,\tfrac{L}{\ell}]
\end{align}
Multiplying by $\ell^4/(bU)$ yields
\begin{align}
u'_{t'} + u'u'_{x'} + u'_{x'x'} + u'_{x'x'x'x'}=0,\qquad x'\in[0,\tfrac{L}{\ell}].
\end{align}
Thus, equation \eqref{KSE_lambda_1} is similar to equation \eqref{KSE_lambda}, with $\lambda=1$, except that the dynamics are influenced by the dimensionless parameter $L/\ell$. In particular, the dynamics can be thought of as influenced by parameter $\lambda$ with $L$ fixed, or equivalently influenced by the length of the domain $L$ with $\lambda$ fixed, where $\lambda\sim (L/\ell)^2$. In this work, for the sake of matching the initial data used in \cite{Kassam_Trefethen_2005}, we choose the domain to be $[-16\pi,16\pi]$, so $L=32\pi$ is fixed, and we let $\lambda$ be the parameter affecting the dynamics.
\section{Computational Results}\label{secComputations}
In this section, we demonstrate some computational results for the nonlinear data assimilation algorithm given by \eqref{KSE_DA}.
\subsection{Numerical Methods}\label{subsecNumericalMethods}
It was observed in \cite{Gesho_Olson_Titi_2015} that no higher-order multi-stage Runge-Kutta-type method exists for solving \eqref{KSE_DA} due to the need to evaluate at fractional time steps, for which the data $I_h(u)$ is not available. Therefore, we use a semi-implicit spectral method with Euler time stepping. The linear terms are treated via a first-order exponential time differencing (ETD) method (see, e.g., \cite{Kassam_Trefethen_2005} for a detailed description of this method). The nonlinear term is computed explicitly, and in the standard way, i.e., by computing the derivatives in spectral space, and products in physical space, respecting the usual 2/3's dealiasing rule. We use $N=2^{13}=8192$ spatial grid points on the interval $[-16\pi,16\pi)$, so $ \Delta x= 32\pi/N \approx 0.0123$. We use a fixed time-step respecting the advective CFL; in fact, we choose $\Delta t =1.2207\times 10^{-4}$.
For simplicity, we choose $\mu = 1$, however, the results reported here are qualitatively similar for a wide range of $\mu$ values. For example, when $\mu=10$, convergence times are shorter for all methods, but the error plots are qualitatively similar.
In \cite{Kassam_Trefethen_2005}, the case $\lambda=1$ is examined. However, to examine a slightly more chaotic setting, we take $\lambda=2$, which is still well-resolved with $N=8192$. Our results are qualitatively similar for smaller values of $\lambda$.
Here, we let $I_h$ be the projection onto the lowest $M=\lfloor 32\pi/h\rfloor$ Fourier modes. In this work, we set $M=32$ (i.e., $h = \pi$); so only the lowest 32 modes of $u$ are passed to the assimilation equation via $I_h(u)$.
One can consider a more general interpolation operator as well, such as nodal interpolation, but we focus on projection onto low Fourier modes.
To fix ideas, in this paper we mainly use the initial data used in \cite{Kassam_Trefethen_2005} to simulate \eqref{KSE}; namely
\begin{align}\label{KSE_ic}
u_0(x) = \cos(x/16)(1+\sin(x/16));
\end{align}
on the interval $[-16\pi,16\pi]$. However, we also investigated several other choices of initial data. In all cases, the results were qualitatively similar to the ones reported here. We present one such test near the end of this paper.
Note that explicit treatment of the term $\mu(I_h(u) - I_h(v))$ imposes a constraint on the time step, namely $\Delta t<2/\mu$ (which follows from a standard stability analysis for Euler's method). This is not a serious restriction in this work, since we choose $\mu=1$.
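For concreteness, the following is a minimal sketch (ours, not the code used to produce the figures; the helper names and NumPy conventions are our own assumptions) of a single ETD-Euler step for \eqref{KSE_DA} with the pure-power nonlinearity \eqref{NL_power}:
\begin{verbatim}
import numpy as np

N, L = 8192, 32 * np.pi                      # grid size, domain length
lam, mu, gamma, dt, M = 2.0, 1.0, 0.1, 1.2207e-4, 32

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
Lk = lam * k**2 - k**4                       # symbol of -lam*d_xx - d_xxxx
E = np.exp(dt * Lk)                          # ETD integrating factor
phi = np.where(Lk != 0, (E - 1) / np.where(Lk != 0, Lk, 1.0), dt)

dealias = np.abs(k) <= (2.0 / 3.0) * np.abs(k).max()   # 2/3 rule
idx = np.fft.fftfreq(N, d=1.0 / N)                     # integer mode indices
low = np.abs(idx) <= M                                 # I_h: lowest M modes

def feedback(z):                             # N_1(z) = sign(z)|z|^(1-gamma)
    return np.sign(z) * np.abs(z) ** (1.0 - gamma)

def etd_step(v_hat, u_hat):
    """One dt-step for v_hat; u_hat supplies the observed reference modes."""
    v = np.fft.ifft(v_hat).real
    adv_hat = -0.5j * k * dealias * np.fft.fft(v * v)  # -(v^2/2)_x
    err = np.fft.ifft(np.where(low, u_hat - v_hat, 0)).real
    f_hat = mu * np.fft.fft(feedback(err))
    return E * v_hat + phi * (adv_hat + f_hat)
\end{verbatim}
The explicit treatment of the nonlinear and feedback terms is what limits the scheme to first order in time, as discussed above.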
All the simulations in the present work are well-resolved. In Figure \eqref{figs:KSE_DA_Spectrum} we show plots of time-averaged spectra of all the PDEs simulated in the present work. One can see that all relevant wave-modes are captured to within machine precision.
\begin{figure}[htp]
\begin{center}
\subfigure[Time-averaged spectrum of the reference solution $u$ to the 1D-Kuramoto-Sivashinsky equation.]
{\includegraphics[width=.49\textwidth]{Spectrum_KSE_u.jpg}\label{FIG_spec}}
\hfill
\subfigure[Time-averaged spectrum of the data assimilation solution $v$ with nonlinear pure-power ($\mathcal{N}_1$) algorithm.]
{\includegraphics[width=.49\textwidth]{Spectrum_KSE_DA_v.jpg}\label{FIG_spec_DA_v}}
\hfill
\subfigure[Time-averaged spectrum of the data assimilation solution $v$ with nonlinear hybrid ($\mathcal{N}_2$) algorithm.]
{\includegraphics[width=.49\textwidth]{Spectrum_KSE_DAH_v.jpg}\label{FIG_spec_DAH_v}}
\hfill
\subfigure[Time-averaged spectrum of the data assimilation solution $v$ with nonlinear concave-convex ($\mathcal{N}_3$) algorithm.]
{\includegraphics[width=.49\textwidth]{Spectrum_KSE_DANL_v.jpg}\label{FIG_spec_DANL_v}}
\caption{\label{figs:KSE_DA_Spectrum}\footnotesize
Log-log plots of the spectra for the above scenarios. Plots are averaged over all time steps between times $t=20$ and $t=60$. ($\lambda=2$)}
\end{center}
\end{figure}
\subsection{Simple Power Nonlinearity}
We compare the error in the nonlinear data assimilation algorithm \eqref{AOT_NL} with the error in the linear AOT algorithm.
We first focus on nonlinearity given by a power according to \eqref{NL_power}; i.e., we consider equation \eqref{KSE_DA} together with equation \eqref{KSE}.
In Figure \ref{figs:KSE_waterfall}(a), the solution to \eqref{KSE} (which we call the ``reference'' solution) evolves from the smooth, low-mode initial condition \eqref{KSE_ic} to a chaotic state after about time $t=20$. In Figure \ref{figs:KSE_waterfall}(b), the difference between this solution and the AOT data assimilation solution is plotted. It rapidly decays to zero in a short time.
\begin{figure}[htp]
\begin{center}
\subfigure[A chaotic solution to the Kuramoto-Sivashinsky equation evolving in time.]
{\includegraphics[width=.32\textwidth]{KSE_DA_u.jpg}}
\hfill
\subfigure[Error in data assimilation solution using linear AOT algorithm ($\gamma=0$).]
{\includegraphics[width=.32\textwidth]{KSE_DA_err_g00_h0.jpg}}
\hfill
\subfigure[Error in data assimilation solution using nonlinear algorithm \eqref{AOT_NL} ($\gamma=0.125$).]
{\includegraphics[width=.32\textwidth]{KSE_DA_err_g0125_h0.jpg}}
\caption{\label{figs:KSE_waterfall}\footnotesize
Data Assimilation for the Kuramoto-Sivashinsky equation ($\lambda=2$) using linear and nonlinear algorithms.
The difference rapidly decays to zero in time, and visually the errors look similar. The assimilation equations were initialized with $v_0(x)=0$. $I_h$ is the orthogonal projection onto the lowest $32$ Fourier modes. Similar results appear in tests of a wide variety of initial data, and for $0<\gamma<0.125$.}
\end{center}
\end{figure}
We observe in Figure \ref{figs:KSE_error} that the errors of both the linear AOT algorithm \eqref{AOT} and the nonlinear algorithm \eqref{AOT_NL} decay. The error in the nonlinear algorithm has oscillations for roughly $5\lesssim t\lesssim 15$ which are not present in the error for the AOT algorithm. However, by tracking norms of the difference of the solutions, one can see in Figure \ref{figs:KSE_error} that the nonlinear algorithm reaches machine precision significantly faster than the linear AOT algorithm, for a range of $\gamma$ values.
\begin{figure}[htp]
\begin{center}
\subfigure[Errors in $L^2$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_energy_h0.jpg}}
\hfill
\subfigure[Errors in $H^1$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_enstrophy_h0.jpg}}
\caption{\label{figs:KSE_error}\footnotesize Error for the linear AOT ($\gamma=0$) solution and the nonlinear \eqref{AOT_NL} ($\gamma>0$) solution for various values of $\gamma$. Resolution 8192. (Log-linear scale.) }
\end{center}
\end{figure}
When $\gamma>0.2$, our simulations appear to no longer converge (not shown here). The error in the linear AOT algorithm (i.e., $\gamma=0$) reaches machine precision at roughly time $t\approx49.8$. For $0<\gamma<0.2$, there seems to be an optimal choice in our simulations around $0.075\lesssim\gamma\lesssim 0.1$, reaching machine precision around $t\approx27.3$, a speedup factor of roughly $49.8/27.3\approx 1.8$. Moreover, the shape of the curves with $\gamma>0$ indicates super-exponential convergence, as seen in the concave curves on the log-linear plot in Figure \ref{figs:KSE_error}, while for the linear AOT algorithm, the convergence is only exponential, indicated by the linear shape on the log-linear plot. Currently, the super-exponential convergence is only an observation in simulations. An analytical investigation of the convergence rate will be the subject of a forthcoming work.
\subsection{Hybrid Linear/Nonlinear Methods}\label{subsecHybrid}
In this subsection, we investigate a family of hybrid linear/nonlinear data assimilation algorithms.
One can see from Figure \ref{figs:KSE_error} in the previous subsection that, although the nonlinear methods converge to machine precision at earlier times than the linear method, the nonlinear method suffers from larger errors than the linear method for short times. This motivates the possibility of using a hybrid linear/nonlinear method. For example, one could look for an optimal time to switch between the models, say, perhaps around time $t\approx18\pm2$, according to Figure \ref{figs:KSE_error}, but this seems highly situationally dependent and difficult to implement in general. Instead, the approach we consider here is to let $\mathcal{N}(x)$ be given by \eqref{NL_power} for $|x|\leq1$ but let it be linear for $|x|>1$. The idea is that, when the error is small, deviations are strongly penalized, as in Remark \ref{concave_remark}. However, where the error is large, the linear AOT algorithm should give the greater penalization (i.e., $\mathcal{N}_1(x)<x$ when $x>1$).
Therefore, we consider algorithm \eqref{AOT_NL} with the following choice of nonlinearity, for some choice of $\gamma$, $0<\gamma<1$ (we take $\gamma=0.1$ in all following simulations).
\begin{align}\label{NL_hybrid}
\mathcal{N}(x) =\mathcal{N}_2(x) :=
\begin{cases}
x,& |x|\geq 1,\\
x|x|^{-\gamma}, & 0< |x|< 1,\\
0,& x = 0.
\end{cases}
\end{align}
\begin{figure}[htp]
\begin{center}
\subfigure[Error in $L^2$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_energy_h1.jpg}}
\hfill
\subfigure[Error in $H^1$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_enstrophy_h1.jpg}}
\caption{\label{figs:KSE_DAH_error}\footnotesize Error in linear ($\gamma=0$), nonlinear ($\gamma=0.1$), and hybrid ($\gamma=0.1$) algorithms ($\lambda=2$). Resolution 8192. (Log-linear scale.)}
\end{center}
\end{figure}
In Figure \ref{figs:KSE_DAH_error}, we compare the linear algorithm \eqref{AOT} with the nonlinear algorithm \eqref{AOT_NL} with pure-power nonlinearity $\mathcal{N}_1$, given by \eqref{NL_power}, and also with hybrid nonlinearity $\mathcal{N}_2$, given by \eqref{NL_hybrid}. The convergence to machine precision happens approximately at $t\approx49.8$ (for AOT), $t\approx27.3$ (for $\mathcal{N}_1$), and $t\approx20.0$ (for $\mathcal{N}_2$), respectively. In addition, one can see that the hybrid algorithm remains close to the linear AOT algorithm for short times. Moreover, after a short time, the hybrid algorithm undergoes super-exponential convergence, converging faster than every algorithm analyzed so far. The benefits of this splitting of the nonlinearity between $|x|>1$ and $|x|<1$ seem clear. Moreover, this approach can be exploited further, which is the topic of the next subsection.
\subsection{Concave-Convex Nonlinearity}\label{subsecHybrid_NL}
Inspired by the success of the hybrid method, in this subsection we further exploit the effect of the feedback control term $\mu\mathcal{N}(I_h(u)-I_h(v))$ by accentuating the nonlinearity for $|x|>1$. We consider the following nonlinearity in conjunction with \eqref{AOT_NL} for the Kuramoto-Sivashinsky equation.
\begin{align}\label{NL_CC}
\mathcal{N}(x) =\mathcal{N}_3(x) :=
\begin{cases}
x|x|^{\gamma},& |x|\geq 1,\\
x|x|^{-\gamma}, & 0< |x|< 1,\\
0,& x = 0.
\end{cases}
\end{align}
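For reference, the three feedback nonlinearities $\mathcal{N}_1$, $\mathcal{N}_2$, and $\mathcal{N}_3$ of \eqref{NL_power}, \eqref{NL_hybrid}, and \eqref{NL_CC} can be realized in vectorized form, e.g., as follows (a NumPy sketch of ours):
\begin{verbatim}
import numpy as np

def N1(x, gamma):          # pure power N_1
    return np.sign(x) * np.abs(x) ** (1 - gamma)

def N2(x, gamma):          # hybrid N_2: linear for |x| >= 1
    return np.where(np.abs(x) >= 1, x, N1(x, gamma))

def N3(x, gamma):          # concave-convex N_3
    return np.where(np.abs(x) >= 1,
                    np.sign(x) * np.abs(x) ** (1 + gamma),
                    N1(x, gamma))
\end{verbatim}
These definitions make the concave/convex split at $|x|=1$ explicit.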
Note that this choice of $\mathcal{N}_3$ is concave for $|x|< 1$, and convex for $|x|\geq 1$. The convexity for $|x|\geq 1$ serves to more strongly penalize large deviations from the reference solution. In Figure \ref{FIG_err_L2_H1_all}, we see that \textit{at every positive time} this method has significantly smaller error than the linear AOT method, and the methods involving $\mathcal{N}_1$ and $\mathcal{N}_2$. Convergence to machine precision happens at roughly $t\approx17.4$, a speedup factor of roughly $49.8/17.4\approx 2.8$ compared to the linear AOT algorithm.
\begin{figure}[htp]
\begin{center}
\subfigure[Error in $L^2$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_energy_h2.jpg}}
\hfill
\subfigure[Error in $H^1$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_enstrophy_h2.jpg}}
\caption{\label{FIG_err_L2_H1_all}\footnotesize Error in linear AOT ($\gamma=0$) algorithm, and the non-linear algorithm with nonlinearities $\mathcal{N}_1$, $\mathcal{N}_2$, and $\mathcal{N}_3$ (each with $\gamma=0.1$). ($\lambda=2$) Resolution 8192. (Log-linear scale.)}
\end{center}
\end{figure}
\subsection{Comparison of All Methods}\label{subsecComparison}
Let us also consider the error at every Fourier mode. In Figure \ref{FIG_err_spec_all}, one can see these errors at various times. We examine a time before the transition to fully-developed chaos ($t\approx 4$), a time during the transition ($t\approx 14$), a time after the solution has settled down to an approximately statistically steady state ($t\approx 24$), and a later time ($t\approx 34$). At each mode, and at each positive time, the error in the solution with nonlinearity \eqref{NL_CC} is the smallest.
\begin{figure}[htp]
\begin{center}
\subfigure[$t = 4$]
{\includegraphics[width=.49\textwidth]{KSE_DA_spec_t4.jpg}\label{FIG_err_spec4}}
\hfill
\subfigure[$t = 14$]
{\includegraphics[width=.49\textwidth]{KSE_DA_spec_t14.jpg}\label{FIG_err_spec14}}
\hfill
\subfigure[$t = 24$]
{\includegraphics[width=.49\textwidth]{KSE_DA_spec_t24.jpg}\label{FIG_err_spec24}}
\hfill
\subfigure[$t = 34$]
{\includegraphics[width=.49\textwidth]{KSE_DA_spec_t34.jpg}\label{FIG_err_spec34}}
\caption{\label{FIG_err_spec_all}\footnotesize
Error in spectrum (mode amplitude vs. wave number) at different times for all methods.}
\end{center}
\end{figure}
Next, we point out that our results hold qualitatively with different choices of initial data for the reference equation \eqref{KSE}. Therefore, we wait until the solution to \eqref{KSE} with initial data \eqref{KSE_ic} has reached an approximately statistically steady state (this happens roughly at $t\approx20$). Then, we use this data to re-initialize the solution to \eqref{KSE} (in fact, we use the solution at $t=30$ to be well within the time interval of fully developed chaos). We still initialize \eqref{KSE_DA} with $v_0\equiv0$. Norms of the errors are shown in Figure \ref{FIG_err_chaos_L2_H1_all}. We observe that, although convergence time is increased for all methods, the qualitative observations discussed above still hold.
\begin{figure}[htp]
\begin{center}
\subfigure[Error in $L^2$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_energy_chaos_h2.jpg}}
\hfill
\subfigure[Error in $H^1$-norm vs. time.]
{\includegraphics[width=.49\textwidth]{KSE_DA_enstrophy_chaos_h2.jpg}}
\caption{\label{FIG_err_chaos_L2_H1_all}\footnotesize Error in all algorithms with chaotic initialization for reference solution. ($\lambda=2$) Resolution 8192. (Log-linear scale.)}
\end{center}
\end{figure}
\section{Conclusions}\label{secConclusions}
Our results indicate that advantages might be gained by looking at nonlinear data assimilation. We used the Kuramoto-Sivashinsky equation as a proof-of-concept for this method; however, in a forthcoming work, we will extend the method to more challenging equations, including the Navier-Stokes equations of fluid flow. Mathematical analysis of these methods will also be the subject of future work.
We note that other choices of nonlinearity may very well be useful to consider. Indeed, one may imagine a functional given by
\begin{align}
\mathfrak{F}(\mathcal{N}) = t_*
\end{align}
where $t_*$ is the time of convergence to within a certain error tolerance, such as to within machine precision. (One would need to show that admissible functions $\mathcal{N}$ are in some sense independent of the parameters and initial data, say, after some normalization.) One could also consider a functional whose value at $\mathcal{N}$ is given by a particular norm of the error. By minimizing such functionals, one might discover even better data-assimilation methods.
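As a minimal illustration of the functional $\mathfrak{F}$ (a hypothetical helper, assuming discrete samples $t[i]$, $\mathrm{err}[i]$ from a run; not the authors' code), $t_*$ could be estimated from an error time series as follows.
\begin{verbatim}
# Estimate t_*: the first time the error drops below tol and stays there.
# Returns None if the run never converges within the sampled window.
import numpy as np

def convergence_time(t, err, tol=1e-14):
    below = np.asarray(err) < tol
    for i in range(len(below)):
        if below[i:].all():
            return t[i]
    return None
\end{verbatim}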
\bibliographystyle{abbrv} | {"config": "arxiv", "file": "1703.03546/KSE1D_DA_NL_arXiv.tex"} |
TITLE: Can you add an extra $e^x$ when integrating?
QUESTION [0 upvotes]: So I've been given this problem to solve:
$$\int \frac{-26e^x-144}{e^{2x} + 13e^x + 36}dx$$
and I got this far:
$$-2\int\frac{13e^x + 72}{e^{2x} + 13e^x + 36}dx$$
The next step is a simple u-substitution which I am unable to do.
I looked to a step-by-step calculator for help, and what I see is
$$\int\frac{13u+ 72}{u(u^2 + 13u + 36)}du$$
But I'm confused. Where did the extra $u$ come from?
I'm guessing that it's like the situation where you have $\int \cos(2\theta)\,d\theta$ and you have $u = 2\theta$ and $du = 2\,d\theta$, but there is no extra 2, so the end result is $\frac{1}{2}\int\cos(u)\,du$.
But I thought you could only do that with constants! You don't just add an extra $x^2$ when you need it. Thus, you shouldn't be able to add an extra $e^x$ just because you need it. Or am I missing something?
REPLY [1 votes]: The extra $e^x$ comes from the $dx$. If you have $$\int \frac{13e^x+72}{e^{2x}+13e^x+36}dx$$ and you say $u=e^x$, then you also need to substitute the $dx$.
If $$u=e^x$$ then $$\frac{du}{dx}=e^x$$ $$du=e^xdx$$ $$\frac{du}{e^x}=dx$$ Since we said $u=e^x$, we can substitute that in here to get $$dx=\frac{du}{u}.$$ Then when you substitute these into your integral, you get $$\int \frac{13u+72}{u^2+13u+36}\cdot \frac{du}{u}$$ $$=\int \frac{13u+72}{u(u^2+13u+36)} du$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 3135813} |
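For completeness (this goes beyond the quoted answer), the integral can then be finished with partial fractions, since $u^2+13u+36=(u+4)(u+9)$: $$\frac{13u+72}{u(u+4)(u+9)} = \frac{2}{u} - \frac{1}{u+4} - \frac{1}{u+9},$$ so $$-2\int \frac{13u+72}{u(u+4)(u+9)}\,du = -4\ln u + 2\ln(u+4) + 2\ln(u+9) + C,$$ and substituting back $u=e^x>0$ gives $-4x + 2\ln(e^x+4) + 2\ln(e^x+9) + C$, which one can check by differentiating.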
TITLE: Decidability and Cluster algebras
QUESTION [6 upvotes]: Recall the definition of a cluster algebra,
which can be seen as a (possibly infinite) graph, where each vertex is a tuple consisting of a quiver together with Laurent expressions attached to the vertices of the quiver. The edges of this graph are given by mutations, and every vertex has the same degree.
Whenever the quiver is a Dynkin diagram, the graph is finite, and the Laurent polynomials have nice interpretations.
However, for a general seed quiver, the graph is infinite.
Given a seed, and a Laurent polynomial, is there an algorithm to determine if the given Laurent polynomial appears as a cluster variable in the cluster algebra? Or might this be undecidable?
The reason it might be undecidable, is that a priori, the graph is infinite, so there is no upper bound on how many vertices we need to visit (by successive mutations), before concluding that the expression cannot be constructed, and the problem has "the same feel" as say the Collatz problem, or the problem of deciding if a set of matrices can produce the zero matrix by multiplication (the latter problem is undecidable).
REPLY [2 votes]: For rank 2 it is decidable since there is a combinatorial formula for cluster variables. This case is certainly simpler than the general case, but some nice properties on the rank 2 case are conjectured to hold more generally. So, I'll write a bit about the rank 2 case and a bit about how we may hope to generalize. This does not answer the whole question, but hopefully it is still useful to someone.
See Theorem 1.11 of Greedy elements in rank 2 cluster algebras by Lee, Li, and Zelevinsky. For the rank 2 cluster algebra $\mathcal{A}(a,b)$ the theorem gives this formula
$$x[a_1,a_2] = x_1^{-a_1} x_2^{-a_2} \sum_{(S_1, S_2)}x_1^{b|S_2|}x_2^{a|S_1|} $$
where $S_1$ and $S_2$ are defined in terms of certain Dyck paths (see the paper for details). For each $(a_1,a_2) \in \mathbb{Z} \times \mathbb{Z}$ the theorem describes the corresponding so-called greedy element of the cluster algebra. It is known that each cluster variable is a greedy element, and it is known which pairs $(a_1,a_2)$ correspond to cluster variables (see Remark 1.9). As a result we can decide if a given Laurent polynomial is a cluster variable in any rank 2 cluster algebra.
The $(a_1,a_2)$ is known as the denominator vector, and such denominator vectors exist in other cluster algebras too. It is conjectured that different cluster variables have different denominator vectors (see this MO question). It seems like a hard problem, but perhaps it is possible to classify denominator vectors corresponding to cluster variables and then give a combinatorial formula for the remaining polynomial as in the rank 2 case. | {"set_name": "stack_exchange", "score": 6, "question_id": 271192}
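To complement the answer above, here is a rough sketch (hypothetical helper code, using one common sign convention for matrix mutation and the exchange relation) of the "easy half" of the question: the set of cluster variables is recursively enumerable, so membership is semi-decidable by breadth-first mutation from the seed. The open issue is termination when the answer is "no" and the exchange graph is infinite.

import sympy as sp
from collections import deque

def mutate(B, xs, k):
    # mutate the exchange matrix B (tuple of tuples) and cluster xs at direction k
    n = len(xs)
    Bn = [[-B[i][j] if (i == k or j == k) else
           B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    plus  = sp.Mul(*[xs[i] ** max(B[i][k], 0)  for i in range(n)])
    minus = sp.Mul(*[xs[i] ** max(-B[i][k], 0) for i in range(n)])
    xn = list(xs)
    xn[k] = sp.cancel((plus + minus) / xs[k])   # exchange relation
    return tuple(map(tuple, Bn)), tuple(xn)

def occurs(B0, xs0, target, max_seeds=10000):
    # BFS over seeds; True if target shows up, None if the budget runs out
    seen, queue = set(), deque([(tuple(map(tuple, B0)), tuple(xs0))])
    while queue and len(seen) < max_seeds:
        B, xs = queue.popleft()
        if (B, xs) in seen:
            continue
        seen.add((B, xs))
        if any(sp.cancel(x - target) == 0 for x in xs):
            return True
        for k in range(len(xs)):
            queue.append(mutate(B, xs, k))
    return None

# e.g. type A_2: occurs(((0, 1), (-1, 0)), sp.symbols('x1 x2'),
#                       (sp.Symbol('x2') + 1) / sp.Symbol('x1'))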
TITLE: Would an atom OR its nucleus alone affect its surrounding empty space, with respect to the particles that come and go of existence within that space?
QUESTION [0 upvotes]: I understand that particles come in and out of existence in empty space. So, what effect could a stable particle (one that does not go out of existence) like an atom, or its nucleus alone, have on the formation and disappearance of those particles? Would the mass, charge, or the mere presence of the atom, or its nucleus, induce some effect? If so, please explain it to a non-physicist.
Thank you,
Ravi
REPLY [0 votes]: The popularization of virtual particle-antiparticle pairs that briefly appear and disappear in a vacuum does require the existence of fields, so yes the existence of nearby stable particles is required. | {"set_name": "stack_exchange", "score": 0, "question_id": 325880} |
TITLE: Separating the topics of general and special relativity
QUESTION [2 upvotes]: So, I have managed to confuse myself beyond the point of repair. I am not a physics student, so my physical knowledge is limited.
Anyway, there are a few topics of relativity which I cannot seem to separate into either the general or the special one. For one, I know a lot of things probably overlap, but what is more important to know is which topics do NOT belong to one or the other.
So, when trying to explain special relativity I think I mixed up a couple of things. Correct me if I am wrong:
Special relativity describes that transformation of mass into energy and back is possible. Time is moving relatively (slower for faster moving objects). A ball falling to earth is equivalent to a ball 'falling' in a at g accelerating rocket. A ball in a falling elevator is equivalent to a ball floating in space. Nothing can move faster than light. Light in vacuum is always travelling at the same speed. Light is affected by gravity. Mass alters the time-space-fabric and attracts mass. The actual mass of something depends on the objects absolute velocity. Physical rules are the same in all not-moving systems. Everything moves along spacetime on the shortest possible route.
I know those are in no order whatsoever, but are there some things which do not belong in this heap, or did I forget anything? I find it hard to find anything that directly separates all these topics.
REPLY [8 votes]: Instead of trying to sort out the details, I'll offer a perspective that can help the details fall into place:
In general relativity (GR), the geometry of spacetime is dynamic (reacts to its contents) and can therefore be curved.
In special relativity (SR), the geometry of spacetime is neither dynamic nor curved; it is fixed and flat.
There is also an intermediate subject that doesn't have an accepted name, but it needs a name because we use the subject often. I'll call it "generalized special relativity":
In generalized special relativity (GSR), the geometry of spacetime is fixed (not dynamic), like it is in special relativity, but it can be curved, like it can be in general relativity.
A fourth combination (the geometry of spacetime is dynamic but flat) does not occur, because if the geometry of spacetime is constrained to be flat, then it's completely specified (except for its global topology), so it has no freedom to react to its contents.
Here are some comments about the importance of each of the three subjects:
GR is important because, of the three models listed here, GR is the best model of the real world. In the real world, influences always go both ways; so if the geometry of spacetime can influence the motion of material objects, then the motion of material objects must also be able to influence the geometry of spacetime, like it does in GR. Most importantly, GR includes the fact that matter produces gravity.
GSR is important because fixing the geometry of spacetime simplifies the mathematics greatly, even if it's curved, and this is sufficient for studying things like the motion of test-objects that wouldn't significantly affect the curvature of spacetime anyway. GSR includes how gravity affects test-objects but doesn't include the fact that matter produces gravity. In GSR, the influence goes only one way. It includes SR as a special case, namely the case when the spacetime curvature is zero.
SR is important because even if spacetime is curved, the effects of the curvature are negligible in any sufficiently small region of space and time (if no singularities are included). In this sense, SR is a good approximation for the "local" physics in GSR, and therefore also in GR. | {"set_name": "stack_exchange", "score": 2, "question_id": 480401} |
\section{NP-hardness of Quasi MLE}
\label{section_appendix_nphard}
\begin{theorem}\label{mle_alignmentproblem}
Consider the problem $\mathsf{ALIGN}(y_1, \ldots, y_N)$: for vectors $y_1,\dots,y_N \in \mathbb{R}^L$, find the shifts $\boldsymbol{\ell} = (l_1,\dots,l_N)$ which maximize
\[
\mathcal{A}(l_1, \ldots, l_N) = \sum_{i \neq j \in [N]} \langle R_{-l_i} y_i, R_{-l_j}y_j \rangle.
\] Let $\mathcal{A}^* = \max_{\boldsymbol{\ell}} \mathcal{A}(\boldsymbol{\ell})$. It is \np{NP}-hard (under randomized reductions) to estimate $\mathcal{A}^*$ within $\frac{16}{17} + \varepsilon$ of its true value. It is \np{UG}-hard (under randomized reductions) to estimate
$\mathcal{A}^*$ within any constant factor.
\end{theorem}
We demonstrate this by a poly-time approximation preserving reduction from a special class of \np{MAX-2LIN}$(q)$ instances called $\Gamma$-\np{MAX-2LIN}$(q)$.
Consider a (connected) \np{MAX-2LIN}$(q)$ instance where each constraint has the form $x_i - x_j \equiv c_{ij} \pmod{q}$, of which at most a $\rho^{*}$ fraction are satisfiable. Representing each variable as a vertex of a graph and each constraint as an edge, the instance is associated with a connected graph $G = (V(G), E(G)),$ where $V(G) = [N]$ and $|E(G)| = M$.
Let $\np{poly}(M)$ be the space of integer functions which are bounded by polynomial order, i.e. $f \in \np{poly}(M)$ iff there are constants $C,k$ such that $f(M) \le CM^k$ for all $M \ge 1$. We say that an event occurs w.h.p if it occurs with probability $1-\epsilon(q,G)$, where
$\frac{1}{\epsilon(q,G)} \notin \np{poly}(q \cdot |E(G)|)$. Notice under this definition that if $\np{poly}(qM)$ events occur w.h.p, then by an union bound the event that all occur will also be w.h.p.
Construct a parameter $\templen = \templen(q,M) \in \np{poly}(qM)$. We take the vectors $y_1, \ldots, y_N$ to be of length $L = qM\templen$. The indices of the vector $y_i$ can be expressed in mixed radix notation by elements of the tuple $(\mathbb{Z}_q, E(G), [\templen])$. For example, we would associate the tuple index $(x, e_k, t)$ of $y_i$ with the flat index $x \cdot M\templen + k \cdot \templen + t$, where $e_k$ is the $k$th edge in some enumeration of $E(G)$. Note that a shift by $c \cdot M\templen$ takes this tuple index to $(x, e_k, t) + c \cdot M\templen \to (x + c, e_k, t)$.
For each edge constraint $x_i - x_j \equiv c_{ij}$, let $z_{ij}$ be a vector chosen uniformly at random from $\{\pm 1\}^\templen$. Assign the entries $(0, \{i,j\}, \cdot)$ of $y_i$ to $z_{ij}$, and the entries $(c_{ij}, \{i,j\}, \cdot)$ of $y_j$ to $z_{ij}$. Assign all of the remaining entries of the $y_i$'s to i.i.d Rademacher random variables ($\{\pm 1\}$ with probability $1/2$). Intuitively, a relative shift of $c_{ij} \cdot M\templen$ between $y_i$ and $y_j$ will produce a large inner product due to the overlapping of the $z_{ij}$'s, while any other shift between them would produce low inner products.
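To make the construction concrete, here is an expository \texttt{numpy} sketch (not part of the proof) that builds the $y_i$'s using the flat indexing above:
\begin{verbatim}
# Build y_1,...,y_N in {+-1}^(q*M*T) from constraints x_i - x_j = c_ij (mod q);
# the flat index of the tuple (x, e_k, t) is x*M*T + k*T + t, as in the text.
import numpy as np

def build_vectors(N, q, edges, T, seed=0):
    rng = np.random.default_rng(seed)
    M = len(edges)                       # edges: list of triples (i, j, c_ij)
    y = rng.choice([-1, 1], size=(N, q * M * T))   # i.i.d. Rademacher entries
    for k, (i, j, c) in enumerate(edges):
        z = rng.choice([-1, 1], size=T)            # the shared block z_ij
        y[i, k * T : (k + 1) * T] = z              # entries (0, e_k, .) of y_i
        y[j, (c * M + k) * T : (c * M + k + 1) * T] = z  # entries (c_ij, e_k, .)
    return y
\end{verbatim}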
\begin{lemma}\label{mle_hoeffding}
Suppose $\gamma \in \mathsf{poly}(qM)$. Consider two random vectors $y_1, y_2$ of length $\gamma$ whose entries are i.i.d sampled from the Rademacher distribution. W.h.p, for any $0 < \epsilon \ll 1$, the inner product of every possible shift $R_\ell y_1$ of $y_1$ with $y_2$ is bounded in absolute value by $\sqrt{\gamma} \cdot (qM)^\epsilon$.
\end{lemma}
\begin{proof}
By independence, each inner product is the sum of $\gamma$ independent Rademacher random variables, taking values $\pm 1$ with probability $1/2$. Hoeffding's inequality indicates
\[
\prob(|\langle R_\ell y_1, y_2 \rangle| \geq \sqrt{\gamma} \cdot (qM)^\epsilon) \leq 2\exp\left\{ - (qM)^{2\epsilon}/2 \right\},
\] so $\frac{1}{\prob(|\langle R_\ell y_1, y_2 \rangle| \geq \sqrt{\gamma} (qM)^\epsilon)} \notin \mathsf{poly}(qM)$. Union bounding over all $\gamma = \mathsf{poly}(qM)$ such inner products, w.h.p all of the inner products are simultaneously bounded by $\sqrt{\gamma} \cdot (qM)^\epsilon$ in absolute value.
\end{proof}
We say an edge $\{i,j\} \in E(G)$ is $c_{ij}$-satisfied under a labelling $\boldsymbol{\ell}$ if $l_i-l_j \equiv c_{ij}M\templen \pmod{L}$. From Lemma \ref{mle_hoeffding}, we observe that w.h.p, for any choices of shifts $l_i, l_j$,
\[
|\langle R_{-l_i} y_i,\ R_{-l_j}y_j \rangle - \templen \cdot \delta(\{i,j\} \in E(G)\text{ is }c_{ij}\text{-satisfied})| < (qM)^\epsilon \sqrt{L}.
\]
It follows that if the labelling $\boldsymbol{\ell}$ induces exactly $k$ $c_{ij}$-satisfied edges,
w.h.p
\begin{align}
\label{mle_objectivealphashift} \big|\mathcal{A}(\boldsymbol{\ell}) - 2k\templen\big| < k (qM)^\epsilon \sqrt{L} + \left(N^2 - k\right) (qM)^\epsilon \sqrt{L} \le q^\epsilon M^{2+\epsilon} \sqrt{L}.
\end{align}
\begin{theorem}
\label{mle_labelstocut}
From a given labelling $\boldsymbol{\ell}$, w.h.p one may in polynomial time construct $(x_1, \ldots, x_N) \in \mathbb{Z}_q^N$ satisfying at least $\big(\mathcal{A}(\boldsymbol{\ell}) - q^\epsilon M^{2+\epsilon}\sqrt{L}\,\big)/(2\templen)$ edge-constraints $x_i - x_j \equiv c_{ij}$. Conversely, w.h.p there is a labelling $\boldsymbol{\ell}^{\np{max}}$ satisfying $\mathcal{A}\left(\boldsymbol{\ell}^{\np{max}}\right) > 2\templen \cdot \rho^{*} M - q^\epsilon M^{2+\epsilon} \sqrt{L}$.
\end{theorem}
\begin{proof}
Consider the subgraph $H$ of $G$ with vertex set $V(G)$ and edge set comprising all edges $c_{ij}$-satisfied under $\boldsymbol{\ell}$.
For each connected component of $H$, arbitrarily choose a vertex $i$ of the component to have $x_i = 0$. Follow a spanning tree of each connected component and assign each neighbor $j$ by $x_j \equiv x_i - c_{ij} \pmod{q}$.
The first implication of the lemma follows immediately by applying
(\ref{mle_objectivealphashift}).
Conversely, construct the labelling $\boldsymbol{\ell}^{\np{max}}$ by setting $l^{\np{max}}_i = x_i \cdot M\templen$. This labelling induces at least $\rho^{*}M$ $c_{ij}$-satisfied edges, and (\ref{mle_objectivealphashift}) completes the lemma.
\end{proof}
\begin{corollary}
Take $\templen > q^{2}M^{4}$.
If $\boldsymbol{\ell}^{\np{max}}$ satisfies $\mathcal{A}(\boldsymbol{\ell}^{\np{max}}) \ge \left(\alpha + \delta\right) \mathcal{A}^*$, then w.h.p, from $\boldsymbol{\ell}^{\np{max}}$ one may construct $(x_1, \ldots, x_N) \in \mathbb{Z}_q^N$ in poly-time satisfying $(\alpha + \delta')\rho^*$ fraction of $\mathbb{Z}_q$\np{-MAX-2LIN} constraints, for some $\delta' > 0$.
\end{corollary}
\begin{proof}
Let $\rho$ be the fraction of variable assignments satisfied by the construction of Theorem \ref{mle_labelstocut} for the labelling $\boldsymbol{\ell}^{\np{max}}$. W.h.p, \[
\rho > \frac{\mathcal{A}(\boldsymbol{\ell}^{\np{max}}) - q^\epsilon M^{2 + \varepsilon} \sqrt{L}}{2\templen M} >
\frac{(\alpha + \delta)\mathcal{A}^*
- q^{1/2+\epsilon} M^{5/2 + \epsilon} \templen^{1/2}}{2\templen M} > (\alpha + \delta) \rho^{*} - \frac{q^{1/2+\epsilon} M^{3/2 + \epsilon}}{\templen^{1/2}}.
\]
Since $\templen > q^{2}M^{4}$, the last term is at most $q^{\epsilon - 1/2} M^{\epsilon - 1/2} = o(1)$, so $\rho \geq (\alpha + \delta')\rho^*$ for some $\delta' > 0$.
\end{proof}
Assuming the Unique Games conjecture, it is NP-hard to approximate $\Gamma$-\np{MAX-2LIN}$(q)$ to any constant factor \cite{KKMO}.
Hence it is UG-hard (under randomized reductions) to approximate {\sf ALIGN} within
any constant factor.
\textsf{MAX-CUT} is a special case of $\Gamma$-\np{MAX-2LIN}$(q)$ (take $q=2$ and every $c_{ij}=1$).
Since it is \textsf{NP}-hard to approximate \textsf{MAX-CUT} within
$\frac{16}{17} + \varepsilon$, it is \textsf{NP}-hard (under randomized reductions)
to approximate \textsf{ALIGN} within $\frac{16}{17} + \varepsilon$. | {"config": "arxiv", "file": "1308.5256/appendix_nphard.tex"} |
\begin{document}
\title{Gromov-Witten theory of locally conformally symplectic
manifolds}
\author{Yasha Savelyev}
\thanks {Partially supported by PRODEP grant of SEP, Mexico}
\email{yasha.savelyev@gmail.com}
\address{University of Colima, CUICBAS}
\keywords{locally conformally symplectic manifolds, conformal symplectic non-squeezing, Gromov-Witten theory, virtual fundamental class, Fuller index, Seifert conjecture, Weinstein conjecture}
\begin{abstract} We initiate here the study of Gromov-Witten theory of locally conformally
symplectic manifolds or $\lcs$ manifolds, $\lcsm$'s for short, which are a natural generalization of both contact and symplectic manifolds.
We find that the main new phenomenon (relative to the symplectic case) is the potential existence of holomorphic sky catastrophes, an analogue for pseudo-holomorphic curves of sky catastrophes in dynamical systems originally discovered by Fuller. We are able to rule these out in some situations, particularly for certain $\lcs$ 4-folds, and as
one application we show that in dimension 4 the classical Gromov non-squeezing theorem has certain $C ^{0} $ rigidity or persistence with respect to $\lcs$ deformations; this is one version of $\lcs$ non-squeezing, a first result of its kind.
In a different direction we study Gromov-Witten theory of the
$\lcsm$ $C \times S ^{1} $ induced by a contact manifold
$(C, \lambda)$, and show that the Gromov-Witten invariant (as defined here) counting certain elliptic curves in $C \times S ^{1} $ is identified with the classical Fuller index of the Reeb vector field $R ^{\lambda} $. This has some non-classical applications, and based on the story we develop, we give a kind of ``holomorphic Seifert/Weinstein conjecture'', a direct extension for some types of $\lcsm$'s of the classical Seifert/Weinstein conjecture. This is proved for $\lcs$ structures $C ^{\infty} $-close to the Hopf $\lcs$ structure on $S ^{2k+1} \times S ^{1} $.
\end{abstract}
\maketitle
\tableofcontents
\section {Introduction}
The theory of pseudo-holomorphic curves in symplectic manifolds as initiated by Gromov and Floer has revolutionized the study of symplectic and contact manifolds. What the symplectic form gives that is missing for a general almost complex manifold is a priori compactness for moduli spaces of pseudo-holomorphic curves in the manifold.
On the other hand there is a natural structure which directly generalizes both symplectic and contact manifolds, called a locally conformally symplectic structure or $\lcs$ structure for short. A locally conformally symplectic manifold or $\lcsm$ is a smooth $2n$-fold $M$ with an $\lcs$ structure: a
non-degenerate 2-form $\omega$ which in suitable local charts has the form $
{f} \cdot \omega _{st} $, for some (non-fixed) positive smooth function $f$, with $\omega _{st} $ the standard symplectic form on
$\mathbb{R} ^{2n} $. It is natural to try to do Gromov-Witten theory for such manifolds.
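Let us record the standard reformulation, which we use implicitly below (the Lee form also reappears in the proof of Theorem \ref{thm:noSkycatastrophe}): in a chart where $\omega = f \cdot \omega _{st} $ one computes
\begin{equation*}
d \omega = df \wedge \omega _{st} = d (\ln f) \wedge \omega,
\end{equation*}
so the locally defined forms $d (\ln f)$ glue to a closed 1-form $\alpha$ with $d \omega = \alpha \wedge \omega$. Conversely, given such a closed $\alpha$, writing $\alpha = dg$ locally gives $d (e ^{-g} \omega) = 0$, recovering the local conformal symplectic charts.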
The first problem that occurs is that a priori compactness is gone: since $\omega$ is not necessarily closed, the $L ^{2} $-energy can now be unbounded on the moduli spaces of $J$-holomorphic curves in such an $(M,\omega)$. Strangely, a more acute problem is the potential presence of sky catastrophes: given a smooth family $\{J _{t} \}$, $t \in [0,1]$, of $\{\omega _{t} \}$-compatible almost complex structures, we may have a continuous family $\{u _{t} \}$ of $J _{t} $-holomorphic curves s.t. $\energy (u _{t} ) \to \infty$ as $t \to a \in (0,1)$ and s.t. there are no holomorphic curves for $t \geq a$. These are analogues of sky catastrophes discovered by Fuller \cite{citeFullerBlueSky}.
We are able to tame these problems in certain situations, for example for some 4-d $\lcsm$'s, and this is how we arrive at a version of Gromov non-squeezing theorem for such $\lcsm$'s.
Even when it is impossible to tame these problems we show that there is still a potentially interesting theory which is analogous to the theory of Fuller index in dynamical systems.
For example we show that there is a direct generalization of Seifert, Weinstein conjectures for certain $\lcsm$'s, and which postulates existence of certain elliptic curves in these $\lcsm$'s.
We prove this conjecture for $\lcs$ structures $C ^{\infty} $ nearby to the Hopf $\lcs$ structure on $S ^{2k+1} \times S ^{1} $.
We begin with the well known observation:
\begin{theorem} \label{thm:quantization} [\cite{citeKatrinQuantization}] Let $(M, J)$ be a compact almost complex manifold, $\Sigma$ a closed Riemann surface, and $u: \Sigma \to
M$ a $J$-holomorphic map. Given a Riemannian metric $g$ on $M$, there is an $\hbar >0$ s.t. if $\energy _{g}
(u) < \hbar $ then $u$ is constant, where $\energy$ is the $L ^{2} $
energy.
\end{theorem}
Using this we get the following extension of Gromov compactness to this setting.
Let $$ \mathcal{M} _{g,n} (J, A) = \mathcal{M} _{g,n} (M, J, A)$$ denote the moduli space of isomorphism classes of class $A$, $J$-holomorphic curves in
$M$,
with domain a genus $g$ closed
Riemann surface, with $n$ marked labeled points. Here
an isomorphism between $u _{1} : \Sigma _{1} \to M$, and $u _{2}: \Sigma _{2} \to M $ is a biholomorphism of marked Riemann surfaces $\phi: \Sigma _{1} \to \Sigma _{2} $ s.t. $u_2 \circ \phi = u _{1} $.
\begin{theorem} \label{thm:complete}
Let $ (M,
J)$ be an almost complex manifold.
Then $\mathcal{M} _{g,n} (J, A)$ has a pre-compactification
\begin{equation*}
\overline{\mathcal{M}} _{g,n} ( J, A),
\end{equation*}
by Kontsevich stable maps, with respect to the natural
metrizable Gromov topology see for instance
\cite{citeMcDuffSalamon$J$--holomorphiccurvesandsymplectictopology},
for the genus 0 case. Moreover given $E>0$, the subspace
$\overline{\mathcal{M}} _{g,n} ( J,
A) _{E} \subset \overline{\mathcal{M}}_{g,n} ( J,
A) $ consisting of elements $u$ with $\energy (u) \leq E$ is
compact. In other words $\energy$ is a proper and bounded from below function.
\end{theorem}
Thus the most basic situation where we can talk
about Gromov-Witten ``invariants'' of $(M, J)$ is when the $\energy$ function is bounded on
$\overline{\mathcal{M}} _{g,n} (J, A)$, and we shall say that $J$ is \textbf{\emph{bounded}} (in class $A$).
In this case $ \overline{\mathcal{M}} _{g,n} (J, A)$ is compact, and has a
virtual moduli cycle following for example \cite{citePardonAlgebraicApproach}.
So we may define functionals:
\begin{equation} \label{eq:functional1}
GW _{g,n} (\omega, A,J): H_* (\overline{M} _{g,n}) \otimes H _{*} (M ^{n} ) \to
\mathbb{Q}.
\end{equation}
Of course symplectic manifolds with any tame almost complex structure is one class of examples, another class of examples comes from some locally conformally symplectic manifolds.
\subsection {Locally conformally symplectic manifolds and Gromov non-squeezing}
Let us give a bit of background on $\lcsm$'s.
These were originally considered by Lee
in \cite{citeLee}, arising naturally as part of an abstract study of
``a kind of even dimensional Riemannian geometry'', and then further studied by
a number of authors see for instance, \cite{citeBanyagaConformal} and
\cite{citeVaismanConformal}.
This is a
fascinating object, an $\lcsm$ admits all the interesting classical notions of
a symplectic manifold, like Lagrangian submanifolds and Hamiltonian
dynamics, while at the same time forming a much more
flexible class. For example Eliashberg and Murphy show that if a
closed almost complex $2n$-fold $M$ has $H ^{1} (M, \mathbb{R}) \neq 0
$ then it admits a $\lcs$ structure,
\cite{citeEliashbergMurphyMakingcobordisms}, see
also \cite{citeMurphyConformalsymp}.
As we mentioned $\lcsm$'s can also be understood to generalize contact manifolds. This works as follows.
First we have a natural class of explicit examples of $\lcsm$'s, obtained
by starting with a symplectic cobordism (see \cite{citeEliashbergMurphyMakingcobordisms}) of a closed contact manifold
$C$ to itself, arranging for the contact forms at the two ends of the
cobordism to be proportional (which can always be done) and then
gluing together the boundary components.
As a particular case of
this we get Banyaga's basic example.
\begin{example} [Banyaga] \label{example:banyaga} Let $(C, \xi)
$ be a contact manifold with a contact form
$\lambda$ and take $M=C \times S ^{1} $ with 2-form $\omega= d
^{\alpha}
\lambda : = d \lambda - \alpha \wedge \lambda$, for $\alpha$ the pull-back of the
volume form on $S ^{1} $ to $C \times S ^{1} $ under the
projection.
\end{example}
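For completeness, here is the (standard) verification that $d ^{\alpha} \lambda $ is indeed $\lcs$, with Lee form $\alpha$: since $d \alpha = 0$ and $\alpha \wedge \alpha = 0$,
\begin{equation*}
d (d ^{\alpha} \lambda) = - d (\alpha \wedge \lambda) = \alpha \wedge d \lambda = \alpha \wedge (d \lambda - \alpha \wedge \lambda) = \alpha \wedge d ^{\alpha} \lambda,
\end{equation*}
and non-degeneracy follows readily from $\lambda$ being contact.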
Using the above we may then faithfully embed the category of contact manifolds and contactomorphisms into the category of $\lcsm$'s and \textbf{\emph{loose}} $\lcs$ morphisms. These can be defined as diffeomorphisms $f: (M _{1} , \omega _{1} ) \to (M _{2}, \omega _{2} ) $ s.t. $f ^{*} \omega _{2} $ is deformation equivalent through $\lcs$ structures to $\omega _{1} $. Note that when the $\omega _{i} $ are symplectic this is just a global conformal symplectomorphism by Moser's trick.
Banyaga type $\lcsm$'s give immediate examples of almost complex manifolds
where the $\energy$ function is unbounded on the moduli spaces of fixed class pseudo-holomorphic curves, as well as where null-homologous $J$-holomorphic curves can be non-constant.
We show that it is still possible to extract a variant of Gromov-Witten theory here.
The story is closely analogous to that of the Fuller index in dynamical
systems, which is concerned with certain rational counts of periodic orbits. In
that case sky catastrophes prevent us from obtaining a completely well
defined invariant, but Fuller constructs certain partial invariants which give
dynamical information. In a very particular situation the relationship with the
Fuller index becomes perfect as one of the results
of this paper obtains the classical Fuller index for Reeb vector fields on a
contact manifold $C$ as a
certain genus 1 Gromov-Witten invariant of the $\lcsm$ $C \times S ^{1} $. The
latter also gives a conceptual interpretation for why the Fuller index is
rational, as it is reinterpreted as a (virtual) orbifold Euler number.
\subsubsection {Non-squeezing}
One of the most fascinating early results in symplectic geometry is the so called Gromov non-squeezing theorem appearing in the seminal paper of Gromov~\cite{citeGromovPseudoholomorphiccurvesinsymplecticmanifolds.}.
The most well known formulation of this is that there does not exist a symplectic embedding $B _{R} \to D ^{2} (r) \times \mathbb{R} ^{2n-2} $ for $R>r$, with $ B _{R} $ the standard closed radius $R$ ball
in $\mathbb{R} ^{2n} $ centered at $0$.
Gromov's non-squeezing is $C ^{0} $ persistent in the following sense.
\begin{theorem} Given $R>r$, there is an $\epsilon>0$ s.t. for any symplectic form $\omega' $ on $S ^{2} \times T ^{2n-2} $ $C ^{0} $ close to a split symplectic form $\omega$ and satisfying $$ \langle \omega, (A=[S ^{2} ] \otimes [pt]) \rangle = \pi r ^{2}, $$ there is no symplectic embedding $\phi: B _{R} \hookrightarrow (S ^{2} \times T ^{2n-2}, \omega') $.
\end{theorem}
The above follows immediately by Gromov's argument in \cite{citeGromovPseudoholomorphiccurvesinsymplecticmanifolds.}. On the other hand it is natural to ask:
\begin{question} \label{thm:nonrigidity} Given $R>r$, for every $\epsilon > 0 $ is there
a 2-form $\omega'$ on $S ^{2} \times T ^{2n-2} $ $C ^{0} $ or even $C ^{\infty} $ $\epsilon$-close to a split symplectic form $\omega $, satisfying $ \langle \omega, A \rangle = \pi r ^{2} $, and s.t. there is an embedding $\phi: B _{R} \hookrightarrow S ^{2} \times T ^{2n-2} $, with $\phi ^{*}\omega'=\omega _{st} $?
\end{question}
We shall give a certain generalization of the above theorem for $\lcs$ structures.
\begin{definition}
Given a pair of $\lcsm$'s $(M _{i}, \omega _{i} )$,
$i=0,1$, we say that $f: M _{1} \to M _{2} $ is a \textbf{\emph{morphism}}, if
$f^{*} \omega _{2} = \omega _{1}$. A morphism is called an $\lcs$ embedding if it is injective.
\end{definition}
A pair $(\omega,J)$ for $\omega$ $\lcs$ and $J$ compatible will be called a compatible $\lcs$ pair, or just a compatible pair, where there is no confusion.
Note that the pair of hypersurfaces $\Sigma _{1} =S ^{2} \times S ^{1} \times \{pt \} \subset S ^{2} \times T ^{2} $, $\Sigma _{2} =S ^{2} \times \{pt \} \times S ^{1} \subset S ^{2} \times T ^{2} $ are naturally foliated by symplectic spheres, we denote by $T ^{fol} \Sigma _{i} $ the sub-bundle of the tangent bundle consisting of vectors tangent to the foliation. The following theorem says that it is impossible to have certain ``nearby'' $\lcs$ embeddings, which means that we have a first rigidity phenomenon for $\lcs$ structures.
There is a small caveat here: in what follows we take a $C ^{0} $ norm on the space of $\lcs$ structures that is (likely) stronger than the natural $C ^{0} $ norm (with respect to a metric) on the space of forms.
\begin{theorem} \label{cor:nonsqueezing} Let $\omega$ be a split symplectic form on $M =S ^{2} \times T ^{2} $, and $A$ as above with $ \langle \omega, A\rangle = \pi r ^{2} $. Let $R>r$, then there is an $\epsilon>0$ s.t. if $\omega _{1} $ is an $\lcs$ on $M$ $C ^{0} $ $\epsilon$-close to $\omega$, then
there is no $\lcs$ embedding $$\phi: (B _{R}, \omega _{st}) \hookrightarrow (M, \omega _{1}), $$ s.t.
$\phi _{*} j$ preserves the bundles $T ^{fol} \Sigma _{i}$, for $j$ the standard almost complex structure.
\end{theorem}
We note that the image of the embedding $\phi$ would be of course a symplectic submanifold of $(M, \omega _{1} )$. However it could be highly distorted, so that it might be impossible to complete $\phi _{*} \omega _{st} $ to a symplectic form on $M$ nearby to $\omega$.
We also note that it is certainly possible to have a nearby volume preserving as opposed to $\lcs$ embedding which satisfies all other conditions.
Take $\omega = \omega _{1} $, then if the symplectic form on $T ^{2} $ has enough volume, we can find a volume preserving map $\phi: B _{R} \to M $ s.t. $\phi _{*} j$ preserves $T ^{fol} \Sigma _{i} $.
This is just the squeeze map which as a map $\mathbb{C} ^{2} \to \mathbb{C}^{2} $ is $(z_1, z _{2} ) \mapsto (\frac{z _{1} }{a}, a \cdot {z _{2}}) $.
In fact we can just take any volume preserving map $\phi$, which doesn't hit $\Sigma _{i} $.
\subsubsection {Toward non-squeezing for loose morphisms} In some ways loose morphisms of $\lcsm$'s are more natural, particularly when we think about $\lcsm$'s from the contact angle.
So what about non-squeezing for loose morphisms as defined above? We can try a direct generalization of contact non-squeezing of Eliashberg-Polterovich \cite{citeEKPcontactnonsqueezing}, and Fraser in \cite{citeFraserNonsqueezing}.
Specifically let $\mathbb{R} ^{2n} \times S ^{1} $ be the prequantization space of $\mathbb{R} ^{2n} $, or in other words the contact manifold with the contact form $d\theta - \lambda$, for $\lambda = \frac{1}{2}(ydx - xdy)$. Let $B _{R} $ now denote the open radius $R$ ball in $\mathbb{R} ^{2n} $. \begin{question} If $R\geq 1$ is there a compactly supported, loose endomorphism of the $\lcsm$ $\mathbb{R} ^{2n} \times S ^{1} \times S ^{1} $ which takes the closure of $U := B _{R} \times S ^{1} \times S ^{1} $ into $U$?
\end{question}
We expect the answer is no,
but our methods here are not sufficiently developed for this conjecture, as we would likely have to extend contact homology rather than the Gromov-Witten theory we develop here.
\subsection {Sky catastrophes}
Given a continuous family $\{J _{t} \}$, $t \in [0,1]$ we denote by $\overline{\mathcal{M}} _{g}
(\{J _{t} \}, A)$ the space of pairs $(u,t)$, $u \in \overline{\mathcal{M}} _{g}(J _{t}, A)$.
\begin{definition} We say that a continuous family $\{J _{t}
\}$ on a compact manifold $M$ has a \textbf{\emph{sky catastrophe}} in class
$A$ if
there is an element $u \in \overline{\mathcal{M}} _{g} (J _{i}, A) $, for $i=0$ or $1$,
which does not belong to any open compact (equivalently energy bounded) subset of $\overline{\mathcal{M}} _{g}
(\{J _{t} \}, A)$.
\end{definition}
Let us slightly expand this definition. If the connected components of $\overline{\mathcal{M}} _{g}
(\{J _{t} \}, A)$ are open subspaces of this space, then we have a sky catastrophe in the sense above if and only if there is a $u \in \overline{\mathcal{M}} _{g} (J _{i}, A) $ which has a non-compact connected component in $\overline{\mathcal{M}} _{g}
(\{J _{t} \}, A)$.
\begin{proposition} \label{prop:boundeddeformation0}
Let $M$ be a closed manifold, and suppose that $\{J _{t}\} $, $t \in [0,1]$ has no sky catastrophes, then if $J _{i}$, $i =0,1$ are bounded:
\begin{equation*}
GW _{g,n} (A, J _{0} ) = GW _{g,n} (A, J _{1} ),
\end{equation*}
if $A \neq 0$. If only $J _{0} $ is bounded and $GW _{g,n} (A, J _{0} ) \neq 0$, then there is at least one class $A$ $J _{1} $-holomorphic curve in $M$.
\end {proposition}
The assumption on $A$ is for simplicity in this case.
At this point in time there are no known examples of families $\{ J _{t} \}$ with sky catastrophes, cf. \cite{citeFullerBlueSky}.
\begin{question}
Do sky catastrophes exist?
\end{question}
Really what we are interested in is whether they exist generically.
The author's opinion is that they do appear even generically. However for locally conformally symplectic deformations $\{(\omega _{t}, J _{t} )\}$ as previously defined, it might be possible that sky catastrophes cannot exist generically; for example it looks very unlikely that an example can be constructed with Reeb tori (see the following section), cf.
\cite{citeSavelyevFuller}.
If sky catastrophes do not exist then the theory of Gromov-Witten invariants of an $\lcsm$ would be very different.
In this direction we have the following. For a $\lcsm$ $(M,\omega)$ we have
a uniquely associated class (the Lee class) $\alpha = \alpha _{\omega} \in H ^{1} (M, \mathbb{R}) $ s.t. on the associated covering space $\widetilde{M} $, $\widetilde{\omega} $ is globally conformally symplectic. The class $\alpha$ is the Cech 1-cocycle, given as follows.
Let $\phi _{a,b}$ be the transition map for $\lcs$ charts $\phi _{a}, \phi _{b} $ of $(M, \omega)$. Then $\phi _{a,b} ^{*} \omega _{st} = g _{a,b} \cdot \omega _{st} $ for a positive real constant $g _{a,b} $ and $\{\ln g _{a,b} \}$ gives our 1-cocycle.
\begin{theorem} \label{thm:noSkycatastrophe} Let $M$ be a closed 4-fold and $\{(\omega _{t}, J _{t} )\}$, $t \in [0,1]$, a continuous family of compatible $\lcs$ pairs on $M$.
Let $\Sigma _{i} \subset M$, $i=0, \ldots, m$ be a collection of hypersurfaces s.t. $PD(\alpha _{t} ) = \sum _{i} a _{i,t} [\Sigma _{i} ] $ for each $t$.
Let $\{J _{t} \}$ be such that for each $t$ there is a foliation of $\Sigma _{i} $ by $J _{t} $-holomorphic class $B$ curves, then $\{J _{t} \}$ has no sky catastrophes in every class $A$ s.t. $A \cdot B \leq 0$.
\end{theorem}
\section{Introduction part II, beyond bounded case and the Fuller index} \label{sec:highergenus}
Suppose $(M,J)$ is a compact almost complex manifold, let
$N \subset
\overline{\mathcal{M}} _{g,n} (J,
A) $ be an open compact subset with $\energy$ positive on $N$. The latter condition is only relevant when $A =0$.
We shall primarily refer in what follows to the work of Pardon in \cite{citePardonAlgebraicApproach}, only because it is more familiar to the author, due to greater comfort with algebraic topology as opposed to analysis.
But we should mention that the latter is a follow-up to a profound theory originally created by Fukaya-Ono \cite{citeFukayaOnoArnoldandGW}, and later expanded with Ohta \cite{citeFukayaLagrangianIntersectionFloertheoryAnomalyandObstructionIandII}.
The construction in \cite{citePardonAlgebraicApproach} of an implicit atlas on the moduli space $\mathcal{M}$ of curves in a symplectic manifold only needs a neighborhood of $\mathcal{M}$ in the space of all curves. So more generally if we have an almost complex manifold and an \emph{open} compact component $N$ as above, this will likewise have a natural implicit atlas, or a Kuranishi structure in the setup of \cite{citeFukayaOnoArnoldandGW}. And so such an $N$ will have
a virtual fundamental class in the sense of Pardon~\cite{citePardonAlgebraicApproach}, (or in any other approach to virtual fundamental cycle, particularly the original approach of Fukaya-Oh-Ohta-Ono).
This understanding will be used in other parts of the paper, following Pardon for the explicit setup.
We may thus define functionals:
\begin{equation*}
GW _{g,n} (N,A,J): H_* (\overline{M} _{g,n}) \otimes H _{*} (M ^{n} ) \to
\mathbb{Q}.
\end{equation*}
The first question is: how do these functionals depend on $N,J$?
\begin{lemma} \label{prop:invariance1} Let $\{J _{t} \}$, $t \in [0,1]$ be a continuous in the $C ^{\infty} $ topology homotopy.
Suppose that $\widetilde{N}$ is an open compact subset of the cobordism moduli
space $\overline{\mathcal{M}} _{g,n} (\{J _{t} \},
A)
$.
Let $$N _{i} = \widetilde{N} \cap \left( \overline{\mathcal{M}} _{g,n} (J _{i},
A)\right), $$ with $\energy$ positive on $N _{i} $, then $$GW _{g,n} (N _{0}, A, J _{0} ) = GW _{g,n} (N _{1}, A,
J _{1}). $$ In particular if $GW _{g,n} (N _{0}, A, J _{0} )
\neq 0$, there is a class $A$ $J _{1} $-holomorphic stable map in $M$.
\end{lemma}
The most basic lemma in this setting is the following, and we shall use it in the following section.
\begin{definition} An \textbf{\emph{almost symplectic pair}} on $M$ is a tuple $(M, \omega, J)$, where $\omega$ is a non-degenerate 2-form on $M$, and $J$ is $\omega$-compatible.
\end {definition}
\begin{definition} We say that a pair of almost symplectic pairs $(\omega _{i}, J _{i} )$ are \textbf{$\delta$-close}, if $\{\omega _{i}\}$, and $\{J _{i} \}$ are $C ^{\infty} $ $\delta$-close, $i=0,1$.
\end{definition}
\begin{lemma} \label{thm:nearbyGW}
Given a compact $M$ and an almost symplectic tuple $(\omega, J)$ on $M$, suppose that $N \subset \overline{\mathcal{M}} _{g,n} (J,A)
$ is a compact and open component
which is energy isolated meaning that $$N \subset \left( U =
\energy _{\omega} ^{-1} (E ^{0}, E ^{1}) \right)
\subset \left( V =
\energy _{\omega} ^{-1} (E ^{0} - \epsilon, E ^{1} + \epsilon ) \right),
$$ with $\epsilon>0$, $E ^{0} >0$ and with $V \cap \overline{\mathcal{M}} _{g,n} (J,A)
= N$. Suppose also that $GW _{g,n} (N, J,A) \neq 0$.
Then there is a $\delta>0$ s.t. whenever $(\omega', J')$ is a compatible almost symplectic pair
$\delta$-close to $(\omega, J)$, there exists $u \in \overline {\mathcal{M}} _{g,n} (J',A) \neq \emptyset $, with $$E ^{0} < \energy _{\omega'} (u) < E ^{1}.
$$
\end{lemma}
\subsection{Gromov-Witten theory of the $\lcsm$ $C \times S ^{1}$}
\label{sec:gromov_witten_theory_of_the_lcs_c_times_s_1_}
Let $(C, \lambda)$ be a closed contact manifold with contact form
$\lambda$.
Then $T = S ^{1} $ acts on $C \times S ^{1} $ by rotation in the $S ^{1} $
coordinate.
Let $J$ be an almost complex structure on the contact distribution,
compatible with $d \lambda $. There is an induced almost complex
structure $J ^{\lambda} $ on $C \times S ^{1} $, which is
$T$-invariant, coincides with
$J$ on the contact distribution $$\xi \subset TC \oplus \{\theta\} \subset T (C
\times S ^{1} ),$$ for each $\theta$ and which maps the Reeb vector
field $$R ^{\lambda} \in TC \oplus 0 \subset T (C \times S ^{1} ) $$
to $$\frac{d}{d
\theta} \in \{0\} \oplus TS ^{1} \subset T (C \times S ^{1} ), $$
for $\theta \in [0, 2 \pi]$ the global angular coordinate on $S ^{1} $.
This almost complex structure
is compatible with $d ^{\alpha} \lambda $.
We shall be looking below at the moduli space of marked holomorphic tori, (elliptic curves) in
$C \times S ^{1} $, in a certain class $A$.
Our notation for this is $\overline{\mathcal{M}}
_{1,1} (
{J} ^{\lambda},
A )$, where $A$ is a class of the maps, (to be explained).
The elements are
equivalence classes of pairs $(u, \Sigma)$: $u$ a $J
^{\lambda} $-holomorphic map of a stable genus $1$, elliptic curve
$\Sigma$ into $C \times S ^{1} $. So $\Sigma$ is a nodal curve
with principal component an elliptic curve, and other components
spherical. So the principal component
determines an element of ${M}_{1,1} $ the compactified moduli space of elliptic curves, which is understood as an orbifold. The equivalence relation is $(u, \Sigma) \sim (u',
\Sigma')$ if there is an isomorphism of marked elliptic curves $\phi: \Sigma \to \Sigma'$
s.t. $u' \circ \phi =u$.
When $\Sigma$ is smooth, we may write $[u,j]$ for an equivalence class where
$j$ is understood as a complex structure on the sole principal
component of the domain, and $u$ the map. Or we may just write $[u]$, or even just $u$ keeping
track of $j$, and of the fact that we are dealing with equivalence classes, implicitly.
Let us explain what class $A$ means. We need to be careful because it is now possible for non-constant holomorphic curves to be null-homologous. Here is a simple example: take $S ^{3} \times S ^{1} $ with $J$ determined by the Hopf contact form as above; then all the Reeb tori are null-homologous.
In general we can just work with homology classes $A \neq 0$, and this will remove some headache, but in the above specific situation this is inadequate.
Given $u \in \overline{\mathcal{M}}
_{1,1} (
{J} ^{\lambda},
A )$ we may compose $\Sigma \xrightarrow{u} C \times S ^{1} \xrightarrow{pr} S ^{1} $, for $\Sigma$ the nodal domain of $u$.
\begin{definition} \label{definition:class}
In the setting above we say that $u$ is in class $A$, if $(pr \circ u) ^{*} d \theta $ can be completed to an integral basis of $H ^{1} (\Sigma, \mathbb{Z}) $, and if the homology class of $u$ is $A$, possibly zero.
\end{definition}
It is easy to see that the above notion of class is preserved under Gromov convergence, and that a class $A$ $J$-holomorphic map cannot be constant for any $A$, in particular by Theorem \ref{thm:quantization} a class $A$ map has energy bounded from below by a positive constant, depending on $(\omega,J)$. And this holds for any $\lcs$ pair $(\omega,J)$ on $C \times S ^{1} $.
\subsubsection {Reeb tori}
For the almost complex structure $J ^{\lambda} $ as above we have
one natural class of holomorphic tori in $C \times S ^{1} $ that we
call \emph{Reeb tori}. Given a closed orbit $o$ of $R ^{\lambda} $, a Reeb
torus $u _{o} $ for $o$, is the
map $$u_o (\theta _{1}, \theta _{2} )= (o (\theta _{1} ), \theta
_{2} ),$$ $\theta _{1},
\theta _{2} \in S ^{1} $.
A Reeb torus is $J ^{\lambda} $-holomorphic for a uniquely determined holomorphic
structure $j$ on $T ^{2} $.
If $$D _{t} o (t) = c \cdot R ^{\lambda} (o (t)), $$ then $$j
(\frac{\partial}{\partial \theta _{1} }) = c \frac{\partial} {\partial \theta
_{2}}. $$
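For completeness, here is the verification that $u _{o} $ is then $J ^{\lambda} $-holomorphic: $du _{o} (\frac{\partial}{\partial \theta _{1} }) = c R ^{\lambda} $ and $du _{o} (\frac{\partial}{\partial \theta _{2} }) = \frac{d}{d \theta} $, so
\begin{equation*}
J ^{\lambda} du _{o} (\frac{\partial}{\partial \theta _{1} }) = c \frac{d}{d \theta} = du _{o} (j \frac{\partial}{\partial \theta _{1} }), \qquad J ^{\lambda} du _{o} (\frac{\partial}{\partial \theta _{2} }) = - R ^{\lambda} = du _{o} (j \frac{\partial}{\partial \theta _{2} }),
\end{equation*}
using $j (\frac{\partial}{\partial \theta _{2} }) = - \frac{1}{c} \frac{\partial}{\partial \theta _{1} }$, which is forced by $j ^{2} = -1$.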
\begin{proposition} \label{prop:abstractmomentmap}
Let $(C, \lambda)$ be as above. Let $A$ be a class in the sense above, and ${J}^ {\lambda} $
be as above. Then the entire moduli space $\overline{\mathcal{M}}
_{1,1} (
{J} ^{\lambda},
A )$ consists of Reeb tori.
\end{proposition}
Note that the formal dimension of $\overline{\mathcal{M}}
_{1,1} (
{J} ^{\lambda},
A )$ is 0, for $A$ as in the proposition above.
It is given by the
Fredholm index of the operator \eqref{eq:fullD} which is 2, minus the dimension of the reparametrization group (for smooth curves) which is 2. We shall relate the count of these curves to the classical Fuller index, which is reviewed in the Appendix \ref{appendix:Fuller}.
If $\beta $ is a free homotopy class of a loop in $C$ denote by ${A} _{\beta} $ the induced homology class of a Reeb torus in $C \times S ^{1} $.
We show:
\begin{theorem} \label{thm:GWFullerMain}
\begin{equation*}
GW _{1,1} (N,A _{\beta} ,J ^{\lambda} ) ([M _{1,1}] \otimes [C \times S ^{1} ]) = i (\widetilde{N}, R ^{\lambda}, \beta),
\end{equation*}
where $N \subset \overline{\mathcal{M}}
_{1} (
{J} ^{\lambda},
A )$ is open-compact as before, $\widetilde{N} $ is the corresponding subset of periodic orbits of $R ^{\lambda} $, $\beta, A _{\beta} $ as above, and where $i (\widetilde{N}, R ^{\lambda}, \beta)$ is the Fuller index.
\end{theorem}
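Let us recall, from Appendix \ref{appendix:Fuller}, the shape of the right hand side in the simplest case (stated here only for orientation): if $\widetilde{N} $ consists of finitely many isolated orbits $o$, then
\begin{equation*}
i (\widetilde{N}, R ^{\lambda}, \beta) = \sum_{o} \frac{1}{m (o)} \operatorname{ind} (o),
\end{equation*}
where $m (o)$ is the multiplicity of $o$ and $\operatorname{ind} (o)$ is the fixed point index of the associated return map. The weights $\frac{1}{m (o)} $ are matched, on the left hand side, by the orbifold points of the moduli space coming from multiply covered Reeb tori, in line with the interpretation of the invariant as a virtual orbifold Euler number mentioned above.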
What about higher genus invariants of $C \times S ^{1} $? Following Proposition \ref{prop:abstractmomentmap}, it is not hard to see that all $J ^{\lambda} $-holomorphic curves must be branched covers of Reeb tori. If one can show that these branched covers are
regular when the underlying tori are regular, the calculation of invariants would be fairly automatic from this data; see \cite{citeWendlSuperRigid}, \cite{citeWendlChris} where these kinds of regularity calculations are made.
What follows is one non-classical application of the above theory.
\begin{theorem} \label{thm:holomorphicSeifert} Let $(S ^{2k+1} \times S ^{1}, d ^{\alpha} \lambda _{st} )$ be the $\lcsm$ associated to a contact manifold $(S ^{2k+1},\lambda _{st} )$ for $\lambda _{st} $ the standard contact form.
There exists a $\delta>0$ s.t. for any $\lcs$ pair $(\omega, J )$ $\delta$-close to $( d ^{\alpha} \lambda _{st}, J ^{\lambda _{st} } )$, there exists an elliptic, class $A$, $J$-holomorphic curve in $S ^{2k+1} \times S ^{1} $. (Where $A$ is as in the discussion above.)
\end{theorem}
It is natural to conjecture that the $\delta$-nearby condition can be removed. Indeed if $\omega= d^{\alpha} \lambda $ for $\lambda$ the standard contact form on $S ^{2k+1} $, or any contact structure on a threefold, and $J=J ^{\lambda} $ then we know there are $J$-holomorphic class $A$ tori, since we know there are $\lambda$-Reeb orbits, as the Weinstein conjecture is known to hold in these cases, \cite{citeViterboWeinstein}, \cite{citeTaubesWeinsteinconjecture} and hence there are Reeb tori.
Thus the following is a kind of ``holomorphic Seifert/Weinstein conjecture''.
\begin{conjecture} For any $\lcs$ pair $(\omega, J)$ on $S ^{2k+1} \times S ^{1} $ or on $C \times S ^{1} $ for $C$ a threefold, there is an elliptic, non-constant $J$-holomorphic curve.
\end{conjecture}
We note that it implies the Weinstein conjecture for $S ^{2k+1} $ and for $C$ a contact three-fold by Proposition \ref{prop:abstractmomentmap}.
\section {Basic results, and non-squeezing} \label{section:basics}
\begin{proof} [Proof of Theorem \ref{thm:complete}] (Outline, as the argument is
standard.) Suppose that we have a sequence $u ^{k} $ of
$J$-holomorphic maps with $L ^{2} $-energy $\leq E$.
By
\cite
[4.1.1]{citeMcDuffSalamon$J$--holomorphiccurvesandsymplectictopology},
a sequence $u ^{k} $ of $J$-holomorphic curves has a convergent
subsequence if $sup _{k} ||du ^{k} || _{L ^{\infty} } < \infty$.
On the other hand when this condition does not hold a rescaling
argument tells us that a holomorphic sphere bubbles off. The
quantization Theorem \ref{thm:quantization} then tells us that these
bubbles have some minimal energy, so if the total energy is capped by
$E$, only finitely many bubbles may appear, so that a subsequence of
$u ^{k} $ must converge in the Gromov topology to a Kontsevich stable map.
\end{proof}
\begin{proof} [Proof of Lemma \ref{prop:invariance1}]
Let $\widetilde{N} $ be as in the hypothesis. By Theorem \ref{thm:quantization} there is an $\epsilon>0$ s.t. $$\energy ^{-1} ([0,\epsilon)) \subset \overline{\mathcal{M}} _{g,n} (\{J _{t} \},A)$$ consists of constant curves. (Only relevant when $A=0$.)
Thus $\widetilde{N}'= \widetilde{N} - \energy ^{-1} ([0,\epsilon)) $ is an open-closed subset of $\overline{\mathcal{M}} _{g,n} (\{J _{t} \},A)$ and is compact, as $\widetilde{N} $ is compact.
We may then construct exactly as in \cite{citePardonAlgebraicApproach} a natural implicit atlas on $\widetilde{N}' $, with boundary $N _{0} ^{op} \sqcup N _{1} $, ($op$ denoting opposite orientation). And so
\begin{equation*}
GW _{g,n} (N _{0}, A, J _{0} ) = GW _{g,n} (N _{1}, A, J _{1} ).
\end{equation*}
\end{proof}
\begin{proof} [Proof of Lemma \ref{thm:nearbyGW}]
\begin{lemma} \label{lemma:NearbyEnergy} Given a Riemannian manifold $(M,g)$,
and $J$
an almost complex structure, suppose that $N \subset \overline{\mathcal{M}} _{g,n} (J,A)
$ is a compact and open component
which is energy isolated meaning that $$N \subset \left( U =
\energy _{g} ^{-1} (E ^{0}, E ^{1}) \right)
\subset \left( V =
\energy _{g} ^{-1} (E ^{0} - \epsilon, E ^{1} + \epsilon ) \right),
$$ with $\epsilon>0$, $E _{0}>0 $, and with $V \cap \overline{\mathcal{M}} _{g,n} (J,A)
= N$. Then there is a $\delta>0$ s.t. whenever $(g', J')$ is $C ^{\infty} $
$\delta$-close to $(g, J)$ if $u \in \overline{\mathcal{M}} _{g,n} (J',A)
$ and $$E ^{0} -\epsilon < \energy _{g'} (u) < E ^{1} +\epsilon $$
then $$E ^{0} < \energy _{g'} (u) < E ^{1} .
$$
\end{lemma}
\begin{proof} [Proof of Lemma \ref{lemma:NearbyEnergy}]
Suppose otherwise; then there is a sequence $\{(g _{k}, J _{k}) \}$ $C ^{\infty} $ converging to $(g,
J)$, and a sequence $\{u _{k} \}$ of $J _{k} $-holomorphic stable maps satisfying
$$E ^{0} - \epsilon < \energy _{g _{k} } (u _{k} ) \leq E ^{0} $$
or $$E
^{1} \leq \energy _{g
_{k} } (u _{k} ) < E ^{1} +\epsilon.
$$ By Gromov compactness we may find a Gromov convergent subsequence $\{u _{k _{j} } \}$ to a $J$-holomorphic
stable map $u$, with $$E ^{0} - \epsilon < \energy _{g} (u) \leq E ^{0}
$$ or $$E
^{1} \leq \energy _{g}
(u) < E ^{1} +\epsilon.
$$ But by our assumptions such a $u$ does not exist.
\end{proof}
\begin{lemma} \label{lemma:NearbyEnergyDeformation} Given a compact almost symplectic compatible triple $(M,\omega,J)$,
such that $N \subset \overline{\mathcal{M}} _{g,n} (J,A)
$ is exactly as in the lemma above.
There is a $\delta'>0$ s.t. the following is satisfied.
Let $(\omega',J')$ be $\delta'$-close to
$(\omega,J)$, then there is a smooth family of almost symplectic pairs $\{(\omega _{t} , J _{t} )\}$,
$(\omega _{0}, J_0)= (\omega, J)$, $(\omega _{1}, J_1)= (\omega', J')$
s.t. there is open compact subset
\begin{equation*}
\widetilde{N} \subset \overline{\mathcal{M}} _{g,n} (\{J _{t}\},A),
\end{equation*}
and
with $$\widetilde{N} \cap \overline{\mathcal{M}} (J,A) = N.
$$
Moreover if $(u,t) \in \widetilde{N}
$ then $$E ^{0} < \energy _{\omega _{t} } (u) < E ^{1}.
$$
\end{lemma}
\begin{proof} Given $\epsilon$ as in the hypothesis let $\delta$ be as in Lemma \ref{lemma:NearbyEnergy}.
\begin{lemma} \label{lemma:Ret} Given a $\delta>0$ there is a $\delta'>0$ s.t. if $(\omega',J')$ is $\delta'$-near $(\omega, J)$ there is an interpolating family $\{(\omega _{t}, J _{t} )\}$ with $(\omega _{t},J _{t} )$ $\delta$-close to $(\omega,J)$ for each $t$.
\end{lemma}
\begin{proof} Let $\{g _{t} \} $ be the family of metrics on $M$ given by the convex linear combination of $g=g _{\omega,J} $ and $g' = g _{\omega',J'} $. Clearly $g _{t} $ is $\delta'$-close to $g _{0} $ for each $t$. Likewise the family of 2-forms $\{\omega _{t} \}$ given by the convex linear combination of $\omega $, $\omega'$ is non-degenerate for each $t$ if $\delta'$ was chosen to be sufficiently small, and
is $\delta'$-close to $\omega$ for each moment.
Let $$ret: Met (M) \times \Omega (M) \to \mathcal{J} (M) $$ be the ``retraction map'' (it can be understood as a retraction followed by projection) as defined in \cite [Prop 2.50]{citeMcDuffSalamonIntroductiontosymplectictopology}, where $Met (M)$ is space of metrics on $M$, $\Omega (M)$ the space of 2-forms on $M$, and $ \mathcal{J} (M)$ the space of almost complex structures. This map has the property that the almost complex structure $ret (g,\omega)$ is compatible with $\omega$.
Then $ \{(\omega _{t}, ret (g _{t}, \omega _{t})) \} $ is a compatible family.
As $ret $ is continuous in the $C ^{\infty} $-topology, $\delta'$ can be chosen so that the $ \{ ret (g _{t}, \omega _{t}) \} $ are $\delta$-nearby.
\end {proof}
Let $\delta'$ be chosen with respect to $\delta$ as in the above lemma and $\{ (\omega _{t}, J _{t} )\}$ be the corresponding family.
Let $\widetilde{N} $ consist of all elements $(u,t) \in \overline{\mathcal{M}} (\{J _{t} \},A)$ s.t. $$E ^{0} -\epsilon < \energy _{\omega _{t} } (u) < E ^{1} +\epsilon.
$$
Then by Lemma \ref{lemma:NearbyEnergy} for each $(u,t) \in \widetilde{N} $, we have:
$$E ^{0} < \energy _{\omega _{t} } (u) < E ^{1}.$$
In particular $\widetilde{N}$ must be closed, it is also clearly open, and is compact as $\energy$ is a proper function.
\end{proof}
To finish the proof of the main lemma, let $N$ be as in the hypothesis, $\delta'$ as in Lemma \ref{lemma:NearbyEnergyDeformation}, and $\widetilde{N} $ as in the conclusion to Lemma \ref{lemma:NearbyEnergyDeformation}; then by Lemma \ref{prop:invariance1} $$GW _{g,n} (N_1, A, J') = GW _{g,n} (N, A, J) \neq 0,$$ where $N _{1} = \widetilde{N} \cap \overline{\mathcal{M}} _{g,n} (J _{1},A) $. So the conclusion follows.
\end {proof}
\begin {proof}[Proof of Proposition \ref{prop:boundeddeformation0}]
For each $u \in \overline{\mathcal{M}} _{g,n} (J _{i} , A)$, $i=0,1$ fix an open-compact subset $V _{u} $ of $\overline{\mathcal{M}} _{g,n} (\{J _{t} \}, A)$ containing $u$.
We can do this by the hypothesis that there are no sky catastrophes.
Since $\overline{\mathcal{M}} _{g,n} (J _{i} , A)$ are compact we
may find a finite subcover $$\{V _{u _{i} } \} \cap (\overline{\mathcal{M}} _{g,n}
(J _{0} , A) \cup \overline{\mathcal{M}} _{g,n} (J _{1} , A))$$ of $\overline{\mathcal{M}} _{g,n}
(J _{0} , A) \cup \overline{\mathcal{M}} _{g,n} (J _{1} , A)$, considering $\overline{\mathcal{M}} _{g,n}
(J _{0} , A) \cup \overline{\mathcal{M}} _{g,n} (J _{1} , A)$ as a subset of $\overline{\mathcal{M}} _{g,n} (\{J _{t} \}, A)$ naturally.
Then $V=\bigcup_{i} V _{u _{i} } $ is an open compact subset of
$\overline{\mathcal{M}} _{g,n} (\{J _{t} \}, A)$, s.t. $$V \cap \overline{\mathcal{M}} _{g,n} (J _{i} , A) = \overline{\mathcal{M}} _{g,n} (J _{i} , A).
$$ Now apply Lemma \ref{prop:invariance1}.
Likewise if only $J _{0} $ is bounded, for each $u \in \overline{\mathcal{M}} _{g,n} (J _{0} , A)$, fix an open-compact subset $V _{u} $ of $\overline{\mathcal{M}} _{g,n} (\{J _{t} \}, A)$ containing $u$. Since $\overline{\mathcal{M}} _{g,n} (J _{0} , A)$ is compact we
may find a finite subcover $$\{V _{u _{i} } \cap \overline{\mathcal{M}} _{g,n}
(J _{0} , A) \} $$ of $\overline{\mathcal{M}} _{g,n}
(J _{0} , A) $.
Then $V=\bigcup_{i} V _{u _{i} } $ is an open compact subset of
$\overline{\mathcal{M}} _{g,n} (\{J _{t} \}, A)$, s.t. $$V \cap \overline{\mathcal{M}} _{g,n} (J _{0} , A) = \overline{\mathcal{M}} _{g,n} (J _{0} , A).
$$
Again apply Lemma \ref{prop:invariance1}.
\end {proof}
\begin{proof} [Proof of Theorem \ref{thm:noSkycatastrophe}]
We shall actually prove the stronger statement that there is a universal upper energy bound for class $A$, $J _{t} $-holomorphic curves. Suppose otherwise;
then there is a sequence $\{u _{k} \}$ of $J _{t _{k} } $-holomorphic class $A$ curves, with $\energy _{\omega _{t _{k} } } (u _{k}) \to \infty $ as $k \to \infty$.
We may assume that $t _{k} $ is convergent to $t _{0} \in [0,1] $. Let $\{ \widetilde{u}_{k} \}$ be a lift of the curves to the covering space $\widetilde{M} \xrightarrow{\pi} M $ determined by the class $\alpha$ as described prior to the formulation of the theorem. If the image of $\{\widetilde{u}_{k} \}$ is contained (for a specific choice of lifts) in a compact $K \subset \widetilde{M} $ then we have: \begin{equation*}
\energy _{t _{k} } (\widetilde{u} _{k} ) \simeq \energy _{t _{0} } (\widetilde{u} _{k} ) \leq C \langle \widetilde{\omega } _{t _{0} } ^{symp} , A \rangle,
\end{equation*}
where $\widetilde{\omega} _{t _{0} } = f \widetilde{\omega} ^{symp} $ for $\widetilde{\omega} ^{symp} $ symplectic on $\widetilde{M} $, $f >0$ and $C = \sup _{K} f $.
Hence energy would be universally bounded for all $\{u _{k} \}$.
Suppose there is no such $K$. Let $\{u _{k} \}$ be the corresponding sequence. We may suppose that $\image u _{k}$ does not intersect $\Sigma _{i} $ for all $k$ and $i$, since otherwise $u _{k} $ must be a branched covering map of a leaf of the $J _{t _{k} } $-holomorphic foliation of $\Sigma _{i} $ by the positivity of intersections, and consequently all such $u _{k} $ have lifts contained in a specific compact subset of $\widetilde{M} $.
The class $\alpha$ has a natural differential form representative, called the Lee form and defined as follows. We take a cover of $M$ by open sets $U _{a} $ in which $\omega= f _{a} \cdot \omega _{a} $ for $\omega _{a} $ symplectic, and $f _{a} $ a positive smooth function.
Then we have 1-forms $d (\ln f _{a} )$ in each $U _{a} $ which glue to a well defined closed 1-form on $M$. We conflate the notation for this 1-form with its cohomology class, and the \v{C}ech 1-cocycle $\alpha$ defined as before.
By our hypothesis that $PD(\alpha) = \sum _{i} a _{i} [\Sigma _{i} ] $ we have that $\pi ^{-1} ({M - \bigcup _{i} \Sigma _{i} })$ is a disjoint union $\sqcup _{i} K _{i} $ of bounded subsets, with respect to the proper function on $\widetilde{M} $ determined by the Lee 1-form $\alpha$.
Then for some $k'$ sufficiently large, the image of some lift $\widetilde{u} _{k'} $ intersects more than one of the $K _{i} $, and so $u _{k'} $ intersects some $\Sigma _{i} $, a contradiction.
\end{proof}
\begin{proof} [Proof of Theorem \ref{cor:nonsqueezing}] \label{section:proofnonsqueezing}
We need to say what our $C ^{0} $ norm on the space of $\lcs$ forms is.
\begin{definition} \label{def:norm}
The $C ^{0} $ norm on the space of $\lcs$ $2$-forms on $M$, is defined with respect to a fixed Riemannian metric $g$ on $M$, and is given by
\begin{equation*}
||\omega|| = ||\omega|| _{mass} + ||\alpha|| _{mass},
\end{equation*}
for $\alpha$ the Lee form as above and $||\cdot|| _{mass} $ the co-mass norms with respect to $g$ on differential $k$-forms. That is $||\eta|| _{mass} $ is the supremum over unit $k$-vectors $v$ of $|\eta (v)|$.
\end{definition} Explicitly this means the following: a sequence of $\lcs$ forms $\{\omega _{k}\} $ converges to $\omega$ iff the lifted sequence $\{\widetilde{\omega} _{k} \}$, on the cover $\widetilde{M} $ associated to $\alpha$, converges to $\widetilde{\omega}$ on compact sets, and $\widetilde{\omega} _{k} = f _{k} \widetilde{\omega} _{k} ^{symp} $, with $\widetilde{\omega} _{k} ^{symp} $ symplectic and $\{f _{k}\} $ a sequence of positive functions converging to $1$ on compact sets.
Fix an $\epsilon' >0$ s.t. any 2-form $\omega _{1} $ on $M$, $C ^{0} $ $\epsilon'$-close to $\omega$, is non-degenerate, and is non-degenerate on the leaves of $\Sigma _{i} $.
Suppose by contradiction that for every $\epsilon>0$ there exists an $\lcs$ embedding
$$\phi: B _{R} \hookrightarrow (M, \omega _{1}),
$$ satisfying the conditions. Assume that $\epsilon< \epsilon'$, and let $\{\omega _{t} \}$ denote the convex linear combination of $\omega$ and $\omega _{1} $, with $\omega _{0}=\omega $. By assumption $\omega _{t} $ is an $\lcs$ form for each $t$, and is non-degenerate on the leaves of $\Sigma _{i} $.
Extend $\phi _{*}j $ to an almost complex structure $J _{1} $ on $M$, preserving $T ^{fol} \Sigma _{i}$.
We may then extend this to a family $\{{J} _{t} \} $ of almost complex structures on $M$, s.t. $J _{t} $ is $\omega _{t} $-compatible for each $t$, and such that $J _{t} $ preserves $T ^{fol} \Sigma _{i} $ for each $i$, since the foliation of $\Sigma _{i} $ is $\omega _{t} $-symplectic for each $t$.
(For construction of $\{J _{t} \}$ use for example the map $ret$ from Lemma \ref{lemma:Ret}). Then the family $\{(\omega _{t}, J _{t} )\}$ satisfies the hypothesis of Theorem \ref{thm:noSkycatastrophe}, and so has no sky catastrophes in class $A$. Consequently by Lemma \ref{prop:invariance1} there is a class $A$ $J _{1}$-holomorphic curve $u$ passing through $\phi ({0}) $.
By the proof of Theorem \ref{thm:noSkycatastrophe} we may choose a lift to $\widetilde{M} $ for each such curve $u$ so that it is contained in a compact set $K \subset \widetilde{M} $, (independent of $\epsilon$ and all other choices). Now by definition of our $C ^{0} $-norm for every $\delta$ we may find an $\epsilon$
so that if $\omega _{1} $ is $\epsilon$-close to $\omega$ then $\widetilde{\omega} ^{symp} $ is $\delta$-close to $\widetilde{\omega} _{1} ^{symp} $ on $K$.
Since $ \langle \widetilde{\omega} ^{symp}, [\widetilde{u} ] \rangle =\pi r ^{2} $, if $\delta$ above is chosen to be sufficiently small then $$| \max _{K} f _{1} \langle \widetilde{\omega} _{1} ^{symp}, [\widetilde{u} ] \rangle - \pi \cdot r ^{2} | < \pi R ^{2} - \pi r^{2}, $$ since $$|\langle \widetilde{\omega} _{1} ^{symp}, [\widetilde{u} ] \rangle - \pi \cdot r ^{2}| \simeq |\langle \widetilde{\omega} ^{symp}, [\widetilde{u} ] \rangle - \pi \cdot r ^{2}| = 0,$$ for $\delta$ small enough, and $\max _{K} f _{1} \simeq 1$ for $\delta$ small enough, where $\simeq$ denotes approximate equality.
In particular we get that the $\omega _{1} $-area of $u$ is less than $\pi R ^{2} $.
We may then proceed as in Gromov's classical proof~\cite{citeGromovPseudoholomorphiccurvesinsymplecticmanifolds.} of the non-squeezing theorem to get a contradiction and finish the proof. More specifically $\phi ^{-1} ({\image \phi \cap \image u}) $ is a (nodal) minimal surface in $B _{R} $, with boundary on the boundary of $B _{R} $, and passing through $0 \in B _{R} $. By construction it has area strictly less than $\pi R ^{2} $, which is impossible by a classical result of differential geometry (the monotonicity theorem).
\end{proof}
\section {Proof of Theorem \ref{thm:GWFullerMain} and Theorem \ref{thm:holomorphicSeifert}}
\begin{proof}[Proof of Proposition \ref{prop:abstractmomentmap}]
Suppose we have a curve without spherical nodal components $u \in \overline{\mathcal{M}}_{1,1} (
{J} ^{\lambda} ,
A). $
We claim that
$(pr _{C } \circ u )_*$,
has rank everywhere
$\leq 1$, for $pr _{C}:C \times S ^{1} \to C $ the projection.
Suppose otherwise; then it is immediate,
by construction of $J ^{\lambda} $, that $\int _{\Sigma} u ^{*}
d\lambda > 0$, for $\Sigma$ the domain of $u$, but $d\lambda$ is exact so that this is
impossible. It clearly follows from this that $\Sigma$ must be smooth, (non-nodal).
Next observe that when the rank of $( pr _{C} \circ u )
_{*} $ is $1$,
its image is in the
Reeb line sub-bundle of $TC$, for otherwise the image
has a contact component, but this is $J ^{\lambda} $ invariant and so
again we get that $\int _{\Sigma } u ^{*}
d\lambda > 0$. We now show that the image of $pr _{C} \circ u $ is in
fact the image of some Reeb orbit.
Pick an identification of the domain $\Sigma$ of $u$ with a marked Riemann surface $(T
^{2}, j) $, $T ^{2} $ the standard torus.
We shall use throughout coordinates $(\theta _{1}, \theta _{2} )$ on $T ^{2} $,
$\theta _{1}, \theta _{2} \in S ^{1} $, with $S ^{1}$ the unit complex
numbers.
Then by assumption on the class $A$ (and WLOG) $\theta \mapsto pr _{S ^{1} } \circ u(\{\theta ^{1} _{0} \}
\times \{\theta\})$ is a degree $1$ curve, where $pr _{S ^{1} } : C \times S ^{1}
\to S ^{1} $ is the projection.
And so by Sard's theorem we have a regular value $\theta _{0} $, so that
$S _{0} = (pr _{S ^{1} } \circ u) ^{-1} (\theta _{0}) $ is an
embedded circle in $T ^{2} $.
Now
$d (pr _{S ^{1} } \circ u )$ is surjective along
$T (T ^{2} )| _{S _{0} } $, which means, since $u$ is $J ^{\lambda}
$-holomorphic, that $pr _{C} \circ u| _{S _{0} } $ has non-vanishing differential.
From this and the discussion above it follows that image of $pr _{C }
\circ u $ is the image of some embedded Reeb orbit $o _{u} $. Consequently
the image of $u$ is contained in the image of the Reeb torus of $o
_{u} $, and so (again by the assumption on $A$) $u$ is itself a Reeb torus map for some $o$ covering
$o _{u} $.
The statement of the proposition
follows when $u$ has no spherical nodal components. On the
other hand non-constant holomorphic spheres are impossible also by the
previous argument. So there are no nodal elements in $\overline{\mathcal{M}}_{1,1} (
{J} ^{\lambda},A) $ which completes the argument.
\end{proof}
\begin{proposition} \label{prop:regular} Let $(C, \xi)$ be a general
contact manifold. If
$\lambda$ is a non-degenerate contact 1-form for $\xi$,
then all the elements of $\overline{\mathcal{M}}_{1,1} ( J ^{\lambda}
, {A} )$ are regular curves. Moreover if $\lambda$ is degenerate then
for a period $P$ Reeb orbit $o$
the kernel of the associated real linear Cauchy-Riemann operator for
the Reeb torus of $o$ is naturally identified with the 1-eigenspace of $\phi
_{P,*} ^{\lambda} $,
the time $P$ linearized
return map $\xi (o (0)) \to \xi (o (0)) $
induced by the $R^{\lambda}$ Reeb flow.
\end{proposition}
\begin{proof}
We have previously shown that all $[u,j] \in \overline{\mathcal{M}}_{1,1} (J ^{\lambda}
, {A} ),$ are represented by smooth immersed curves (covering maps of Reeb tori).
Since each $u$ is immersed we may naturally get a splitting $u ^{*}T (C \times S ^{1} )
\simeq N _{u} \oplus T (T ^{2}) $,
using the metric $g _{J} $, where $N _{u} $ denotes the pull-back normal
bundle, which is identified with the pullback along the projection $C \times S ^{1} \to C $ of the distribution $\xi$.
The full associated real linear Cauchy-Riemann operator takes the
form:
\begin{equation} \label{eq:fullD}
D ^{J}_{u}: \Omega ^{0} (N _{u} \oplus T (T ^{2}) ) \oplus T _{j} M _{1,1} \to \Omega ^{0,1}
(T(T ^{2}), N _{u} \oplus T (T ^{2}) ).
\end{equation}
This is an index 2 Fredholm operator (after standard Sobolev
completions), whose restriction to $\Omega
^{0} (N _{u} \oplus T (T ^{2}) )$ preserves the splitting, that is the
restricted operator splits as
\begin{equation*}
D \oplus D': \Omega ^{0} (N _{u}) \oplus \Omega ^{0} (T (T ^{2}) ) \to \Omega ^{0,1}
(T (T ^{2}), N _{u}) \oplus \Omega ^{0,1}(T (T ^{2}), T (T ^{2}) ).
\end{equation*}
On the other hand the restricted Fredholm index 2 operator
\begin{equation*}
\Omega ^{0} (T (T ^{2})) \oplus T _{j} M _{1,1} \to \Omega ^{0,1}(T (T ^{2}) ),
\end{equation*}
is surjective by classical algebraic geometry.
It follows that $D ^{J}_{u} $ will be surjective
if
the restricted Fredholm index 0 operator
\begin{equation*}
D: \Omega ^{0} (N _{u} ) \to \Omega ^{0,1}
(N _{u} ),
\end{equation*}
has no kernel.
The bundle $N _{u} $ is symplectic with symplectic form on
the fibers given by restriction of $u ^{*} d \lambda$, and together with $J
^{\lambda} $ this gives a Hermitian structure on $N _{u} $. We have a
linear symplectic connection $A$ on $N _{u} $, which over the slices $S ^{1}
\times \{\theta _{2}' \} \subset T ^{2} $ is induced by the pullback
by $u$ of the
linearized $R ^{\lambda} $ Reeb flow. Specifically the $A$-transport map
from $N | _{(\theta _{1}', \theta _{2}' )} $ to $N | _{(\theta
_{1}'', \theta _{2}' )} $ over $ [\theta' _{1}, \theta _{1}'' ]
\times \{\theta _{2}' \} \subset T ^{2} $, $0 \leq \theta' _{1} \leq
\theta '' _{1} \leq 2\pi $ is given by $$(u_*| _{N | _{(\theta
_{1}'', \theta _{2}' )} }) ^{-1} \circ \phi ^{\lambda}
_{mult \cdot (\theta '' _{1} - \theta' _{1}) }
\circ u_*| _{N | _{(\theta
_{1}', \theta _{2}' )} }, $$ where $mult$ is the multiplicity of $o$
and where $\phi ^{\lambda}
_{mult \cdot (\theta '' _{1} - \theta' _{1}) } $ is the time $mult
\cdot (\theta
'' _{1} - \theta' _{1}) $ map for the $R ^{\lambda} $ Reeb flow.
The connection $A$ is defined to be trivial in the $\theta
_{2} $ direction, where trivial means that the parallel transport maps are
the
$id$ maps over $\theta _{2} $ rays. In particular the curvature $R _{A} $ of this connection
vanishes. The connection $A$ determines a real linear CR operator on
$N _{u} $ in the standard way (take the complex anti-linear part of
the vertical differential of a section). It is easy (but perhaps a little tedious) to verify from the definitions that this
operator is exactly $D$.
We have a differential 2-form $\Omega$ on
$N _{u} $ which in the fibers of $N _{u} $ is just the fiber symplectic form and
which is defined to vanish on the horizontal distribution.
The 2-form $\Omega$
is closed, which we may check explicitly by using that $R _{A} $ vanishes
to obtain local symplectic trivializations of $N _{u} $ in which $A$ is trivial.
Clearly $\Omega$ must vanish on the
0-section since it is an $A$-flat section. But any section is homotopic to
the 0-section and so in particular if $\mu \in \ker D$ then $\Omega$
integrates to zero over $\mu$. But since $\mu \in \ker D$ its
vertical differential is complex linear, and it must then
vanish: as $\Omega (v, J ^{\lambda}v )
>0$ for $0 \neq v \in T ^{vert}N _{u} $, we would otherwise
have $\int _{\mu} \Omega>0 $. So $\mu$ is
$A$-flat, in particular the
restriction of $\mu$ over all slices $S ^{1} \times \{\theta _{2}' \} $ is
identified with a period $P$ orbit of the
$R ^{\lambda} $ Reeb flow linearized at $o$, which
does not depend on $\theta _{2}' $ as $A$ is trivial in the $\theta
_{2} $ variable. So the kernel of $D$ is identified with the vector
space of period $P$ orbits of the $R
^{\lambda} $ Reeb flow linearized at $o$,
as needed.
\end{proof}
\begin{proposition} \label{prop:regular2} Let $\lambda$ be a contact form on a $(2n+1)$-fold $C$, and $o$ a non-degenerate, period $P$,
$R^{\lambda}$-Reeb orbit, then the orientation of $[u _{o} ]$
induced by the determinant line bundle orientation of $\overline{\mathcal{M}}_{1,1} ( J ^{\lambda}
, {A} )$ is $(-1) ^{CZ (o) -n} $, which is $$\sign \Det (\Id|
_{\xi (o(0))} - \phi _{P, *}
^{\lambda}| _{\xi (o(0))} ).$$
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:regular2}]
Abbreviate $u _{o} $ by $u$.
Fix a trivialization $\phi$ of $N _{u} $ induced by a trivialization of the
contact distribution $\xi$ along $o$ in the obvious sense: $N _{u} $
is the pullback of $\xi$ along the composition $$T ^{2} \to S ^{1}
\xrightarrow{o} C. $$
Then the pullback $A' $ of $A$ (as above) to $T ^{2} \times
\mathbb{R} ^{2n} $ is a connection whose parallel transport
path along $S ^{1} \times \{\theta _{2} \} $, $p: [0,1] \to \Symp (\mathbb{R} ^{2n} )$, starting at $\id$, is $\theta _{2} $
independent, and such that the parallel transport path of $A' $
along $\{\theta _{1} \} \times S ^{1} $ is constant, that is $A'
$ is trivial in the $\theta _{2} $ variable. We shall call such a
connection $A'$ on $T ^{2} \times
\mathbb{R} ^{2n} $ \emph{induced by $p$}. By the non-degeneracy assumption on $o$, the map $p(1) $
has no 1-eigenvalues. Let $p'': [0,1] \to \Symp (\mathbb{R} ^{2n} )$ be a path from $p (1)$ to a unitary
map $p'' (1)$, with $p'' (1) $ having no $1$-eigenvalues, s.t. $p''$
has only simple crossings with the Maslov cycle. Let $p'$ be the concatenation of $p$ and $p''$. We then get $$CZ (p') - \frac{1}{2}\sign \Gamma (p', 0) \equiv
CZ (p') - n \equiv 0 \mod {2}, $$ since
$p'$ is homotopic relative end points to a unitary geodesic path $h$ starting at
$id$, having regular crossings, and since the numbers of
negative and positive eigenvalues are even at each regular crossing of $h$ by unitarity. Here $\sign \Gamma (p', 0)$ is the index of the crossing form of the path $p'$ at time $0$, in the notation of \cite{citeRobbinSalamonTheMaslovindexforpaths.}.
Consequently
\begin{equation} \label{eq:mod2}
CZ (p'') \equiv CZ (p) -n \mod {2},
\end{equation}
by additivity of
the Conley-Zehnder index.
Let us then define a free homotopy $\{p _{t} \}$ of $p$ to
$p'$, where $p _{t} $ is the concatenation of $p$ with $p''| _{[0,t]} $,
reparametrized to have domain $[0,1]$ at each moment $t$. This
determines a homotopy $\{A' _{t} \}$ of connections induced by $\{p
_{t} \}$. By the proof of Proposition \ref{prop:regular}, the CR operator $D _{t} $ determined by each $A ' _{t} $ is surjective except at some finite
collection of times $t _{i} \in (0,1) $, $1 \leq i \leq N$, determined by the crossing
times of $p''$ with the Maslov cycle, and the kernel
of $D _{t _{i} } $ is identified with the 1-eigenspace of $p'' (t _{i} )$, whose dimension is 1
by the assumption that the crossings of $p''$ are simple.
The operator
$D _{1} $ is not complex linear so we concatenate the homotopy $\{D _{t}
\}$ with the homotopy $\{\widetilde{D} _{t} \}$ induced by the homotopy $\{\widetilde{A} _{t} \}$ of $A' _{1} $ to a
unitary connection $\widetilde{A} _{1} $, where the homotopy
$\{\widetilde{A} _{t} \}$, is through connections induced by paths
$\{\widetilde{p} _{t} \} $, giving a homotopy relative end points of $p'= \widetilde{p} _{0} $ to a unitary path
$\widetilde{p} _{1} $ (for example $h$ above).
Let us denote by $\{D' _{t} \}$ the
concatenation of $\{D _{t} \}$ with $\{ \widetilde{D} _{t} \}$. By
construction in the second half of the homotopy $\{ {D}' _{t}
\}$, ${D}' _{t} $ is surjective. And $D' _{1} $ is induced by a
unitary
connection, since it is induced by the unitary path $\widetilde{p}_{1} $.
Consequently $D' _{1} $ is complex linear. By the above construction,
for the homotopy $\{D' _{t} \}$, $D' _{t} $ is surjective except for
$N$ times in $(0,1)$, where the kernel has dimension one.
In
particular the sign of $[u]$ by the definition via the determinant
line bundle is exactly $$(-1)^{N}= (-1)^{CZ (p) -n},$$
by \eqref{eq:mod2}, which is what was to be proved.
\end {proof}
Thus if ${N} \subset \overline{\mathcal{M}} _{1,1} (J ^{\lambda}, A _{\beta} ) $ is open-compact and consists of isolated regular Reeb tori $\{u _{i} \}$, corresponding to orbits $\{o _{i} \}$ we have:
\begin{equation*}
GW _{1,1} (N, A _{\beta}, J ^{\lambda} ) ([M _{1,1} ] \otimes [C \times S ^{1} ]) = \sum _{i} \frac{(-1) ^{CZ (o _{i} ) - n} }{mult (o _{i} )},
\end{equation*}
where the denominator $mult (o _{i} )$ is there because our moduli space is understood as a non-effective orbifold, see Appendix \ref{sec:GromovWittenprelims}.
The expression on the right is exactly the Fuller index $i (\widetilde{N}, R ^{\lambda}, \beta)$.
Thus the theorem follows for $N$ as above. However in general if $N$ is open and compact then perturbing slightly we obtain a smooth family $\{R ^{\lambda _{t} } \}$, $\lambda _{0} =\lambda $, s.t.
$\lambda _{1} $ is non-degenerate, that is, has non-degenerate orbits.
And such that there is an open-compact subset $\widetilde{N} $ of $\overline{\mathcal{M}} _{1,1} (\{J ^{\lambda _{t} } \}, A _{\beta} )$ with $(\widetilde{N} \cap \overline{\mathcal{M}} _{1,1} (J ^{\lambda}, A _{\beta} )) = N $, cf. Lemma \ref{lemma:NearbyEnergyDeformation}.
Then by Lemma \ref{prop:invariance1} if $$N_1=(\widetilde{N} \cap \overline{\mathcal{M}} _{1,1} (J ^{\lambda _{1} }, A _{\beta} ))
$$ we get $$GW _{1,1} (N, A _{\beta}, J ^{\lambda} ) ([M _{1,1} ] \otimes [C \times S ^{1} ]) = GW _{1,1} (N _{1} , A _{\beta}, J ^{\lambda _{1} } ) ([M _{1,1} ] \otimes [C \times S ^{1} ]).
$$
By previous discussion \begin{equation*}
GW _{1,1} (N _{1} , A _{\beta}, J ^{\lambda _{1} } ) ([M _{1,1} ] \otimes [C \times S ^{1} ]) = i (N_1, R ^{\lambda_1}, \beta),
\end{equation*}
but by the invariance of Fuller index (see Appendix \ref{appendix:Fuller}), $$i (N_1, R ^{\lambda_1}, \beta) = i (N, R ^{\lambda}, \beta).
$$ This finishes the proof of Theorem \ref{thm:GWFullerMain}.
\qed
\begin{proof} [Proof of Theorem \ref{thm:holomorphicSeifert}]
Let $N \subset \overline{\mathcal{M}} _{1,1} (A, J ^{\lambda} ) $ be the subspace corresponding to the subspace $\widetilde{N} $ of all period $2 \pi$ $R ^{\lambda} $-orbits (under the Reeb tori correspondence). It is easy to compute (see for instance \cite{citeFullerIndex}) that $$i (\widetilde{N}, R ^{\lambda}) = \pm \chi (\mathbb{CP} ^{k}) \neq 0.
$$ By Theorem \ref{thm:GWFullerMain} $GW _{1,1} (N, J ^{\lambda}, A) \neq 0 $. The theorem then follows by Lemma \ref{thm:nearbyGW}.
\end{proof}
\begin {appendices}
\section {Fuller index} \label{appendix:Fuller} Let $X$ be a vector field on $M$. Set \begin{equation*} S (X) = S (X, \beta) =
\{(o, p) \in L _{\beta} M \times (0, \infty) \,\left . \right | \, \text{ $o: \mathbb{R}/\mathbb{Z} \to M $ is a
periodic orbit of $p X $} \},
\end{equation*}
where $L _{\beta} M $ denotes
the free homotopy class $\beta$ component of the free loop space.
Elements of $S (X)$ will be called orbits. There
is a natural $S ^{1}$ reparametrization action on $S (X)$, and elements of $S
(X)/S ^{1} $ will be called \emph{unparametrized orbits}, or just orbits. Slightly abusing notation we write $(o,p)$ for
the equivalence class of $(o,p)$.
The multiplicity $m (o,p)$ of a periodic orbit is
the ratio $p/l$ for $l>0$ the least period of $o$.
We want a kind of fixed point index which counts orbits
$(o,p)$ with certain weights; however in general to get
invariance we must
have period bounds. This is due to potential existence of sky catastrophes as
described in the introduction.
Let $N \subset S (X) $ be a compact open set.
Assume for simplicity that elements $(o,p) \in N$ are
isolated. (Otherwise we need to perturb.) Then
to such an $(N,X, \beta)$
Fuller associates an index:
\begin{equation*}
i (N,X, \beta) = \sum _{(o,p) \in N/ S ^{1}} \frac{1}{m
(o,p)} i (o,p),
\end{equation*}
where $i (o,p)$ is the fixed point index of the time $p$ return
map of the flow of $X$ with respect to
a local surface of section in $M$ transverse to the image of $o$.
Fuller then shows that $i (N, X, \beta )$ has the following invariance property.
Given a continuous homotopy $\{X _{t}
\}$, $t \in [0,1]$ let \begin{equation*} S ( \{X _{t} \}, \beta) =
\{(o, p, t) \in L _{\beta} M \times (0, \infty) \times [0,1] \,\left . \right | \, \text{ $o: \mathbb{R}/\mathbb{Z} \to M $ is a
periodic orbit of $p X _{t} $} \}.
\end{equation*}
Given a continuous homotopy $\{X _{t}
\}$, $X _{0} =X $, $t \in [0,1]$, suppose that $\widetilde{N} $ is an open compact subset of $S (\{X _{t} \})$, such that $$\widetilde{N} \cap \left (LM \times \mathbb{R} _{+} \times \{0\} \right) =N.
$$ Then if $$N_1 = \widetilde{N} \cap \left (LM \times \mathbb{R} _{+} \times \{1\} \right) $$ we have \begin{equation*}
i (N, X, \beta ) = i (N_1, X_1, \beta).
\end{equation*}
In the case where $X$ is the $R ^{\lambda} $-Reeb vector field on a contact manifold $(C ^{2n+1} , \xi)$,
and if $(o,p)$ is
non-degenerate, we have:
\begin{equation} \label{eq:conleyzenhnder}
i (o,p) = \sign \Det (\Id|
_{\xi (x)} - F _{p, *}
^{\lambda}| _{\xi (x)} ) = (-1)^{CZ (o)-n},
\end{equation}
where $F _{p, *}
^{\lambda}$ is the differential at $x = o(0)$ of the time $p$ flow map of $R ^{\lambda} $,
and where $CZ ( o)$ is the Conley-Zehnder index (which is a special
kind of Maslov index); see
\cite{citeRobbinSalamonTheMaslovindexforpaths.}.
\section {Virtual fundamental class} \label{sec:GromovWittenprelims}
This is a small note on how one deals with curves having non-trivial isotropy groups, in the virtual fundamental class technology. We primarily need this for the proof of Theorem \ref{thm:GWFullerMain}.
Given a closed oriented orbifold $X$, with an orbibundle $E$ over $X$,
Fukaya-Ono \cite{citeFukayaOnoArnoldandGW} show how to construct
using multi-sections its rational homology Euler
class, which when $X$ represents the moduli space of some stable
curves, is the virtual moduli cycle $[X] ^{vir} $. (Note that the story of the
Euler class is older than the work of Fukaya-Ono, and there is
possibly prior work in this direction.)
When this is in degree 0, the corresponding Gromov-Witten invariant is $\int _{[X] ^{vir} } 1. $
However they assume that their orbifolds are
effective. This assumption is not really necessary for the purpose of
construction of the Euler class but is convenient for other technical reasons. A
different approach to the virtual fundamental class which emphasizes branched manifolds is used by
McDuff-Wehrheim, see for example McDuff
\cite{citeMcDuffNotesOnKuranishi}, which does not have
the effectivity assumption, a similar use of branched manifolds appears in
\cite{citeCieliebakRieraSalamonEquivariantmoduli}. In the case of a
non-effective orbibundle $E \to X$, McDuff \cite{citeMcDuffGroupidsMultisections} constructs a homological
Euler class $e (E)$ using multi-sections, which extends the construction
\cite{citeFukayaOnoArnoldandGW}. McDuff shows that this class $e (E)$ is
Poincar\'e dual to the completely formally natural cohomological Euler class of
$E$, constructed by other authors. In other words there is a natural notion of a
homological Euler class of a possibly non-effective orbibundle.
We shall assume the following black box property of the virtual fundamental
class technology.
\begin{axiom} \label{axiom:GW} Suppose that the moduli space of stable maps is cleanly cut out,
which means that it is represented by a (non-effective) orbifold $X$ with an orbifold
obstruction bundle $E$, that is the bundle over $X$ of cokernel spaces of the linearized CR operators.
Then the virtual fundamental class $[X]^ {vir} $
coincides with $e (E)$.
\end{axiom}
Given this axiom it does not matter to us which virtual moduli cycle technique
we use. It is satisfied automatically by the construction of McDuff-Wehrheim
(at the moment in genus 0, but surely extending).
It can be shown to be satisfied in the approach of John Pardon~\cite{citePardonAlgebraicApproach}.
And it is satisfied by the construction of Fukaya-Oh-Ono-Ohta
\cite{citeFOOOTechnicaldetails}, although not quite immediately. This was also
communicated to me by Kaoru Ono.
When $X$ is 0-dimensional this does follow
immediately by the construction in
\cite{citeFukayaOnoArnoldandGW}, taking any effective Kuranishi neighborhood
at the isolated points of $X$ (this actually suffices for our paper).
As a special case most relevant to us here, suppose we have a moduli space
of elliptic curves in $X$, which is regular with
expected dimension 0. Then its underlying space is a collection of oriented points.
However as some curves are multiply covered, and so have isotropy
groups, we must treat this as a non-effective 0-dimensional oriented orbifold.
The contribution of each curve $[u]$ to the Gromov-Witten invariant $\int
_{[X] ^{vir} } 1 $ is $\frac{\pm 1}{[\Gamma ([u])]}$, where $[\Gamma ([u])]$ is
the order of the isotropy group $\Gamma ([u])$ of $[u]$,
in the McDuff-Wehrheim setup this is explained in \cite[Section
5]{citeMcDuffNotesOnKuranishi}. In the setup of Fukaya-Ono
\cite{citeFukayaOnoArnoldandGW} we may readily calculate to get the same thing
taking any effective Kuranishi neighborhood
at the isolated points of $X$.
\end {appendices}
\section{Acknowledgements}
I thank John Pardon for suggesting Proposition \ref{prop:abstractmomentmap}, Viktor Ginzburg for telling me about the Fuller index, which was crucial, and Dusa McDuff as well as
Kaoru Ono for related discussions. Thanks also to Yael Karshon, Yasha
Eliashberg, Emy Murphy, Peter Topping and Kenji Fukaya for helpful comments.
\bibliographystyle{siam}
\bibliography{/home/yasha/texmf/bibtex/bib/link}
\end {document} | {"config": "arxiv", "file": "1609.08991/conformalsymplectic21.tex"} |
TITLE: Solving limits by Taylor series
QUESTION [0 upvotes]: Perhaps I didn't fully understand the concept of Taylor series. I would like to compute
$$\lim_{x \to 1} \frac{\ln x}{x^{2}-1}$$
using Taylor expansion around the right point (point is not given).
So far I only solved 'series problems' when the point was given. I could just use l'Hopital's rule, but that's not the point of the exercise, I guess.
Answer: $0.5$.
REPLY [4 votes]: It may be helpful to change variables: let $u = x-1 \to 0$; then the limit becomes
$$
\begin{split}
\lim_{x \to 1} \frac{\ln x}{x^2-1}
&= \lim_{x \to 1} \frac{\ln x}{(x+1)(x-1)} \\
&= \lim_{u \to 0} \frac{\ln (1+u)}{u(u+2)} \\
&= \lim_{u \to 0} \frac{u - \frac{u^2}{2}+\frac{u^3}{3} \ldots}{u(u+2)} \\
&= \lim_{u \to 0} \frac{1 - \frac{u}{2}+\frac{u^2}{3} \ldots}{u+2} \\
&= \frac12
\end{split}
$$
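
As a sanity check, both the limit and the expansion around $x=1$ can be confirmed symbolically; here is a hypothetical sympy snippet (not part of the original answer):

import sympy as sp

x = sp.symbols('x')
expr = sp.log(x) / (x**2 - 1)

print(sp.limit(expr, x, 1))      # 1/2
print(sp.series(expr, x, 1, 2))  # 1/2 - (x - 1)/2 + O((x - 1)**2)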
REPLY [1 votes]: You need the expansion around the point at which you want to find the limit. $\ln x = \ln(1 + (x - 1)) = (x - 1) + o(x - 1)$, $x^2 - 1 = (x - 1) (x + 1) = (x - 1) \cdot 2 + o(x - 1)$ (all $o$ with base $x \to 1$). So the expression under the limit is $\frac{(x - 1) + o(x - 1)}{2(x - 1) + o(x - 1)} = \frac{1}{2} + o(1)$. | {"set_name": "stack_exchange", "score": 0, "question_id": 3265842}
TITLE: About generalized planar graphs and generalized outerplanar graphs
QUESTION [16 upvotes]: Any planar, respectively, outerplanar graph $G=(V,E)$ satisfies $|E'|\le 3|V'|-6$,
respectively, $|E'|\le 2|V'|-3$, for every subgraph $G'=(V',E')$ of $G$.
Also, (outer-)planar graphs can be recognized in polynomial time.
What is known about graphs $G=(V,E)$ such that $|E'|\le 3|V'|-6$
(resp. $|E'|\le 2|V'|-3$) for every subgraph $G'=(V',E')$ of $G$? Is it possible to
recognize them in polynomial time?
Edit (after Eppstein's nice answer): Any planar graph $G=(V,E)$ satisfies $|E'|\le 3|V'|-6$ for every subgraph $G'=(V',E')$ of $G$ with at least three vertices $|V'|\ge 3$. So, "generalized planar graphs" would be those satisfying this property, and recognizing them in polynomial time seems to be an (interesting) open question.
REPLY [13 votes]: What is known about "generalized outerplanar graphs" or (2,3)-sparse graphs?
Some additional facts to DavidEppstein's answer:
In 1982, in this paper
(Corollaries 1 and 2), Lovász and Yemini characterized generalized
outerplanar graphs (in their notation, generic independent graphs) as those
graphs $G$ having the property that doubling any edge of $G$ results in a graph
which is the edge-disjoint union of two forests.
Much earlier, in 1970, Henneberg and Laman proved that generalized outerplanar
graphs can be recursively obtained from $K_2$ by three so-called Henneberg moves
(adding a degree-1 vertex, adding a degree-2 vertex, and a certain kind of
adding a degree-3 vertex).
These characterizations give the first polynomial-time recognition algorithms for generalized
outerplanar graphs.
Some remarks related to generalized planar graphs can be found in the last section
of this paper.
I think, characterizing and recognizing generalized planar graphs still
remain very interesting open questions. | {"set_name": "stack_exchange", "score": 16, "question_id": 18517} |
TITLE: If $A$, $B$ are both pairwise independent of $C$, do we have that $A \cap B$ is also independent of $C$?
QUESTION [0 upvotes]: If $A$, $B$ are both pairwise independent of $C$, do we have that $A \cap B$ is also independent of $C$?
REPLY [0 votes]: Toss two fair coins. Let $A$ = "first toss heads", $B$ = "second toss heads", $C$ = "first same as second". These events are pairwise independent, each with probability $1/2$, but $A \cap B \subseteq C$, so $P(A \cap B \cap C) = 1/4 \neq 1/8 = P(A \cap B)P(C)$; hence $A \cap B$ is not independent of $C$. | {"set_name": "stack_exchange", "score": 0, "question_id": 3679281}
TITLE: If $\pi(x,y)=x$ and $F$ is closed, $\pi(F)$ is closed.
QUESTION [1 upvotes]: I have a problem with this because I didn't use one of the hypotheses (compactness of $K$) during my proof. Thank you in advance for your help.
Question: Let $K\subset \mathbb{R}^n$ be a compact set. Show that the projection $\pi: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^m$, with $\pi(x,y)=x$, transforms every closed subset $F\subset \mathbb{R}^m \times K$ into a closed set $\pi(F)\subset\mathbb{R}^m.$
My attempt:
Continuity of $\pi$:
Take $(a,b)\in \mathbb{R}^m \times \mathbb{R}^n$. Given $\epsilon > 0$, choose $\delta \leq \epsilon$; then
$$||x-a||\leq ||(x,y)-(a,b)||<\delta \implies ||\pi(x,y)-\pi(a,b)||=||x-a||<\delta\leq \epsilon.$$
Transform closed in closed:
Take $a\in \pi(F)$. Then, $a=\pi(a,b)$, with $(a,b) \in F$. Since $F$ is closed, there exists a sequence $\{(a_k, b_k)\}$ in $F$ such that
$$(a,b)=\lim (a_k, b_k).$$
By the continuity of $\pi$,
$$a=\pi(a,b)=\pi(\lim (a_k, b_k))=\lim \pi(a_k,b_k)=\lim a_k.$$
Since every element of $\pi(F)$ is the limit of a sequence of points in $\pi(F)$, it follows that the set is closed.
REPLY [1 votes]: You haven't shown that $\pi(F)$ is closed because you carried out the opposite process of showing something is closed in a metric space: you first picked $a \in \pi(F)$ and then tried to construct a sequence converging to it. This is always possible in arbitrary sets and it does not say anything about whether a set is closed or not. What you have to do is the opposite. You are first given an $a \in \mathbb{R}^m$ and a sequence $a_n \in \pi(F)$ (for $n \in \mathbb{Z}_+$) converging to it. Then, you have to derive that $a \in \pi(F)$.
To do this, note that as each sequence element $a_n$ is in the image $\pi(F)$, you get corresponding sequence elements $x_n \in F \subseteq \mathbb{R}^m \times K$ such that $x_n = (a_n, b_n)$ for some $b_n \in K$. So, $b_n$ is a sequence in $K$. Thus, $K$ being compact, we must have a convergent subsequence $b_{n(s)}$ (where $s \mapsto n(s)$ from $\mathbb{Z}_+$ to $\mathbb{Z}_+$ is an increasing map) converging to some $b \in K$. Now, as the subsequences converge to the same limit as their parent sequences, the subsequence $a_{n(s)}$ must converge to $a$. So, as a whole, $x_{n(s)} = (a_{n(s)}, b_{n(s)})$ in $F$ must converge to $(a, b)$. As $F$ is closed, this means that $(a, b)$ must be in $F$. But then, $a = \pi(a, b)$ must belong to $\pi(F)$ as desired. Note that compactness of $K$ is essential here: $F = \{(x,y) \in \mathbb{R} \times \mathbb{R} : xy = 1\}$ is closed, yet $\pi(F) = \mathbb{R} \setminus \{0\}$ is not, and indeed $K = \mathbb{R}$ is not compact. | {"set_name": "stack_exchange", "score": 1, "question_id": 3179138}
As pointed out in~\cite{bottou}, one is usually not interested in the
minimization of an \emph{empirical cost} on a finite training set, but instead
in minimizing an \emph{expected cost}. Thus, we assume from now on that $f$ has the form of an expectation:
\begin{equation}
\min_{\theta \in \Theta} \left[ f(\theta) \defin \E_{\x} [ \ell(\x,\theta) ] \right], \label{eq:obj2}
\end{equation}
where $\x$ from some set $\XX$ represents a data point, which is drawn according to
some unknown distribution, and $\ell$ is a continuous loss function. As often
done in the literature~\cite{nemirovski}, we assume
that the expectations are well defined and finite valued; we also assume that $f$ is bounded below.
We present our approach for tackling~(\ref{eq:obj2}) in Algorithm~\ref{alg:stochastic}.
At each iteration, we draw a training point~$\x_n$, assuming that these points
are i.i.d. samples from the data distribution. Note that in practice, since it
is often difficult to obtain true i.i.d. samples, the points $\x_n$ are computed by
cycling on a randomly permuted training set~\cite{bottou}.
Then, we choose a surrogate $g_n$ for the function $\theta
\mapsto \ell(\x_n,\theta)$, and we use it to update a function
$\barg_n$ that behaves as an approximate surrogate for the
expected cost~$f$. The function $\barg_n$ is in fact a weighted average of previously computed surrogates,
and involves a sequence of weights $(w_n)_{n \geq 1}$ that will
be discussed later. Then, we minimize $\barg_n$, and obtain a new estimate
$\theta_n$. For convex problems, we also propose to use averaging schemes, denoted by ``option 2'' and ``option 3'' in Alg.~\ref{alg:stochastic}. Averaging is a classical technique for improving
convergence rates in convex optimization~\citep{hazan3,nemirovski} for reasons
that are clear in the convergence proofs.
\begin{algorithm}[hbtp]
\caption{Stochastic Majorization-Minimization Scheme}\label{alg:stochastic}
\begin{algorithmic}[1]
\INPUT $\theta_0 \in \Theta$ (initial estimate); $N$ (number of iterations); $(w_n)_{n \geq 1}$, weights in $(0,1]$;
\STATE initialize the approximate surrogate: $\barg_0: \theta \mapsto \frac{\rho}{2}\|\theta-\theta_0\|_2^2$; $\bartheta_0=\theta_0$; $\hattheta_0=\theta_0$;
\FOR{ $n=1,\ldots,N$}
\STATE draw a training point $\x_n$; define $f_n: \theta \mapsto \ell(\x_n,\theta)$;
\STATE choose a surrogate function $g_n$ in $\S_{L,\rho}(f_n,\theta_{n-1})$;
\STATE update the approximate surrogate: $\barg_n = (1-w_n) \barg_{n-1} + w_n g_n$;
\STATE update the current estimate:
\begin{displaymath}
\theta_n \in \argmin_{\theta \in \Theta} \barg_n(\theta);
\end{displaymath}
\STATE for option 2, update the averaged iterate: $\hattheta_{n} \defin (1-w_{n+1})\hattheta_{n-1} + w_{n+1} \theta_{n}$;
\STATE for option 3, update the averaged iterate: $\bartheta_{n} \defin \frac{(1-w_{n+1})\bartheta_{n-1} + w_{n+1} \theta_{n}}{\sum_{k=1}^{n+1}w_k}$;
\ENDFOR
\OUTPUT {\bfseries (option 1):} $\theta_{N}$ (current estimate, no averaging);
\OUTPUT {\bfseries (option 2):} $\bartheta_{N}$ (first averaging scheme);
\OUTPUT {\bfseries (option 3):} $\hattheta_{N}$ (second averaging scheme).
\end{algorithmic}
\end{algorithm}
We remark that Algorithm~\ref{alg:stochastic} is only practical when
the functions $\barg_n$ can be parameterized with a small number of
variables, and when they can be easily minimized over $\Theta$. Concrete
examples are discussed in Section~\ref{sec:exp}.
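For orientation, the following minimal sketch (ours, not taken from the experiments; the choice of surrogate and the toy data stream are illustrative assumptions) instantiates Algorithm~\ref{alg:stochastic} with ``option 1'' for an expected least-squares cost on $\Theta = \mathbb{R}^p$. Each $g_n$ is the classical Lipschitz-gradient quadratic majorant of $f_n$ at $\theta_{n-1}$; since all surrogates are then quadratic, $\barg_n$ is parameterized by just two quantities and its minimizer is available in closed form.
\begin{verbatim}
import numpy as np

# Hedged sketch of Algorithm 1 (option 1) for l(x,theta) = 0.5*(<x,theta>-y)^2.
# Surrogate at theta_{n-1}:
#   g_n(theta) = f_n(theta_{n-1}) + <grad f_n(theta_{n-1}), theta - theta_{n-1}>
#                + (L/2)||theta - theta_{n-1}||^2,
# so bar{g}_n(theta) = (A_n/2)||theta||^2 - <b_n,theta> + cst is encoded by (A_n,b_n).
rng = np.random.default_rng(0)
p = 5
L = float(p)                       # ||x||^2 <= p for x in [-1,1]^p: g_n majorizes f_n
rho = L
theta_star = rng.normal(size=p)    # toy ground truth generating the stream
theta = np.zeros(p)                # theta_0
A, b = rho, rho * theta.copy()     # bar{g}_0 = (rho/2)||theta - theta_0||^2

for n in range(1, 10001):
    w = 1.0 / np.sqrt(n)                     # weights w_n in (0, 1]
    x = rng.uniform(-1.0, 1.0, size=p)       # draw a training point
    y = x @ theta_star + 0.01 * rng.normal()
    grad = (x @ theta - y) * x               # gradient of f_n at theta_{n-1}
    A = (1 - w) * A + w * L                  # bar{g}_n = (1-w_n) bar{g}_{n-1} + w_n g_n
    b = (1 - w) * b + w * (L * theta - grad)
    theta = b / A                            # theta_n minimizes bar{g}_n

print(np.linalg.norm(theta - theta_star))    # small estimation error
\end{verbatim}
With a convex constraint set $\Theta$, the last update would instead project $b_n/A_n$ onto $\Theta$, since $\barg_n$ is here an isotropic quadratic.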
Before that, we proceed with the convergence analysis.
\subsection{Convergence Analysis - Convex Case}\label{subsec:convex}
\input{convex.tex}
\subsection{Convergence Analysis - Strongly Convex Case}\label{subsec:strongconvex}
\input{strongconvex.tex}
\subsection{Convergence Analysis - Non-Convex Case}\label{subsec:nonconvex}
\input{nonconvex.tex} | {"config": "arxiv", "file": "1306.4650/stochastic.tex"} |
\subsection{The idea of proof}
We return to the proof of the estimates obtained in \ref{vns_solving}.
We will consider the matrices of the following kind
$$
A_* =
\left(
\begin{array}{ccccccccc}
a & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & a & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\hdotsfor{9} \\
0 & 0 & \cdots & a & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & a_1 & a_1 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & -a_1 & a_1 & \cdots & 0 & 0 \\
\hdotsfor{9} \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & a_k & a_k \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & -a_k & a_k
\end{array}
\right).
$$
Then
$$
A_*^{-1} =
\left(
\begin{array}{ccccccccc}
\cfrac 1 a & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & \cfrac 1 a & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\hdotsfor{9} \\
0 & 0 & \cdots & \cfrac 1 a & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & \cfrac 1 { 2 a_1 } & -\cfrac 1 { 2 a_1 } & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & \cfrac 1 { 2 a_1 } & \cfrac 1 { 2 a_1 } & \cdots & 0 & 0 \\
\hdotsfor{9} \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & \cfrac 1 { 2 a_k } & -\cfrac 1 { 2 a_k } \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & \cfrac 1 { 2 a_k } & \cfrac 1 { 2 a_k } \\
\end{array}
\right).
$$
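(The sign placement in the $2\times 2$ blocks of $A_*^{-1}$ is easy to check numerically; the following small sketch, not part of the original text, verifies it.)
\begin{verbatim}
import numpy as np

# For M = [[a, a], [-a, a]] one has M^{-1} = (1/(2a)) * [[1, -1], [1, 1]].
a = 3.0
M = np.array([[a, a], [-a, a]])
M_inv = np.array([[1.0, -1.0], [1.0, 1.0]]) / (2 * a)
print(np.allclose(M @ M_inv, np.eye(2)))  # True
\end{verbatim}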
The optimization problem (\ref{EqOpt1}) takes the form
$$
\begin{array}{c}
f_{n,[n/2]} = \frac 1 {2^{[n/2]}} \prod\limits_{i = 1}^{[n/2]} | x_i^2 + x_{[n/2]+i}^2 | \prod\limits_{i = 2[n/2]}^{n} | x_i | \rightarrow \max, \\
\left| \cfrac {x_1} a \right| \leq 1, \quad \cdots \quad \left| \cfrac {x_{n - 2k}} a \right| \leq 1, \\
\left| \cfrac {x_{n - 2k + 1}} {2a_1} + \cfrac {x_{n - 2k + 2}} {2a_1} \right| \leq 1, \quad \left| \cfrac {x_{n - 2k + 1}} {2a_1} - \cfrac {x_{n - 2k + 2}} {2a_1} \right| \leq 1, \\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
\left| \cfrac {x_{n - 1}} {2a_k} + \cfrac {x_{n}} {2a_k} \right| \leq 1, \quad \left| \cfrac {x_{n - 1}} {2a_k} - \cfrac {x_{n}} {2a_k} \right| \leq 1\\
\end{array}
$$
Making the substitution
\begin{equation}
\begin{array}{c}
x_i = a y_i, \qquad i = \overline{1, n - 2k} \\
x_{n - 2(k - i) - 1} = a_i y_{n - 2(k - i) - 1}, \quad x_{n - 2(k - i)} = a_i y_{n - 2(k - i)}, \qquad i = \overline{1, k} \\
\end{array}
\label{Exchange1}
\end{equation}
the problem takes the form
\begin{equation}
\begin{array}{c}
f_{n,[n/2]} = \frac 1 {2^{[n/2]}} \prod\limits_{i = 1}^{[n/2]} | x_i^2 + x_{[n/2]+i}^2 | \prod\limits_{i = 2[n/2]}^{n} | x_i | \rightarrow \max, \\
| y_1 | \leq 1, \quad \cdots \quad | y_{n - 2k} | \leq 1, \\
| y_{n - 2k + 1} + y_{n - 2k + 2} | \leq 2, \quad | y_{n - 2k + 1} - y_{n - 2k + 2} | \leq 2, \\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\
| y_{n - 1} + y_{n} | \leq 2, \quad | y_{n - 1} - y_{n} | \leq 2
\end{array}
\label{EqOpt2}
\end{equation}
In this problem the constraints do not depend on the original matrix $ A $. We will use this property later. | {"config": "arxiv", "file": "1804.05385/vns_proof_concept.tex"}
TITLE: Modification of the Ramsey number
QUESTION [0 upvotes]: Let us denote by $n=r(k_1,k_2,\ldots,k_s)$ the minimal number of vertices such that for every coloring of the edges of the complete graph $K_n$ by $s$ different colors, there is some color $1\le i\le s$ such that the $i$-th graph contains a $k_i$ clique. By the $i$-th graph we mean the graph on the given $n$ vertices and this edges of the complete graph that are colored $i$.
Let us denote $r_s = r(3,3,\ldots,3)$, with $3$ appearing $s$ times.
a) show that $r(k,s)$ is exactly the regular Ramsey number.
b) show that $r_s \le s(r_{s-1}-1)+ 2$.
c) show that $r_s \le s! \cdot e + 1$.
d) show that $r_3 \le 17$.
Now I don't get how to even start solving this; considering (c) for example, I don't see how we can obtain a factorial...
Any hints for the solution, or a better explanation of the definition will be really helpful, thank you!
REPLY [3 votes]: The definition seems clear enough, especially if you've seen Ramsey numbers before. The questions are all about the special case of $r_s$, which is just the smallest $n$ such that every coloring of the edges of $K_n$ with $s$ colors contains a monochromatic triangle, that is, a triangle whose edges are all the same color; e.g., $r_1=3$, $r_2=6$.
a) I hope this follows directly from the definitions.
b) Have you seen a proof of an upper bound for the "regular" Ramsey numbers? This is a lot like that. Pick a point $v$, consider the edges incident with $v$, and use the pigeonhole principle to show that enough of them are of the same color. [*]
c) This is misstated. The upper bound should be $\lfloor s!e\rfloor+1$ where $e=2.718\dots$ is the base of natural logarithms and $\lfloor x\rfloor$ is the floor function; e.g., if $s=2$ then $\lfloor s!e\rfloor+1=6$. Prove by induction on $s$, using b) and the fact that, for $s\ge1$,
$$\lfloor s!e\rfloor=\frac{s!}{0!}+\frac{s!}{1!}+\frac{s!}{2!}+\cdots+\frac{s!}{s!}.$$
d) Just set $s=3$ in c) and do the calculation.
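For c) and d) the bound is easy to evaluate; here is a hypothetical snippet (using the integer identity from c) to avoid floating point):

from math import factorial

for s in range(1, 5):
    # floor(s! * e) = sum_{i=0}^{s} s!/i!  for s >= 1
    bound = sum(factorial(s) // factorial(i) for i in range(s + 1)) + 1
    print(s, bound)  # -> (1, 3), (2, 6), (3, 17), (4, 66); s = 3 gives 17, which is d)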
[*] Consider an integer $s\gt1$, let $n=s(r_{s-1}-1)+2$, and suppose the edges of $K_n$ are colored with $s$ colors. Choose a vertex $v$ and consider the $n-1=s(r_{s-1}-1)+1$ edges incident with $v$. By the pigeonhole principle, at least $r_{s-1}$ of those edges have the same color, which I will call "red". That is, there is a set $W\subset V(K_n)$ of size $|W|=r_{s-1}$ such that each vertex in $W$ is joined to $v$ by a red edge. If two vertices $w_1,w_2\in W$ are joined to each other by a red edge, then $v,w_1,w_2$ are the vertices of a red triangle. On the other hand, if no two vertices in $W$ are joined by a red edge, then the edges of the complete graph on $W$ are colored with $s-1$ colors, and the existence of a monochromatic triangle follows from the definition of $r_{s-1}$. This proves that $r_s\le n=s(r_{s-1}-1)+2$. | {"set_name": "stack_exchange", "score": 0, "question_id": 1055435} |
TITLE: Morphisms between affine schemes
QUESTION [1 upvotes]: Suppose we have two affine schemes $X=\operatorname{Spec} A$ and $Y=\operatorname{Spec} B$ for commutative rings $A,B$. I encountered this statement in my homework that $\operatorname{Mor}(X,Y)=\operatorname{Hom}(B,A)$. I understand that a ring homomorphism between $B$ and $A$ would induce a continuous map between $X$ and $Y$. But isn't that only for the underlying topological space? Why we don't need to check it is indeed a morphism between the sheaves here?
More generally, if $X$ and $Y$ are general schemes, is it still enough to specify a continuous map between the underlying topological space to define a morphism between the two schemes?
REPLY [2 votes]: A map $B \longrightarrow A$ does not generally describe a homeomorphism $\operatorname{Spec} A \longrightarrow \operatorname{Spec} B$, but it does describe a continuous map. For instance, the map $\mathbb Z \longrightarrow \mathbb Z/2$ corresponds to the inclusion $\{(2)\} \subseteq \operatorname{Spec} \mathbb Z$, certainly not a homeomorphism.
Also, you're right that a map of schemes requires a map of the structure sheaves. This is not derivable from the map on the underlying space. Instead it comes from the map of rings. Indeed, the idea is that for a basic open set $D(f) \subseteq \operatorname{Spec} B$, its pullback under the map induced by $\phi: B \longrightarrow A$ is going to be $D(\phi(f))$. Recall that $\Gamma(D(f), \operatorname{Spec} B) = B_f$. We're looking for a map $\Gamma(D(f), \operatorname{Spec} B) \longrightarrow \Gamma(D(\phi(f)), \operatorname{Spec} A)$ to define our map of structure sheaves. We'll then take this to be the map $B_f \longrightarrow A_{\phi(f)}$, which we get from the universal property of localization applied to $\phi$. Now, I've only defined the map on a basis of open sets, but general properties of sheaves imply that this is enough. Also, you need to check that the induced maps on stalks are all local, but I won't show this. Here is a reference to the Stacks project for this fact.
Now I hope this construction explains why a ring homomorphism induces a reverse map of affine schemes, structure sheaf and all. But I did say that you cannot derive this map purely from the map on the underlying topological spaces, and I'd like to explain that further. Indeed, in defining a map of affine schemes it would suffice to just specify a continuous map if, for instance, the forgetful functor $F: \mathbf{AffineSchemes} \longrightarrow \mathbf{Top}$ were fully faithful. In fact, it is neither full nor faithful.
For fullness, consider maps $\operatorname{Spec} \mathbb Z \longrightarrow \operatorname{Spec} \mathbb Z$. There is only one ring homomorphism $\mathbb Z \longrightarrow \mathbb Z$, so the only map between the affine schemes $\operatorname{Spec} \mathbb Z \longrightarrow \operatorname{Spec} \mathbb Z$ is the identity. However, there are many continuous maps between the underlying spaces $F(\operatorname{Spec} \mathbb Z) \longrightarrow F(\operatorname{Spec} \mathbb Z)$. For instance, there are infinitely many constant maps, all of which are continuous. Hence, the forgetful functor $F$ cannot be full. In other words, not every map of the underlying topological space can arise from a map of affine schemes.
Now for faithfulness, consider $\mathbb Q(i)$. We have two automorphisms of this field, the identity and complex conjugation. They therefore define two automorphisms of the affine scheme $\operatorname{Spec} \mathbb Q(i)$. However, $\mathbb Q(i)$ is a field so the topological space of the affine scheme, $F(\operatorname{Spec} \mathbb Q(i))$, is a single point. There is only one continuous map from a point to itself, so $F$ cannot be faithful. That is, it is impossible in general to take a continuous map between affine schemes and derive a corresponding map on the structure sheaves as the same continuous map can come from many scheme maps.
These examples also show that the forgetful functor $\mathbf{Schemes} \longrightarrow \mathbf{Top}$ is neither full nor faithful, so you cannot define a map of schemes purely from a continuous map. I'd also like to point out that unlike affine schemes, a map between general schemes is not determined by a (reversed direction) ring homomorphism on global sections. Indeed, consider projective space over an algebraically closed field $k$. The global sections of $\mathbb P_k^n$ is $k$ for all $n$. There are many maps between projective spaces that are constant on global sections. For instance, linear automorphisms of $\mathbb P_k^n$ are defined by matrices in $PGL_n(k)$, and in general this group is nontrivial. | {"set_name": "stack_exchange", "score": 1, "question_id": 4090290} |
TITLE: Find and prove a formula for the order of an element $k\in\mathbb Z_n$
QUESTION [3 upvotes]: I've looked at the orders of all of the elements in the group $\mathbb Z_{12}$ and from that guessed a formula that might be right: $$\mid k\mid =\frac{n}{hcf(k,n)}$$ where $\mid k \mid$ denotes the order of the element $k$ and $hcf(k,n)$ denotes the highest common factor of $k$ and $n$. Is this formula correct? And if so, how would I prove it?
REPLY [1 votes]: You are right. You can also prove a more general result.
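Before the proof, a quick brute-force confirmation of your guessed formula on $\mathbb Z_{12}$ (a hypothetical snippet, not needed for the argument):

from math import gcd

n = 12
for k in range(n):
    # order of k in (Z_n, +): least m >= 1 with m*k divisible by n
    order = next(m for m in range(1, n + 1) if (m * k) % n == 0)
    assert order == n // gcd(k, n)
    print(k, order)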
Question: Let $G$ be a group, $g \in G$, $|g| = n < \infty$, and $m \in \mathbb{N}$. Then $|g^m| = \frac{n}{d}$ where $d =$ gcd$(m, n)$.
Proof: Write $n = db, m = dc$ for some $b, c \in \mathbb{N}$. First note that $(g^m)^b = g^{mb} = g^{dcb} = g^{nc} = (g^n)^c = 1$. Thus $|g^m|$ divides $b$. Let $|g^m| = k$. Then
$(g^m)^k = 1 \Rightarrow g^{mk} = 1 \Rightarrow n|mk \Rightarrow b|ck$. Since $d =$ gcd$(m, n)$, gcd$(b, c) = 1$. Thus $b|k$ and $|g^m| = b$. | {"set_name": "stack_exchange", "score": 3, "question_id": 1021526} |
\begin{document}
\maketitle
\begin{abstract}
In a recent paper, Chapuy conjectured that, for any positive integer $k$, the law for the fractions of total area covered by the $k$ Vorono\"\i\ cells
defined by $k$ points picked uniformly at random in the Brownian map of any fixed genus is the same law as that of a uniform $k$-division of
the unit interval. For $k=2$, i.e.\ with two points chosen uniformly at random, it means that the law for the ratio of the area of one of the two Vorono\"\i\ cells
by the total area of the map is uniform between $0$ and $1$. Here, by a direct computation of the desired law, we show that this latter conjecture for $k=2$
actually holds in the case of large planar (genus $0$) quadrangulations as well as for large general planar maps (i.e.\ maps whose faces have
arbitrary degrees). This corroborates Chapuy's conjecture in its simplest realizations.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
The asymptotics of the number of maps of some arbitrary given genus has been known for quite a while \cite{BC86} and involves some universal constants
$t_g$, whose value may be determined recursively. In its simplest form, the $t_g$-recurrence is a simple quadratic recursion for the $t_g$'s,
first established in the physics literature \cite{GM90,DS90,BK90} in the context of matrix integrals, then proven rigorously in the mathematical literature \cite{BGR08,GJ08,CC15}.
In a recent paper \cite{GC16}, Chapuy addressed the question of reproducing the $t_g$-recurrence in a purely combinatorial way.
By a series of clever arguments involving various bijections, he could from his analysis extract exact values for a number of moments of the law for
the area of Vorono\"\i\ cells defined by uniform points in the Brownian map of some arbitrary fixed genus.
In view of these results and other evidence, he was eventually led to formulate the following conjecture: \emph{for any integer $k\geq 2$, the proportions of the total area
covered by the $k$ Vorono\"\i\ cells defined by $k$ points picked uniformly at random in the Brownian map of any fixed genus have the same law as a uniform $k$-division of
the unit interval}. The simplest instance of this conjecture is for the planar case (genus $0$) and for $k=2$. It may be rephrased by
saying that, given two points picked uniformly at random in the planar Brownian map and the corresponding two Vorono\"\i\ cells, \emph{the law for the ratio of the area of one of the
cells by the total area of the map is uniform between $0$ and $1$}.
The aim of this paper is to show that this latter conjecture ($k=2$ and genus $0$) is actually true by computing the desired law for particular realizations of the planar Brownian map,
namely large random planar quadrangulations and large random general planar maps (i.e.\ maps whose faces have
arbitrary degrees). We will indeed show that, for planar quadrangulations with a fixed area $N$ ($=$ number of faces) and with two marked vertices
picked uniformly at random, the law for the ratio $\phi=n/N$ between the area $n$ of the Vorono\"\i\ cell around, say, the second vertex and the total area $N$ is, for large $N$
and finite $\phi$, the uniform law in the interval $[0,1]$. This property is derived by a \emph{direct computation of the law} itself from explicit discrete or asymptotic enumeration results.
The result is then trivially extended to Vorono\"\i\ cells in general planar maps of large area (measured in this case by the number of edges).
\section{Vorono\"\i\ cells in bi-pointed quadrangulations}
\label{sec:voronoi}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{SR.pdf}
\end{center}
\caption{The local rules of the Miermont bijection. These rules are the same as those of the Schaeffer bijection.}
\label{fig:SR}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{cells.pdf}
\end{center}
\caption{A bi-pointed planar quadrangulation and the associated i-l.2.f.m via the Miermont bijection. The two faces of the i-l.2.f.m delimit
two domains on the quadrangulation which define our two Vorono\"\i\ cells. Here, one of the cells has been filled in light-blue.}
\label{fig:cells}
\end{figure}
This paper deals exclusively with planar maps, which are connected graphs embedded on the sphere. Our starting objects are \emph{bi-pointed planar quadrangulations}, which
are planar maps all of whose faces have degree $4$, with two marked distinct vertices, distinguished as $v_1$ and $v_2$. For convenience, we will assume
here and throughout the paper that \emph{the graph distance $d(v_1,v_2)$ between $v_1$ and $v_2$ is even}. As discussed at the end of Section~\ref{sec:scaling}, this requirement is not crucial
but it will make our discussion slightly simpler. The Vorono\"\i\ cells associated to $v_1$ and $v_2$ consist, loosely speaking, of the sets of vertices which are closer to
one vertex than to the other. A precise definition of the Vorono\"\i\ cells in bi-pointed planar quadrangulations may be given upon coding these maps via the
well-known Miermont bijection\footnote{We use here a particular instance of the Miermont bijection for two ``sources" and with vanishing ``delays".} \cite{Miermont2009}. It goes as follows: we first assign to each vertex $v$ of the quadrangulation
its label $\ell(v)=\min(d(v,v_1),d(v,v_2))$ where $d(v,v')$ denotes the graph distance between two vertices $v$ and $v'$ in the quadrangulation. The label $\ell(v)$ is thus the distance from $v$ to
the closest marked vertex $v_1$ or $v_2$. The labels are non-negative integers
which satisfy $\ell(v)-\ell(v')=\pm 1$ if $v$ and $v'$ are adjacent vertices. Indeed, it is clear from their definition that labels between adjacent vertices can differ by at most $1$.
Moreover, a planar quadrangulation is bipartite so we may color its vertices in black and white in such a way that adjacent vertices carry different colors. Then if we choose
$v_1$ black, $v_2$ will also be black since $d(v_1,v_2)$ is even. Both $d(v,v_1)$ and $d(v,v_2)$ are then simultaneously even if $v$ is black, and thus so is $\ell(v)$.
Similarly, $d(v,v_1)$, $d(v,v_2)$ and thus $\ell(v)$ are odd if $v$ is white so that the parity of labels changes between adjacent neighbors. We conclude that labels between adjacent vertices necessarily differ by $\pm 1$.
The cyclic sequence of labels around a face is then necessarily of one of the two types displayed in Figure~\ref{fig:SR}, namely, if $\ell$ is the smallest label around the face, of
the form $\ell\to\ell+1\to\ell\to\ell+1$ or $\ell\to\ell+1\to\ell+2\to\ell+1$. Miermont's coding is similar to that of the well-known Schaeffer bijection \cite{SchPhD} and consists in drawing inside each face an edge connecting the two corners within
the face which are followed \emph{clockwise} by a corner with smaller label (here the label of a corner is that of the incident vertex). Removing all the original edges, we obtain a graph embedded
on the sphere whose vertices are {\it de facto} labelled by integers (see Figure~\ref{fig:cells}). It was shown by Miermont \cite{Miermont2009} that this graph spans all the original vertices of the quadrangulation but $v_1$ and $v_2$, is connected and defines a planar map \emph{with exactly $2$ faces} $f_1$ and $f_2$, where $v_1$ (which is not part of the two-face map) lies strictly inside $f_1$, and $v_2$ strictly inside $f_2$. As for the
vertex labels on this two-face map, they are easily shown to satisfy:
\begin{enumerate}[$\langle \hbox{a}_1\rangle$]
\item{Labels on adjacent vertices differ by $0$ or $\pm1$.}
\item{The minimum label for the set of vertices incident to $f_1$ is $1$.}
\item{The minimum label for the set of vertices incident to $f_2$ is $1$.}
\end{enumerate}
In view of this result, we define a planar \emph{iso-labelled two-face map} (i-l.2.f.m) as a planar map with exactly two faces, \emph{distinguished} as $f_1$ and $f_2$, and whose vertices carry integer labels
satisfying the constraints $\langle \hbox{a}_1\rangle$-$\langle \hbox{a}_3\rangle$ above. Miermont's result is that the construction presented above actually provides a \emph{bijection between bi-pointed planar quadrangulations
whose two distinct and distinguished marked vertices are at some even graph distance
from each other and planar i-l.2.f.m}.
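Since the labels are plain graph distances, they are straightforward to compute in practice. The following sketch (a hypothetical Python helper, not part of the construction itself) assigns $\ell(v)=\min(d(v,v_1),d(v,v_2))$ by two breadth-first searches on the adjacency lists of the quadrangulation.
\begin{verbatim}
from collections import deque

def labels(adj, v1, v2):
    # adj: dict mapping each vertex to the list of its neighbours
    def bfs(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist
    d1, d2 = bfs(v1), bfs(v2)
    return {v: min(d1[v], d2[v]) for v in adj}
\end{verbatim}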
Moreover, the Miermont bijection guarantees that (identifying the vertices $v$ of the i-l.2.f.m with their pre-image in the associated quadrangulation):
\begin{itemize}
\item{The label $\ell(v)$ of a vertex $v$ in an i-l.2.f.m corresponds to the minimum distance $\min(d(v,v_1),d(v,v_2))$ from $v$ to the marked vertices
$v_1$ and $v_2$ in the associated bi-pointed quadrangulation.}
\item{All the vertices incident to the first face $f_1$ (respectively the second face $f_2$) in the i-l.2.f.m are closer to $v_1$ than to $v_2$ (respectively closer to $v_2$ than to $v_1$)
or at the same distance from both vertices in the associated quadrangulation.}
\item{The minimum label $s$ among vertices incident to both $f_1$ and $f_2$ and the distance $d(v_1,v_2)$ between the marked vertices in the associated quadrangulation
are related by $d(v_1,v_2)=2s$.}
\end{itemize}
Clearly, all vertices incident to both $f_1$ and $f_2$ are at the same distance from both $v_1$ and $v_2$. Note however that the reverse is not true and that
vertices at equal distance from both $v_1$ and $v_2$ might very well lie strictly inside a given face.
Nevertheless, the coding of bi-pointed quadrangulations by i-l.2.f.m provides us
with a well defined notion of Vorono\"\i\ cells. Indeed, since it has exactly two faces, the i-l.2.f.m is made of a simple closed loop separating the two faces, completed by
(possibly empty) subtrees
attached on both sides of each of the loop vertices (see Figure~\ref{fig:cells}). Drawing the quadrangulation and its associated i-l.2.f.m on the same picture,
we \emph{define} the two Vorono\"\i\ cells of a bi-pointed quadrangulation
as the two domains obtained by cutting along the loop of the associated i-l.2.f.m. Clearly, each Vorono\"\i\ cell contains only vertices closer to one of the marked vertices than
to the other (or possibly at the same distance). As just mentioned, vertices at the border between the two cells are necessarily at the same distance from $v_1$ and $v_2$.
Note also that all the edges of the quadrangulation lie strictly in one cell or the other. This is not the case for all the faces of the quadrangulation whose situation
is slightly more subtle. Clearly, these faces are in bijection with the edges of the i-l.2.f.m. The latter come in three species:
\begin{itemize}
\item{those lying strictly inside the first face of the i-l.2.f.m, in which case the associated face in the quadrangulation lies strictly inside the first cell;}
\item{those lying strictly inside the second face of the i-l.2.f.m, in which case the associated face in the quadrangulation lies strictly inside the second cell;}
\item{those belonging to the loop separating the two faces of the i-l.2.f.m, in which case the associated face in the quadrangulation is split in two by the cutting and shared
by the two cells.}
\end{itemize}
If we now want to measure the area of the Vorono\"\i\ cells, i.e.\ the number of faces which they contain, several prescriptions may be taken to properly account for the shared faces.
The simplest one is to count them as half-faces, hence contributing a factor $1/2$ to the total area of each of the cells. For generating functions, this prescription amounts to assign a weight
$g$ per face strictly within the first Vorono\"\i\ cell, a weight $h$ per face strictly within the second cell and a weight $\sqrt{g\, h}$ per face shared by the two cells.
A different prescription would consist in attributing
each shared face to one cell or the other randomly with probability $1/2$ and averaging over all possible such attributions. In terms of generating functions,
this would amount to now give a weight $(g+h)/2$ to the faces shared by the two cells. As discussed below, the precise prescription for shared faces turns out to be irrelevant
in the limit of large quadrangulations and for large Vorono\"\i\ cells. In particular, both rules above lead to the
same asymptotic law for the dispatching of area between the two cells.
In this paper, we decide to adopt the first prescription and we define accordingly $F(g,h)$ as the generating function of planar bi-pointed quadrangulations
with a weight $g$ per face strictly within the first Vorono\"\i\ cell, a weight $h$ per face strictly within the second cell and a weight $\sqrt{g\, h}$ per face shared by the two cells.
Alternatively, $F(g,h)$ is the generating function of i-l.2.f.m with a weight $g$ per
edge lying strictly in the first face, a weight $h$ per edge lying strictly in the second face and a weight $\sqrt{g\, h}$ per edge incident to both faces.
Our aim will now be to evaluate $F(g,h)$.
\section{Generating function for iso-labelled two-face maps}
\subsection{Connection with the generating function for labelled chains}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{lc.pdf}
\end{center}
\caption{A schematic picture of a labelled chain (l.c) contributing to $X_{s,t}(g,h)$ for $s,t>0$. The labels of the vertices in the blue domain $B$ have to be $\geq 1-s$ and
those in the gray domain $G$ have to be $\geq 1-t$. The spine vertices, i.e.\ the vertices of $B\cap G$ are required to have non-negative labels,
and the spine endpoints $w_1$ and $w_2$ are labelled $0$. The edge weights are: $g$ if they have at least one endpoint in $B\setminus(B\cap G)$, $h$ if they have at least one endpoint in
$G\setminus(B\cap G)$ and $\sqrt{g\, h}$ if they have their two endpoints in $B\cap G$.}
\label{fig:lc}
\end{figure}
In order to compute $F(g,h)$, let us start by introducing what we call \emph{labelled chains} (l.c), which are planar labelled one-face maps, i.e.\ trees whose vertices carry integer labels
satisfying $\langle \hbox{a}_1\rangle$ and with two distinct (and distinguished)
marked vertices $w_1$ and $w_2$. Such maps are made of a \emph{spine} which is the unique shortest path in the map joining the two vertices,
naturally oriented from $w_1$ to $w_2$, and of a number of labelled subtrees attached to the spine vertices. All internal (i.e.\ other than $w_1$ and $w_2$)
spine vertices have two (possibly empty) attached labelled subtrees, one on the left and one on the right. As for $w_1$ and $w_2$, they have a single (possibly empty)
such attached labelled subtree.
For $s$ and $t$ two positive integers, we denote by $X_{s,t}\equiv X_{s,t}(g,h)$ the generating function of planar l.c satisfying (see Figure~\ref{fig:lc}):
\begin{enumerate}[$\langle \hbox{b}_1\rangle$]
\item{$w_1$ and $w_2$ have label $0$. The minimal label for the set of spine vertices is $0$. The edges of the spine receive a weight $\sqrt{g\, h}$.}
\item{The minimal label for the set of vertices belonging to the subtree attached to $w_1$ or to any of the subtrees attached to the left of an internal spine vertex is \emph{larger than or equal to}
$1-s$.
The edges of these subtrees receive a weight $g$.}
\item{The minimal label for the set of vertices belonging to the subtree attached to $w_2$ or to any of the subtrees attached to the right of an internal spine vertex is \emph{larger than or equal to} $1-t$.
The edges of these subtrees receive a weight $h$.}
\end{enumerate}
For convenience, we incorporate in $X_{s,t}$ a first additional term $1$ (which may be viewed as the contribution of some ``empty" l.c). For $s,t>0$, we also set $X_{s,0}=X_{0,t}=X_{0,0}=1$.
We now return to the generating function $F(g,h)$ of planar i-l.2.f.m. Let us show that $F(g,h)$ is related to $X_{s,t}$ by the relation:
\begin{equation}
F(g,h)=\sum_{s\geq 1} \Delta_s\Delta_t \log(X_{s,t}(g,h))\Big\vert_{t=s} = \sum_{s\geq 1} \log\left(\frac{X_{s,s}(g,h)X_{s-1,s-1}(g,h)}{X_{s-1,s}(g,h)X_{s,s-1}(g,h)}\right)
\label{eq:FfromX}
\end{equation}
(here $\Delta_s$ denotes the finite difference operator $\Delta_s f(s) \equiv f(s)-f(s-1)$).
As already mentioned, a planar i-l.2.f.m is made of a simple closed loop separating the two faces and with (possibly empty) labelled subtrees attached on both sides of the loop vertices.
The loop may be oriented so as to have the face $f_1$ on its left.
Calling $s$ the minimum label for vertices along the loop, with $s\geq 1$, we may shift all labels by $-s$ and use shifted labels instead of the original ones.
With these shifted labels, the planar i-l.2.f.m enumerated by $F(g,h)$ may alternatively be characterized as follows: there exist positive integers $s$ and $t$ such that:
\begin{enumerate}[$\langle \hbox{c}_1\rangle$]
\item{The minimal label for the set of loop vertices is $0$. The edges of the loop receive a weight $\sqrt{g\, h}$.}
\item{The minimal label for the set of vertices belonging to the subtrees attached to the left of loop vertices (including the loop vertices themselves) is \emph{equal to $1-s$}.
The edges of these subtrees receive a weight $g$.}
\item{The minimal label for the set of vertices belonging to the subtrees attached to the right of loop vertices (including the loop vertices themselves) is \emph{equal to $1-t$}.
The edges of these subtrees receive a weight $h$.}
\item{$s=t$}.
\end{enumerate}
The distinction between $s$ and $t$ might seem somewhat artificial in view of $\langle \hbox{c}_4\rangle$ but it was introduced so that $\langle \hbox{c}_2\rangle$ and $\langle \hbox{c}_3\rangle$
actually mimic the (slightly weaker) constraints $\langle \hbox{b}_2\rangle$ and $\langle \hbox{b}_3\rangle$.
Returning now to a l.c enumerated by $X_{s,t}(g,h)$, it may, upon cutting the chain at all the internal spine vertices with label $0$, be viewed as a
(possibly empty) \emph{sequence} of an arbitrary number $n\geq 0$ of more restricted l.c whose internal spine vertices all have strictly positive labels,
enumerated, say, by $Z_{s,t}=Z_{s,t}(g,h)$ (with the same edge weights as for $X_{s,t}$). This leads to the simple relation $X_{s,t}=1/ (1-Z_{s,t})$. Similarly, a
\emph{cyclic sequence} of an arbitrary number $n\geq 1$ of these more restricted l.c is enumerated by $\log(1/ (1-Z_{s,t}))=\log(X_{s,t})$.
For such a cyclic sequence, the concatenation of the spines now forms an oriented loop and $\log(X_{s,t})$ therefore enumerates planar labelled two-face maps with the same characterizations
as $\langle \hbox{c}_1\rangle$-$\langle \hbox{c}_3\rangle$ above except that the minimum labels on both sides of the loop are now larger than or equal to $1-s$ or $1-t$,
instead of being exactly equal to $1-s$ and $1-t$. The discrepancy is easily corrected by applying finite difference operators\footnote{Indeed, removing from the set of maps with a minimum label $\geq 1-s$ in $f_1$ those maps with a minimum label $\geq 1-(s-1)=2-s$ amounts
to keeping those maps with minimum label in $f_1$ exactly equal to $1-s$.}, namely by taking instead of $\log(X_{s,t})$ the function $ \Delta_s\Delta_t \log(X_{s,t})$.
The last requirement $\langle \hbox{c}_4\rangle$ is then trivially enforced by setting $t=s$ in this latter generating function and the summation over the arbitrary value of $s\geq 1$ leads directly to the announced expression \eqref{eq:FfromX}.
The reader will easily check that, as customary in map enumeration problems, the generating function $F(g,h)$ incorporates a symmetry factor $1/k$ for those i-l.2.f.m which display a $k$-fold symmetry\footnote{Maps with
two faces may display a $k$-fold symmetry by rotating them around two ``poles" placed at the centers of the two faces.}. In this paper, we will eventually discuss
results for maps with a large number of edges for which $k$-fold symmetric configurations are negligible.
\subsection{Recursion relations and known expressions}
Our problem of estimating $F(g,h)$ therefore translates into that of evaluating $X_{s,t}(g,h)$. To this end, we shall need to introduce yet another family of maps,
which are planar one-face labelled maps (i.e.\ trees whose vertices carry integer labels
satisfying $\langle \hbox{a}_1\rangle$) which are rooted (i.e.\ have a marked oriented edge), whose root vertex (origin of the root edge) has label $0$ and whose minimal label is \emph{larger than or equal to} $1-s$,
with $s\geq 1$. We shall denote by $R_s(g)$ ($s\geq 1$) their generating function with a weight $g$ per edge and again, a first term $1$ added for convenience.
This new generating function satisfies the following relation, easily derived by looking at the two subtrees obtained by removing the root edge:
\begin{equation*}
R_s(g)=1+g R_s(g)\left(R_{s-1}(g)+R_s(g)+R_{s+1}(g)\right)
\end{equation*}
for $s\geq 1$, with the convention $R_0(g)=0$. This ``recursion relation'' determines $R_s(g)$ for all $s\geq 1$, order by order in $g$.
Its solution was obtained in \cite{GEOD} and reads:
\begin{equation}
R_s(g)=\frac{1+4x+x^2}{1+x+x^2}\frac{(1-x^s)(1-x^{s+3})}{(1-x^{s+1})(1-x^{s+2})}
\quad \hbox{for}\ g= x\frac{1+x+x^2}{(1+4x+x^2)^2}\ .
\label{eq:exactRs}
\end{equation}
Here $x$ is a real in the range $0\leq x\leq 1$, so that $g$ is a real in the range $0\leq g\leq 1/12$.
Note that the above generating function has a singularity for $g\to 1/12$ even though the above expression has a well-defined limit for $x\to 1$.
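As a sanity check, the closed form \eqref{eq:exactRs} can be tested against the recursion relation with a few lines of computer algebra; the following \texttt{sympy} sketch (a verification aid using only the formulas displayed above, not part of the derivation itself) confirms it for the first few values of $s$:
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x', positive=True)
g = x*(1 + x + x**2)/(1 + 4*x + x**2)**2

def R(s):
    # closed form (eq:exactRs); note that R(0) = 0, matching the convention
    return ((1 + 4*x + x**2)/(1 + x + x**2)
            * (1 - x**s)*(1 - x**(s + 3))
            / ((1 - x**(s + 1))*(1 - x**(s + 2))))

for s in range(1, 6):
    rec = 1 + g*R(s)*(R(s - 1) + R(s) + R(s + 1))
    print(s, sp.simplify(R(s) - rec))   # expected: 0 for each s
\end{verbatim}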
Knowing $R_s(g)$, we may easily write down a similar recursion relation for $X_{s,t}(g,h)$, obtained by removing the first edge of the spine: the end
point of this edge either has label $0$ and the remainder of the spine is again a l.c enumerated by $X_{s,t}(g,h)$ or it has label $1$
and the remainder of the chain may now be decomposed, by removing the first spine edge leading back to label $0$, into two l.c enumerated by $X_{s+1,t+1}(g,h)$
and $X_{s,t}(g,h)$ respectively. Extra factors $\sqrt{g\, h}$, $R_s(g)$, $R_t(h)$, $R_{s+1}(g)$ and $R_{t+1}(h)$ are needed to account for the removed edges and
their attached subtrees (those which are not part of the sub-chains), so that we eventually end up with the relation (see \cite{BG08} for a detailed derivation of this relation
when $h=g$):
\begin{equation}
\hskip -.35cm X_{s,t}(g,h)=1+\sqrt{g\, h}\, R_s(g)R_t(h)X_{s,t}(g,h)\left(\!1\!+\sqrt{g\, h}\, R_{s+1}(g)R_{t+1}(h)X_{s+1,t+1}(g,h)\!\right)
\label{eq:Xstrec}
\end{equation}
valid for non-negative $s$ and $t$. This relation again determines $X_{s,t}(g,h)$ for all $s,t\geq 1$ order by order\footnote{By this, we mean that
$X_{s,t}(\rho g,\rho h)$ is determined order by order in $\rho$.} in $g$ and $h$.
Finding an explicit expression for $X_{s,t}(g,h)$ for arbitrary $g$ and $h$ is a challenging issue which we have not been able to solve. As it will appear, this lack
of explicit formula is not an unsurmountable obstacle in our quest. Indeed, only the \emph{singularity} of $F(g,h)$ for $g$ and
$h$ tending to their common critical value $1/12$ will eventually matter to enumerate large maps.
Clearly, the absence of an explicit expression for $X_{s,t}(g,h)$ will however make our further discussion much more involved.
Still, we may, as a guideline, rely on the following important result. For $g=h$, an explicit expression for $X_{s,t}(g,h)$ was obtained in \cite{BG08}, namely, for $s,t\geq 0$:
\begin{equation*}
X_{s,t}(g,g)=\frac{(1-x^3)(1-x^{s+1})(1-x^{t+1})(1-x^{s+t+3})}{(1-x)(1-x^{s+3})(1-x^{t+3})(1-x^{s+t+1})}
\quad \hbox{where}\ g= x\frac{1+x+x^2}{(1+4x+x^2)^2}\ .
\end{equation*}
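Here as well, the closed form is easily tested against the recursion \eqref{eq:Xstrec} at $h=g$ (where the spine weight $\sqrt{g\,h}$ reduces to $g$); a \texttt{sympy} sketch of the same kind as above:
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x', positive=True)
g = x*(1 + x + x**2)/(1 + 4*x + x**2)**2

def R(s):
    return ((1 + 4*x + x**2)/(1 + x + x**2)
            * (1 - x**s)*(1 - x**(s + 3))
            / ((1 - x**(s + 1))*(1 - x**(s + 2))))

def X(s, t):
    # closed form of X_{s,t}(g,g) quoted above
    return ((1 - x**3)*(1 - x**(s + 1))*(1 - x**(t + 1))*(1 - x**(s + t + 3))
            / ((1 - x)*(1 - x**(s + 3))*(1 - x**(t + 3))*(1 - x**(s + t + 1))))

for s in range(3):
    for t in range(3):
        rec = 1 + g*R(s)*R(t)*X(s, t)*(1 + g*R(s + 1)*R(t + 1)*X(s + 1, t + 1))
        print(s, t, sp.simplify(X(s, t) - rec))   # expected: 0 in all cases
\end{verbatim}
Note that the $s=0$ or $t=0$ cases are trivially consistent since $R_0(g)=0$ forces the right-hand side to $1$.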
\subsection{Local and scaling limits for the generating functions}
Chapuy's conjecture is for quadrangulations with a \emph{fixed number} $N$ of faces, in the limit of large $N$. Via the Miermont bijection, this corresponds to i-l.2.f.m with
a large fixed number $N$ of edges. Proving the conjecture therefore requires an estimate of the coefficient $[g^{N-\frac{p}{2}}h^\frac{p}{2}]F(g,h)$ (recall that,
due to the weight $\sqrt{g\, h}$ per edge of the loop in i-l.2.f.m, $F(g,h)$ has half integer powers in $g$ and $h$), corresponding to a second Vorono\"\i\ cell of area
$n=p/2$, in the limit of large $N$ and for $\phi=n/N$ of order $1$.
Such an estimate is entirely encoded in the singularity of the generating function $F(g,h)$ when the edge weights $g$ and $h$ tend simultaneously to their common singular
value $1/12$. This leads us to set
\begin{equation}
g=\frac{1}{12}\left(1-\frac{a^4}{36}\epsilon^4\right)\ , \qquad h=\frac{1}{12}\left(1-\frac{b^4}{36}\epsilon^4\right)
\label{eq:scalgh}
\end{equation}
(with a factor $1/36$ and a fourth power in $\epsilon$ for future convenience) and to look at the small $\epsilon$ expansion of $F(g,h)$.
Before we discuss the case of $F(g,h)$ itself, let us return for a while to the quantities $R_s(g)$ and $X_{s,t}(g,g)$ for
which we have explicit expressions. The small $\epsilon$ expansion for $R_s(g)$ may be obtained from \eqref{eq:exactRs} upon
first inverting the relation between $g$ and $x$ so as to get the expansion:
\begin{equation*}
x=1-a\, \epsilon +\frac{a^2 \epsilon ^2}{2}-\frac{5\, a^3 \epsilon ^3}{24}+\frac{a^4 \epsilon ^4}{12}-\frac{13\, a^5 \epsilon ^5}{384}+\frac{a^6 \epsilon ^6}{72}-\frac{157\, a^7 \epsilon ^7}{27648}+\frac{a^8 \epsilon
^8}{432}+O(\epsilon ^{9})\ .
\end{equation*}
Inserting this expansion in \eqref{eq:exactRs}, we easily get, for any \emph{finite} $s$:
\begin{equation*}
\begin{split}
R_s(g)& =2-\frac{4}{(s+1) (s+2)}
-\frac{s (s+3) \left(3 s^2+9 s-2\right)\, a^4 \epsilon ^4}{180 (s+1) (s+2)}\\
&\qquad +\frac{s (s+3) \left(5 s^4+30 s^3+59 s^2+42 s+4\right)\, a^6 \epsilon ^6}{7560 (s+1) (s+2)}
+O(\epsilon ^8)\ .\\
\end{split}
\end{equation*}
The most singular term of $R_s(g)$ corresponds to the term of order $\epsilon^6=(216/a^6) (1-12g)^{3/2}$ (the constant term and the term
proportional to $\epsilon^4= (36/a^4) (1-12g)$ being
regular) and we immediately deduce the large $N$ estimate:
\begin{equation*}
[g^N]R_s(g) \underset{N \to \infty}{\sim} \frac{3}{4} \frac{12^{N}}{\sqrt{\pi} N^{5/2}} \frac{s (s+3) \left(5 s^4+30 s^3+59 s^2+42 s+4\right)}{35 (s+1) (s+2)}\ .
\end{equation*}
The above $\epsilon$ expansion corresponds to what is called the \emph{local limit} where $s$ is kept finite when $g$ tends to $1/12$ in $R_s(g)$ (or equivalently when $N\to \infty$
in $[g^N]R_s(g)$).
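Both the inversion of $g(x)$ and the local limit expansion lend themselves to a direct check by series manipulation, e.g.\ with \texttt{sympy} (our verification sketch; the expected coefficients for the particular value $s=2$ are obtained by evaluating the general formula displayed above):
\begin{verbatim}
import sympy as sp

x, a, eps = sp.symbols('x a epsilon', positive=True)
g_of_x = x*(1 + x + x**2)/(1 + 4*x + x**2)**2

# the series x(epsilon) quoted above
xser = (1 - a*eps + a**2*eps**2/2 - 5*a**3*eps**3/24 + a**4*eps**4/12
        - 13*a**5*eps**5/384 + a**6*eps**6/72 - 157*a**7*eps**7/27648
        + a**8*eps**8/432)

# it should invert g = (1/12)(1 - a^4 eps^4/36) to the displayed order
print(sp.series(g_of_x.subs(x, xser), eps, 0, 9))
# expected: 1/12 - a**4*epsilon**4/432 + O(epsilon**9)

# local limit expansion of R_s(g) for s = 2
R2 = ((1 + 4*x + x**2)/(1 + x + x**2)
      * (1 - x**2)*(1 - x**5)/((1 - x**3)*(1 - x**4)))
print(sp.series(R2.subs(x, xser), eps, 0, 7))
# expected: 5/3 - 7*a**4*epsilon**4/54 + 23*a**6*epsilon**6/324 + O(epsilon**7)
\end{verbatim}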
Another important limit corresponds to the so-called \emph{scaling limit} where we let $s$ tend to infinity when $\epsilon\to 0$ by setting
\begin{equation*}
\hskip 5.cm s=\frac{S}{\epsilon}
\end{equation*}
with $S$ of order $1$.
Inserting this value in the local limit expansion above, we now get at leading order the expansion
\begin{equation*}
R_{\left\lfloor S/\epsilon\right\rfloor}(g) =2-\frac{4}{S^2}\epsilon^2
-\frac{a^4\, S^2}{60}\epsilon^2
+\frac{a^6\, S^4}{1512}\epsilon^2
+\cdots
\end{equation*}
where all the terms but the first term $2$ now contribute at \emph{the same order} $\epsilon^2$.
This is also the case for \emph{all the higher order terms} of the local limit expansion (which we did not display) and
a proper re-summation, incorporating all these higher order terms, is thus required. Again, it is easily deduced directly from
the exact expression \eqref{eq:exactRs} and reads:
\begin{equation}
R_{\left\lfloor S/\epsilon\right\rfloor}(g) =2+r(S,a)\ \epsilon^2+O(\epsilon^3)\ , \qquad r(S,a)=-\frac{a^2 \left(1+10 e^{-a S}+e^{-2 a S}\right)}{3 \left(1-e^{-a S}\right)^2}\ .
\label{eq:expRg}
\end{equation}
At this stage, it is interesting to note that the successive terms of the local limit expansion, at leading order in $\epsilon$ for $s=S/\epsilon$, correspond
precisely to the small $S$ expansion of the scaling function $r(S,a)$, namely:
\begin{equation*}
r(S,a)=-\frac{4}{S^2}-\frac{a^4\, S^2}{60}+\frac{a^6 S^4}{1512}+O(S^6)\ .
\end{equation*}
In other words, we read from the small $S$ expansion of the scaling function the leading large $s$ behavior of the successive coefficients of
the local limit expansion of the associated generating function.
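This correspondence is straightforward to check by computer algebra, e.g.\ via the small $S$ expansion of the scaling function itself (a one-line \texttt{sympy} verification of ours):
\begin{verbatim}
import sympy as sp

S, a = sp.symbols('S a', positive=True)
r = -a**2*(1 + 10*sp.exp(-a*S) + sp.exp(-2*a*S))/(3*(1 - sp.exp(-a*S))**2)
print(sp.series(r, S, 0, 6))
# expected: -4/S**2 - a**4*S**2/60 + a**6*S**4/1512 + O(S**6)
\end{verbatim}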
Similarly, from the exact expression of $X_{s,t}(g,g)$,
we have the local limit expansion
\begin{equation*}
\begin{split}
\hskip -1.2cm X_{s,t}(g,g) &=3-\frac{6 \left(3+4(s+t)+s^2+ s t +t^2\right)}{(s+3) (t+3) (s+t+1)}-\frac{s (s+1) t (t+1) (s+t+3) (s+t+4) a^4 \, \epsilon ^4}{40 (s+3) (t+3) (s+t+1)}\\
&+ \frac{s (s+1) t (t+1) (s+t+3) (s+t+4) \left(5 (s^2+st+t^2)+20 (s+t)+29\right) a^6 \, \epsilon ^6}{5040 (s+3) (t+3) (s+t+1)}+O(\epsilon ^8)\\
\end{split}
\end{equation*}
and thus
\begin{equation}
\begin{split}
\Delta_s\Delta_t \log(X_{s,t}(g,g))\Big\vert_{t=s}&=\log \left(\frac{s^2 (2 s+3)}{(s+1)^2 (2 s-1)}\right)-\frac{(2 s+1)a^4 \, \epsilon ^4}{60}
\\
&\ \ +\frac{(2 s+1) \left(10 s^2+10 s+1\right) a^6 \, \epsilon ^6}{1890}+O(\epsilon ^8)\ .\\
\end{split}
\label{eq:logloc}
\end{equation}
Alternatively, we also have the corresponding scaling limit counterparts
\begin{equation}
\begin{split}
& X_{\left\lfloor S/\epsilon\right\rfloor,\left\lfloor T/\epsilon\right\rfloor}(g,g) =3+x(S,T,a)\ \epsilon+O(\epsilon^2)\ , \\
& \hskip 2.cm x(S,T,a) =-3\, a-\frac{6 a \left(e^{-a S}+e^{-a T}-3 e^{-a (S+T)}+e^{-2a (S+T)}\right)}{\left(1-e^{-a S}\right) \left(1-e^{-a T}\right) \left(1-e^{-a (S+T)}\right)}\\
\end{split}
\label{eq:xa}
\end{equation}
and
\begin{equation}
\begin{split}
\Delta_s\Delta_t \log\left(X_{\left\lfloor S/\epsilon\right\rfloor,\left\lfloor T/\epsilon\right\rfloor}(g,g) \right) \Big\vert_{T=S}&=
\epsilon^2 \partial_S\partial_T \log\left(3+x (S,T,a)\, \epsilon \right)\Big\vert_{T=S}+O(\epsilon^4)\\ &=
\epsilon^3\, \frac{1}{3}\ \partial_S\partial_T x(S,T,a)\Big\vert_{T=S} +O(\epsilon^4)\\
&=\epsilon^3\ \frac{2 \, a^3\, e^{-2 a S} \left(1+e^{-2 a S}\right)}{\left(1-e^{-2 a S}\right)^3} +O(\epsilon^4)\\
&= \epsilon^3\, \left(\frac{1}{2\, S^3}-\frac{a^4 S}{30}+\frac{2 a^6 S^3}{189}+O(S^5)\right)+O(\epsilon^4)\ .\\
\end{split}
\label{eq:logscal}
\end{equation}
Again, we directly read on the small $S$ expansion above the large $s$ leading behaviors of the coefficients
in the local limit expansion \eqref{eq:logloc}. In particular, we have the large $s$ behavior:
\begin{equation*}
\log \left(\frac{s^2 (2 s+3)}{(s+1)^2 (2 s-1)}\right)= \frac{1}{2\, s^3}+O\!\left(\frac{1}{s^4}\right)\ .
\end{equation*}
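Both statements can again be confirmed by a short computation (a \texttt{sympy} sketch using only the expressions displayed above; the first check is numerical for simplicity):
\begin{verbatim}
import sympy as sp

S, T, a, s, w = sp.symbols('S T a s w', positive=True)

# scaling function x(S,T,a) of (eq:xa)
num = sp.exp(-a*S) + sp.exp(-a*T) - 3*sp.exp(-a*(S + T)) + sp.exp(-2*a*(S + T))
den = (1 - sp.exp(-a*S))*(1 - sp.exp(-a*T))*(1 - sp.exp(-a*(S + T)))
xST = -3*a - 6*a*num/den

lhs = sp.diff(xST, S, T).subs(T, S)/3
rhs = 2*a**3*sp.exp(-2*a*S)*(1 + sp.exp(-2*a*S))/(1 - sp.exp(-2*a*S))**3
print(sp.N((lhs - rhs).subs({a: 1.3, S: 0.7})))  # expected: ~0 (round-off)

# large s behavior of the epsilon^0 coefficient of (eq:logloc)
expr = sp.log(s**2*(2*s + 3)/((s + 1)**2*(2*s - 1)))
print(sp.series(expr.subs(s, 1/w), w, 0, 4))     # expected: w**3/2 + O(w**4)
\end{verbatim}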
\subsection{Getting singularities from scaling functions}
We will now discuss how the connection between the local limit and the scaling limit allows us to estimate the dominant singularity of
generating functions of the type of \eqref{eq:FfromX} \emph{from the knowledge of scaling functions only}. As a starter, let us
suppose that we wish to estimate the leading singularity of the
quantity
\begin{equation}
F(g,g)=\sum_{s\geq 1} \Delta_s\Delta_t \log(X_{s,t}(g,g))\Big\vert_{t=s}
\label{eq:Fgg}
\end{equation}
from the knowledge of $x(S,T,a)$ only.
The existence of the scaling limit allows us to write, for any fixed $S_0$:
\begin{equation*}
\sum_{s\geq \left\lfloor S_0/\epsilon\right\rfloor} \Delta_s\Delta_t \log(X_{s,t}(g,g))\Big\vert_{t=s}= \epsilon^2 \int_{S_0}^\infty dS\, \frac{1}{3} \partial_S\partial_T x(S,T,a)\Big\vert_{T=S}+O(\epsilon^3)\ .
\end{equation*}
To estimate the missing part in the sum \eqref{eq:Fgg}, corresponding to values of $s$ between
$1$ and $\left\lfloor S_0/\epsilon\right\rfloor -1$, we recall that the local limit expansion \eqref{eq:logloc}
and its scaling limit counterpart \eqref{eq:logscal}
are intimately related in the sense that we directly read on the small $S$ expansion \eqref{eq:logscal} the large $s$ leading behaviors of the coefficients
in the local limit expansion \eqref{eq:logloc}. More precisely, for $k>0$, the coefficient of $\epsilon^k$ in \eqref{eq:logloc} is a rational function of $s$
which behaves at large $s$ like $A_{k-3} s^{k-3}$ where $A_{k-3}$ is the coefficient of $S^{k-3}$
in the small $S$ expansion \eqref{eq:logscal}. Here it is important to note that the allowed values of $k>0$ are even integers starting from $k=4$
(with in particular no $k=2$ term\footnote{If present, this term would give the leading singularity. In its absence, the leading singularity is given by the $\epsilon^6$ term.}).
Subtracting the $k=0$ term in \eqref{eq:logloc} and \eqref{eq:logscal}, taking the difference and summing over $s$, the above remark implies that
\begin{equation*}
\begin{split}
\hskip -1.2cm \sum_{s=1}^{\left\lfloor S_0/\epsilon\right\rfloor-1}&\left( \left( \Delta_s\Delta_t \log(X_{s,t}(g,g))\Big\vert_{t=s}- \log \left(\frac{s^2 (2 s+3)}{(s+1)^2 (2 s-1)}\right)\right)
\right.\\& - \epsilon^3\, \left.\left(\frac{1}{3} \partial_S\partial_T x(s\, \epsilon,t\, \epsilon,a)\Big\vert_{t=s}-\frac{1}{2\, (s\, \epsilon)^3}\right)\right)= \sum_{s=1}^{\left\lfloor S_0/\epsilon\right\rfloor-1} \sum_{k\geq 4} H_{k-3}(s) \epsilon^k \\
\end{split}
\end{equation*}
where $H_{k-3}(s)$ is a rational function of $s$ which now behaves like $B_{k-3} s^{k-4}$ at large $s$ since the terms of order $s^{k-3}$ cancel out in the difference.
Now, for $k\geq 4$, $\sum_{s=1}^{S_0/\epsilon} H_{k-3}(s)$ behaves for small $\epsilon$ like $B_{k-3}S_0^{k-3}\epsilon^{3-k}/(k-3)$
and the sum above over all terms $k\geq 4$ behaves like $\epsilon^3 \sum_{k\geq 4} B_{k-3} S_0^{k-3}/(k-3)$, hence \emph{is of order $\epsilon^{3}$}.
Since the function $(1/3)\partial_S\partial_T x(S,T,a)\vert_{T=S}-1/(2\, S^3)$ is regular at $S=0$, we may use the approximation
\begin{equation*}
\hskip -1.2 cm \sum_{s=1}^{\left\lfloor S_0/\epsilon\right\rfloor-1}\!\!\!\! \epsilon^3\, \left(\frac{1}{3} \partial_S\partial_T x(s\, \epsilon,t\, \epsilon,a)\Big\vert_{t=s}\!\!\!\! -\frac{1}{2\, (s\, \epsilon)^3}\right)= \epsilon^2 \int_{\epsilon}^{S_0} \!\! \!\! dS\, \left(\frac{1}{3} \partial_S\partial_T x(S,T,a)\Big\vert_{T=S}\!\!\!\! -\frac{1}{2\, S^3}\right)
+O(\epsilon^3)
\end{equation*}
so that we end up with the estimate
\begin{equation}
\begin{split}
F(g,g)
&= \epsilon^2 \int_{\epsilon}^{\infty} dS\, \frac{1}{3} \partial_S\partial_T x(S,T,a)\Big\vert_{T=S} \\
&+\sum_{s=1}^{\left\lfloor S_0/\epsilon\right\rfloor-1} \log \left(\frac{s^2 (2 s+3)}{(s+1)^2 (2 s-1)}\right)-\epsilon^2 \int_\epsilon^{S_0}dS\, \frac{1}{2S^3}+O(\epsilon^3)\ .\\
\end{split}
\label{eq:expFgg}
\end{equation}
The first term is easily computed to be
\begin{equation*}
\begin{split}
\epsilon^2 \int_{\epsilon}^{\infty} dS\, \frac{1}{3} \partial_S\partial_T x(S,T,a)\Big\vert_{T=S}
& =
\epsilon^2 \int_{\epsilon}^{\infty} dS\, \frac{2 a^3 e^{-2 a S} \left(1+e^{-2 a S}\right)}{\left(1-e^{-2 a S}\right)^3}
\\& =\epsilon^2 \frac{a^2 e^{-2 a \epsilon }}{\left(1-e^{-2 a \epsilon }\right)^2}= \frac{1}{4}-\frac{a^2 \epsilon ^2}{12}+O(\epsilon ^4)\\
\end{split}
\end{equation*}
and gives us the leading singularity of $F(g,g)$, namely $-a^2\, \epsilon ^2/12=-(1/2)\sqrt{1-12g}$.
As for the last two terms, their value at small $\epsilon$ is easily evaluated to be
\begin{equation*}
-\frac{1}{4}+\log \left(\frac{4}{3}\right)+O(\epsilon ^4)\ .
\end{equation*}
These terms do not contribute to the leading singularity of $F(g,g)$ and serve only to correct the constant term in the expansion,
leading eventually to the result:
\begin{equation}
F(g,g)
= \log \left(\frac{4}{3}\right)-\frac{a^2 \epsilon ^2}{12}+O(\epsilon ^3)\ .
\label{eq:singFgg}
\end{equation}
Of course, this result may be verified from the exact expression
\begin{equation*}
\begin{split}
F(g,g)=\sum_{s\geq 1} \log\left(\frac{X_{s,s}(g,g)X_{s-1,s-1}(g,g)}{X_{s-1,s}(g,g)X_{s,s-1}(g,g)}\right)& =\log\left(\frac{\left(1-x^2\right)^2}{(1-x) \left(1-x^3\right)}\right)
\\ &= \log \left(\frac{4}{3}\right)-\frac{a^2 \epsilon ^2}{12}+O(\epsilon ^3)\\
\end{split}
\end{equation*}
for $x=1-a \epsilon+O(\epsilon^2)$.
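(Indeed $(1-x^2)^2/\big((1-x)(1-x^3)\big)=(1+x)^2/(1+x+x^2)$, whose value at $x=1$ is $4/3$.) This last expansion is itself immediate to reproduce by series manipulation, e.g.\ with \texttt{sympy} (our check):
\begin{verbatim}
import sympy as sp

x, a, eps = sp.symbols('x a epsilon', positive=True)
Fgg = sp.log((1 - x**2)**2/((1 - x)*(1 - x**3)))
xser = 1 - a*eps + a**2*eps**2/2 - 5*a**3*eps**3/24
print(sp.series(Fgg.subs(x, xser), eps, 0, 3))
# expected: log(4/3) - a**2*epsilon**2/12 + O(epsilon**3)
\end{verbatim}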
The reader might thus find our previous calculation both cumbersome and useless but the lesson of this
calculation is not the precise result itself but the fact that the leading singularity of a sum like \eqref{eq:Fgg}
is, via \eqref{eq:expFgg}, fully predictable from the knowledge of the scaling function $x(S,T,a)$ only. Note indeed that the singularity is \emph{entirely contained
in the first term of \eqref{eq:expFgg}} and that the
last two terms, whose precise form requires the additional knowledge of the first coefficient
of the local limit of $\Delta_s\Delta_t \log(X_{s,t}(g,g))\vert_{t=s}$ do not contribute to the singularity but serve only to correct the constant term in the expansion which is
not properly captured by the integral of the scaling function. This additional knowledge is therefore not needed {\it stricto sensu} if we are only interested in
the singularity of \eqref{eq:Fgg}.
To end this section, we note that we immediately deduce from the leading singularity $-(1/2)\sqrt{1-12g}$ of $F(g,g)$ the large $N$ asymptotics
\begin{equation}
[g^N]F(g,g) \underset{N \to \infty}{\sim} \frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}}
\label{eq:norm}
\end{equation}
for the number of i-l.2.f.m with $N$ edges, or equivalently, of planar quadrangulations with $N$ faces and with two marked (distinct and distinguished) vertices at even distance from each other.
\section{Scaling functions with two weights $g$ and $h$}
\label{sec:scaling}
\subsection{An expression for the singularity of ${\boldsymbol{F(g,h)}}$}
The above technique gives us a way to access the singularity of the function $F(g,h)$ via the following small $\epsilon$ estimate,
which straightforwardly generalizes \eqref{eq:expFgg}:
\begin{equation}
\begin{split}
F(g,h)
& =\epsilon^2 \int_{\epsilon}^{\infty} dS\, \frac{1}{3} \partial_S\partial_T x(S,T,a,b)\Big\vert_{T=S} \\
&+\sum_{s=1}^{\left\lfloor S_0/\epsilon\right\rfloor-1} \log \left(\frac{s^2 (2 s+3)}{(s+1)^2 (2 s-1)}\right)-\epsilon^2 \int_\epsilon^{S_0}dS\, \frac{1}{2S^3}+O(\epsilon^3)\ .
\\
\end{split}
\label{eq:expFgh}
\end{equation}
Here $x(S,T,a,b)$ is the scaling function associated to $X_{s,t}(g,h)$ via
\begin{equation}
X_{\left\lfloor S/\epsilon\right\rfloor,\left\lfloor T/\epsilon\right\rfloor}(g,h) =3+x(S,T,a,b)\ \epsilon+O(\epsilon^2)
\label{eq:xab}
\end{equation}
when $g$ and $h$ tend to $1/12$ as in \eqref{eq:scalgh}. As before, the last two terms of \eqref{eq:expFgh} do not contribute
to the singularity but give rise only to a constant at this order in the expansion.
The reader may wonder why these terms are \emph{exactly the same} as those of \eqref{eq:expFgg},
as well as why the leading term $3$ in \eqref{eq:xab} is the same as that of \eqref{eq:xa}
although $h$ is no longer equal to $g$. This comes from the simple remark that these terms
all come from the behavior of $X_{s,t}(g,h)$ \emph{exactly at $\epsilon=0$} which is the same as that
of $X_{s,t}(g,g)$ since, for $\epsilon=0$, both $g$ and $h$ have the same value $1/12$.
In other words, we have
\begin{equation}
X_{s,t}(g,h)=3-\frac{6 \left(3+4(s+t)+s^2+ s t +t^2\right)}{(s+3) (t+3) (s+t+1)}+O(\epsilon ^4)
\label{eq:expXgh}
\end{equation}
and consequently, for small $S$ and $T$ of the same order (i.e.\ $T/S$ finite), we must have an expansion
of the form \eqref{eq:xab} with
\begin{equation}
x(S,T,a,b) =-\frac{6 \left(S^2+ S T +T^2\right)}{S\,T\,(S+T)}+O(S^3)
\label{eq:smalST}
\end{equation}
in order to reproduce the large $s$ and $t$ behavior of the local limit just above.
We thus have
\begin{equation*}
\frac{1}{3} \partial_S\partial_T x(S,T,a,b)\Big\vert_{T=S} = \frac{1}{2\, S^3}+O(S)
\end{equation*}
while
\begin{equation*}
\Delta_s\Delta_t \log(X_{s,t}(g,h))\Big\vert_{t=s}=\log \left(\frac{s^2 (2 s+3)}{(s+1)^2 (2 s-1)}\right)+O(\epsilon ^4)\ ,
\end{equation*}
hence the last two terms in \eqref{eq:expFgh}.
\subsection{An expression for the scaling function ${\boldsymbol {x(S,T,a,b)}}$}
Writing the recursion relation \eqref{eq:Xstrec} for $s=S/\epsilon$ and $t=T/\epsilon$ and using the small $\epsilon$ expansions
\eqref{eq:expRg} and \eqref{eq:xab}, we get at leading order in $\epsilon$ (i.e.\ at order $\epsilon^2$) the following partial differential
equation\footnote{Here, choosing $(g+h)/2$ instead of $\sqrt{g\, h}$ for the weight of spine edges in the l.c would not
change the differential equation. It can indeed be verified that only the leading value $1/12$ of this weight matters.}
\begin{equation*}
2 \big(x(S,T,a,b)\big)^2+6 \big(\partial_Sx(S,T,a,b)+\partial_Tx(S,T,a,b)\big)+27 \big(r(S,a)+r(T,b)\big)=0
\end{equation*}
which, together with the small $S$ and $T$ behavior \eqref{eq:smalST}, fully determines $x(S,T,a,b)$.
To simplify our formulas, we shall introduce new variables
\begin{equation*}
\sigma\equiv e^{-a S}\ , \qquad \tau\equiv e^{-b T}\ ,
\end{equation*}
together with the associated functions
\begin{equation*}
\mathfrak{X}(\sigma,\tau,a,b)\equiv x(S,T,a,b)\ , \qquad \mathfrak{R}(\sigma,a)\equiv r(S,a)\ .
\end{equation*}
With these variables, the above partial differential equation becomes:
\begin{equation}
\begin{split}
& \hskip -1.2cm 2 \big(\mathfrak{X}(\sigma,\tau,a,b)\big)^2-6 \big(a\, \sigma\, \partial_\sigma\mathfrak{X}(\sigma,\tau,a,b)+b\, \tau\, \partial_\tau\mathfrak{X}(\sigma,\tau,a,b)\big)+27 \big(\mathfrak{R}(\sigma,a)+\mathfrak{R}(\tau,b)\big)=0\\
&\hbox{with}\ \
\mathfrak{R}(\sigma,a)=-\frac{a^2 \left(1+10 \sigma +\sigma^2\right)}{3 (1-\sigma)^2}
\ \ \hbox{and}\ \
\mathfrak{R}(\tau,b)=-\frac{b^2 \left(1+10 \tau +\tau^2\right)}{3 (1-\tau)^2}\ .
\\
\end{split}
\label{eq:pardif}
\end{equation}
For $b=a$, i.e.\ $h=g$, we already know from \eqref{eq:xa} the solution
\begin{equation*}
\mathfrak{X}(\sigma,\tau,a,a)=-3a-\frac{6 a \left(\sigma+\tau-3 \sigma \tau + \sigma ^2 \tau ^2 \right)}{(1-\sigma ) (1-\tau) (1-\sigma \tau)}
\end{equation*}
and it is a simple exercise to check that it satisfies the above partial differential equation in this particular case. This suggests looking
for a solution of \eqref{eq:pardif} in the form:
\begin{equation*}
\mathfrak{X}(\sigma,\tau,a,b)=-3\sqrt{\frac{a^2+b^2}{2}}-\frac{\mathfrak{N}(\sigma,\tau,a,b)}{(1-\sigma ) (1-\tau) \mathfrak{D}(\sigma,\tau,a,b)}
\end{equation*}
where $\mathfrak{N}(\sigma,\tau,a,b)$ and $\mathfrak{D}(\sigma,\tau,a,b)$ are polynomials in the variables $\sigma$ and $\tau$. The first
constant term is singled out purely for convenience (as it could be incorporated in $\mathfrak{N}$). Its value is chosen by assuming that the function $\mathfrak{X}(\sigma,\tau,a,b)$
is regular for small $\sigma$ and $\tau$ (an assumption which will be indeed verified a posteriori) in which case, from \eqref{eq:pardif},
we expect:
\begin{equation*}
(\mathfrak{X}(0,0,a,b))^2=-\frac{27}{2}\left(\mathfrak{R}(0,a)+\mathfrak{R}(0,b)\right)= 9\, \frac{a^2+b^2}{2}
\end{equation*}
(the $-$ sign is then chosen so as to reproduce the known value $-3a$ for $b=a$).
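The ``simple exercise'' mentioned above for $b=a$ is conveniently delegated to a computer algebra system; a \texttt{sympy} sketch (ours, using only the formulas displayed above):
\begin{verbatim}
import sympy as sp

sig, tau, a = sp.symbols('sigma tau a', positive=True)

X = -3*a - 6*a*(sig + tau - 3*sig*tau + sig**2*tau**2)/(
        (1 - sig)*(1 - tau)*(1 - sig*tau))
Rfrak = lambda u, c: -c**2*(1 + 10*u + u**2)/(3*(1 - u)**2)

pde = (2*X**2 - 6*a*(sig*sp.diff(X, sig) + tau*sp.diff(X, tau))
       + 27*(Rfrak(sig, a) + Rfrak(tau, a)))
print(sp.simplify(pde))   # expected: 0
\end{verbatim}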
To test our Ansatz, we tried for $\mathfrak{N}(\sigma,\tau,a,b)$ a polynomial of maximum degree $3$ in $\sigma$ and in $\tau$
and for $\mathfrak{D}(\sigma,\tau,a,b)$ a polynomial of maximum degree $2$, namely
\begin{equation*}
\begin{split}
\mathfrak{N}(\sigma,\tau,a,b)&=\sum_{i=0}^{3}\sum_{j=0}^3 n_{i,j}\, \sigma^i\tau^j \ , \\
\mathfrak{D}(\sigma,\tau,a,b)& =\sum_{i=0}^{2}\sum_{j=0}^2 d_{i,j}\, \sigma^i\tau^j \ , \\
\end{split}
\end{equation*}
with $d_{0,0}=1$ (so as to fix the, otherwise arbitrary, normalization of all coefficients, assuming that $d_{0,0}$ does not vanish).
With this particular choice, solving \eqref{eq:pardif} translates, after reducing to the same denominator, into canceling all coefficients of a polynomial
of degree $6$ in $\sigma$ as well as in $\tau$, hence into solving a system of $7\times 7 = 49$ equations for the $4\times 4+3\times 3-1=24$ variables
$(n_{i,j})_{0\leq i,j\leq 3}$ and $(d_{i,j})_{0\leq i,j\leq 2\atop (i,j)\neq(0,0)}$. Remarkably enough, this system, although clearly
over-determined, \emph{admits a unique solution}
displayed explicitly in Appendix~A. Moreover, we can check from the explicit form of $\mathfrak{N}(\sigma,\tau,a,b)$ and $\mathfrak{D}(\sigma,\tau,a,b)$
the small $S$ and $T$ expansions (with $T/S$ finite):
\begin{equation*}
\begin{split}
& \hskip -1.2cm \mathfrak{N}(e^{-a S},e^{-b T},a,b)= 6\, a\, b\, (S^2+S\, T+T^2)\, \mathfrak{C}(a,b) +O(S^3)\ ,\\
& \hskip -1.2cm \mathfrak{D}(e^{-a S},e^{-b T},a,b)= (S+T)\, \mathfrak{C}(a,b) +O(S^2)\ ,\\
&\hskip -1.2cm \hbox{with}\ \mathfrak{C}(a,b) = \frac{216\, a^2 b^2 \left(a^2+b^2\right) \left(a^2+a\, b+b^2\right)}{(a-b)^2 (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}
-\frac{36\, \sqrt{2} \, a^2 b^2 \sqrt{a^2+b^2} \left(4\, a^2+a\, b+4\, b^2\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}
\end{split}
\end{equation*}
and (by further pushing the expansion for $\mathfrak{X}$ up to order $S^2$) that
\begin{equation*}
\mathfrak{X}(e^{-a S},e^{-b T},a,b) =-\frac{6 \left(S^2+ S T +T^2\right)}{S\,T\,(S+T)}+O(S^3)
\end{equation*}
which is the desired initial condition \eqref{eq:smalST}. We thus have at our disposal \emph{an explicit expression} for the
scaling function $\mathfrak{X}(\sigma,\tau,a,b) $, or equivalently $x(S,T,a,b)$ for arbitrary $a$ and $b$.
\subsection{The integration step}
Having an explicit expression for $x(S,T,a,b)$, the next step is to compute the first integral in \eqref{eq:expFgh}.
We have, since setting $T=S$ amounts to setting $\tau=\sigma^{b/a}$:
\begin{equation*}
\int_{\epsilon}^{\infty} dS\, \frac{1}{3} \partial_S\partial_T x(S,T,a,b)\Big\vert_{T=S}=
\int_{0}^{e^{-a\, \epsilon}} d\sigma\, \frac{1}{3}\, b\, \sigma^{b/a} \partial_\sigma\partial_\tau \mathfrak{X}(\sigma,\tau,a,b)\Big\vert_{\tau=\sigma^{b/a}}\ .
\end{equation*}
To compute this latter integral, it is sufficient to find a primitive of its integrand, namely a function $\mathfrak{K}(\sigma,a,b)$ such that:
\begin{equation}
\partial_\sigma \mathfrak{K}(\sigma,a,b) = \frac{1}{3}\, b\, \sigma^{b/a} \partial_\sigma\partial_\tau \mathfrak{X}(\sigma,\tau,a,b)\Big\vert_{\tau=\sigma^{b/a}}\ .
\label{eq:K}
\end{equation}
For $b=a$, we have from the explicit expression of $\mathfrak{X}(\sigma,\tau,a,a)$:
\begin{equation*}
\begin{split}
\hskip -1.cm \frac{1}{3}\, a\, \sigma \partial_\sigma\partial_\tau \mathfrak{X}(\sigma,\tau,a,a)\Big\vert_{\tau=\sigma}= \frac{2 a^2 \sigma \left(1+\sigma ^2\right)}{\left(1-\sigma ^2\right)^3} & =\partial_\sigma
\left(\frac{a^2 \sigma ^2}{\left(1-\sigma ^2\right)^2}\right)\\
&
= \partial_\sigma \left(\frac{a\, b \, \sigma\, \tau }{\left(1-\sigma\, \tau\right)^2}\Bigg\vert_{\tau=\sigma^{b/a}}\right)\Bigg\vert_{b=a}\ .\\
\end{split}
\end{equation*}
In the last expression, we recognize the square of the last factor $(1-\sigma\, \tau)$ appearing in the denominator in $\mathfrak{X}(\sigma,\tau,a,a)$.
This factor is replaced by $\mathfrak{D}(\sigma,\tau,a,b)$ when $b\neq a$ and this suggests looking for
an expression of the form:
\begin{equation*}
\mathfrak{K}(\sigma,a,b)=\frac{a\, b\, \sigma\, \tau\, \mathfrak{H}(\sigma,\tau,a,b)}{\left(\mathfrak{D}(\sigma,\tau,a,b)\right)^2}\Bigg\vert_{\tau=\sigma^{b/a}}
\end{equation*}
with the same function $\mathfrak{D}(\sigma,\tau,a,b)$ as before and where $\mathfrak{H}(\sigma,\tau,a,b)$ is now a polynomial of the form
\begin{equation*}
\mathfrak{H}(\sigma,\tau,a,b)=\sum_{i=0}^{2}\sum_{j=0}^2 h_{i,j}\, \sigma^i\tau^j
\end{equation*}
(here again the degree $2$ in each variable $\sigma$ and $\tau$ is a pure guess).
With this Ansatz, eq.~\eqref{eq:K} translates, after some elementary manipulations, into
\begin{equation*}
\frac{1}{3}\partial_\sigma\partial_\tau \mathfrak{X}(\sigma,\tau,a,b)= \left\{(a+b)+a\, \sigma\, \partial_\sigma +b\, \tau\, \partial_\tau \right\}\frac{\mathfrak{H}(\sigma,\tau,a,b)}{\left(\mathfrak{D}(\sigma,\tau,a,b)\right)^2}
\end{equation*}
which needs to be satisfied only for $\tau=\sigma^{b/a}$. We may however decide to look for a function $\mathfrak{H}(\sigma,\tau,a,b)$ which
satisfies the above requirement for arbitrary independent values of $\sigma$ and $\tau$. After reducing to the same denominator,
we again have to cancel the coefficients of a polynomial of degree $6$ in $\sigma$ as well as in $\tau$. This gives rise to a system of $7\times 7=49$ equations for the
$3\times 3=9$ variables $(h_{i,j})_{0\leq i,j\leq 2}$. Remarkably enough, this over-determined system
again admits a unique solution displayed explicitly in Appendix~B.
This solution has non-zero finite values for $\mathfrak{H}(0,0,a,b)$ and $\mathfrak{D}(0,0,a,b)$ and therefore we deduce $\mathfrak{K}(0,a,b)=0$ so that we find
\begin{equation*}
\begin{split}
\hskip -1.2cm \int_{0}^{e^{-a\, \epsilon}} d\sigma\, \frac{1}{3}\, b\, \sigma^{b/a} \partial_\sigma\partial_\tau \mathfrak{X}(\sigma,\tau,a,b)\Big\vert_{\tau=\sigma^{b/a}}
&=\mathfrak{K}(e^{-a\, \epsilon},a,b)\\
&= \frac{a\, b\, e^{-a\, \epsilon}\, e^{-b\, \epsilon}\, \mathfrak{H}(e^{-a\, \epsilon},e^{-b\, \epsilon},a,b)}{\left(\mathfrak{D}(e^{-a\, \epsilon},e^{-b\, \epsilon},a,b)\right)^2}\\
&=\frac{1}{4\, \epsilon^2}-\frac{\left(a^2- a\, b+b^2\right) \left(a^2+a\, b+b^2\right)}{18 \left(a^2+b^2\right)}+O(\epsilon^2)\ .
\end{split}
\end{equation*}
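Although the general check requires the Appendix~B coefficients, the consistency of this expansion with the $b=a$ case is immediate from the closed form of $\mathfrak{K}(\sigma,a,a)$ derived above (a \texttt{sympy} sketch of ours):
\begin{verbatim}
import sympy as sp

a, b, eps = sp.symbols('a b epsilon', positive=True)

# at b = a, K(e^{-a eps}, a, a) = a^2 q/(1-q)^2 with q = e^{-2 a eps}
q = sp.exp(-2*a*eps)
print(sp.series(a**2*q/(1 - q)**2, eps, 0, 2))
# expected: 1/(4*epsilon**2) - a**2/12 + O(epsilon**2)

# the constant term of the general result indeed reduces to -a^2/12 at b = a
cst = -(a**2 - a*b + b**2)*(a**2 + a*b + b**2)/(18*(a**2 + b**2))
print(sp.simplify(cst.subs(b, a)))   # expected: -a**2/12
\end{verbatim}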
Eq.~\eqref{eq:expFgh} gives us the desired singularity
\begin{equation}
\begin{split}
F(g,h)
&=\frac{1}{4}-\frac{\left(a^2-a\, b+b^2\right) \left(a^2+a\, b+b^2\right)}{18 \left(a^2+b^2\right)}\epsilon^2
-\frac{1}{4}+\log \left(\frac{4}{3}\right)+O(\epsilon ^3)\\
&= \log \left(\frac{4}{3}\right) -\frac{\left(a^2-a\, b+b^2\right) \left(a^2+a\, b+b^2\right)}{18 \left(a^2+b^2\right)}\epsilon^2+O(\epsilon ^3)\\
&= \log \left(\frac{4}{3}\right) -\frac{1}{18}\, \frac{(a^6-b^6)}{(a^4-b^4)}\, \epsilon^2+O(\epsilon ^3)\ .\\
\end{split}
\label{eq:lutfin}
\end{equation}
Note that for $b=a$ ($h=g$), we recover the result \eqref{eq:singFgg} for the singularity of $F(g,g)$, as it should.
More interestingly,
we may now obtain from \eqref{eq:lutfin} some asymptotic estimate for the number $[g^{N-\frac{p}{2}}h^{\frac{p}{2}}]F(g,h)$ of planar quadrangulations with $N$ faces, with two marked (distinct and distinguished) vertices at even distance from each other and with Vorono\"\i\ cells of respective areas $N-(p/2)$ and $(p/2)$ (recall
that, due to the existence of faces shared by the two cells, the area of a cell may be any half-integer between $0$ and $N$).
Writing
\begin{equation}
\begin{split}
\hskip -1.2cm -\frac{1}{18}\, \frac{(a^6-b^6)}{(a^4-b^4)}\, \epsilon^2&=-\frac{1}{18}\, \frac{(a^4\, \epsilon^4)^{3/2}-(b^4\, \epsilon^4)^{3/2}}{(a^4\, \epsilon^4)-(b^4\, \epsilon^4)}
\\&=\frac{1}{36}\, \frac{(1-12h)^{3/2} -(1-12g)^{3/2}}{h-g}
\\&=\frac{1}{6}\, \frac{\sqrt{h}(1-12h)^{3/2} -\sqrt{g}(1-12g)^{3/2}}{\sqrt{h}-\sqrt{g}}+O(\epsilon^6)
\\&=\frac{1}{6}+\sum_{N\geq 1} \frac{2N\!+\!1}{2N\!-\!3}\, \frac{3^N}{N}{2(N\!-\!1)\choose N\!-\!1}\sum_{p=0}^{2N} \frac{g^{N-\frac{p}{2}}\, h^{\frac{p}{2}}}{2N+1} +O(\epsilon^6)\ ,
\end{split}
\label{eq:ident}
\end{equation}
where we have on purpose chosen in the third line an expression whose expansion involves half integer powers in $g$ and $h$, we deduce \emph{heuristically} that, \emph{for large $N$}, $[g^{N-\frac{p}{2}}h^{\frac{p}{2}}]F(g,h)$ behaves like
\begin{equation*}
[g^{N-\frac{p}{2}}h^{\frac{p}{2}}] \frac{1}{6}\, \frac{\sqrt{h}(1-12h)^{3/2} -\sqrt{g}(1-12g)^{3/2}}{\sqrt{h}-\sqrt{g}}
\underset{N \to \infty}{\sim} \frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times \frac{1}{2N+1}
\end{equation*}
\emph{independently of $p$}. After normalizing by \eqref{eq:norm}, the probability that the second Vorono\"\i\ cell has some fixed half-integer area $n=p/2$ ($0\leq p\leq 2N$) is asymptotically
equal to $1/(2N+1)$ independently
of the value of $n$. As a consequence, \emph{the law for $\phi=n/N$ is uniform in the interval $[0,1]$.}
Clearly, the above estimate is too precise and has no reason to be true {\it{stricto sensu}} for finite values of $p$. Indeed, in the expansion \eqref{eq:lutfin}, both $g$ and $h$ tend simultaneously to
their common singular value $1/12$, so that the above estimate for $[g^{N-\frac{p}{2}}h^{\frac{p}{2}}]F(g,h)$ should be considered as valid only when both $N$ and $n=p/2$ become large in a limit where the ratio $\phi=n/N$ may
be considered as a finite continuous variable. In other words, some \emph{average} over values of $p$ with $n=p/2$ in the range $N\phi \leq n< N(\phi+d\phi)$ is implicitly required.
With this averaging procedure, any other generating function with the same singularity as \eqref{eq:lutfin} would then lead to the same uniform law for $\phi$.
For instance, using the second line of \eqref{eq:ident} and writing
\begin{equation*}
\frac{1}{36}\, \frac{(1-12h)^{3/2} -(1-12g)^{3/2}}{h-g}
= -\frac{1}{2}+\sum_{N\geq 1} \frac{3^N}{N}{2(N\!-\!1)\choose N\!-\!1}\sum_{n=0}^N \frac{g^{N-n}\, h^n}{N+1}\ ,
\end{equation*}
we could as well have estimated from our singularity a value of $[g^{N-\frac{p}{2}}h^{\frac{p}{2}}]F(g,h)$ asymptotically equal to:
\begin{equation*}
[g^{N-\frac{p}{2}}h^{\frac{p}{2}}]\frac{1}{36}\, \frac{(1-12h)^{3/2} -(1-12g)^{3/2}}{h-g}
\underset{N \to \infty}{\sim} \frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times \frac{1}{N+1} \, \delta_{p,\rm{even}}
\end{equation*}
with $\delta_{p,\rm{even}}=1$ if $p$ is even and $0$ otherwise. Of course, averaging over both parities, this latter estimate leads to the same uniform
law for the continuous variable $\phi=n/N$.
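The expansion used in the last display (and, similarly, the one in \eqref{eq:ident}) is easily verified order by order, e.g.\ with \texttt{sympy}, organizing the two-variable expansion through an auxiliary scale $\rho$ (our sketch):
\begin{verbatim}
import sympy as sp

g, h, rho, u, v = sp.symbols('g h rho u v', positive=True)

lhs = ((1 - 12*h)**sp.Rational(3, 2)
       - (1 - 12*g)**sp.Rational(3, 2))/(36*(h - g))
print(sp.series(lhs.subs({g: rho*u, h: rho*v}), rho, 0, 4).removeO().expand())
# expected (after regrouping): -1/2 + (3/2) rho (u + v)
#   + 3 rho^2 (u^2 + u v + v^2) + (27/2) rho^3 (u^3 + u^2 v + u v^2 + v^3)

# coefficients (3^N/N) binom(2(N-1),N-1)/(N+1) for N = 1, 2, 3
for N in range(1, 4):
    print(N, sp.Rational(3**N, N)*sp.binomial(2*(N - 1), N - 1)/(N + 1))
# expected: 3/2, 3, 27/2
\end{verbatim}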
\vskip .2cm
Beyond the above heuristic argument, we may compute the law for $\phi$ in a rigorous way by considering the large $N$ behavior of the fixed $N$ expectation value
\begin{equation*}
E_N[e^{\mu\, \left(\frac{n}{N}\right)}]\equiv\frac{\sum\limits_{p=0}^{2N} e^{\mu\, \left(\frac{p}{2N}\right)}\, [g^{N-\frac{p}{2}} h^{\frac{p}{2}}]F(g,h)}{\sum\limits_{p=0}^{2N} [g^{N-\frac{p}{2}}
h^{\frac{p}{2}}]F(g,h)}
=\frac{[g^N]F(g,g\, e^{\frac{\mu}{N}})}{[g^N]F(g,g)}\ .
\end{equation*}
The coefficient $[g^N]F(g,g\, e^{\frac{\mu}{N}})$ may then be obtained by a contour integral around $g=0$, namely
\begin{equation*}
\frac{1}{2\rm{i}\pi} \oint\frac{dg}{g^{N+1}} F(g,g\, e^{\frac{\mu}{N}})
\end{equation*}
and, at large $N$, we may use \eqref{eq:scalgh} and \eqref{eq:lutfin} with
\begin{equation*}
\hskip 5.cm \epsilon^4=\frac{1}{N}
\end{equation*}
to rewrite this integral as an integral over $a$. More precisely, at leading order in $N$, setting $h=g\, e^{\frac{\mu}{N}} =g\, (1+ \mu \epsilon^4)$
amounts to taking:
\begin{equation*}
\hskip 4.5cm b^4=a^4-36\mu\ .
\end{equation*}
Using $dg=-(1/12)\, a^3\, da/(9 N)$, $g^{N+1}\sim (1/12)^{N+1}\, e^{-a^4/36}$ (and ignoring the constant term $\log(4/3)$ which does not contribute to
the $g^N$ coefficient for $N\geq 1$), the contour integral above becomes at leading order:
\begin{equation*}
\frac{1}{2\rm{i}\pi} \frac{12^N}{N^{3/2}} \int_\mathcal{C} da\, \frac{-a^3}{9} \left\{-\frac{1}{18} \frac{a^6-(a^4-36\mu)^{3/2}}{36\mu}\, e^{a^4/36} +O\!\left(\frac{1}{N^{1/4}}\right)\right\}
\end{equation*}
where the integration path follows some appropriate contour $\mathcal{C}$ in the complex plane. The precise form of this contour and the details of the computation of this integral are given in Appendix~C.
We find the value
\begin{equation*}
\frac{1}{2\rm{i}\pi} \int_\mathcal{C} da\, \frac{-a^3}{9} \left\{ -\frac{1}{18} \frac{a^6-(a^4-36\mu)^{3/2}}{36\mu}\, e^{a^4/36} \right\}=\frac{1}{4 \sqrt{\pi}} \times \frac{e^\mu-1}{\mu}\ ,
\end{equation*}
which matches the asymptotic result obtained by
the identification \eqref{eq:ident} since\footnote{We have as well:
$$\frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times \sum_{p=0}^{2N} \frac{1}{N+1} e^{\mu\, \left(\frac{p}{2N}\right)}\, \delta_{p,\rm{even}}=\frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times
\sum_{n=0}^{N}\frac{1}{N+1}e^{\mu\, \left(\frac{n}{N}\right)}
\underset{N \to \infty}{\sim} \frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times \frac{e^\mu-1}{\mu}\ .$$}
\begin{equation*}
\frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times \sum_{p=0}^{2N} \frac{1}{2\,N+1} e^{\mu\, \left(\frac{p}{2N}\right)} \underset{N \to \infty}{\sim} \frac{1}{4} \frac{12^{N}}{\sqrt{\pi} N^{3/2}} \times \frac{e^\mu-1}{\mu}\ .
\end{equation*}
After normalization by $[g^N]F(g,g)$ via \eqref{eq:norm}, we end up with the result
\begin{equation*}
\hskip 3.cm E_N[e^{\mu\, \left(\frac{n}{N}\right)}]\underset{N \to \infty}{\sim}\frac{e^\mu-1}{\mu}\ .
\end{equation*}
Writing
\begin{equation*}
\hskip 3.cm \frac{e^\mu-1}{\mu}\ = \int_0^1 d\phi\, e^{\mu\, \phi} \, \mathcal{P}(\phi) \ ,
\end{equation*}
where $\mathcal{P}(\phi)$ is the law for the proportion of area $\phi=n/N$ in, say, the second Vorono\"\i\ cell, we obtain that
\begin{equation*}
\hskip 3.cm \mathcal{P}(\phi)=1\quad \forall \phi\in[0,1]\ ,
\end{equation*}
i.e.\ the law is uniform on the unit segment. This proves the desired result and corroborates Chapuy's conjecture.
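As a last elementary consistency check, the discrete-to-continuous step above (replacing the sum over $p$ by an integral over $\phi$) is easily probed numerically (our sketch):
\begin{verbatim}
import numpy as np

N, mu = 2000, 1.7
p = np.arange(2*N + 1)
riemann = np.exp(mu*p/(2*N)).sum()/(2*N + 1)
print(riemann, (np.exp(mu) - 1)/mu)   # the two values agree up to O(1/N)
\end{verbatim}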
\vskip .2cm
To end our discussion on quadrangulations, let us mention a way to extend our analysis to the case where the distance $d(v_1,v_2)$ is
equal to some odd integer. Assuming that this integer is at least $3$, we can still use the Miermont bijection at the price of introducing a ``delay'' $1$
for one of two vertices, namely labelling now the vertices by, for instance, $\ell(v)=\min(d(v,v_1), d(v, v_2) + 1)$ and repeating the construction of Figure~\ref{fig:SR}.
This leads to a second Vorono\"\i\ cell slightly smaller (on average) than the first one but this effect can easily be corrected by averaging the law for $n$ and that for $N-n$. At large
$N$, it is easily verified that the generating function generalizing $F(g,h)$ to this (symmetrized) ``odd'' case (i.e.\ summing over all values $d(v_1,v_2)=2s+1$, $s\geq 1$) has a similar expansion as \eqref{eq:lutfin}, except for the constant term
$\log(4/3)$ which is replaced by the different value $\log(9/8)$. What matters however is that this new generating function has the same singularity as before when $g$ and
$h$ tend to $1/12$ so that we still get the uniform law $\mathcal{P}(\phi)$ for the ratio $\phi= n/N$ at large $N$. Clearly, summing over both parities of $d(v_1,v_2)$ would then also
lead to the uniform law for $\phi$.
\section{Vorono\"\i\ cells for general maps}
\subsection{Coding of general bi-pointed maps by i-l.2.f.m}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{AB.pdf}
\end{center}
\caption{The local rules of the Ambj\o rn-Budd bijection.}
\label{fig:AB}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{genmap.pdf}
\end{center}
\caption{Lower part: a general bi-pointed planar map (in red) and the associated i-l.2.f.m (in blue). Upper part: Both maps result from the same bi-pointed quadrangulation using,
on one hand the Miermont bijection via the rules of Figure~\ref{fig:SR} and on the other hand the Ambj\o rn-Budd bijection via the rules of Figure~\ref{fig:AB}.
Note that the label of a vertex $v$ of the general map corresponds to $\min(\delta(v,v_1),\delta(v,v_2))$ where $\delta$ is the graph distance in this map.}
\label{fig:genmap}
\end{figure}
Another direct application of our calculation concerns the statistics of Vorono\"\i\ cells in \emph{bi-pointed general planar maps}, i.e.\ maps with faces of arbitrary degree
and with two distinct (and distinguished) vertices $v_1$ and $v_2$, now at \emph{arbitrary distance $\delta(v_1,v_2)\geq 1$}. As customary, the ``area'' of general maps is measured
by their number $N$ of \emph{edges} to ensure the existence of a finite number of maps for a fixed $N$. General maps are known to be bijectively related
to quadrangulations and it is therefore not surprising that bi-pointed general planar maps may also be coded by i-l.2.f.m. Such a coding is displayed in Figure~\ref{fig:genmap}
and its implementation was first discussed in \cite{AmBudd}.
The simplest way to understand it is to start from a bi-pointed quadrangulation like that of Figure~\ref{fig:cells} (with its two marked vertices $v_1$ and $v_2$
and the induced labelling $\ell(v)=\min(d(v,v_1),d(v,v_2))$) and to draw within each face a new edge according to the rules of
Figure~\ref{fig:AB} which may be viewed as complementary to the rules of Figure~\ref{fig:SR}. The resulting map formed by these new edges
is now a general planar map (with faces of arbitrary degree) which is still bi-pointed since
$v_1$ and $v_2$ are now retained in this map, with vertices labelled by $\ell(v)=\min(\delta(v,v_1),\delta(v,v_2))$ where $\delta(v,v')$ is the graph distance between
$v$ and $v'$ \emph{in the resulting map}\footnote{Note that, although related, the distance $\delta(v,v')$ between two vertices $v$ and $v'$ in the resulting map and that, $d(v,v')$, in the original quadrangulation are not identical in general.}. This result was shown by Ambj\o rn and Budd in \cite{AmBudd} who also proved that this new construction provides a bijection
between bi-pointed planar maps with $N$ edges and their two marked vertices at \emph{arbitrary graph distance} and bi-pointed planar quadrangulations with $N$ faces and
their two marked vertices at \emph{even graph distance}\footnote{In their paper, Ambj\o rn and Budd considered quadrangulations with general labellings satisfying
$\ell(v)-\ell(v')=\pm 1$ if $v$ and $v'$ are adjacent. The present bijection is a specialization of their bijection when the labelling has exactly two local minima (the marked vertices) and the label
is $0$ for both minima. This implies that the two minima are at even distance from each other in the quadrangulation.}. Note that, in the bi-pointed general map, the labelling may be
erased without loss of information since it may be retrieved directly from graph distances.
Combined with the Miermont bijection, the Ambj\o rn-Budd bijection gives the desired coding of bi-pointed general planar maps by i-l.2.f.m, whose two faces $f_1$ and $f_2$ moreover surround the
vertices $v_1$ and $v_2$ respectively.
In this coding, all the vertices of the general maps except $v_1$ and $v_2$
are recovered in the i-l.2.f.m, with the same label, but the i-l.2.f.m has a number of additional vertices, one lying in each face of the general map and carrying a label
equal to $1$ plus the maximal label in this face. As discussed in \cite{FG14b}, if the distance $\delta(v_1,v_2)$ is even, equal to $2s $ ($s\geq 1$),
the i-l.2.f.m (which has by definition minimal label $1$ in its two faces) has a minimum label equal to $s$ for the vertices along the loop separating the two faces,
and \emph{none of the loop edges has labels $s\, \rule[1.5pt]{8.pt}{1.pt}\, s$}. If the distance $\delta(v_1,v_2)$ is odd, equal to $2s-1$ ($s\geq 1$),
the i-l.2.f.m has again a minimum label equal to $s$ for the vertices along the loop separating the two faces, but now has \emph{at least one loop edge with labels $s\, \rule[1.5pt]{8.pt}{1.pt}\, s$}.
\subsection{Definition of Vorono\"\i\ cells for general maps}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{bump.pdf}
\end{center}
\caption{Explanation of the ``rebound'' property: for any edge $e$ of type $\ell+1\to \ell$ of the general map (here in red) lying in $f_2$ and hitting a loop vertex $w$ of the associated
i-l.2.f.m (here in blue), there exists, in the sector around $w$ going clockwise from $e$ to the loop edge $e'$ of the i-l.2.f.m leading to $w$ and having $f_1$ on its left, an edge of the general map leaving $w$ within $f_2$ and with endpoint of label $\ell-1$. To see that, we first note that, in the associated quadrangulation (in black), the first edge leaving $w$ clockwise in the
sector has label $\ell+1$ from the rules of Figure~\ref{fig:AB}. Similarly the last edge in the quadrangulation leaving $w$ clockwise in the
sector has label $\ell-1$ from the rules of Figure~\ref{fig:SR}. This holds for the three possible values of the label at the origin of $e'$, namely $\ell$ (upper left), $\ell-1$ (upper right)
and $\ell+1$ (bottom). Then there must be around $w$ in this sector two clockwise consecutive edges of the quadrangulation with
respective labels $\ell+1$ and $\ell-1$ at their endpoint other than $w$. From the rules of Figure~\ref{fig:AB}, the incident face in the quadrangulation gives rise to an edge of the general map lying in the sector, hence in $f_2$, and leaving $w$ toward a vertex with label $\ell-1$. This ``rebound'' property is easily generalized to the case where the hitting edge $e$ is of type $\ell\to\ell$. In that case, we only need that the
second half of $e$ lies in $f_2$ to ensure the existence of a subsequent edge $\ell\to\ell-1$ in $f_2$.
}
\label{fig:bump}
\end{figure}
As before, we may define the two Vorono\"\i\ cells in bi-pointed general planar maps as the domains obtained by cutting them along the loop of the associated i-l.2.f.m.
Let us now see why this definition again matches what we expect from a Vorono\"\i\ cell, namely that vertices in one cell are closer to one of the marked vertices than
to the other. Let us show that any vertex $v$ of the general map strictly inside, say, the second face $f_2$ (that containing $v_2$) is closer to $v_2$ than to $v_1$ (or possibly at the same distance).
Since this is obviously true for $v_2$, we may assume $v\neq v_2$ in which case $\ell(v)>0$. Recall that, for any $v$, $\ell(v)=\min(\delta(v,v_1),\delta(v,v_2))$ so that the vertex $v$ necessarily has a neighbor with label $\ell(v)-1$ within the general map, which itself, if $\ell(v)>1$, has a neighbor of label $\ell(v)-2$, and so on.
A sequence of edges connecting these neighboring vertices
with strictly decreasing labels provides a shortest path from $v$ to a vertex with label $0$, i.e.\ to either $v_1$ or $v_2$. Let us show that
this path may always be chosen so as to stay inside $f_2$, so that it necessarily ends at
$v_2$ and thus $\ell(v)=\delta(v,v_2)\leq \delta(v,v_1)$.
To prove this, we first note that, since by construction the map edges (in red in the figures) and the i-l.2.f.m edges (in blue) cross only along red edges of type $m\, \rule[1.5pt]{8.pt}{1.pt}\, m$ which cannot belong to a path with strictly decreasing labels, if such a path (which starts with an edge in $f_2$) crosses the loop a first time so as to enter $f_1$, it has to first hit the loop separating the two faces
at some loop vertex $w$ with, say label $\ell$. We may then rely on the following ``rebound" property, explained in Figure~\ref{fig:bump}:
looking at the environment of $w$ in the sector going clockwise from the map edge $(\ell+1)\to \ell$ of the strictly decreasing path leading
to $w$ (this edge lies in $f_2$ by definition) and the loop edge of the i-l.2.f.m leading to $w$ (with the loop oriented as before with $f_1$ on its left), we see that there always exists a map
edge $\ell\to \ell-1$ leaving $w$ and lying inside this sector and therefore in $f_2$ (see the legend of Figure~\ref{fig:bump} for
a more detailed explanation). We may then decide to take this edge as the next edge in our path with decreasing labels which, de facto, may always be chosen so as to stay\footnote{Note that
some of the vertices along the path may lie on the loop but the path must eventually enter strictly inside $f_2$ since loop labels are larger than $1$.} in $f_2$.
Let us now discuss vertices of the general map which belong to both Vorono\"\i\ cells, i.e.\ are loop vertices in the i-l.2.f.m. Such vertices may be strictly closer to $v_1$, strictly closer to $v_2$ or
at equal distance from both. More precisely, if a loop vertex $v$ with label $\ell$ is incident to a general map edge in $f_2$, then we can find a path with decreasing labels
staying inside $f_2$ and thus $\delta(v,v_2)\leq \delta(v,v_1)$. Indeed, if the incident edge is of type $\ell\, \rule[1.5pt]{8.pt}{1.pt}\,(\ell-1)$, it gives
the first step of the desired path; if it is of type $\ell\, \rule[1.5pt]{8.pt}{1.pt}\,(\ell+1)$, looking at this edge backwards and using the rebound property,
the loop vertex is also incident to an edge of type $\ell\, \rule[1.5pt]{8.pt}{1.pt}\,(\ell-1)$ in $f_2$ which may serve as the first step of the desired path. If the incident edge is
of type $\ell\, \rule[1.5pt]{8.pt}{1.pt}\,\ell$, a straightforward extension of the rebound property shows that the loop vertex is again incident to an edge of
type $\ell\, \rule[1.5pt]{8.pt}{1.pt}\,(\ell-1)$ in $f_2$ which provides the first step of the desired path.
Similarly, if a loop vertex $v$ is incident to a general map edge in $f_1$, then $\delta(v,v_1)\leq \delta(v,v_2)$ and,
as a consequence, if a loop vertex $v$ is incident to a general map edge in both $f_1$ and $f_2$, then $\delta(v,v_1)= \delta(v,v_2)$.
From the above properties, we immediately deduce that all the map edges inside $f_1$ (respectively $f_2$) have their two endpoints closer to $v_1$ than to $v_2$
(respectively closer to $v_2$ than to $v_1$) or possibly at the same distance.
As for map edges shared by the two cells, they necessarily connect two vertices $w_1$ and $w_2$ (lying in $f_1$ and $f_2$ respectively) with the same label and with $w_1$ closer to $v_1$ than to $v_2$
(or at the same distance) and $w_2$ closer to $v_2$ than to $v_1$ (or at the same distance).
This fully justifies our definition of Vorono\"\i\ cells.
\subsection{Generating functions and uniform law}
In the context of general maps, a proper measure of the ``area'' of Vorono\"\i\ cells is now provided by the number of edges of the general map lying within each cell. Again,
a number of these edges are actually shared by the two cells, hence contribute $1/2$ to the area of each cell. In terms of generating functions,
edges inside the first cell receive accordingly the weight $g$, those in the second cell the weight $h$ and those shared by the two cells the weight $\sqrt{g\, h}$
and we call $F^{\rm{even}}(g,h)$ and $F^{\rm{odd}}(g,h)$ the corresponding generating functions for bi-pointed maps conditioned to have their marked
vertices at even and odd distance respectively.
When transposed to the associated i-l.2.f.m, this amounts as before to assigning the weight $g$ to those edges of the i-l.2.f.m strictly in $f_1$, $h$ to those strictly in $f_2$, and $\sqrt{g\, h}$ to
those on the loop separating $f_1$ and $f_2$. Indeed, from the rules of figures \ref{fig:SR} and \ref{fig:AB}, edges of the i-l.2.f.m are in one-to-one correspondence with edges
of the general map. Edges of the i-l.2.f.m strictly in $f_1$ (respectively $f_2$) correspond to edges
of the general map in the first (respectively second) Vorono\"\i\ cell. As for edges on the loop separating $f_1$ and $f_2$, they come in three species: edges of type $m\, \rule[1.5pt]{8.pt}{1.pt}\, m$ correspond to
map edges of type $(m-1)\, \rule[1.5pt]{8.pt}{1.pt}\, (m-1)$ shared by the two cells and receive the weight $\sqrt{g\, h}$ accordingly; edges of type $m\, \rule[1.5pt]{8.pt}{1.pt}\, (m+1)$ (when oriented with $f_1$ on their left) correspond to edges of the general map of type $m\, \rule[1.5pt]{8.pt}{1.pt}\, (m-1)$ in the first cell and
edges of type $(m+1)\, \rule[1.5pt]{8.pt}{1.pt}\, m$ correspond to edges of the general map of type $m\, \rule[1.5pt]{8.pt}{1.pt}\, (m-1)$ in the second cell. We are thus led to assign the weight $g$ to loop
edges of
the second species and $h$ to loop edges of the third species but, since there is clearly the same number of edges of the two types in a closed loop, we may equivalently assign the weight $\sqrt{g\, h}$ to all of them.
Again, writing $\delta(v_1,v_2)=2s$ for general maps enumerated by $F^{\rm{even}}(g,h)$ and $\delta(v_1,v_2)=2s-1$ for general maps enumerated by $F^{\rm{odd}}(g,h)$, with $s\geq 1$, we may decide to shift all labels by $-s$ in the associated i-l.2.f.m. With these shifted labels, the planar i-l.2.f.m enumerated by $F^{\rm{even}}(g,h)$ may alternatively be characterized by the same rules
$\langle \hbox{c}_2\rangle$-$\langle \hbox{c}_4\rangle$ as before but with $\langle \hbox{c}_1\rangle$ replaced by the slightly more restrictive rule:
\begin{enumerate}[$\langle \hbox{c}_1\rangle$-even:]
\item{The minimal label for the set of loop vertices is $0$ and none of the loop edges has labels $0\, \rule[1.5pt]{8.pt}{1.pt}\, 0$. The edges of the loop receive a weight $\sqrt{g\, h}$.}
\end{enumerate}
Similarly, for planar i-l.2.f.m enumerated by $F^{\rm{odd}}(g,h)$, $\langle \hbox{c}_1\rangle$ is replaced by the rule:
\begin{enumerate}[$\langle \hbox{c}_1\rangle$-odd:]
\item{The minimal label for the set of loop vertices is $0$ and at least one loop edge has labels $0\, \rule[1.5pt]{8.pt}{1.pt}\, 0$. The edges of the loop receive a weight $\sqrt{g\, h}$.}
\end{enumerate}
The conditions $\langle \hbox{c}_1\rangle$-even and $\langle \hbox{c}_1\rangle$-odd are clearly complementary among i-l.2.f.m satisfying the condition $\langle \hbox{c}_1\rangle$.
We immediately deduce that
\begin{equation*}
F^{\rm{even}}(g,h)+F^{\rm{odd}}(g,h)=F(g,h)
\end{equation*}
so that we may interpret $F(g,h)$ as the generating function for bi-pointed general planar maps with two marked vertices at \emph{arbitrary distance} from each other,
with a weight $g$ per edge in the first Vorono\"\i\ cell, $h$ per edge in the second cell, and $\sqrt{g\, h}$ per edge shared by both cells.
As a direct consequence, among bi-pointed general planar maps of fixed area $N$, with their two marked vertices at arbitrary distance, the law for the ratio $\phi=n/N$ of the area $n$
of one of the two Vorono\"\i\ cells by the total area $N$ is again, for large $N$, uniform between $0$ and $1$.
If we wish to control the parity of $\delta(v_1,v_2)$, we have to take into account the new constraints on loop edges.
We invite the reader to look at \cite{FG14b} for a detailed discussion on how to incorporate these constraints.
For $\delta(v_1,v_2)$ even, the generating function $F^{\rm{even}}(g,h)$ may be written as
\begin{equation*}
F^{\rm{even}}(g,h)=\sum_{s\geq 1} \Delta_s\Delta_t \log(N_{s,t}(g,h))\Big\vert_{t=s} = \sum_{s\geq 1} \log\left(\frac{N_{s,s}(g,h)N_{s-1,s-1}(g,h)}{N_{s-1,s}(g,h)N_{s,s-1}(g,h)}\right)
\end{equation*}
where
\begin{equation}
N_{s,t}(g,h)=\frac{X_{s,t}(g,h)}{1+\sqrt{g\, h}\, R_s(g)\, R_t(h)\, X_{s,t}(g,h)}
\label{eq:NX}
\end{equation}
enumerates l.c.\ with none of their spine edges having labels $0\, \rule[1.5pt]{8.pt}{1.pt}\, 0$.
For $\delta(v_1,v_2)$ odd, the generating function $F^{\rm{odd}}(g,h)$ reads (see again \cite{FG14b})
\begin{equation*}
F^{\rm{odd}}(g,h)=\sum_{s\geq 1} \Delta_s\Delta_t \log\left(\frac{X_{s,t}(g,h)}{N_{s,t}(g,h)}\right)\Big\vert_{t=s} =F(g,h)-F^{\rm{even}}(g,h)
\end{equation*}
as it should.
We may estimate the singularity $F^{\rm{even}}(g,h)$ from the scaling function
associated with $N_{s,t}(g,h)$ and from its value at $g=h=1/12$.
It is easily checked from its expression \eqref{eq:NX} that
\begin{equation*}
N_{\left\lfloor S/\epsilon\right\rfloor,\left\lfloor T/\epsilon\right\rfloor}(g,h) =\frac{3}{2}+\frac{1}{4} x(S,T,a,b)\ \epsilon+O(\epsilon^2)
\end{equation*}
and, by the same arguments as for quadrangulations,
\begin{equation*}
\begin{split}
F^{\rm{even}}(g,h)
& =\epsilon^2 \int_{\epsilon}^{\infty} dS\, \frac{1}{6} \partial_S\partial_T x(S,T,a,b)\Big\vert_{T=S} \\
&+\sum_{s=1}^{\left\lfloor S_0/\epsilon\right\rfloor-1} \log \left(\frac{(2s+1)^3 (2 s+3)}{(2 s+2)^3 \, 2s}\right)-\epsilon^2 \int_\epsilon^{S_0}dS\, \frac{1}{4S^3}+O(\epsilon^3)\ .
\\
\end{split}
\end{equation*}
This yields the expansion:
\begin{equation*}
\begin{split}
F^{\rm{even}}(g,h)
= \log \left(\frac{32}{3\pi^2}\right) -\frac{1}{36}\, \frac{(a^6-b^6)}{(a^4-b^4)}\, \epsilon^2+O(\epsilon ^3)\ .\\
\end{split}
\end{equation*}
with, as expected, the same singularity as $F(g,h)$ up to a factor $1/2$ since the number of bi-pointed general maps with $\delta(v_1,v_2)$ even is (asymptotically) half
the number of bi-pointed quadrangulations with $d(v_1,v_2)$ even.
Again, for the restricted ensemble of bi-pointed general planar maps whose marked vertices are at even distance from each other, the law for the ratio $\phi=n/N$ of the area $n$ of one of the two Vorono\"\i\ cells by the total area $N$ is, for large $N$, uniform between $0$ and $1$. The same is obviously true if we condition the distance to be odd since
\begin{equation*}
\begin{split}
F^{\rm{odd}}(g,h)
= \log \left(\frac{\pi^2}{8}\right) -\frac{1}{36}\, \frac{(a^6-b^6)}{(a^4-b^4)}\, \epsilon^2+O(\epsilon ^3)\ .\\
\end{split}
\end{equation*}
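As a simple consistency check (a remark added here), the constant terms of the two expansions add up to
\begin{equation*}
\log \left(\frac{32}{3\pi^2}\right)+\log \left(\frac{\pi^2}{8}\right)=\log\left(\frac{4}{3}\right)\ ,
\end{equation*}
while the singular terms add up to $-\frac{1}{18}\, \frac{(a^6-b^6)}{(a^4-b^4)}\, \epsilon^2$, in agreement with the identity $F^{\rm{even}}(g,h)+F^{\rm{odd}}(g,h)=F(g,h)$.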
\section{Conclusion}
\label{sec:conclusion}
In this paper, we computed the law for the ratio $\phi=n/N$ of the area $n$ ($=$ number of faces) of one of the two Vorono\"\i\ cells by the total area $N$ for random planar quadrangulations
with a large area $N$ and two randomly chosen marked distinct vertices at even distance from each other. We found that this law is uniform between $0$ and $1$,
which corroborates Chapuy's conjecture. We then extended this result to the law for the ratio $\phi=n/N$ of the area $n$ ($=$ number of edges) of one of the two Vorono\"\i\ cells by the total area $N$ for random general planar maps with a large area $N$ and two randomly chosen marked distinct vertices at arbitrary distance from each other.
We again found that this law is uniform between $0$ and $1$.
Our calculation is based on an estimation of the singularity of the appropriate generating function keeping a control on the area of the Vorono\"\i\ cells,
itself based on an estimation of the singularity of some particular generating function $X_{s,t}(g,h)$ for labelled chains. Clearly, a challenging problem
would be to find an exact expression for $X_{s,t}(g,h)$ as it would certainly greatly simplify our derivation.
Chapuy's conjecture extends to an arbitrary number of Vorono\"\i\ cells in a map of arbitrary fixed genus. It seems possible to test it by our method for
some slightly more involved cases than the one discussed here, say with three Vorono\"\i\ cells in the planar case or for two Vorono\"\i\ cells in maps with genus $1$. An important
step toward this calculation would be to estimate the singularity of yet another generating function, $Y_{s,t,u}(g,h,k)$ enumerating labelled trees
with three non-aligned marked vertices and a number of label constraints\footnote{See \cite{BG08} for a precise list of label constraints.
There $Y_{s,t,u}(g,h,k)$ is defined when $g=h=k$ but the label constraints are independent of the weights.} involving subtrees divided into three
subsets with edge weights $g$, $h$, and $k$ respectively. Indeed, applying the Miermont bijection to maps with more points or for higher genus creates
labelled maps whose ``skeleton" (i.e.\ the frontier between faces) is no longer a single loop but has branching points enumerated by $Y_{s,t,u}(g,h,k)$.
This study will definitely require more effort.
Finally, in view of the simplicity of the conjectured law, one may want to find a general argument which makes no use of any precise enumeration result
but relies only on bijective constructions and/or symmetry considerations.
\section*{Acknowledgements}
I thank Guillaume Chapuy for bringing to my attention his nice conjecture and Timothy Budd for clarifying discussions. I also acknowledge the support of the grant ANR-14-CE25-0014 (ANR GRAAL).
\newpage
\appendix
\section{Expression for the scaling function $x(S,T,a,b)$}
The scaling function $x(S,T,a,b)$, determined by the partial differential equation
\begin{equation*}
2 \big(x(S,T,a,b)\big)^2+6 \big(\partial_Sx(S,T,a,b)+\partial_Tx(S,T,a,b)\big)+27 \big(r(S,a)+r(T,b)\big)=0
\end{equation*}
(with $r(S,a)$ as in \eqref{eq:expRg}) and by the small $S$ and $T$ behavior \eqref{eq:smalST} is given by
\begin{equation*}
x(S,T,a,b)=-3\sqrt{\frac{a^2+b^2}{2}}-\frac{\mathfrak{N}(e^{-a\, S},e^{-b\, T},a,b)}{(1-e^{-a\, S} ) (1-e^{-b\, T}) \mathfrak{D}(e^{-a\, S},e^{-b\, T},a,b)}\ ,
\end{equation*}
where the polynomials
\begin{equation*}
\mathfrak{N}(\sigma,\tau,a,b)=\sum_{i=0}^{3}\sum_{j=0}^3 n_{i,j}\, \sigma^i\tau^j \quad \hbox{and}\quad
\mathfrak{D}(\sigma,\tau,a,b) =\sum_{i=0}^{2}\sum_{j=0}^2 d_{i,j}\, \sigma^i\tau^j
\end{equation*}
have the following coefficients $n_{i,j}\equiv n_{i,j}(a,b)$ and $d_{i,j}\equiv d_{i,j}(a,b)$:
writing for convenience these coefficients in the form
\begin{equation*}
n_{i,j}=n_{i,j}^{(0)}+\sqrt{\frac{a^2+b^2}{2}}n_{i,j}^{(1)}\ , \qquad
d_{i,j}=d_{i,j}^{(0)}+\sqrt{\frac{a^2+b^2}{2}}d_{i,j}^{(1)}\ ,
\end{equation*}
we have
\begin{equation*}
\begin{split}
n_{0,0}^{(0)}&= n_{0,3}^{(0)} = n_{3,0}^{(0)} = n_{3,3}^{(0)}= 0\\
n_{0,1}^{(0)}&= -\frac{18 b^3}{2 a^2+b^2} \qquad n_{0,2}^{(0)}= \frac{18 b^3 \left(5 a^2 +7 b^2\right)}{(a-b) (a+b) \left(2 a^2+b^2\right)}\\
n_{1,0}^{(0)}&= -\frac{18 a^3}{a^2+2 b^2} \qquad n_{2,0}^{(0)}= -\frac{18 a^3 \left(7 a^2+5 b^2\right)}{(a-b) (a+b) \left(a^2+2 b^2\right)}\\
n_{1,1}^{(0)}&= \frac{54 \left(2 a^7+17 a^5 b^2+17 a^4 b^3+17 a^3 b^4+17 a^2 b^5+2 b^7\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{1,2}^{(0)}&=-\frac{54 \left(2 a^7+8 a^6 b+27 a^5 b^2+47 a^4 b^3+47 a^3 b^4+51 a^2 b^5+20 a b^6+14 b^7\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{1,3}^{(0)}&= \frac{18 a^2 \left(2 a^5+12 a^4 b+17 a^3 b^2+36 a^2 b^3+17 a b^4+24 b^5\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{2,1}^{(0)}&= \frac{54 \left(14 a^7+20 a^6 b+51 a^5 b^2+47 a^4 b^3+47 a^3 b^4+27 a^2 b^5+8 a b^6+2 b^7\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{2,2}^{(0)}&= -\frac{54 \left(14 a^7+12 a^6 b+41 a^5 b^2+41 a^4 b^3+41 a^3 b^4+41 a^2 b^5+12 a b^6+14 b^7\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{2,3}^{(0)}&= \frac{18 a^2 \left(14 a^5+32 a^4 b+51 a^3 b^2+58 a^2 b^3+37 a b^4+24 b^5\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{3,1}^{(0)}&= -\frac{18 b^2 \left(24 a^5+17 a^4 b+36 a^3 b^2+17 a^2 b^3+12 a b^4+2 b^5\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{3,2}^{(0)}&= \frac{18 b^2 \left(24 a^5+37 a^4 b+58 a^3 b^2+51 a^2 b^3+32 a b^4+14 b^5\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
n_{0,0}^{(1)}&= n_{0,3}^{(1)}= n_{3,0}^{(1)}= n_{3,3}^{(1)}= 0\\
n_{0,1}^{(1)}&= \frac{36 b^2}{2 a^2+b^2}\qquad n_{0,2}^{(1)}= -\frac{36 b^2 \left(a^2+5 b^2\right)}{(a-b) (a+b) \left(2 a^2+b^2\right)}\\
n_{1,0}^{(1)}&=\frac{36 a^2}{a^2+2 b^2} \qquad n_{2,0}^{(1)}= \frac{36 a^2 \left(5 a^2+b^2\right)}{(a-b) (a+b) \left(a^2+2 b^2\right)}\\
n_{1,1}^{(1)}&= -\frac{216 \left(a^6-a^5 b+8 a^4 b^2+2 a^3 b^3+8 a^2 b^4-a b^5+b^6\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{1,2}^{(1)}&=\frac{216 \left(a^2+a b+b^2\right) \left(a^4+a^3 b+9 a^2 b^2+2 a b^3+5 b^4\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{1,3}^{(1)}&= -\frac{36 a^2 \left(2 a^4+6 a^3 b+17 a^2 b^2+12 a b^3+17 b^4\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{2,1}^{(1)}&= -\frac{216 \left(a^2+a b+b^2\right) \left(5 a^4+2 a^3 b+9 a^2 b^2+a b^3+b^4\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{2,2}^{(1)}&=\frac{216 \left(5 a^6+4 a^5 b+13 a^4 b^2+10 a^3 b^3+13 a^2 b^4+4 a b^5+5 b^6\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)} \\
n_{2,3}^{(1)}&=-\frac{36 a^2 \left(10 a^4+22 a^3 b+33 a^2 b^2+26 a b^3+17 b^4\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{3,1}^{(1)}&= \frac{36 b^2 \left(17 a^4+12 a^3 b+17 a^2 b^2+6 a b^3+2 b^4\right)}{(a-b) (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
n_{3,2}^{(1)}&= -\frac{36 b^2 \left(17 a^4+26 a^3 b+33 a^2 b^2+22 a b^3+10 b^4\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\ ,\\
\end{split}
\end{equation*}
while
\begin{equation*}
\begin{split}
d_{0,0}^{(0)}&=1\\
d_{0,1}^{(0)}&= -\frac{4 \left(a^2+2 b^2\right)}{2 a^2+b^2}\\
d_{0,2}^{(0)}&= \frac{2 a^4+17 a^2 b^2+17 b^4}{(a-b) (a+b) \left(2 a^2+b^2\right)}\\
d_{1,0}^{(0)}&= -\frac{4 \left(2 a^2+b^2\right)}{a^2+2 b^2}\\
d_{1,1}^{(0)}&=\frac{8 \left(4 a^2+a b+4 b^2\right) \left(a^4+7 a^2 b^2+b^4\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
d_{1,2}^{(0)}&=-\frac{4 \left(4 a^5+14 a^4 b+22 a^3 b^2+32 a^2 b^3+19 a b^4+17 b^5\right)}{(a-b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
d_{2,0}^{(0)}&= -\frac{17 a^4+17 a^2 b^2+2 b^4}{(a-b) (a+b) \left(a^2+2 b^2\right)}\\
d_{2,1}^{(0)}&=\frac{4 \left(17 a^5+19 a^4 b+32 a^3 b^2+22 a^2 b^3+14 a b^4+4 b^5\right)}{(a-b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
d_{2,2}^{(0)}&=-\frac{34 a^6+76 a^5 b+137 a^4 b^2+154 a^3 b^3+137 a^2 b^4+76 a b^5+34 b^6}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)} \\
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
d_{0,0}^{(1)}&=0\\
d_{0,1}^{(1)}&= \frac{12 b}{2 a^2+b^2}\\
d_{0,2}^{(1)}&=-\frac{12 b \left(a^2+2 b^2\right)}{(a-b) (a+b) \left(2 a^2+b^2\right)} \\
d_{1,0}^{(1)}&= \frac{12 a}{a^2+2 b^2}\\
d_{1,1}^{(1)}&=-\frac{48 \left(a^2+a b+b^2\right) \left(a^4+7 a^2 b^2+b^4\right)}{(a-b)^2 (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
d_{1,2}^{(1)}&=\frac{12 \left(2 a^4+6 a^3 b+11 a^2 b^2+9 a b^3+8 b^4\right)}{(a-b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
d_{2,0}^{(1)}&= \frac{12 a \left(2 a^2+b^2\right)}{(a-b) (a+b) \left(a^2+2 b^2\right)}\\
d_{2,1}^{(1)}&=-\frac{12 \left(8 a^4+9 a^3 b+11 a^2 b^2+6 a b^3+2 b^4\right)}{(a-b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
d_{2,2}^{(1)}&= \frac{12 (a+b) \left(a^2+a b+b^2\right) \left(4 a^2+a b+4 b^2\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\ .\\
\end{split}
\end{equation*}
It is easily verified that, for $b\to a$, $x(S,T,a,b)$ tends to $x(S,T,a)$ given by \eqref{eq:xa}, as expected.
\section{Expression for the primitive $\mathfrak{K}(\sigma,a,b)$}
Taking $\mathfrak{K}(\sigma,a,b)$ in the form
\begin{equation*}
\mathfrak{K}(\sigma,a,b)=\frac{a\, b\, \sigma\, \tau\, \mathfrak{H}(\sigma,\tau,a,b)}{\left(\mathfrak{D}(\sigma,\tau,a,b)\right)^2}\Bigg\vert_{\tau=\sigma^{b/a}}
\end{equation*}
with the same function $\mathfrak{D}(\sigma,\tau,a,b)$ as in Appendix~A and where $\mathfrak{H}(\sigma,\tau,a,b)$ is a polynomial of the form
\begin{equation*}
\mathfrak{H}(\sigma,\tau,a,b)=\sum_{i=0}^{2}\sum_{j=0}^2 h_{i,j}\, \sigma^i\tau^j\ ,
\end{equation*}
the desired condition \eqref{eq:K} is fulfilled if
\begin{equation*}
\frac{1}{3}\partial_\sigma\partial_\tau \mathfrak{X}(\sigma,\tau,a,b)= \left\{(a+b)+a\, \sigma\, \partial_\sigma +b\, \tau\, \partial_\tau \right\}\frac{\mathfrak{H}(\sigma,\tau,a,b)}{\left(\mathfrak{D}(\sigma,\tau,a,b)\right)^2}\ .
\end{equation*}
This fixes the coefficients $h_{i,j}\equiv h_{i,j}(a,b)$, namely:
\begin{equation*}
h_{i,j}=h_{i,j}^{(0)}+\sqrt{\frac{a^2+b^2}{2}}h_{i,j}^{(1)}
\end{equation*}
with
\begin{equation*}
\begin{split}
\hskip -1.2cm h_{0,1}^{(0)}&=
h_{1,0}^{(0)}=
h_{1,1}^{(0)}=
h_{1,2}^{(0)}=
h_{2,1}^{(0)}= 0\\
\hskip -1.2cm h_{0,0}^{(0)}&= -\frac{72 a^2 b^2 \left(4 a^2+a b+4 b^2\right)}{(a-b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
\hskip -1.2cm h_{0,2}^{(0)}&= \frac{72 a^2 b^2 \left(8 a^7+46 a^6 b+114 a^5 b^2+237 a^4 b^3+261 a^3 b^4+333 a^2 b^5+157 a b^6+140 b^7\right)}{(a-b)^3 (a+b)^2 \left(2 a^2+b^2\right)^2 \left(a^2+2 b^2\right)}\\
\hskip -1.2cm h_{2,0}^{(0)}&= -\frac{72 a^2 b^2 \left(140 a^7+157 a^6 b+333 a^5 b^2+261 a^4 b^3+237 a^3 b^4+114 a^2 b^5+46 a b^6+8 b^7\right)}{(a-b)^3 (a+b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)^2}\\
\hskip -1.2cm h_{2,2}^{(0)}&= \frac{72 a^2 b^2 \left(4 a^2+a b+4 b^2\right) \left(70 a^6+148 a^5 b+281 a^4 b^2+298 a^3 b^3+281 a^2 b^4+148 a b^5+70 b^6\right)}{(a-b)^4 \left(2 a^2+b^2\right)^2 \left(a^2+2
b^2\right)^2}\\
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\hskip -1.2cm h_{0,1}^{(1)}&=
h_{1,0}^{(1)}=
h_{1,1}^{(1)}=
h_{1,2}^{(1)}=
h_{2,1}^{(1)}=0 \\
\hskip -1.2cm h_{0,0}^{(1)}&=\frac{432 a^2 b^2 \left(a^2+a b+b^2\right)}{(a-b)^2 (a+b) \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)}\\
\hskip -1.2cm h_{0,2}^{(1)}&=-\frac{432 a^2 b^2 \left(2 a^6+10 a^5 b+29 a^4 b^2+43 a^3 b^3+62 a^2 b^4+37 a b^5+33 b^6\right)}{(a-b)^3 (a+b)^2 \left(2 a^2+b^2\right)^2 \left(a^2+2 b^2\right)}\\
\hskip -1.2cm h_{2,0}^{(1)}&=\frac{432 a^2 b^2 \left(33 a^6+37 a^5 b+62 a^4 b^2+43 a^3 b^3+29 a^2 b^4+10 a b^5+2 b^6\right)}{(a-b)^3 (a+b)^2 \left(2 a^2+b^2\right) \left(a^2+2 b^2\right)^2}\\
\hskip -1.2cm h_{2,2}^{(1)}&=-\frac{1296 a^2 b^2 \left(a^2+a b+b^2\right) \left(22 a^6+52 a^5 b+89 a^4 b^2+106 a^3 b^3+89 a^2 b^4+52 a b^5+22 b^6\right)}{(a-b)^4 (a+b) \left(2 a^2+b^2\right)^2 \left(a^2+2
b^2\right)^2}\ .\\
\end{split}
\end{equation*}
\section{Contour integral over $a$}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{contourg.pdf}
\end{center}
\caption{Deformation of the contour for the integral over $g$.}
\label{fig:contourg}
\end{figure}
Given $\mu\geq 0$, the integral over $g$
\begin{equation*}
\frac{1}{2\rm{i}\pi} \oint\frac{dg}{g^{N+1}} F(g,g\, e^{\frac{\mu}{N}})
\end{equation*}
is on a contour around $0$. Here $F(g,g\, e^{\frac{\mu}{N}})$ has a singularity for real $g>(1/12) e^{-\frac{\mu}{N}}$ and the contour may be deformed as in Figure~\ref{fig:contourg}.
For large $N$, the dominant contribution comes from the vicinity of the cut and is captured by setting
\begin{figure}
\begin{center}
\includegraphics[width=11cm]{contoura.pdf}
\end{center}
\caption{The contour in the variable $a^4$ resulting from the large $N$ limit of the integral over $g$ along the contour of Figure~\ref{fig:contourg}. The resulting
contour $\mathcal{C}=\mathcal{C}_1\cup \mathcal{C}_2$ in the variable $a$.}
\label{fig:contoura}
\end{figure}
\begin{equation*}
g=\frac{1}{12}\left(1-\frac{a^4}{36}\, \frac{1}{N}\right)
\end{equation*}
where the variable $a^4$ varies along the cut from $-\infty$ to $36 \mu$ back to $-\infty$. In other words, the contour $\mathcal{C}$ for the variable $a$ is that of Figure~\ref{fig:contoura},
made of two parts: a contour $\mathcal{C}_1$ made of two half straight lines at $\pm 45^\circ$ starting from the origin, and a contour $\mathcal{C}_2$ consisting of a back and
forth excursion from $0$ to $(36\mu)^{1/4}$ back to $0$. In the variable $a$, the integral reads
\begin{equation*}
\frac{1}{2\rm{i}\pi} \int_\mathcal{C} da\, \frac{-a^3}{9} \left\{ -\frac{1}{18} \frac{a^6-(a^4-36\mu)^{3/2}}{36\mu}\, e^{a^4/36} \right\}\ .
\end{equation*}
Concerning the contour $\mathcal{C}_2$, the term $a^6$ has no cut hence contributes $0$ to the integral. As for the $(a^4-36\mu)^{3/2}$ term, setting
$a=\sqrt{6}(\mu-t^2)^{1/4}$ with real $t$ from $\sqrt{\mu}$ to $0$ back to $\sqrt{\mu}$, we have
\begin{equation*}
\begin{split}
\frac{1}{2\rm{i}\pi} \int_{\mathcal{C}_2} da\, \frac{-a^3}{9} \left\{\frac{1}{18} \frac{(a^4-36\mu)^{3/2}}{36\mu}\, e^{a^4/36} \right\}&=
\frac{1}{2\rm{i}\pi} \left\{\int_{\sqrt{\mu}}^0 dt\, 2t\, \frac{(-t^2)^{3/2}}{3\mu}\, e^{\mu-t^2} \right.
\\&\qquad \left.+\int_0^{\sqrt{\mu}} dt\, 2t\, \frac{(-t^2)^{3/2}}{3\mu}\, e^{\mu-t^2} \right\}\\
\end{split}
\end{equation*}
where $(-t^2)^{3/2}=-{\rm{i}}\, t^3$ for the first integral and $(-t^2)^{3/2}={\rm{i}}\, t^3$ for the second, so that the final contribution of the contour $\mathcal{C}_2$
is
\begin{equation*}
\frac{2}{3 \pi} \frac{e^{\mu}}{\mu} \int_0^{\sqrt{\mu}} dt\, t^4\, e^{-t^2}\ .
\end{equation*}
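As an added check of the substitution (a detail not spelled out in the text): with $a=\sqrt{6}\, (\mu-t^2)^{1/4}$ one has $a^4=36(\mu-t^2)$, hence $(a^4-36\mu)^{3/2}=216\, (-t^2)^{3/2}$ and
\begin{equation*}
-\frac{a^3}{9}\, da=-\frac{1}{36}\, d(a^4)=2t\, dt\ ,\qquad e^{a^4/36}=e^{\mu-t^2}\ ,
\end{equation*}
which reproduces the integrand above.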
Let us now come to the integral over the contour $\mathcal{C}_1$.
The term $a^6$ now contributes to the integral: setting $a=\sqrt{6}\, e^{\pm {\rm i} \frac{\pi}{4}} \sqrt{t}$ with real $t$ from $+\infty$ to $0$ (respectively $0$ to $+\infty$),
we get a contribution
\begin{equation*}
\begin{split}
\frac{1}{2\rm{i}\pi} \int_{\mathcal{C}_1} da\, \frac{-a^3}{9} \left\{-\frac{1}{18}\ \frac{a^6}{36\mu}\, e^{a^4/36} \right\}&=
\frac{1}{2\rm{i}\pi} \left\{{\rm{i}}\int_{+\infty}^0 dt\, 2t\, \frac{t^3}{3\mu}\, e^{-t^2} \right.
\\& \qquad \qquad \left.-{\rm{i}} \int_0^{+\infty} dt\, 2t\, \frac{t^3}{3\mu}\, e^{-t^2} \right\}\\
&= -\frac{2}{3 \pi} \frac{1}{\mu} \int_0^{\infty} dt\, t^4\, e^{-t^2}\ .
\end{split}
\end{equation*}
Finally the $(a^4-36\mu)^{3/2}$ contribution is obtained by setting
$a=\sqrt{6}\, e^{\pm {\rm i} \frac{\pi}{4}} (t^2-\mu)^{1/4}$ with real $t$ from $+\infty$ to $\sqrt{\mu}$ (respectively $\sqrt{\mu}$ to $+\infty$). We get
\begin{equation*}
\begin{split}
\frac{1}{2\rm{i}\pi} \int_{\mathcal{C}_1} da\, \frac{-a^3}{9} \left\{\frac{1}{18} \frac{(a^4-36\mu)^{3/2}}{36\mu}\, e^{a^4/36} \right\}&=
\frac{1}{2\rm{i}\pi} \left\{\int_{+\infty}^{\sqrt{\mu}} dt\, 2t\, \frac{(-t^2)^{3/2}}{3\mu}\, e^{\mu-t^2} \right.
\\&\qquad \left.+\int_{\sqrt{\mu}}^{+\infty} dt\, 2t\, \frac{(-t^2)^{3/2}}{3\mu}\, e^{\mu-t^2} \right\}\\
\end{split}
\end{equation*}
where again $(-t^2)^{3/2}=-{\rm{i}}\, t^3$ for the first integral and $(-t^2)^{3/2}={\rm{i}}\, t^3$ for the second, so that the final contribution reads
\begin{equation*}
\frac{2}{3 \pi} \frac{e^{\mu}}{\mu} \int_{\sqrt{\mu}}^\infty dt\, t^4\, e^{-t^2}\ .
\end{equation*}
Adding up all the contributions, we end up with the result:
\begin{equation*}
\begin{split}
\hskip -1.2cm \frac{2}{3 \pi} \frac{e^{\mu}}{\mu} \int_0^{\sqrt{\mu}} dt\, t^4\, e^{-t^2} -\frac{2}{3 \pi} \frac{1}{\mu} \int_0^{\infty} dt\, t^4\, e^{-t^2}
+\frac{2}{3 \pi} \frac{e^{\mu}}{\mu} \int_{\sqrt{\mu}}^\infty dt\, t^4\, e^{-t^2}& = \frac{e^\mu -1}{\mu} \frac{2}{3\pi} \int_0^{\infty} dt\, t^4\, e^{-t^2}\\
& = \frac{e^\mu -1}{\mu} \times \frac{1}{4\, \sqrt{\pi}}\ .\\
\end{split}
\end{equation*}
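For completeness (this last step is not spelled out in the text), the final equality uses the standard Gaussian moment
\begin{equation*}
\int_0^{\infty} dt\, t^4\, e^{-t^2}=\frac{1}{2}\, \Gamma\!\left(\frac{5}{2}\right)=\frac{3\sqrt{\pi}}{8}\ ,
\qquad \hbox{so that}\qquad \frac{2}{3\pi}\times \frac{3\sqrt{\pi}}{8}=\frac{1}{4\, \sqrt{\pi}}\ .
\end{equation*}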
\bibliographystyle{plain}
\bibliography{voronoi}
\end{document} | {"config": "arxiv", "file": "1703.02781/voronoi.tex"} |
TITLE: Why is $(\forall x \in \mathbb{Z})(\exists y \in \mathbb{Z}) (2x + y = 3 \to x + 2y = 3)$ true?
QUESTION [2 upvotes]: Is the statement true or false? $$(\forall x \in \mathbb{Z})(\exists y \in \mathbb{Z}) (2x + y = 3 \to x + 2y = 3)$$ The answer is that the entire statement is true, but I do not see why.
The first part of the equation, $2x + y = 3$, would be true since for every $x$ there is at least one $y$ that satisfies the equation. The right side is false, since it is not the case that for every $x$ there is at least one $y$ that satisfies the equation. But this would make the conditional statement false, since the left is true and the right is false?
REPLY [0 votes]: $A\to B$ is equivalent to $((\neg A)\lor B).$ So if $A$ is false then $A\to B$ is true. So $(\neg A) \implies (\;(\neg A)\land((\neg A)\lor B)\;)\implies (A\to B).$
So if $A$ is $2x+y=3,$ and $B$ is anything, then $\forall x\in \Bbb Z\;\exists y\in \Bbb Z\;(A\to B)$ is true if $\forall x\in \Bbb Z\;\exists y\in \Bbb Z\;(\neg A)$ is true.
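Concretely (an illustrative witness added here, not part of the original answer): given any $x\in \Bbb Z$, take $y=4-2x$. Then
$$2x+y=2x+(4-2x)=4\neq 3,$$
so $\neg A$ holds and $A\to B$ is vacuously true, whatever $B$ says.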
In detail $$\forall x\in \Bbb Z\; \exists y\in \Bbb Z\;(y\ne 3-2x)\implies$$ $$\implies \forall x\in \Bbb Z\;\exists y\in \Bbb Z\;(\neg A)\implies$$ $$\implies \forall x\in \Bbb Z\;\exists y\in \Bbb Z\;(\neg A\land (A\to B))\implies$$ $$\implies \forall x\in \Bbb Z\;\exists y\in \Bbb Z\;(A\to B).$$ | {"set_name": "stack_exchange", "score": 2, "question_id": 2906826}
TITLE: Find functions $f:\mathbb{N}\rightarrow \mathbb{N}$ such that $ f(m+n)=f(m)+f(n)+2mn$
QUESTION [5 upvotes]: Find functions $f:\mathbb{N}\rightarrow \mathbb{N}$ such that $ f(m+n)=f(m)+f(n)+2mn$
Setting $m=n$ we see that:
$$ f(2n)=2f(n)+2n^2$$
Setting $m=1$:
$$ f(n+1)=f(1)+f(n)+2n$$
Then, for example when $n=1$:
$$ f(2)=f(1)+f(1)+2=2f(1)+2$$
And we can observe that $f(n)=n^2$ satisfies the equations, as:
$$ f(m+n)=(m+n)^2=m^2+n^2+2mn=f(m)+f(n)+2mn$$
Then, if we assume that $f(n)=n^2+g(n)$, then we have:
$$ f(m+n)=(m+n)^2+g(m+n)=m^2+g(m)+n^2+g(n)+2mn $$
and so:
$$ g(m+n)=g(m)+g(n)$$
And that is the Cauchy functional equation for natural numbers; over $\mathbb{N}$ an easy induction gives $g(n)=n\,g(1)$, so we see that we can only have $$g(n)=cn$$ with $c=g(1)$, and our solution would look like:
$$ f(n)=n^2+cn$$
And in this case:
$$ f(m+n)=(m+n)^2+c(m+n)$$
and $$f(m)+f(n)+2mn=m^2+cm+n^2+cn+2mn$$
so we see that $f(n)=n^2+cn$ is a solution. My question is:
is my solution correct? Does it suffice to do what I did to show that's the only one such function?
REPLY [1 votes]: Looks correct to me,
with the addition of
$f(1)$ being arbitrary
and
$f(n) = n^2+(f(1)-1)n$. | {"set_name": "stack_exchange", "score": 5, "question_id": 2183340} |
TITLE: $f \in L^1(\mathbb R), f>0$ then $|\hat f(y)| < \hat f(0), y \ne 0$
QUESTION [3 upvotes]: Suppose $f$ is a strictly positive function in $L^1(\mathbb R)$. Show
$$
|\hat f(y)| < \hat f(0) \text{, for all } y \ne 0.
$$
Using monotonicity of the integral, I can show $|\hat f(y)| \le \hat f(0)$.
I don't see how to make the inequality strict.
REPLY [3 votes]: If $\widehat f(0)=|\widehat f(y)|$ for some $y\neq 0$, let $\theta$ be such that $|\widehat f(y)|=e^{i\theta}\widehat f(y)$. Then
$$\int_{\Bbb R}f(t)dt=\int_{\Bbb R}f(t)e^{i(ty+\theta})dt,$$
which gives
$$\int_{\Bbb R}f(t)(1-e^{i(ty+\theta)})dt=0=\int_{\Bbb R}f(t)(1-\cos(ty+\theta))-i\int_{\Bbb R}f(t)\sin(ty+\theta)dt.$$
In particular
$$\int_{\Bbb R}\underbrace{f(t)(1-\cos(ty+\theta))}_{\geq 0}dt=0,$$
so $f(t)(1-\cos(ty+\theta))=0$ almost everywhere. As $f>0$ almost everywhere, this forces $\cos(ty+\theta)=1$, i.e. $ty+\theta\in 2\pi\Bbb Z$, for almost every $t$; but for $y\neq 0$ the set of such $t$ is countable, hence of measure zero, and we get a contradiction. | {"set_name": "stack_exchange", "score": 3, "question_id": 218671}
TITLE: Understanding units mod $n$ are relatively prime to $n$
QUESTION [1 upvotes]: I am trying to prove that the only invertible elements in $\mathbb{Z}_n$ are those that are relatively prime to $n$.
The first half is straightforward. If $i$ is relatively prime to $n$, so $\gcd(i,n) = 1$, we have $\alpha i + \beta n = 1$, so $\alpha i - 1 = (-\beta)n$, so $\alpha i \equiv 1 \text{ mod $n$}$, and $\alpha$ is the inverse of $i$.
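As a side note (an illustration added here, not part of the original question), this constructive direction can be checked computationally: the extended Euclidean algorithm produces the coefficient $\alpha$. A minimal Python sketch, with function names of my own choosing:

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # gcd(a, b) == gcd(b, a % b); back-substitute the Bezout coefficients.
    return g, y, x - (a // b) * y

def inverse_mod(i, n):
    """Inverse of i modulo n; it exists exactly when gcd(i, n) == 1."""
    g, alpha, _ = extended_gcd(i, n)
    if g != 1:
        raise ValueError("i is not invertible mod n")
    return alpha % n

# Example: gcd(7, 12) = 1 and 7 * 7 = 49 = 4*12 + 1, so 7 is its own inverse mod 12.
assert (inverse_mod(7, 12) * 7) % 12 == 1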
The second half is less straightforward. I am trying to follow some proofs of this fact that I have found, but none of them are clear about the important step. Suppose $a$ is invertible. Then there exists $b$ such that $ab \equiv 1 \text{ mod $n$}$. So $ab - 1 = kn$ for some $k \in \mathbb{Z}$, so $ab = 1 + kn$. The claim at this point is that we can deduce immediately that $\gcd(a,n) = 1$. To show this, I believe we would need to show that $1$ divides both $a$ and $n$ (which is rather trivially true) and that if any other integer divides both $a$ and $n$, then it must divide $1$. Suppose $\gamma$ divides $a$ and $n$. That it divides $a$ implies that it divides $ab$, so it must divide $1 + kn$. It certainly divides $kn$ because it divides $n$, so it must also divide $1$, which gives the result.
How does this proof sound?
REPLY [1 votes]: Your proof is good. We can tie this all together:
$m$ is invertible $\mod n$ if and only if there is an integer $k$ so that $km \equiv 1\pmod n$ (Def)
if and only if $km -1$ is divisible by $n$ (Def)
if and only if there is an integer $j$ so that $jn = km-1$ (Def)
if and only if $km +(-j)n = 1$ (Obvious algebraic manipulation).
....
So we need to prove such $k,-j$ exist iff and only if $m,n$ are relatively prime.
....
If $m$ and $n$ are relatively prime, then Bezout's lemma states precisely that such $k$ and $-j$ exist
And if such $k,-j$ exist then the $\gcd(m,n)$ divides the LHS so $\gcd(m,n)|1$ and so $\gcd(m,n) =1$. So $m$ and $n$ are relatively prime. | {"set_name": "stack_exchange", "score": 1, "question_id": 3708637} |
\begin{document}
\title{On the generalized Helmholtz conditions for Lagrangian systems with dissipative forces}
\author{M.\ Crampin, T.\ Mestdag and W.\ Sarlet\\[2mm]
{\small Department of Mathematics, Ghent University}\\
{\small Krijgslaan 281, B-9000 Ghent, Belgium}}
\date{}
\maketitle
\begin{quote}
{\small {\bf Abstract.}
In two recent papers necessary and sufficient conditions for a given
system of second-order ordinary differential equations to be of
Lagrangian form with additional dissipative forces were derived. We
point out that these conditions are not independent and prove a stronger result
accordingly.\\
{\bf Keywords.} Lagrangian systems, dissipative forces, inverse
problem, Helmholtz conditions.\\[1mm]
{\bf MSC (2000).} 70H03, 70F17, 49N45}
\end{quote}
\section{Introduction}
The Helmholtz conditions, for the purposes of this paper, are the
necessary and sufficient conditions for a given system of
second-order ordinary differential equations
$f_a(\ddot{q},\dot{q},q,t)=0$ to be of Euler-Lagrange type, that is,
for there to exist a Lagrangian $\Lambda(\dot{q},q,t)$ such that
\begin{equation}
f_a=\frac{d}{dt}\left(\fpd{\Lambda}{\dot{q}^a}\right)-
\fpd{\Lambda}{q^a}.\label{EL}
\end{equation}
Here $q^a$ are the generalized coordinates (collectively abbreviated
to $q$), $\dot{q}^a$ the corresponding generalized velocities, and so
on. We shall state the conditions shortly. The
Lagrangian is supposed to be of first order, that is,
independent of $\ddot{q}$ and higher-order derivative (or more
properly jet) coordinates.
In two recent papers the problem of finding analogous necessary and
sufficient conditions for a given set of functions $f_a(\ddot{q},\dot{q},q,t)$ to take the
more general form
\begin{equation}
f_a=\frac{d}{dt}\left(\fpd{\Lambda}{\dot{q}^a}\right)-
\fpd{\Lambda}{q^a}+\fpd{D}{\dot{q}^a}\label{diss}
\end{equation}
for first-order functions $\Lambda$ (a Lagrangian) and $D$ (a
dissipation function) has been discussed. We shall say that in this
case the equations $f_a=0$ are of Lagrangian form with dissipative
forces of gradient type. A set of necessary
and sufficient conditions is given, in terms of standard coordinates,
in the fairly recent paper \cite{germ1}. In a very recent paper
\cite{germ2} a version of the conditions expressed in terms of
quasi-velocities, or, as the authors call them, nonholonomic velocities,
is obtained. We shall quote the conditions from \cite{germ1}
explicitly in Section~2. These conditions are described as
generalized Helmholtz conditions to distinguish them from the
Helmholtz conditions discussed in our opening paragraph, which may be
called the classical Helmholtz conditions; these must of course
comprise a special case of the generalized conditions.
The main purpose of the present paper is to point out that the
generalized Helmholtz conditions as stated in \cite{germ1} are not
independent:\ in fact two of them are redundant, in that they can be
derived from the remaining ones. This we show in Section~2 below. We
use the same formalism as \cite{germ1}. Since the version of
the generalized conditions obtained in \cite{germ2} is equivalent to
that in \cite{germ1} the same redundancy
is present there as well. By taking advantage of the improvement in the formulation of the
generalized Helmholtz conditions that we achieve, we are able to give
a shorter and more elegant proof of their sufficiency than the one to
be found in \cite{germ1}.
There are in fact several interesting questions raised by the two
papers \cite{germ2,germ1}, only one of which will be dealt with here.
In the third and final section of our paper we give an outline of
these additional points of interest, which will receive a fuller
airing elsewhere.
We employ the Einstein summation convention throughout.
We end this introduction with a brief summary of the results about the
classical Helmholtz conditions that we shall need.
The classical Helmholtz conditions are that the $f_a$ should satisfy
\begin{align}
\fpd{f_a}{\ddot{q}^b}&=\fpd{f_b}{\ddot{q}^a}\label{HC1}\\
\fpd{f_a}{\dot{q}^b}+\fpd{f_b}{\dot{q}^a}
&=2\frac{d}{dt}\left(\fpd{f_b}{\ddot{q}^a}\right)\label{HC2}\\
\fpd{f_a}{q^b}-\fpd{f_b}{q^a}&=
\onehalf\frac{d}{dt}\left(\fpd{f_a}{\dot{q}^b}-\fpd{f_b}{\dot{q}^a}\right).\label{HC3}
\end{align}
It is a consequence of these (and not an extra condition, as stated in
\cite{germ1}) that
\[
\spd{f_a}{\ddot{q}^b}{\ddot{q}^c}=0
\]
(this follows from the vanishing of the coefficient of \dddotqc\
in condition~(\ref{HC2})). Thus we may write
$f_a=g_{ab}\ddot{q}^b+h_a$, the coefficients being of first order,
with $g_{ab}=g_{ba}$ as a result of condition~(\ref{HC1}). The
Helmholtz conditions can be re-expressed in terms of $g_{ab}$ (assumed
to be symmetric) and $h_a$, when they reduce to the following three
conditions:
\begin{align}
\fpd{g_{ab}}{\dot{q}^c}-\fpd{g_{ac}}{\dot{q}^b}&=0\label{2gh1}\\
\fpd{h_a}{\dot{q}^b}+\fpd{h_b}{\dot{q}^a}
&=2\frac{\bar{d}}{dt}(g_{ab})\label{2gh2}\\
2\left(\fpd{h_a}{q^b}-\fpd{h_b}{q^a}\right)
&=\frac{\bar{d}}{dt}\left(\fpd{h_a}{\dot{q}^b}-\fpd{h_b}{\dot{q}^a}\right),\label{2gh3}
\end{align}
where
\[
\frac{\bar{d}}{dt}=\fpd{}{t}+\dot{q}^c\fpd{}{q^c}.
\]
That is to say, the
$f_a$ are the Euler-Lagrange expressions of some first-order
Lagrangian if and only if $f_a=g_{ab}\ddot{q}^b+h_a$ for some
first-order functions $g_{ab}$ and $h_a$ such that $g_{ab}=g_{ba}$ and
(\ref{2gh1})--(\ref{2gh3}) hold. This reformulation can be found in
the book by Santilli \cite{Sant}, for example.
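As a quick one-degree-of-freedom illustration (an example added here, not taken from the original text): for $f=\ddot{q}+\omega^2q$ we have $g_{11}=1$ and $h_1=\omega^2q$, conditions (\ref{2gh1})--(\ref{2gh3}) hold trivially, and indeed
\[
f=\frac{d}{dt}\left(\fpd{\Lambda}{\dot{q}}\right)-\fpd{\Lambda}{q}
\qquad\mbox{with}\qquad \Lambda=\onehalf\left(\dot{q}^2-\omega^2q^2\right).
\]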
\section{The generalized Helmholtz conditions}
We next turn to the analysis of
the generalized Helmholtz conditions. Following the notation of
\cite{germ1} we set
\begin{align*}
r_{ab}&=\fpd{f_a}{q^b}-\fpd{f_b}{q^a}
+\onehalf\frac{d}{dt}\left(\fpd{f_b}{\dot{q}^a}-\fpd{f_a}{\dot{q}^b}\right)\\
s_{ab}&=\onehalf\left(\fpd{f_a}{\dot{q}^b}+\fpd{f_b}{\dot{q}^a}\right)
-\frac{d}{dt}\left(\fpd{f_b}{\ddot{q}^a}\right).
\end{align*}
The generalized Helmholtz conditions as given in \cite{germ1} are that
$r_{ab}$ and $s_{ab}$ are of first order, and that in addition
\begin{align}
\fpd{f_a}{\ddot{q}^b}&=\fpd{f_b}{\ddot{q}^a}\label{GHC1}\\
\fpd{s_{ab}}{\dot{q}^c}&=\fpd{s_{ac}}{\dot{q}^b}\label{GHC2}\\
\fpd{r_{ab}}{\dot{q}^c}&=\fpd{s_{ac}}{q^b}-\fpd{s_{bc}}{q^a}\label{GHC3}\\
0&=\fpd{r_{ab}}{q^c}+\fpd{r_{bc}}{q^a}+\fpd{r_{ca}}{q^b}.\label{GHC4}
\end{align}
Our main concern will be with analyzing conditions~(\ref{GHC2})--(\ref{GHC4}), which
correspond to (2.3e), (2.3f) and (2.3g) of \cite{germ1}; we shall show
that conditions (\ref{GHC2}) and (\ref{GHC4}) are redundant, being
consequences of the remaining conditions.
Our first aim is to understand exactly what it means for $r_{ab}$ and
$s_{ab}$ to be of first order, bearing in mind condition~(\ref{GHC1})
above. From the vanishing of the coefficient of \dddotqc\ in $s_{ab}$ we have
\[
\spd{f_a}{\ddot{q}^b}{\ddot{q}^c}=0.
\]
As before $f_a=g_{ab}\ddot{q}^b+h_a$, the coefficients being of first order and
$g_{ab}$ symmetric. The coefficient of \dddotqc\ in $r_{ab}$ is
\[
\onehalf\fpd{}{\ddot{q}^c}\left(
\fpd{g_{bd}}{\dot{q}^a}\ddot{q}^d+\fpd{h_b}{\dot{q}^a}
-\fpd{g_{ad}}{\dot{q}^b}\ddot{q}^d-\fpd{h_a}{\dot{q}^b}\right),
\]
whence
\[
\fpd{g_{bc}}{\dot{q}^a}=\fpd{g_{ac}}{\dot{q}^b}.
\]
The coefficient of $\ddot{q}^c$ in $s_{ab}$, namely
\[
\fpd{g_{ac}}{\dot{q}^b}+\fpd{g_{bc}}{\dot{q}^a}-2\fpd{g_{ab}}{\dot{q}^c},
\]
vanishes as a consequence. The coefficient of $\ddot{q}^c$ in
$r_{ab}$ is
\[
\fpd{g_{ac}}{q^b}-\fpd{g_{bc}}{q^a}
-\onehalf\left(\spd{h_a}{\dot{q}^b}{\dot{q}^c}-\spd{h_b}{\dot{q}^a}{\dot{q}^c}\right),
\]
an expression which for later convenience we write as $\rho_{abc}$; we
must of course have $\rho_{abc}=0$. The remaining terms in $r_{ab}$
and $s_{ab}$ are all of first order, and we have
\begin{align*}
r_{ab}&=\fpd{h_a}{q^b}-\fpd{h_b}{q^a}
-\onehalf\frac{\bar{d}}{dt}\left(\fpd{h_a}{\dot{q}^b}-\fpd{h_b}{\dot{q}^a}\right)\\
s_{ab}&=\onehalf\left(\fpd{h_a}{\dot{q}^b}+\fpd{h_b}{\dot{q}^a}\right)
-\frac{\bar{d}}{dt}(g_{ab});
\end{align*}
compare with (\ref{2gh2}) and (\ref{2gh3}), and also with equations
(2.16b) and (2.17c) of \cite{germ1}.
The redundancy of condition~(\ref{GHC2}) is a consequence of the
vanishing of $\rho_{abc}$, as we now show. We have
\[
\fpd{s_{ac}}{\dot{q}^b}
=\onehalf\left(\spd{h_a}{\dot{q}^b}{\dot{q}^c}+\spd{h_c}{\dot{q}^a}{\dot{q}^b}\right)
-\frac{\bar{d}}{dt}\left(\fpd{g_{ac}}{\dot{q}^b}\right)-\fpd{g_{ac}}{q^b},
\]
using the commutation relation
\begin{equation}
\left[\fpd{}{{\dot q}^a} , \frac{\bar{d}}{dt} \right] =
\fpd{}{q^a}.\label{comm}
\end{equation}
It follows that $\partial s_{ac}/\partial \dot{q}^b-\partial s_{bc}/\partial \dot{q}^a=-\rho_{abc}=0$.
That is to say, condition~(\ref{GHC2}) holds automatically as a
consequence of the first-order property. Furthermore, $\rho_{abc}=0$ is equivalent
to equation (2.17b) of \cite{germ1}; in other words, the redundancy of
(\ref{GHC2}) is actually implicit in \cite{germ1}, though not
apparently recognized there.
Before proceeding to consider condition~(\ref{GHC4}) we turn aside to
make some remarks about the classical Helmholtz conditions. The calculations just carried out are essentially the same as those
which lead to the version of the classical Helmholtz conditions given
in equations~(\ref{2gh1})--(\ref{2gh3}) at the end of the
introduction. It is easy to see that in that case $r_{ab}=s_{ab}=0$
are necessary conditions. This observation, together with the part of
the argument concerning the vanishing of the coefficients of \dddotq\
and $\ddot{q}$, leads to the following conditions:
\begin{align}
\fpd{g_{ac}}{\dot{q}^b}+\fpd{g_{bc}}{\dot{q}^a}&=2\fpd{g_{ab}}{\dot{q}^c}\label{gh1}\\
\fpd{h_a}{\dot{q}^b}+\fpd{h_b}{\dot{q}^a}
&=2\frac{\bar{d}}{dt}(g_{ab})\label{gh2}\\
\fpd{g_{ab}}{\dot{q}^c}-\fpd{g_{ac}}{\dot{q}^b}&=0\label{gh3}\\
\spd{h_a}{\dot{q}^b}{\dot{q}^c}-\spd{h_b}{\dot{q}^a}{\dot{q}^c}
&=2\left(\fpd{g_{ac}}{q^b}-\fpd{g_{bc}}{q^a}\right)\label{gh4}\\
2\left(\fpd{h_a}{q^b}-\fpd{h_b}{q^a}\right)
&=\frac{\bar{d}}{dt}\left(\fpd{h_a}{\dot{q}^b}-\fpd{h_b}{\dot{q}^a}\right)\label{gh5}.
\end{align}
These are the conditions quoted in Remark~3 of Section~1 of
\cite{germ1}. However, it is now evident that two of them are
redundant. Clearly condition~(\ref{gh1}) (which is the vanishing of
the coefficient of $\ddot{q}^c$ in $s_{ab}$) follows from
condition~(\ref{gh3}) and the symmetry of $g_{ab}$.
Condition~(\ref{gh4}) is the condition $\rho_{abc}=0$. The second
part of the argument above, that leading to the relation
$\rho_{abc}=\partial s_{bc}/\partial\dot{q}^a-\partial
s_{ac}/\partial\dot{q}^b$, shows that in the classical case
condition~(\ref{gh4}) follows from the other conditions. When
these two redundant conditions are removed we obtain the classical
Helmholtz conditions in the form given at the end of the introduction.
These results in the classical case are actually very well known,
though not apparently to the authors of \cite{germ1}, and have been
known for a long time:\ they are to be found, for example, in
Santilli's book of 1978 \cite{Sant} (which is in fact referred to in
\cite{germ1}).
For the sake of clarity we should point out a difference
between the two cases:\ in the classical case condition~(\ref{gh4}) is
completely redundant; in the generalized case it is not redundant, but
occurs twice in the formulation of the conditions in \cite{germ1},
once in the requirement that $r_{ab}$ should be of first order and
once as the condition $\partial s_{ab}/\partial\dot{q}^c=\partial
s_{bc}/\partial\dot{q}^a$.
We now return to the generalized conditions, and prove that
condition~(\ref{GHC4}) follows from condition~(\ref{GHC3}). It will
be convenient to write condition~(\ref{GHC4}) as
\[
\sum_{a,b,c} \fpd{r_{ab}}{q^c}=0,
\]
where $\sum_{a,b,c}$ stands for the cyclic sum over $a$, $b$ and $c$, here
and below. As a preliminary remark,
note that if $k_{abc}$ is symmetric in $b$ and $c$ (say) then
$\sum_{a,b,c}k_{abc}=\sum_{a,b,c}k_{bac}$. Now
\[
\fpd{r_{ab}}{q^c}=\spd{h_a}{q^b}{q^c}-\spd{h_b}{q^a}{q^c}
-\onehalf\frac{\bar{d}}{dt}\left(\spd{h_a}{q^c}{\dot{q}^b}-\spd{h_b}{q^c}{\dot{q}^a}\right),
\]
and so by the preliminary remark
\[
\sum_{a,b,c}\fpd{r_{ab}}{q^c}=
-\onehalf\frac{\bar{d}}{dt}\left(\sum_{a,b,c}
\left(\spd{h_a}{q^c}{\dot{q}^b}-\spd{h_b}{q^c}{\dot{q}^a}\right)\right)
=
-\onehalf\frac{\bar{d}}{dt}\left(\sum_{a,b,c}
\left(\spd{h_a}{q^c}{\dot{q}^b}-\spd{h_a}{q^b}{\dot{q}^c}\right)\right).
\]
On the other hand, using the commutation relation (\ref{comm}) and the fact
that $\partial/\partial q^a$ and $\bar{d}/dt$ commute, it is easy
to see that condition~(\ref{GHC3}) leads to
\[
\onehalf\sum_{a,b,c}\left(\spd{h_a}{q^c}{\dot{q}^b}-\spd{h_a}{q^b}{\dot{q}^c}\right)
=-\frac{\bar{d}}{dt}(\rho_{abc})=0.
\]
We therefore reach the following proposition, which is
stronger than the corresponding result in
\cite{germ1}.
\begin{myprop} The necessary and sufficient conditions for the
equations $f_a(\ddot{q},\dot{q},q,t)=0$ to be of Lagrangian form with
dissipative forces of gradient type as in (\ref{diss}) are that the
functions $r_{ab}$ and $s_{ab}$ are of first order, that
\[
\fpd{f_a}{\ddot{q}^b}=\fpd{f_b}{\ddot{q}^a},
\]
and that
\begin{equation}
\fpd{r_{ab}}{\dot{q}^c}=\fpd{s_{ac}}{q^b}-\fpd{s_{bc}}{q^a}\label{prop}.
\end{equation}
\end{myprop}
Just as in the classical case we can give an equivalent formulation of
these conditions in terms of $g_{ab}$ and $h_a$. Bearing in mind that
$r_{ab}$ and $s_{ab}$ being of first order are essential hypotheses,
we find that the following conditions are equivalent to those given in
the proposition above:\ $f_a=g_{ab}\ddot{q}^b+h_a$ with $g_{ab}$
symmetric, where $g_{ab}$, $h_a$ are of first order and further satisfy
\begin{align*}
\fpd{g_{ab}}{\dot{q}^c}-\fpd{g_{ac}}{\dot{q}^b}&=0\\
\spd{h_a}{\dot{q}^b}{\dot{q}^c}-\spd{h_b}{\dot{q}^a}{\dot{q}^c}
&=2\left(\fpd{g_{ac}}{q^b}-\fpd{g_{bc}}{q^a}\right)\\
\sum_{a,b,c}
\left(\spd{h_a}{q^b}{\dot{q}^c}-\spd{h_a}{q^c}{\dot{q}^b}\right)&=0.
\end{align*}
The first of these is one of the classical conditions. The second is
the condition $\rho_{abc}=0$, which holds in the classical case as we
have shown. The third is just condition~(\ref{prop}) above expressed
in terms of $g_{ab}$ and $h_a$ (or as it turns out, in terms of $h_a$
alone), and $r_{ab}=s_{ab}=0$ in the classical case. It is evident
therefore that the conditions above are indeed a generalization of
those for the classical case.
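A simple example may help at this point (it is added here for illustration and does not appear in the original text). For one degree of freedom take the damped oscillator $f=\ddot{q}+2\gamma\dot{q}+\omega^2q$, so that $g_{11}=1$ and $h_1=2\gamma\dot{q}+\omega^2q$. Then $r_{11}=0$ and $s_{11}=2\gamma$ are of first order and the three conditions above hold trivially, while for $\gamma\neq 0$ the classical requirement $s_{ab}=0$ fails, so no first-order Lagrangian alone will do. Indeed
\[
f=\frac{d}{dt}\left(\fpd{\Lambda}{\dot{q}}\right)-\fpd{\Lambda}{q}+\fpd{D}{\dot{q}}
\qquad\mbox{with}\qquad \Lambda=\onehalf\left(\dot{q}^2-\omega^2q^2\right),\quad D=\gamma\dot{q}^2.
\]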
We end this section by giving an
alternative proof of the sufficiency of the generalized Helmholtz
conditions, based on this formulation of them, which is shorter and in
our view more elegant than the proof in \cite{germ1} (necessity is an
easy if tedious calculation).
We note first that if $g_{ab}$ is symmetric and satisfies
$\partial g_{ab}/\partial\dot{q}^c=\partial g_{ac}/\partial\dot{q}^b$
then
\[
g_{ab}=\spd{K}{\dot{q}^a}{\dot{q}^b}
\]
for some function $K=K(\dot{q},q,t)$ (a well-known result, which also
appears in \cite{germ1}). Of course $K$ is not determined by this
relation; in fact if $\Lambda=K+P_a\dot{q}^a+Q$, where $P_a$ and $Q$
are any functions of $q$ and $t$, then $\Lambda$ has the same Hessian
as $K$ (the same $g_{ab}$, in other words). Our aim is to choose
$P_a$ and $Q$ so that the given equations are of Lagrangian form with
dissipative forces of gradient type as in (\ref{diss}), with
Lagrangian $\Lambda$, assuming that the generalized Helmholtz
conditions above hold. In fact we won't need to consider $Q$ because
it can be absorbed:\ if $\Lambda$ is a Lagrangian and $D$ a
dissipation function for some functions $f_a$, so are $\Lambda+Q$ and
$D+\dot{q}^a\partial Q/\partial q^a$. We shall therefore take $Q=0$
below.
Let $E_a$ be the Euler-Lagrange expressions of $K$.
Then $E_a=g_{ab}\ddot{q}^b+k_a$ for some first-order $k_a$, by
construction, so $f_a-E_a=h_a-k_a=\kappa_a$ say, where $\kappa_a$ is
also of first order. Moreover, $f_a$ satisfies the generalized
Helmholtz conditions by assumption, and $E_a$ does so by construction
(it satisfies the classical conditions after all), whence $\kappa_a$
satisfies
\begin{align}
\spd{\kappa_a}{\dot{q}^b}{\dot{q}^c}-\spd{\kappa_b}{\dot{q}^a}{\dot{q}^c}
&=0\label{gkappa1}\\
\sum_{a,b,c}
\left(\spd{\kappa_a}{q^b}{\dot{q}^c}-\spd{\kappa_a}{q^c}{\dot{q}^b}\right)&=0.\label{gkappa2}
\end{align}
Let us set $\partial\kappa_a/\partial\dot{q}^b-\partial\kappa_b/\partial\dot{q}^a=R_{ab}$.
Then by (\ref{gkappa1}) $R_{ab}$ is independent of $\dot{q}$, and by
(\ref{gkappa2})
\[
\sum_{a,b,c} \fpd{R_{ab}}{q^c}=0.
\]
There are therefore functions $P_a(q,t)$ such that
\[
R_{ab}=2\left(\fpd{P_a}{q^b}-\fpd{P_b}{q^a}\right)=
\fpd{\kappa_a}{\dot{q}^b}-\fpd{\kappa_b}{\dot{q}^a},
\]
which is to say that if we set
\[
\pi_{ab}=
\fpd{\kappa_a}{\dot{q}^b}-\left(\fpd{P_a}{q^b}-\fpd{P_b}{q^a}\right)
\]
then $\pi_{ba}=\pi_{ab}$. Moreover,
\[
\fpd{\pi_{ab}}{\dot{q}^c}=\spd{\kappa_a}{\dot{q}^b}{\dot{q}^c}=\fpd{\pi_{ac}}{\dot{q}^b}.
\]
It follows (just as is the case for $g_{ab}$) that there is a
first-order function $D'$ such that
\[
\pi_{ab}=\spd{D'}{\dot{q}^a}{\dot{q}^b},
\]
from which we obtain
\[
\kappa_a=\left(\fpd{P_a}{q^b}-\fpd{P_b}{q^a}\right)\dot{q}^b+\fpd{D'}{\dot{q}^a}+S_a
\]
where $S_a$ is independent of $\dot{q}$. Now take $\Lambda=K+P_a\dot{q}^a$.
Denoting the Euler-Lagrange expressions of $K$ by $E_a$ as before, the
Euler-Lagrange expressions for $\Lambda$ are
\[
E_a+\left(\fpd{P_a}{q^b}-\fpd{P_b}{q^a}\right)\dot{q}^b+\fpd{P_a}{t}
=E_a+\kappa_a-\fpd{D'}{\dot{q}^a}-S_a+\fpd{P_a}{t}.
\]
Thus, putting
\[
D=D'+\left(S_a-\fpd{P_a}{t}\right)\dot{q}^a,
\]
we get
\[
f_a=\frac{d}{dt}\left(\fpd{\Lambda}{\dot{q}^a}\right)-
\fpd{\Lambda}{q^a}+\fpd{D}{\dot{q}^a}
\]
as required.
This method of proof works equally well in the classical
case. The proof is constructive, in the
same sense that the one in \cite{germ1} is, in either case. It is
particularly well adapted to the familiar situation in which $g_{ab}$
is independent of $\dot{q}$, when one can take the kinetic energy
$\onehalf g_{ab}\dot{q}^a\dot{q}^b$ for $K$.
\section{Concluding remarks}
We wish to make four remarks in conclusion.
The first remark concerns the nature of
conditions~(\ref{GHC2})--(\ref{GHC4}) on the derivatives of $r_{ab}$
and $s_{ab}$, as originally expressed in \cite{germ1} (that is,
ignoring the question of dependence). In particular, bearing in mind
the fact that $r_{ab}$ is skew in its indices, the condition
\[
\sum_{a,b,c} \fpd{r_{ab}}{q^c}=0
\]
is suggestive:\ if perchance the $r_{ab}$ were functions of the $q$ alone this
would have a natural interpretation in terms of the exterior calculus,
being the condition for the 2-form $r_{ab}dq^a\wedge dq^b$ to be
closed, that is, to satisfy $d(r_{ab}dq^a\wedge dq^b)=0$. This point
is made, in somewhat different terms, in \cite{germ1} (and we appealed
to the same general result in our proof of sufficiency of the
generalized Helmholtz conditions in Section 2). The authors of
\cite{germ1} go on to say, however, that the condition above `can be
interpreted as the vanishing curvature of a symplectic space', which
seems to us not to be entirely convincing. In fact it is possible to
interpret the three conditions~(\ref{GHC2})--(\ref{GHC4}) collectively
as signifying the vanishing of a certain exterior derivative of a
certain 2-form on the space of coordinates
$t,q,\dot{q},\ddot{q},\ldots$\,, a 2-form whose coefficients involve
both $r_{ab}$ and $s_{ab}$. This interpretation really arises from
seeing the problem in the context of the so-called variational
bicomplex (see \cite{vitolo} for a recent review).
Secondly, we contend that the problem we are dealing with should
really be regarded as one about (second-order) dynamical systems. The
point is that a dynamical system may be represented as a system of
differential equations in many different coordinate
formulations; the question of real interest is whether there is {\em
some\/} representation of it which takes the form of an Euler-Lagrange
system with dissipation, not just whether a given representation of it
takes that form. Of course this point applies equally, mutatis
mutandis, to the case in which there is no dissipation. Now the
Helmholtz conditions as discussed in \cite{germ2,germ1}, in both the
classical and the generalized versions, suffer from the disadvantage
that they are conditions for a {\em given\/} system of differential
equations to be of Euler-Lagrange type. There is, however, an
alternative approach to the problem which does deal with dynamical
systems rather than equations, at least in the case in which the
system can be expressed in normal form $\ddot{q}^a=F^a(\dot{q},q,t)$.
In this approach one asks (in the absence of dissipation) for
conditions for the existence of a so-called multiplier, a non-singular
matrix with elements $g_{ab}$, such that $g_{ab}(\ddot{q}^b-F^b)$
takes the Euler-Lagrange form (so that in particular when the
conditions are satisfied $g_{ab}$ will be the Hessian of the
Lagrangian with respect to the velocity variables). The basic idea is
to put $h_a=-g_{ab}F^b$ in the conditions at the end of the
introduction, and regard the results as a system of partial
differential equations for $g_{ab}$ with $F^a$
known. The seminal paper in this approach
is Douglas's of 1941 \cite{doug}, which analyses in great detail the
case of two degrees of freedom. For a recent review of developments
since then see Sections 5 and 6 of \cite{olgageoff} and references therein. One can in fact
also formulate conditions on a multiplier for a second-order dynamical
system, expressible in normal form, to be representable as equations
of Lagrangian form with dissipative forces of gradient type; these
generalize the known results for representation in Lagrangian form
without dissipation in an interesting way.
The new ingredient in \cite{germ2}, by comparison with \cite{germ1},
is the expression of the generalized Helmholtz conditions in terms of
quasi-velocities. As presented in the paper this is quite a long-drawn-out
procedure, because in effect the conditions are rederived
from scratch. Our third remark is that in principle this should be
unnecessary:\ a truly satisfactory formulation of the conditions
should be tensorial, in the sense of being independent of a choice of
coordinates (and of course quasi-velocities are just a certain type of
velocity coordinates). The approach described in the previous
paragraph leads to conditions which have this desirable property.
Fourthly and finally, there is the question of whether generalized
Helmholtz conditions can be derived for other kinds of ``generalized
force'' terms than $\partial
D/\partial\dot{q}^a$. One important case is that in which such a term
is of gyroscopic type. We have obtained such conditions in this case,
again using the approach discussed in our second remark above.
These points are discussed in full detail in a recently written paper~\cite{MWC}.
\subsubsection*{Acknowledgements}
The first author is a Guest Professor at Ghent University:\ he is
grateful to the Department of Mathematical Physics and Astronomy at
Ghent for its hospitality. The second author is a
Postdoctoral Fellow of the Research Foundation -- Flanders (FWO). | {"config": "arxiv", "file": "1003.1840.tex"} |
TITLE: partition to min the max number of intersections
QUESTION [2 upvotes]: Given $n$ items and $m$ customers, each of whom is interested in some subset of the items, partition the set of items among $k$ different stores so that the maximum number of customers visiting any store is minimized.
Does anyone recognize this problem? I have the slight feeling that it is some standard set or cut problem in disguise, but I just cannot find the right one.
REPLY [3 votes]: This is NP-hard, by reduction from 3-partition.
Given $\left[j_0,j_1,j_2,\ldots,j_{3k-1}\right]$ with $J = j_0+j_1+\cdots+j_{3k-1}$, construct an instance with $k$ stores, $J$ items, and $3k+4J$ customers as follows: split the items into consecutive blocks of sizes $j_0,\ldots,j_{3k-1}$; for each $i\in\{0,1,\ldots,3k-1\}$, let customer $i$ be interested in exactly the $j_i$ items of block $i$; and for each item, add four further customers interested in that item alone.
Summing loads over all $k$ stores gives at least $4J+3k$, since each of the $4J$ singleton customers visits exactly one store and each block customer visits at least one. Hence the maximum load can't be less than $\frac{4\cdot J}k+3$. Moreover, the partitions whose maximum is exactly $\frac{4\cdot J}k+3$ are those in which every store receives exactly $\frac{J}{k}$ items comprising exactly three whole blocks; the factor of four ensures a store cannot hold more than $\frac{J}{k}$ items without exceeding the bound. These are exactly the partitions corresponding to $3$-partitions of $\left[j_0,j_1,j_2,\ldots,j_{3k-1}\right]$ (see the sketch below). | {"set_name": "stack_exchange", "score": 2, "question_id": 17628}
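To make the construction concrete, here is a small Python sketch of the reduction; the function names (build_instance, max_load) and the toy instance are mine, chosen purely for illustration:

def build_instance(j):
    """Build the store-partition instance from a 3-partition instance j.

    Items are numbered 0..J-1; each customer is a set of items."""
    J = sum(j)
    customers = []
    start = 0
    for size in j:                      # one "block" customer per j_i
        customers.append(set(range(start, start + size)))
        start += size
    for item in range(J):               # four singleton customers per item
        customers.extend({item} for _ in range(4))
    return J, customers

def max_load(partition, customers):
    """Maximum, over stores, of the number of customers whose
    item set intersects that store's items."""
    return max(sum(1 for S in customers if S & store) for store in partition)

# Toy instance: j admits the 3-partition {1,2,3} / {2,2,2}, triples summing to 6.
j = [1, 2, 3, 2, 2, 2]
k = 2
J, customers = build_instance(j)

good = [set(range(0, 6)), set(range(6, 12))]   # blocks 0,1,2 vs blocks 3,4,5
assert max_load(good, customers) == 4 * J // k + 3   # = 27, the lower bound

bad = [set(range(0, 5)) | {6}, {5} | set(range(7, 12))]  # splits block 2
assert max_load(bad, customers) > 4 * J // k + 3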
TITLE: What are the computationally useful ways of thinking about Killing fields?
QUESTION [3 upvotes]: One definition of a Killing field is as a vector field along which the Lie derivative of the metric vanishes. But for many calculational purposes, when dealing with the Riemann-Christoffel connection of a Riemannian metric, the useful way to think of them is that they satisfy the differential equation $\nabla_\mu X_\nu + \nabla_\nu X_\mu = 0$.
I suppose solving the above complicated set of coupled partial differential equations is the only way to find Killing fields given a metric. (That's the only way I have ever done it!) I would be happy to know if there are more elegant ways.
But what would be the slickest way to prove, say, that the scalar curvature is constant along a Killing field, i.e., that it satisfies the equation $X^\mu \nabla_\mu R = 0$?
Sometimes it is probably good to think of Killing fields as satisfying a Helmholtz-like equation sourced by the Ricci scalar.
Then again, very often one starts off knowing what symmetries one wants the Riemannian manifold to have, and hence the Lie algebra that its Killing fields should satisfy. Knowing this, one wants to find a Riemannian metric with that property, as I had asked in an earlier question.
There is also the issue of relating Killing fields to Laplacians on homogeneous spaces and to the $H$-connection, as raised in an earlier question. Good literature on this has been very hard to find beyond the very terse and cryptic appendix of a paper by Camporesi and Higuchi.
I would like to know the different ways of thinking about Killing fields: which of them, say, help one easily understand the connection to Laplacians, which make calculating a metric from its Killing fields easier, and which give good proofs of the constancy of scalar curvature along them. It would be enlightening to see cases where the "original" definition in terms of the Lie derivative is directly useful without needing to put it in coordinates.
REPLY [3 votes]: Sometimes it is probably good to think of Killing fields as satisfying a Helmholtz-like equation sourced by the Ricci scalar.
Firstly, the source is not the Ricci scalar, but the Ricci tensor. The equation, up to a minus sign and some constant factors, should be $\triangle_g \tau_a \propto R_{ab}\tau^b$.
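For reference, here is the derivation I have in mind (curvature sign conventions vary, so take the signs below as one common choice rather than gospel): any Killing field $\tau$ satisfies the second-derivative identity
$$\nabla_a \nabla_b \tau_c = R_{cba}{}^{d}\,\tau_d\,,$$
and tracing over $a$ and $b$ with $g^{ab}$ yields
$$\triangle_g \tau_c = g^{ab}\nabla_a\nabla_b\tau_c = -R_{cd}\,\tau^d\,,$$
which is precisely the Helmholtz-like equation above (and which, contracted against $\tau$ itself on a compact manifold, gives Bochner's vanishing theorem for metrics of negative Ricci curvature).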
That expression is sometimes useful when you already know the metric and the Killing vector field, and are interested in a priori estimates in geometric analysis. My Ph.D. thesis contains some related examples. (Though in the Riemannian case on a compact manifold, the Bochner technique is generally more fruitful; the classical paper of Bochner's [in the Killing case], however, essentially amounts to contracting the elliptic equation against the Killing field itself. http://projecteuclid.org/euclid.bams/1183509635 )
Anyway, the "Helmholtz-like" equation is obtained by taking the trace of the Jacobi equation, which is satisfied by any Killing vector field. So you are definitely losing information unless you work only with $2$- or $3$-manifolds. | {"set_name": "stack_exchange", "score": 3, "question_id": 31384}
\section{Introduction\label{sec:intro_three_users}}
In this paper, we study the erasure source-broadcast problem with feedback for the case of three users in what is the sequel to~\cite{TMKS_TIT20}, which studied the analogous problem for the case of \emph{two} users. In~\cite{TMKS_TIT20}, it was shown that the source-channel separation bound can always be achieved when a binary source is to be sent over the erasure broadcast channel with feedback to two users with individual erasure distortion constraints. This is true whether a feedback channel is available to both receivers or only to the stronger receiver. The coding scheme relied heavily on the transmission of \emph{instantly-decodable, distortion-innovative} symbols~\cite{TMKS_TIT20,SorourValaee15,SorourValaee10}. For the case of two receivers, each with a feedback channel, we found that transmitting such symbols was always possible.
In this work, we again utilize uncoded transmissions that are instantly-decodable and distortion-innovative. The zero latency in decoding uncoded packets has benefits in applications in which packets are instantly useful at their destination, such as video streaming and the dissemination of commands to sensors and robots~\cite{SorourValaee15,SorourValaee10}.
However, for the case of three users, we find that sending such uncoded transmissions is not always possible; whether it is depends on the random channel erasures. For when such opportunities are exhausted, we design two alternative coding schemes. The first is based on channel coding, while the second is a novel chaining algorithm that performs optimally over the wide range of channel conditions we have simulated.
Typically, in order to find opportunities to send instantly-decodable, distortion-innovative symbols, we set up queues that track which symbols are required by which group of users and subsequently find opportunities for network coding based on these queues. In our analysis of our coding schemes, we propose two new techniques for analyzing queue-based opportunistic network coding algorithms. The first is in deriving a linear program to solve for the number of instantly-decodable, distortion-innovative symbols that can be sent. The second technique is in using a Markov rewards process with impulse rewards and absorbing states to analyze queue-based algorithms.
Previously, there have been two primary techniques in deriving a rate region for queue-based coding schemes, which we have found to be insufficient for our purposes. The first technique was derived from a channel coding problem involving erasure channels with feedback and memory~\cite{HeindlmaierBidokhti16}. In~\cite{HeindlmaierBidokhti16}, the authors use a queue stability criterion to set up a system of inequalities involving the achievable rates and flow variables that represent the number of packets that are transferred between queues. When this technique is applied to the chaining algorithm of Section~\ref{subsubsec:chaining_algorithm} however, we find that our system of inequalities is over-specified. That is, the large number of queues leaves us with more equations than unknown variables, and we are unable to solve for a rate region.
The technique of using a Markov rewards process to analyze the rate region of a queue-based algorithm was also used in~\cite{GeorgiadisTassiulas_Netcod09}. In their work, the transmitter and receivers were modelled with a Markov rewards process. The authors solved for the \emph{steady-state} distribution and were able to derive a rate region by considering the number of rewards accumulated per timeslot and thus the number of timeslots it would take to send a group of packets. In our analysis in Section~\ref{subsec:chaining_algorithm_analysis} however, we find that our transmitter and receiver must be modelled with an \emph{absorbing} Markov rewards process. Thus, it is the \emph{transient} behaviour that is important and we cannot use a steady-state analysis.
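For concreteness, we recall the standard computation involved (the notation here is generic and not tied to any particular scheme): writing the transition matrix of an absorbing chain in canonical form with transient-to-transient block $Q$, and letting $r$ collect the expected one-step reward earned from each transient state (including any impulse rewards weighted by the probabilities of the transitions that trigger them), the expected reward accumulated before absorption when starting from transient state $i$ is the $i$-th entry of
\[
v=(I-Q)^{-1}r,
\]
since the fundamental matrix $N=(I-Q)^{-1}=\sum_{t\ge 0}Q^t$ records the expected number of visits to each transient state.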
In Section~\ref{subsec:chaining_algorithm_analysis}, we modify the analysis to that of an \emph{absorbing} Markov rewards process. Specifically, we solve for the expected accumulated reward before absorption of a Markov rewards process with impulse rewards and absorbing states. The analysis does not involve the optimization of any parameters; rather, we use it to state sufficient conditions for optimality. We illustrate the operational significance of this analysis in Section~\ref{sec:operational_meaning}, where we show how Theorem~\ref{thm:chaining_sufficient} can be used to delineate regions of optimality. Finally, in Section~\ref{subsec:instantly_decodable}, we also show how to formulate a linear program to derive a rate region by solving for the number of instantly-decodable, distortion-innovative transmissions that can be sent. | {"config": "arxiv", "file": "2105.00353/intro/intro_three_users.tex"}