TITLE: Proving integer inequality
QUESTION [1 upvotes]: I am having trouble proving this inequality:
For positive integers $p$, $q$, $c$, $k$ such that $2 \le p < q$:
$$
\frac{(p-1) \, p^{k-1} / \, (c p^k - 1)}{(q-1) \, q^{k-1} / \, (c q^k - 1)} \le 1
$$
Note that I have verified it empirically in code for thousands of small values.
REPLY [3 votes]: Your statement is equivalent to saying the function
$$f_{c,k}(x)= \frac{(x-1) \, x^{k-1} }{ c x^k - 1} $$
is increasing on the set of integers $x \ge 2$.
Now, extend this function to the real interval $(1, \infty)$.
You have
$$ f'(x)=\frac{x^{k-2} }{ (c x^k - 1)^2} (cx^k-kx+k-1)$$
We clearly have
$$\frac{x^{k-2} }{ (c x^k - 1)^2} >0$$
and by Bernoulli's inequality
$$x^k \geq 1+k(x-1)=1+kx-k$$
thus, using $c \geq 1$,
$$cx^k-kx+k-1 \geq x^k-kx+k-1 \geq 1+kx-k-kx+k-1=0$$
This shows that $f(x)$ is increasing on $(1, \infty)$, which proves your statement.
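The empirical check mentioned in the question can be reproduced with exact rational arithmetic. The following sketch (my own, not the poster's original code) verifies that $f_{c,k}$ is nondecreasing in $x$ over a small range of parameters, which is exactly the reformulation used in the answer:

```python
from fractions import Fraction

def f(x, c, k):
    # f_{c,k}(x) = (x-1) x^(k-1) / (c x^k - 1), evaluated exactly as a rational
    return Fraction((x - 1) * x**(k - 1), c * x**k - 1)

# Check that f_{c,k} is nondecreasing on integers x >= 2, which is
# equivalent to the original inequality f(p)/f(q) <= 1 for 2 <= p < q.
for c in range(1, 6):
    for k in range(1, 6):
        for p in range(2, 20):
            for q in range(p + 1, 21):
                assert f(p, c, k) <= f(q, c, k), (p, q, c, k)
print("inequality verified for all tested values")
```

(For $k=1$ and $c=1$ the function is constant, which is why the comparison is `<=` rather than `<`.)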
TITLE: Parts with the maximum number of each integer
QUESTION [27 upvotes]: Let $n$ be a positive integer. Some integers between $1$ and $n$ (inclusive) are written on a line, where the same integer can appear multiple times. Is it true that we can always divide the line into $n$ parts and assign to each part a label between $1$ and $n$ (inclusive), where each label is used exactly once, so that for any $1\leq i\leq n$, the part labeled $i$ contains at least as many copies of the integer $i$ as any other part?
If $n=1$ then this is trivially true, because we only write $1$s and we do not need to divide the line at all.
If $n=2$ it is also true: Move from the left of the line until at least half of all the $1$s or half of all the $2$s are reached, then stop and divide the line. Say we have reached at least half of all the $1$s. Then label the left part with $1$ and the right part with $2$.
REPLY [4 votes]: This follows from David Gale's generalization of Knaster–Kuratowski–Mazurkiewicz lemma. See Wikipedia for the formulation of the generalization and the relevant terminology.
To prove the theorem, we first make the problem continuous:
Identify the line of written integers with the interval $[0, L]$, where $L$ is the number of integers written: for each $i=0, \dots, L-1$, the unit (open) interval $(i, i+1)$ has color $j$ for some $j \in [n]$. Is it true that we can always divide the interval $[0, L]$ into $n$ disjoint (open) intervals $I_1, \dots, I_n$ (from left to right) of lengths $x_1, x_2, \dots, x_n$ and find a permutation $\pi\in S_n$ so that for any $i\in[n]$, the total length of color $\pi(i)$ in interval $I_i$ is at least as much as in any other interval?
Once we prove the continuous version, we can "round" each interval in the following way. If the left end point $a$ of interval $I_i$ is colored $\pi(i)$, we round $a$ to $\lfloor a\rfloor$. Similarly, if the right end point $b$ of interval $I_{i}$ is colored by $\pi(i)$, we round $b$ to $\lceil b\rceil$. If there are end points not rounded yet, we round them arbitrarily. One can show that the resulting partition gives a solution to the discrete version.
Finally, we prove the continuous version. Consider the $(n-1)$-simplex $$\Delta := \left\{(x_1, \dots, x_n)\in \mathbb{R}^n:x_1 + x_2 + \dots + x_n = L,x_1, \dots, x_n \ge 0\right\},$$ with vertices $v_i = Le_i$, where $e_1, \dots, e_n$ forms the standard basis of $\mathbb{R}^n$.
For every $x = (x_1, \dots, x_n) \in \Delta$, define the (open) interval $I_i(x) = (x_1+\dots+x_{i-1}, x_1+\dots+x_{i})$ for all $i\in [n]$, and we say $I_i(x)$ is dominant in color $j$ if the total length of color $j$ in interval $I_i(x)$ is at least as much as in the other $I_*(x)$.
For every $i \in [n]$ and $j \in [n]$, let
$$C_i^j := \{x\in\Delta: I_i(x) \text{ is dominant in color }j\}.$$
One can check that for every $j\in[n]$, $C_1^j, \dots, C_n^j$ is a KKM covering. By David Gale's result, there exists a permutation $\pi$ such that $$\bigcap_{i=1}^n C_i^{\pi(i)}\neq \emptyset.$$
In other words, there is $(x_1, \dots, x_n)\in \Delta$ such that $(x_1, \dots, x_n)\in C_{i}^{\pi(i)}$ for all $i\in[n]$, that is, $I_i(x)$ is dominant in color $\pi(i)$.
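The elementary $n=2$ rule described in the question (scan from the left until the prefix holds at least half of the $1$s or half of the $2$s, then cut) can be checked exhaustively. This sketch, not part of the thread, implements that rule and verifies it on all $\{1,2\}$-sequences of small length:

```python
from itertools import product

def split_n2(seq):
    # Scan from the left until the prefix contains at least half of all
    # the 1s or at least half of all the 2s, then cut there.
    total1, total2 = seq.count(1), seq.count(2)
    for cut in range(len(seq) + 1):
        prefix = seq[:cut]
        if 2 * prefix.count(1) >= total1:
            return cut, (1, 2)   # left part labeled 1, right part labeled 2
        if 2 * prefix.count(2) >= total2:
            return cut, (2, 1)   # left part labeled 2, right part labeled 1

def works(seq):
    cut, (left_label, right_label) = split_n2(seq)
    left, right = seq[:cut], seq[cut:]
    return (left.count(left_label) >= right.count(left_label)
            and right.count(right_label) >= left.count(right_label))

assert all(works(list(s)) for n in range(1, 12)
           for s in product([1, 2], repeat=n))
print("n = 2 rule verified for all sequences up to length 11")
```

The key point, as in the question, is that at the first cut where a threshold is crossed, the other color is still strictly below half, so the complementary part automatically dominates it.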
TITLE: limiting distribution of Xbar considering an autoregressive process
QUESTION [1 upvotes]: Here is the question:
Consider the stationary Gaussian autoregressive process of order 1,
$X_{i+1} - \mu = \rho(X_i - \mu) + \sigma Z_i$, where $Z_i$ are iid $N(0, 1)$. Find the
limiting distribution of $\overline{X}$ assuming $|\rho| < 1$.
Here is what I have so far:
$X_{i+1} - \mu = \rho(X_i - \mu) + \sigma Z_i$
$\sum_{i=1}^n (X_{i+1} - \mu) = \sum_{i=1}^n (\rho(X_i - \mu) + \sigma Z_i)$
$\sum_{i=1}^n X_{i+1} - n\mu = \rho\sum_{i=1}^n X_i - n\rho\mu + \sigma \sum_{i=1}^n Z_i$
by expanding the left-hand sum as $\sum_{i=1}^n X_{i+1} = \sum_{i=1}^n X_i + X_{n+1} - X_1$,
$\sum_{i=1}^n X_i + X_{n+1} - X_1 - n\mu = \rho\sum_{i=1}^n X_i - n\rho\mu + \sigma \sum_{i=1}^n Z_i$
dividing both sides by $n$
$\overline{X}+\frac{X_{n+1}-X_1}{n}-\mu=\rho\overline{X} - \rho\mu+ \sigma \overline{Z}$
$\overline{X}-\rho\overline{X}= \mu -\rho \mu +\sigma \overline{Z}-\frac{X_{n+1}-X_1}{n}$
$\overline{X}(1-\rho)=\mu(1-\rho)+\sigma \overline{Z}-\frac{X_{n+1}-X_1}{n}$
$\overline{X}= \mu+\frac{\sigma \overline{Z}-\frac{X_{n+1}-X_1}{n}}{1-\rho}$
But now I am stuck... I know that $\sqrt{n}\,\overline{Z}$ converges to $N(0,1)$ but I am not sure what to do now.
Any help is appreciated!
REPLY [0 votes]: Correction: sorry, my previous explanation was wrong.
Something is not right in your computation, as the final term converges to zero and not to a random variable. I guess you want to consider $\sqrt{n}(\bar{X}-\mu)$ to get a CLT. Again, using Slutsky's theorem should work, but please clarify your intention first.
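For reference, the standard result (not derived in this thread) is that for the stationary AR(1), $\sqrt{n}(\overline{X}-\mu)$ converges to $N(0, \sigma^2/(1-\rho)^2)$, where $\sigma^2/(1-\rho)^2$ is the long-run variance. A quick Monte Carlo sanity check of that limit, under arbitrarily chosen parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, rho, sigma = 2.0, 0.5, 1.0
n, reps = 1000, 400

# Simulate `reps` independent stationary AR(1) paths of length n in parallel.
x = mu + rng.normal(size=reps) * sigma / np.sqrt(1 - rho**2)  # stationary start
total = x.copy()
for _ in range(n - 1):
    x = mu + rho * (x - mu) + sigma * rng.normal(size=reps)
    total += x
xbar = total / n

scaled = np.sqrt(n) * (xbar - mu)
# The conjectured limit is N(0, sigma^2/(1-rho)^2): here std = 1/(1-0.5) = 2.
print(scaled.std())
```

The sample standard deviation of the scaled means should come out close to $2$, matching $\sigma/(1-\rho)$.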
TITLE: If a group $G$ contains an element a having exactly two conjugates, then $G$ has a proper normal subgroup $N \ne \{e\}$
QUESTION [6 upvotes]: If a group $G$ contains an element a having exactly two conjugates, then $G$ has a proper normal subgroup $N \ne \{e\}$
So my take on this is as follows: consider the centralizer $C_G(a)$, which is a subgroup of $G$. If $C_G(a)=G$, then $a$ has no conjugate but itself, so $C_G(a)$ must be a proper subgroup. Now suppose $C_G(a)=\{e\}$. Then, in order for there to be exactly two conjugates of $a$, we would need
for every $g \ne h \in G \setminus \{e\}$ that $gag^{-1} = hah^{-1}$; but $gag^{-1}=hah^{-1} \to (h^{-1}g)a = a(h^{-1}g) \to (h^{-1}g)a(h^{-1}g)^{-1}=a \to h^{-1}g \in C_G(a)$.
This means that either $C_G(a)$ is nontrivial, or $h^{-1}g=e$, i.e. $g=h$, which is a contradiction. Thus $C_G(a)$ is a nontrivial proper subgroup. Since the conjugates of $a$ are in one-to-one correspondence with the cosets of its centralizer, the index is $[G:C_G(a)]=2$. Subgroups of index $2$ are normal, so $C_G(a)$ is a proper nontrivial normal subgroup.
This approach seemed very different from other examples I have seen so I guess I am wondering if this approach makes sense.
REPLY [0 votes]: Let $g$ be an element with exactly two conjugates. Since the conjugates of $g$ correspond to the cosets of its normalizer (centralizer) $N(g)$, we get $[G:N(g)]=2$, i.e. $|N(g)|=\frac{|G|}{2}$ when $G$ is finite. Any subgroup of index $2$ is normal, hence $N(g)$ is a nontrivial proper normal subgroup.
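A concrete instance can be checked by brute force in $S_3$, where a $3$-cycle has exactly two conjugates. This sketch (my own illustration, not part of the answer) verifies the whole chain of claims, representing permutations as tuples:

```python
from itertools import permutations

# S_3, represented as permutation tuples of (0, 1, 2).
G = list(permutations(range(3)))

def compose(p, q):          # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def conjugates(a):
    return {compose(compose(g, a), inverse(g)) for g in G}

# A 3-cycle has exactly two conjugates in S_3.
a = (1, 2, 0)
assert len(conjugates(a)) == 2

# Its centralizer has index 2 ...
C = [g for g in G if compose(g, a) == compose(a, g)]
assert len(G) // len(C) == 2

# ... and is normal: g C g^{-1} = C for all g in G.
for g in G:
    assert {compose(compose(g, c), inverse(g)) for c in C} == set(C)
print("C is a proper nontrivial normal subgroup of S_3")
```

Here $C$ comes out as $A_3 = \{e, (123), (132)\}$, the expected index-$2$ normal subgroup.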
TITLE: Can a real harmonic function on the unit disk satisfy $f(0)=1$ while the area of $\{z:f(z)>0\}$ is zero?
QUESTION [1 upvotes]: Does there exist a harmonic function defined in the unit disk such that
(1) $f(0)=1$
(2) area of $\{z\in\mathbb{D}:\,f(z)>0\}$ is equal to zero?
I tried to use certain representations of harmonic functions, and to relate it to analytic content.
REPLY [2 votes]: Since $f$ is continuous and $f(0)=1$, there exists $r>0$ such that the inequality $x^2 +y^2<r^2$ implies that $f(x,y)>\frac{1}{2}$. Hence $$m(\{z\in\mathbb{D}: f(z)>0\} )>\pi r^2 >0,$$
where $m$ denotes the Lebesgue measure.
TITLE: What actually happens during metallic conduction?
QUESTION [0 upvotes]: My book mentions that when an electric field is applied to a conductor the electrons get accelerated in a direction opposite to that of the field. These electrons however collide with the atoms on the lattice of the metal. What actually happens in the collision? Is any other electron ejected from the atom during this collision? I also want to know about drift velocity, mean free path and relaxation time.
REPLY [0 votes]: First, when an electric field is applied to a metal conductor, as per QM the electric field's force is mediated by virtual photons, which interact with the electrons of the metal.
These electrons in the metal are not free. They are loosely bound to the atoms of the lattice of the metal. Because they are loosely bound, they can move from one atom to another atom.
As the electric field interacts with the electrons in the metal, the virtual photons' energy is given to the electron's kinetic energy, and so these loosely bound electrons start moving inside the metal, from one nucleus to another.
The speed at which these loosely bound electrons drift from one atom to another is called the drift velocity. It is very slow. The electrical signal itself, however, travels at close to the speed of light, because it is carried by the electromagnetic field rather than by the motion of individual electrons.
These loosely bound electrons are not free, they are always bound to nuclei, as per QM they exist at a certain energy level around the nucleus, in the valence band, which is the conduction band too, in metals.
Mean free path is the average distance traveled by the electrons from one nucleus to another nucleus.
Now these loosely bound electrons in a metal have a random thermal motion regardless of the electrical field. But if an electrical field is applied, then there will be a drift velocity in the direction of the electric force, superimposed on the random thermal motion. That is why this drift velocity is so slow.
The relaxation time in this case is the collision time, that is the average time between collisions. These collisions are inelastic scattering.
Please see here:
http://bcs.whfreeman.com/webpub/Ektron/Tipler%20Modern%20Physics%206e/Classical%20Concept%20Review/Chapter_10_CCR_10_Mean_Free_Path.pdf
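To make "very slow" concrete, here is a back-of-the-envelope drift-velocity estimate, $v_d = I/(nAe)$, using standard textbook values for copper (the numbers are my own assumptions, not taken from the answer above):

```python
# Order-of-magnitude drift velocity in a copper wire: v_d = I / (n * A * e).
e = 1.602e-19        # electron charge, C
n = 8.5e28           # free-electron density of copper, m^-3 (approximate)
I = 1.0              # current, A
A = 1.0e-6           # cross-section, m^2 (i.e. 1 mm^2)

v_d = I / (n * A * e)
print(f"drift velocity = {v_d:.2e} m/s")   # a few times 1e-5 m/s
```

So for an ordinary 1 A current the drift velocity is on the order of a tenth of a millimeter per second, many orders of magnitude below the thermal speeds of the electrons.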
TITLE: Alternative or reprint of Carter's "Finite Groups of Lie Type: Conjugacy Classes and Complex Characters"
QUESTION [6 upvotes]: I would like to learn about character theory of finite groups of Lie type and some Deligne-Lusztig theory. The classic textbook on the subject seems to be Roger W. Carter's Finite Groups of Lie Type: Conjugacy Classes and Complex Characters (Wiley 1985, reprinted in 1993 in Wiley Classics). Sadly, this book is out of print, and while second-hand copies can be found, they sell at a ridiculously high price (e.g., this site has one for over 1200€). To make things more problematic, the same author wrote another (earlier) book with a confusingly similar title, Simple Groups of Lie Type (which I have), and which tends to come up whenever one searches for the Finite Groups book.
So, two questions:
Is there a (finite!) set of textbooks and/or introductory articles whose union covers the same material as Carter's Finite Groups of Lie Type? I have a copy of Digne & Michel's Representations of Finite Groups of Lie Type, but it seems (a) much harder to read, and (b) less complete (which makes sense as it is over three times shorter); are there other texts which might fill the gaps?
(This is not a mathematical question, but I hope it is nevertheless allowed here:) Is there any hope of persuading someone (whether the publisher or author, whoever holds the rights) to republish this book? The fact that it is still considered a standard reference, especially if question (1) does not have an answer, and the fact that used copies are so expensive, suggests that there is some demand. But I don't know if the author is still mathematically active, or if Wiley still has the rights.
REPLY [0 votes]: Malle, Gunter; Testerman, Donna. Linear algebraic groups and finite groups of Lie type. Cambridge Studies in Advanced Mathematics, 133. Cambridge University Press, Cambridge, 2011. xiv+309 pp.
"Originating from a summer school taught by the authors, this concise treatment includes many of the main results in the area. An introductory chapter describes the fundamental results on linear algebraic groups, culminating in the classification of semisimple groups. The second chapter introduces more specialized topics in the subgroup structure of semisimple groups, and describes the classification of the maximal subgroups of the simple algebraic groups. The authors then systematically develop the subgroup structure of finite groups of Lie type as a consequence of the structural results on algebraic groups. This approach will help students to understand the relationship between these two classes of groups. The book covers many topics that are central to the subject, but missing from existing textbooks. The authors provide numerous instructive exercises and examples for those who are learning the subject as well as more advanced topics for research students working in related areas."
TITLE: Closed geodesic loops around points in compact manifolds
QUESTION [4 upvotes]: Since in a compact Riemannian manifold $M$ the only totally convex subset is the whole manifold itself, see Closed manifold has no nontrivial totally convex subset?, it should follow that for every point $p\in M$ there exists a geodesic loop starting and ending at $p$ (not necessarily "closing" smoothly). I was looking for a (possibly simple) direct proof of this fact without passing by the general theorem (whose proof is not easy, in my opinion). Thanks.
REPLY [3 votes]: Here is a standard argument; I learned it from "Comparison Theorems in Riemannian Geometry" by Cheeger and Ebin. It avoids the use of infinite dimensional spaces.
Choose the smallest $k>0$ such that $\pi_kM\ne 0$.
Choose a spheroid which represents a nontrivial element of $\pi_kM$.
We can assume that the spheroid is swept out by an $\mathbb S^{k-1}$-parameter family of broken geodesics such that the length of each edge is smaller than the injectivity radius of $M$; denote it by $i_M$.
Start a natural curve-shortening process --- the velocity vector of every vertex should depend on the directions and lengths of the two edges coming from it. You have to choose one which keeps the lengths of the edges below $i_M$ and such that the rate of length (or energy) decay is estimated through the rate of change of the broken geodesic.
Note that there is a lower bound for the maximal length of broken geodesics; "maximal" means "maximal in the $\mathbb S^{k-1}$-family after spending arbitrary time in the process".
Say, this value can not go below $i_M$.
It follows that after a long time in the process, one broken geodesic in the family almost does not change its length.
Hence it almost does not move;
the latter implies that all of its angles are almost $\pi$.
Pass to the limit and you get the loop.
TITLE: Is this definition of complex wave number in dispersive media correct?
QUESTION [0 upvotes]: In Griffiths's Introduction to Electrodynamics (4th edition, p. 421), the complex wave number in the section on dispersive media is defined as $\tilde{k}=\sqrt{\tilde{\epsilon}\mu_0}\omega$. Why is the vacuum permeability used? These are electromagnetic waves in matter, right?
I have checked the errata, but have not found a comment on this nor on any assumptions made earlier in the text.
REPLY [1 votes]: Magnetic effects are ignored (for a non-magnetic medium one takes $\mu \approx \mu_0$), and the relation $\epsilon_0 \mu_0 = 1/c^2$ was used.
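As a sanity check on the relation the answer invokes, using standard CODATA values for the constants (assumed here, not quoted from the book):

```python
import math

eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
mu0 = 1.25663706212e-6     # vacuum permeability, H/m

# eps0 * mu0 = 1/c^2, so 1/sqrt(eps0 * mu0) should recover c.
c = 1 / math.sqrt(eps0 * mu0)
print(f"c = {c:.6e} m/s")
```

This recovers $c \approx 2.998\times 10^8\ \mathrm{m/s}$, confirming that $\tilde{k}=\sqrt{\tilde{\epsilon}\mu_0}\,\omega = (\omega/c)\sqrt{\tilde{\epsilon}/\epsilon_0}$.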
TITLE: Solving Equation $e^x= \frac 1 x$
QUESTION [1 upvotes]: For $x>0$, I want to study and try to solve this equation:
$$e^x= \frac 1 x$$
without using a graphics-grapher. I am not looking for the exact $x$ that satisfies this equation but for an interval that contains it (e.g., $x \in (3.1636, 4.333)$).
REPLY [3 votes]: Solving $e^x=\frac{1}{x}$ is equivalent to finding roots of $f(x)=e^x-\frac{1}{x}$
A root $\bar x$ of a function has the property that the values of $f$ before $\bar x$ and the values of $f$ after have opposite signs, because the function is continuous and cannot jump from positive to negative without assuming the value $f(\bar x)=0$
Thus I'd propose a table like the following
$
\begin{array}{l|l}
x & f(x) \\
\hline
0.1 & -8.89483 \\
0.2 & -3.7786 \\
0.3 & -1.98347 \\
0.4 & -1.00818 \\
0.5 & -0.351279 \\
0.6 & 0.155452 \\
0.7 & 0.585181 \\
0.8 & 0.975541 \\
0.9 & 1.34849 \\
1. & 1.71828 \\
\end{array}
$
looking at the table it is clear that $\bar x$ is in the interval $(0.5,0.6)$
And one can refine its search with another table
$
\begin{array}{l|l}
x & f(x) \\
\hline
0.5 & -0.351279 \\
0.51 & -0.295493 \\
0.52 & -0.241049 \\
0.53 & -0.18786 \\
0.54 & -0.135845 \\
0.55 & -0.0849288 \\
0.56 & -0.0350418 \\
0.57 & 0.0138811 \\
0.58 & 0.0619005 \\
0.59 & 0.109073 \\
0.6 & 0.155452 \\
\end{array}
$
where it is evident that $\bar x\in(0.56,\;0.57)$
Hope that this helps.
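The table-refinement procedure above is just bisection done by hand. A short sketch (not part of the answer) that automates it:

```python
import math

def bisect(f, a, b, steps=60):
    # Repeatedly halve an interval on which f changes sign,
    # exactly as the tables above narrow down the root.
    fa = f(a)
    for _ in range(steps):
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m              # sign change in [a, m]
        else:
            a, fa = m, f(m)    # sign change in [m, b]
    return (a + b) / 2

# Root of e^x = 1/x, starting from the interval found in the first table.
root = bisect(lambda x: math.exp(x) - 1 / x, 0.5, 0.6)
print(root)   # approximately 0.567143, the Omega constant W(1)
```

Each halving adds roughly one binary digit of accuracy, so a handful of steps already pins the root down far beyond the $(0.56, 0.57)$ interval of the second table.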
\begin{document}
\bibliographystyle{amsplain}
\title{Mori Dream Spaces and Blow-ups}
\author{Ana-Maria Castravet}
\address{Ana-Maria Castravet: \sf Department of Mathematics, Northeastern University, 360 Huntington Avenue, Boston, MA 02115}
\email{noni@alum.mit.edu}
\begin{abstract}
The goal of the present article is to survey the general theory of Mori Dream Spaces, with special regard to the question: When is the blow-up of a toric variety at a general point a Mori Dream Space? We translate the question for toric surfaces of Picard number one into an interpolation problem involving points in the projective plane. An instance of such an interpolation problem is the Gonzalez-Karu theorem that gives new examples of weighted projective planes whose blow-up at a general point is not a Mori Dream Space.
\end{abstract}
\maketitle
\section{Introduction}
Mori Dream Spaces were introduced in \cite{HK} as a natural Mori theoretic generalization of toric varieties. As the name suggests, their main feature is that the Minimal Model Program (MMP) can be run for any divisor (not just the canonical divisor class). In particular, as for toric varieties, one only has to look into the combinatorics of the various birational geometry cones to achieve the desired MMP steps.
As being a Mori Dream Space is equivalent to all (multi-)section rings being finitely generated, it is not surprising that non-trivial examples may be hard to find. It was not until the major advances in the MMP, that Hu and Keel's original conjecture that varieties of Fano type are Mori Dream Spaces was proved \cite{BCHM}. Although there are many examples outside of the Fano-type range, these often have an ad-hoc flavor. Certain positivity properties of the anticanonical divisor (such as being of Fano type or Calabi-Yau type) of a Mori Dream Space are reflected in the multi-section rings \cite{Okawa}, \cite{GOST}, but no clear picture emerges in general.
More often than not, the usual operations of blowing up, taking projective bundles, crepant resolutions, hyperplane sections, when applied to Mori Dream Spaces, do not lead to Mori Dream Spaces.
\medskip
Our current goal is to pay special attention to blow-ups of Mori Dream Spaces, in particular, blow-ups at a single (general) point. More specifically, the following is a question asked by Jenia Tevelev:
\begin{ques}\label{Jenia}
Let $X$ be a projective $\QQ$-factorial toric variety over an algebraically closed field $\kk$. When is the blow-up $\Bl_p X$ of $X$ at a general point $p$ not a MDS?
\end{ques}
Using the action of the open torus $T=(\kk^*)^n$, we may assume the point $p$ is the identity $e$ of $T$. Currently, the only known examples of $X$ toric such that $\Bl_e X$ is not a MDS fall into the following categories:
\bi
\item[(I) ] Certain (singular) toric projective surfaces with Picard number one;
\item[(II) ] Certain toric varieties for which there exists a small modification that admits a surjective morphism into one of the toric surfaces in (I). (Note that small modifications and images of Mori Dream Spaces are Mori Dream Spaces \cite{HK}, \cite{Okawa}).
\ei
All known examples are in characteristic zero, since the only examples of surfaces in (I) are in characteristic zero. Eventually, blowing up (very) general points\footnote{Recall that the blow-up of a toric variety along a torus invariant stratum is a toric variety.} on a toric variety leads to non Mori Dream Spaces: for example, the blow-up of $\PP^2$ at $r$ very general points is toric if and only if $r\leq 3$ and a Mori Dream Space if and only if $r\leq 8$.
\medskip
A good portion of the examples in (I) are weighted projective planes $\PP(a,b,c)$ for a certain choice of weights $(a,b,c)$. Until \cite{CT4}, \cite{GK}, the only known examples of varieties as in Question \ref{Jenia} were of this type \cite{GNW}. The question whether $\Bl_e \PP(a,b,c)$ is a Mori Dream Space is equivalent to the symbolic Rees algebra of a so-called monomial prime ideal being Noetherian, and as such, it has a long history. Major progress was recently achieved by Gonzalez and Karu \cite{GK} by using methods of toric geometry. However, the main question remains open:
\begin{ques}\label{classify}
For which triples $(a,b,c)$ the blow-up $\Bl_e\PP(a,b,c)$ of $\PP(a,b,c)$ at the identity point $e$ is not a MDS?
\end{ques}
With the exception of $(a,b,c)=(1,1,1)$, in all examples where the Mori Dream Space-ness of $\Bl_e\PP(a,b,c)$ is understood (one way or another), it happens that $\Bl_e\PP(a,b,c)$ contains a negative curve $C$, different than the exceptional divisor $E$ above the point $e$. In positive characteristic, the existence of the negative curve $C$ implies that $\Bl_e\PP(a,b,c)$ is a Mori Dream Space by Artin's contractability theorem \cite{Artin}. No triples
$(a,b,c)\neq(1,1,1)$ are known for which $\Bl_e\PP(a,b,c)$ contains no negative curve (other than $E$). If such an example exists (in any characteristic), it would imply the Nagata conjecture on linear systems on blow-ups of $\PP_{\CC}^2$ at $abc$ points \cite{CutkoskyKurano}. If $\sqrt{abc}\notin\ZZ$, such an example would have many important consequences: new cases of the Nagata conjecture, examples of irrational Seshadri constants, new examples when $\Bl_e\PP(a,b,c)$ is not a Mori Dream Space, etc.
\
The goal of the present article is two-fold. First, to survey some of the general theory of Mori Dream Spaces, along with known results and open problems related to Question \ref{Jenia}. Second, to use the toric geometry methods of Gonzalez and Karu in order to translate Question \ref{classify} (and more generally, Question \ref{Jenia} in the case of surfaces of Picard number one) into an interpolation problem involving points in the (usual) projective plane $\PP^2$ (this translation is likely not new to the experts). As an illustration of this approach, we reprove (or rather, present a shortcut in the proof of) the main theorem in \cite{GK} (Thm. \ref{GK}). The advantages are that the interpolation problem is really equivalent to the original question, and there are further potential applications towards
Question \ref{Jenia} and Question \ref{classify}. For example, both of the following questions can be reformulated into interpolation problems: (a) whether $\Bl_e\PP(a,b,c)$ is a Mori Dream Space when in the presence of a negative curve, or (b)
whether $\Bl_e\PP(a,b,c)$ has any negative curves at all. The drawback is that the interpolation problem seems to be almost equally difficult.
By interpolation, we simply mean to separate points lying in the lattice points of a plane polytope (so in a grid!) by curves of an appropriate degree. For example, to prove that $\Bl_e\PP(9,10,13)$ has no negative curve (other than $E$), it suffices to answer affirmatively:
\begin{ques}\label{interpolation_9,10,13}
Let $\De$ be the polytope in $\RR^2$ with vertices $(0,0)$, $(10,40)$, $(36,27)$. For every $q\geq 1$, let
$$m_q=\lfloor q\sqrt{1170}\rfloor+1.$$
Is it true that for every $q\geq1$ and any point $(i,j)\in q\De\cap\ZZ^2$, there exists a curve $C\subset\RR^2$ of degree $m_q$ passing through all the points
$(i',j')\neq(i,j)$ in $q\De\cap\ZZ^2$, but not $(i,j)$?
\end{ques}
\medskip
\noindent {\bf Structure of paper.} The first three sections present a general survey on Mori Dream Spaces: Section \ref{basics}
reviews the basic definitions and properties, Section \ref{examples} presents several key examples, while Section \ref{structure} gives an overview of the ``structure theory". The last four sections focus on blow-ups at a general point. Section \ref{Picard1} discusses generalities on blow-ups of
(not necessarily toric) surfaces of Picard number one, while Section \ref{section wpp} presents the special case of weighted projective planes. Section \ref{higher} discusses blow-ups of higher dimensional toric varieties, with Losev-Manin spaces playing a central role. Finally (the linear algebra heavy) Section \ref{toricPicard1} translates Question \ref{Jenia} in the case of surfaces of Picard number one, into an interpolation problem and proves Thm. \ref{GK} as an application.
\medskip
\noindent {\bf Conventions and Notations.} Unless otherwise specified, we work over an algebraically closed field $\kk$ of arbitrary characteristic.
For an abelian group $\Ga$ and a field $K$, we denote $\Ga_K$ the $K$-vector space $\Ga\otimes_{\ZZ} K$.
\medskip
\noindent {\bf Acknowledgements.} I am grateful to Jenia Tevelev who pointed out Question \ref{Jenia} and the surrounding circle of ideas.
I thank Shinosuke Okawa for his questions and comments, Jos\'e Gonzalez and Antonio Laface for useful discussions, and the
anonymous referees for several useful comments. This work is partially supported by NSF grant DMS-1529735. I thank Institut de
Math\'ematiques de Toulouse for its hospitality during the writing of this paper.
\section{Mori Dream Spaces}\label{basics}
Mori Dream Spaces are intrinsically related to Hilbert's $14$th problem. Many of the results on finite generation of multi-section rings go back to Zariski and Nagata (see \cite{Mumford}). For a survey of Mori Dream Spaces from the invariant theory perspective, see \cite{McK_survey}.
In what follows, we briefly recall the definitions and basic properties from \cite{HK}. We found \cite{Okawa} to be a useful additional reference.
\medskip
Let $X$ be a projective variety over $\kk$. We denote by $\NN^1(X)$ the group of Cartier divisors modulo numerical equivalence\footnote{$\NN^1(X)$ is a finitely generated abelian group.}. The cone generated by nef divisors in $\NN^1(X)_{\RR}$ is denoted $\Nef(X)$. Similarly, the closure of the cone of effective divisors (resp., movable divisors) is denoted $\Eff(X)$ (resp., $\Mov(X)$). Recall that an effective divisor is called movable if its base locus has codimension at least $2$. Similarly, if $\NN_1(X)$ is the group of $1$-cycles modulo numerical equivalence\footnote{The dual of $\NN^1(X)$ under the intersection pairing.}, the Mori cone $\NE(X)$ is the closure in $\NN_1(X)_{\RR}$ of the cone of effective $1$-cycles.
\smallskip
The closure operations in the definition of $\Eff(X)$, $\Mov(X)$ and $\NE(X)$ are not necessary for Mori Dream Spaces (see Prop. \ref{effective} below).
A \emph{small $\QQ$-factorial modification} (SQM for short) of a normal projective variety $X$ is a small (i.e., isomorphic in codimension one) birational map $X\dra Y$ to another normal, $\QQ$-factorial projective variety $Y$.
\begin{defn}\label{mds}
A normal projective variety $X$ is called a \emph{Mori Dream Space} (MDS for short) if the following conditions are satisfied:
\bi
\item[(1) ] $X$ is $\QQ$-factorial, $\Pic(X)$ is finitely generated, with
$$\Pic(X)_{\QQ}\cong\NN^1(X)_{\QQ};$$
\item[(2) ] $\Nef(X)$ is generated by finitely many semiample divisors;
\item[(3) ] There are finitely many SQMs $f_i: X\dra X_i$ such that each $X_i$ satisfies (1) and (2), and $\Mov(X)$ is the union of $f_i^*\Nef(X_i)$\footnote{If $f: X\dra Y$ is birational map, the pull back $f^*D$ of a Cartier divisor $D$ from $Y$ is defined as $p_*(q^*D)$, where $p: W\ra X$, $q: W\ra Y$ are given by a common resolution. If $f$ is small, $f^*D$ is simply the push forward $f^{-1}_*(D)$ via the inverse map $f^{-1}$.}.
\ei
\end{defn}
\begin{rmks}
(a) If $\kk$ is not the algebraic closure of a finite field, the condition that $\Pic(X)$ is finitely generated is equivalent to the condition $\Pic(X)_{\QQ}\cong\NN^1(X)_{\QQ}$, but not otherwise (see \cite[Rmk. 2.4]{Okawa})\footnote{In the original definition in \cite{HK}, only the condition
$\Pic(X)_{\QQ}\cong\NN^1(X)_{\QQ}$ appears, but as explained in \cite{Okawa}, adding both conditions seems more natural.}.
(b) Semiampleness and polyhedrality in conditions (2) and (3) are key, guaranteeing that all the MMP steps are reduced to combinatorics (finding the divisor class with the desired numerical properties).
\end{rmks}
A birational map $f: X\dra Y$ between normal projective varieties is called contracting if the inverse map $f^{-1}$ does not contract any divisors. If $E_1,\ldots E_k$ are the prime divisors contracted by $f$, then $E_1,\ldots E_k$ are linearly independent in $\NN^1(X)_{\RR}$ and each $E_i$ spans an extremal ray of $\Eff(X)$. The effective cone of a MDS also has a decomposition into rational polyhedral cones:
\begin{prop}\cite[Prop. 1.11 (2)]{HK}\label{effective}
Let $X$ be a MDS. There are finitely many birational contractions $g_i: X \dra Y_i$, with $Y_i$ a MDS, such that
$$\Eff(X)=\bigcup_i\cC_i,$$
$$\cC_i=g_i^*\Nef(Y_i)+\RR_{\geq0}\{E_1,\ldots,E_k\},$$
where $E_1,\ldots, E_k$ are the prime divisors contracted by $g_i$.
\end{prop}
The cones $\cC_i$ are called the \emph{Mori chambers} of $X$.
Prop. \ref{effective} is best interpreted as an instance of Zariski decomposition: for each effective $\QQ$-Cartier divisor $D$, there exists a birational contraction $g: X\dra Y$ (factoring through an SQM and a birational morphism $X\dra X'\ra Y$) and $\QQ$-divisors $P$ and $N$, such that $P$ is nef on $X'$, $N$ is an effective divisor contracted by $g$ and for $m>0$ sufficiently large and divisible, the multiplication map given by the canonical section $x^m_N$
$$\HH^0(X,\cO(mP))\ra\HH^0(X,\cO(mD))$$
is an isomorphism. To see this, simply take
$$P=g^*g_*(D),\quad N=D-P.$$
\begin{rmks}\label{rmk2}\label{rmks1}
(a) If $X$ is a MDS, all birational contractions $X\dra Y$ with $\QQ$-factorial $Y$, are the ones that appear in Prop. \ref{effective}.
In particular, any such $Y$ is a MDS.
(b) The SQMs in Def. \ref{mds} are the only SQMs of $X$. In particular, any SQM of a MDS is itself a MDS.
\end{rmks}
\begin{defn} Let $X$ be a normal variety. For a semigroup $\Ga\subset\WDiv(X)$\footnote{$\WDiv(X)$ is the group freely generated by prime Weil divisors in $X$.} of Weil divisors on $X$, we define the multi-section ring $\R(X,\Ga)$ as the $\Ga$-graded ring:
$$\R(X,\Ga)=\bigoplus_{D\in\Ga}\HH^0(X,\cO(D))$$
with the multiplication induced by the product of rational functions. When $\Ga$ is a group such that the class map $\Ga_{\QQ}\ra\Cl(X)_{\QQ}$ is an isomorphism, we call $\R(X,\Ga)$ a \emph{Cox ring} of $X$ and denote this by $\Cox(X)$\footnote{The greater generality of working with Weil divisors rather than with Cartier divisors will be essential in Section \ref{section wpp}.}.
\end{defn}
The definition of $\Cox(X)$ depends on the choice of $\Ga$, but basic properties, such as finite generation as a $\kk$-algebra, do not. Note that if $\Ga'\subset\Ga$ is finite index subgroup, then $\R(X,\Ga)$ is an integral extension of $\R(X,\Ga')$. For more details on Cox rings see \cite{ADHL}, \cite{LafaceVelasco}.
\medskip
Mori Dream Spaces can be algebraically characterized as follows:
\begin{thm}\cite[Prop. 2.9]{HK}\label{cox}
Let $X$ be a projective normal variety satisfying condition (1) in Def. \ref{mds}. Then $X$ is a MDS if and only if $\Cox(X)$ is a finitely generated $\kk$-algebra.
\end{thm}
\bp[Sketch of Proof]
If $\Cox(X)$ is finitely generated, let $V$ be the affine variety $\Spec(\Cox(X))$. Since $\Cox(X)$ is
graded by a lattice $\Ga\subset\WDiv(X)$, the algebraic torus $T=\Hom(\Ga,\GG_m)$ naturally acts on the affine variety $V$.
Let $\chi\in\Ga$ be a character of $T$ which corresponds to an ample divisor in $\Ga$. Then $X$ is $V//_{\chi}T$, the GIT quotient
constructed with respect to the trivial line bundle on $V$ endowed with a $T$-linearization by $\chi$. Similarly, all small modifications of $X$ can be obtained as GIT quotients $V//_{\chi}T$, for different classes $\chi$ in $\Ga$ (thus the Mori chamber decomposition is an instance of variation of GIT).
The ``only if" implication follows from the more general Lemma \ref{multisection}.
\ep
\begin{lemma}\label{multisection}
Let $X$ be a MDS and let $\Ga$ be a finitely generated group of Weil divisors. Then $\R(X,\Ga)$ is a finitely generated $\kk$-algebra.
\end{lemma}
\bp We follow the proof in \cite[Lemma 2.20]{Okawa}.
The key facts used are (i) $\R(X,\Ga)$ is finitely generated if $\Ga$ is generated by finitely many semiample divisors (\cite[Prop. 2.8]{HK}); (ii) Zariski decomposition as in Prop. \ref{effective}.
When $\R(X,\Ga)$ is a Cox ring, this is immediate: as $\Nef(X)$ is a full cone inside $\NN^1(X)_{\RR}$, if $\Ga$ is generated by $\QQ$-divisors that are generators of $\Nef(X)$ (hence, $\Ga_{\QQ}\cong\Cl(X)_{\QQ}$), the result follows by (i).
For the general case, without loss of generality, we may replace $\Ga$ with a subgroup of finite index. In particular, we may assume that $\Ga$ has no torsion. For a Mori chamber $\cC$, denote $\Ga_{\cC}=\Ga\cap\cC$ (a semigroup).
As there are finitely many Mori chambers and the support of $\R(X,\Ga)$ is the union of the $\Ga_{\cC}$, it is enough to prove that each $\R(X,\Ga_{\cC})$ is finitely generated. We may assume that there is a birational morphism $g:X\ra Y$ with $$\cC=g^*\Nef(Y)+\RR_{\geq0}\{E_1,\ldots,E_k\},$$
where $E_1,\ldots,E_k$ are the prime divisors contracted by $g$. Note that since $\cC$ is a rational polyhedral cone, $\Ga_{\cC}$ is a finitely generated semigroup. For a set of generators $D_1,\ldots, D_r$ we consider Zariski decompositions as in Prop. \ref{effective}:
$D_i=P_i+N_i$, with $\QQ$-divisors $P_i$ in $g^*\Nef(Y)$ and $N_i$ effective and supported on $E_1,\ldots, E_k$. Up to replacing each $D_i$ with a multiple, we may assume $P_i$ and $N_i$ are $\ZZ$-divisors. Then $\R(X,\Ga_{\cC})$ is isomorphic to an algebra over $\R(Y,P_1,\ldots, P_r)$ generated by the canonical sections $x_{N_1},\ldots,x_{N_r}$. By (i), it follows that $\R(X,\Ga_{\cC})$ is finitely generated.
\ep
\section{Examples}\label{examples}
We give several examples and non-examples of MDS (along with all the possible different ways in which the MDS property can fail). In Example \ref{rnc} we show how the property of being a MDS is neither an open, nor a closed condition.
\begin{example}
Projective $\QQ$-factorial toric varieties are MDS, as they have Cox rings which are polynomial algebras generated by sections corresponding to the
$1$-dimensional rays of the defining fan \cite{Cox}.
\end{example}
\begin{example}
$\QQ$-factorial varieties of Fano type are MDS if $\ch \kk=0$ \cite{BCHM}. A variety $X$ is said to be \emph{of Fano type} if there is a Kawamata log-terminal (klt) pair $(X,\De)$, such that $-(K_X+\Delta)$ is ample. Examples include toric varieties, Fano varieties ($\De=\emptyset$) and weak Fano varieties ($-K_X$ is big and nef) with klt singularities. SQMs of varieties of Fano type are of Fano type in characteristic zero (see for example \cite{GOST}, \cite{KO}).
\end{example}
\begin{example}
Any projective $\QQ$-factorial variety with $\rho=1$ is trivially a MDS. Starting with $\rho\geq2$, there is no classification for MDS, not even for rational surfaces (see Sections \ref{section wpp} and \ref{toricPicard1}).
\end{example}
\begin{example}
A projective, normal, $\QQ$-factorial surface $X$ is a MDS if and only if the Mori cone $\NE(X)$ is rational polyhedral and every nef divisor $D$ is semiample.
By Zariski's theorem \cite[Rmk. 2.1.32]{Laz}, every movable divisor on a projective surface is semiample. In particular, $\Mov(X)=\Nef(X)$. Hence, a nef divisor $D$ is semiample if and only if a multiple $mD$ is movable for some $m>0$.
\end{example}
\begin{example}\label{Hesse}
Let $X$ be the blow-up of $\PP^2$ at points $p_1,\ldots, p_r$ in general position. If $r\leq8$, $X$ is a del Pezzo surface and $\NE(X)$
is generated by the (finitely many) $(-1)$-curves if $r\geq3$. It follows by induction on $r$ that every nef divisor is semiample.
If $r\geq9$ and the points $p_1,\ldots, p_r$ are in very general position, then $X$ has infinitely many $(-1)$-curves (hence, $\Eff(X)$ has infinitely many extremal rays and $X$ is not a MDS). It is enough to prove that there are infinitely many $(-1)$-classes when $r=9$ and the points are the base points of a general cubic pencil.
In this case
$$\phi_{|-K_X|}:X\ra\PP^1$$
is an elliptic fibration whose sections are the $(-1)$-curves on $X$. Sections of $\phi$ correspond to $k(t)$-points of the generic fiber $E=X_{k(t)}$ (an elliptic curve over $k(t)$). The Mordell-Weil group $\Pic^0(E)$ is the group of sections of $\phi$, once we fix one section as the identity. It follows that $\Pic^0(E)$ is infinite if, for a smooth cubic $C$ containing $p_1,\ldots, p_9$, the line bundle $\cO(p_i-p_j)\in\Pic^0(C)$ is non-torsion for some $i\neq j$.
When $X$ contains only finitely many $(-1)$-curves (an \emph{extremal} rational elliptic surface), $X$ is a MDS \cite{AL_anticanonical}.
There is a complete classification of extremal rational elliptic surfaces, by Miranda-Persson in characteristic zero \cite{MP} and Lang in positive characteristic \cite{Lang1, Lang2}. For example, it follows from this classification that if $\ch \kk\neq 2,3,5$ then the blow-up $X$ of $\PP^2$ at distinct points $p_1,\ldots, p_9$ which are the base points of a cubic pencil, is extremal if and only if the points are the $9$ flexes of a smooth cubic in the pencil, i.e., this is the Hesse configuration in $\PP^2$ (unique, up to $\PGL_3$).
\end{example}
\
\begin{example}\label{Hilbert}
Let $X$ be the blow-up of $\PP^n$ at very general points $p_1,\ldots, p_r$ and let $E_1,\ldots, E_r$ be the corresponding exceptional divisors. Generalizing the case of del Pezzo surfaces, the following are equivalent \cite{Mukai}, \cite{CT1}:
\bi
\item[(a) ] $X$ is a MDS
\item[(b) ] $\Eff(X)$ is rational polyhedral\footnote{$\Nef(X)$ is rational polyhedral, generated by semiample divisors for $r\leq 2n$.};
\item[(c) ] The following inequality holds:
$$\frac{1}{n+1}+\frac{1}{r-n-1}>\frac{1}{2}.$$
\ei
The Weyl group $\cW$ associated to the three-legged Dynkin diagram
$T_{2,n+1,r-n-1}$ acts on $\Pic(X)$ preserving effective divisors. Every element in the orbit $\cW.E_1$ (which contains all $E_i$'s) generates an
extremal ray of $\Eff(X)$. The group $\cW$ is finite if and only if the above inequality holds, which for $n\geq5$ translates to $r\leq n+3$.
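For the reader's convenience, here is the elementary check (not spelled out in the sources cited above) that for $n\geq5$ the inequality in (c) is equivalent to $r\leq n+3$:

```latex
% For n >= 5 one has 1/(n+1) <= 1/6, so inequality (c) forces
\[
\frac{1}{r-n-1} \;>\; \frac12-\frac{1}{n+1} \;\geq\; \frac12-\frac16 \;=\; \frac13,
\]
% i.e. r-n-1 < 3, that is r <= n+3. Conversely, if r <= n+3 then
% 1/(r-n-1) >= 1/2 > 1/2 - 1/(n+1), so (c) holds.
```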
Assume $r=n+3$. Let $C$ be the unique rational normal curve in $\PP^n$ passing through $p_1,\ldots,p_{n+3}$. Then $X$ is a moduli space of parabolic rank $2$ vector bundles on $(C, p_1,\ldots,p_{n+3})$ \cite{Bauer}, \cite{MukaiBook}, \cite{Mukai}. Varying stability gives rise to all the SQMs of $X$. In particular, $X$ has an SQM which is a weak Fano, hence, $X$ is of Fano type (see also \cite{AM}).
\end{example}
\begin{example}\label{rnc}
Generalizing Ex. \ref{Hilbert} for $r=n+3$, let $X$ be the blow-up of $\PP^n$ at any number $r$ of points lying on a rational normal curve.
Then $X$ is a MDS \cite{CT1}. Hence, being a MDS is not an open condition. We now give an example (due to Hassett and Tschinkel) that shows that being a MDS is not a closed condition either.
Consider a family of blow-ups $X_t$ of $\PP^3$ along points $p^t_1,\ldots, p^t_9$ lying on some rational normal curve (hence, $X_t$ is a MDS). Such a family admits a degeneration to the blow-up $X_0$ of $\PP^3$ at nine points which are the intersection points of two smooth cubics contained in a plane $\La\subset\PP^3$ (we may assume that the nine points are not the nine flexes of the cubics). Let $E_1,\ldots,E_9$ be the exceptional divisors on $X_0$ and let $S$ be the proper transform of the plane. As $X_0$ is an equivariant $\GG_a$-compactification of $\PP^3\setminus\La=\GG_a^3$, it follows that $\Eff(X_0)$ is generated by
$E_1,\ldots,E_9$, while $\NE(X_0)$ is generated by curves in $S$. As the restriction map $\Pic(S)\ra\Pic(X_0)$ is an isomorphism, it follows that
$\NE(X_0)=\NE(S)$ via this identification. As seen in Ex. \ref{Hesse}, $\NE(S)$ is not a rational polyhedral cone if the cubic pencil is not the Hesse pencil. Hence, $X_0$ is not a MDS.
\end{example}
\begin{example}
If $X$ is a Calabi-Yau variety of dimension at most $3$, then $X$ is a MDS if and only if $\Eff(X)$ is rational polyhedral, generated by effective divisor classes. (The abundance conjecture implies the same statement in higher dimensions \cite[Cor. 4.5]{McK_survey}.) This is clearly the case if $\rho(X)=1$.
If $X$ is a K3 surface with $\rho(X)\geq3$, $\Eff(X)$ is rational polyhedral if and only if $\Aut(X)$ is finite
(\cite[Thm. 1, Rmk. 7.2]{Kovacs}, \cite{PS-S}). In this case, $\Eff(X)$ is generated by smooth rational curves.
If $\rho(X)=2$, although $\Eff(X)$ is rational polyhedral, it may not be generated by effective classes \cite[Thm. 2]{Kovacs}.
\end{example}
\begin{example}\label{toric_generalizations}
Rational normal projective varieties with a \emph{complexity one torus action} are MDS by \cite{HaussenSuss}.
Such varieties $X$ admit a faithful action of a torus of dimension $\dim(X)-1$. Examples include projectivizations of toric rank $2$ vector bundles (see \ref{toricvb}) and several singular del Pezzo surfaces.
By \cite{Brion}, \emph{wonderful varieties} are MDS. Wonderful varieties admit an action of a semi-simple algebraic group $G$ which has finitely many orbits.
Examples include toric varieties, flag varieties $G/P$ and the complete symmetric varieties of De Concini and Procesi \cite{DeConciniProcesi_symmetric}.
\end{example}
\section{Structure Theory}\label{structure}
As for log-Fano varieties, there is little ``structure theory" for MDS:
\begin{itemize}
\item If $X$ is a MDS, any normal projective variety which is an SQM of $X$, is also a MDS. This follows from the fact that the $f_i$ of Def.
\ref{mds} are the only SQMs of $X$ (see Rmk. \ref{rmks1}).
\item \cite[Thm. 1.1]{Okawa} If $f: X\ra Y$ is a surjective morphism of projective normal $\QQ$-factorial varieties and $X$ is a MDS, then $Y$ is a MDS.
When $f$ is birational, this follows from \cite{HK} (see Rmk. \ref{rmk2}).
\end{itemize}
\subsection{\bf Projective bundles.}
The projectivization $\PP(E)$ of a vector bundle $E$ on a MDS may or may not be a MDS.
\subsubsection{}
If $L_1,\ldots, L_k$ are line bundles on a MDS $X$, then $\PP(L_1\oplus\ldots\oplus L_k)$ is also a MDS \cite[Thm. 3.2]{Brown}, \cite[Prop. 2.6]{CG}
(see also \cite{Jow}).
\subsubsection{Toric vector bundles.}\label{toricvb}
A vector bundle $E$ on a toric variety $X$ is called toric if $E$ admits an action of the open torus of $X$ that is linear on fibers and compatible with the action on the base. For example, a direct sum of line bundles is a toric vector bundle. By \cite{GHPS}, a projectivized toric vector bundle $\PP(E)$ is a MDS if and only if a certain blow-up $Y$ of the fiber of $\PP(E)\ra X$ above the identity point of the torus is a MDS. Hence, toric $\PP^1$-bundles are always MDS (see also Ex. \ref{toric_generalizations}). In fact, any blow-up of a projective space along linear subspaces can appear as the variety $Y$ \cite[Cor. 3.8]{GHPS}
(in particular, Ex. \ref{Hilbert}, Ex. \ref{rnc}). Moreover, there is an example of a toric vector bundle on the Losev-Manin space $\LM_n$ such that $Y =\M_{0,n}$ \cite[p. 21]{GHPS} (see \ref{LM} for details on Losev-Manin spaces).
\
The question whether $\PP(E)$ is a MDS seems difficult for non-toric vector bundles $E$, even when $\rk E=2$ \cite{MOS}.
\subsection{Ample divisors} An ample divisor in a MDS may or may not be a MDS. A question of Okawa: does every MDS have a (not necessarily ample) divisor which is a MDS?
\subsubsection{\bf Lefschetz-type theorems \cite{Jow}}
If $X$ is a smooth MDS of dimension $\geq4$ over $\CC$ which satisfies a certain GIT condition,
then any smooth ample divisor $Y\subset X$ is a MDS. Moreover, the restriction map identifies $\NN^1(X)$ and $\NN^1(Y)$. Under this identification, every Mori chamber of $Y$ is a union of some Mori chambers of $X$ and $\Nef(Y)=\Nef(X)$. The GIT condition is stable under taking products and taking the projective bundle of the direct sum of at least three line bundles. The GIT condition is satisfied by smooth varieties of dimension at least $2$ and with $\rho=1$. For toric varieties, the GIT condition is equivalent to the corresponding fan $\Si$ being \emph{2-neighborly}, i.e., for any $2$ rays of
$\Si$, the convex cone spanned by them is also in $\Si$. See also \cite{AL_hypers} for examples of non-ample divisors which are MDS.
\subsubsection{\bf Hypersurfaces in $\PP^m\times \PP^n$ \cite{Ottem}}
If $X\subset\PP^m\times \PP^n$ is a hypersurface of type $(d,e)$, the cones $\Nef(X)$, $\Mov(X)$ and $\Eff(X)$ are rational polyhedral.
If $m,n\geq2$, $X$ is a MDS (as proved also in \cite{Jow}). If $m=1$ and $d\leq n$ or $e=1$, then $X$ is a MDS. However, a very general hypersurface $X\subset \PP^1\times\PP^n$ of degree $(d,e)$ with $d\geq n+1$ and $e\geq2$ is not a MDS, as $\Nef(X)$ is generated by $H_1$ and $neH_2-dH_1$ (where $H_i=p_i^*\cO(1)$ and $p_1, p_2$ are the two projections), and the divisor $neH_2-dH_1$ has no effective multiple.
As noted in \cite{Ottem}, it is the value of $d$, rather than $-K_X$, that determines whether a general hypersurface of degree $(d, e)$ is a MDS or not. In particular, it is not true that a sufficiently ample hypersurface in a MDS is again a MDS.
\subsection{Smooth rational surfaces}\label{surfaces}
A smooth rational surface $X$ whose anticanonical class $-K_X$ is big (the Iitaka dimension $\kappa(-K_X)$ is $2$) is a MDS \cite[Thm. 1]{TVV}\footnote{There is evidence that the same result holds for all projective $\QQ$-factorial rational surfaces - see Thm. \ref{-K big}.}. There are examples of smooth rational surfaces with $-K_X$ big, which are not of Fano type \cite{TVV}.
Smooth rational surfaces $X$ with $\kappa(-K_X)=1$ are MDS if and only if $\Eff(X)$ is rational polyhedral \cite{AL_anticanonical}. It is not clear what this condition means in practice. By Ex. \ref{Hesse}, if $X=\Bl\PP^2_{p_1,\ldots,p_9}$, where $p_1,\ldots,p_9$ are the base points of a cubic pencil, then
$X$ is a MDS if and only if $p_1,\ldots,p_9$ are the $9$ inflection points of the cubics in the pencil (the configuration is unique up to $\Aut(\PP^2)$).
When the points are not the base points of a cubic pencil, it is not clear what the precise condition should be for $X$ to be a MDS.
When $\kappa(-K_X)\leq 0$, the question is less settled. There exist smooth rational surfaces (of arbitrarily large Picard number) with $\kappa(-K_X)=-\infty$ which are MDS \cite{HwangPark}.
\subsection{Surfaces with $\rho(X)=2$}
The classification of singular rational MDS surfaces with $\rho(X)=2$ is far from settled (see Sections \ref{section wpp} and \ref{toricPicard1}).
In general, understanding when the blow-up $\Bl_p X$ of a surface $X$ with $\rho(X)=1$ at a general point $p$ is a MDS, is related to the rationality of Seshadri constants (see Section \ref{Picard1}) and is not understood in most cases.
\subsection{\bf Singularities of Cox rings and positivity of $-K_X$}
Assume $\ch \kk=0$ and let $X$ be a MDS. Then $X$ is of Fano type (resp., Calabi-Yau type) if and only if $\Spec(\Cox(X))$ has klt singularities (resp. log canonical singularities) \cite{KO} (see also \cite{GOST}, \cite{Brown}). Recall that $X$ is said to be \emph{of Calabi-Yau type} if there exists a log-canonical pair $(X,\De)$ such that $(K_X+\De)$ is $\QQ$-linearly trivial. It would be interesting if the condition $-K_X\in\Eff(X)$ is also reflected in $\Cox(X)$.
\section{Blow-ups of surfaces of Picard number one}\label{Picard1}
Let $X$ be a projective, $\QQ$-factorial, normal surface with $\rho(X)=1$. Let $H$ be an ample $\QQ$-divisor on $X$ and let
$$w:=H^2.$$
If $p\in X$ is a general point, let $\Bl_p X$ denote the blow-up of $p$ and $E$ be the exceptional divisor.
The Mori cone of $\Bl_p X$ has the form
$$\NE(\Bl_p X)=\RR_{\geq0}\{E, R\},\quad R=H-\epsilon E, \quad \epsilon\in\RR_{>0}.$$
There are two possibilities: either $R^2=0$, or $R^2<0$. Assume that $R^2=0$. Then $\epsilon=\sqrt{w}$ and we have
$$\Nef(\Bl_p X)=\RR_{\geq0}\{H, R\}.$$
In particular, $\epsilon$ is the \emph{Seshadri constant} $\epsilon(H,p)$ of $H$ at the point $p$.
Then $\Bl_p X$ is a MDS if and only if $R$ is semiample (in particular,
$\epsilon\in\QQ$). There are no known examples (in any dimension) of irrational Seshadri constants at points. For example, if $X\subset\PP^3$ is a general quintic surface, it is expected that $\epsilon(\cO(1), p)=\sqrt{5}$ for a general point $p$.
We discuss other conjectural examples of irrational Seshadri constants in Section \ref{section wpp}.
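To indicate where $\sqrt{5}$ comes from in the quintic example (a sketch; the expectation itself is the open part):

```latex
% For a quintic surface X in P^3 with H = O(1)|_X, we have
% w = H^2 = deg X = 5. If NE(Bl_p X) contains no negative curve
% other than E (as expected for p general), we are in the case
% R^2 = 0 above, so
\[
\epsilon(\cO(1),p) \;=\; \sqrt{w} \;=\; \sqrt{5},
\]
% which is irrational; in particular Bl_p X would not be a MDS.
```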
Assume now $R^2<0$. Then there exists an irreducible curve $C$ on $\Bl_p X$ such that $C^2<0$ and $C$ spans the same ray as $R$.
Then $\Bl_p X$ is a MDS if and only if the class
$$R^{\perp}:=H-\frac{w}{\epsilon}E$$ is semiample, or equivalently, using Zariski's theorem, the ray spanned by $R^{\perp}$ contains a movable divisor. As $E$ and $C$ span
$\NE(\Bl_p X)$ and $R^{\perp}$ spans an extremal ray of $\Nef(\Bl_p X)$, it follows that $R^{\perp}$ is semiample if and only if $C$ is not contained in the base locus of $|dR^{\perp}|$, for some $d>0$. We state this observation as a Lemma:
\begin{lemma}\label{cutkosky}
Let $X$ be a projective, $\QQ$-factorial surface with Picard number $\rho(X)=1$ and let $p\in X$ be a general point. Let $\Bl_p X$ be the blow-up of $X$ at $p$ and let $E$ be the exceptional divisor. Assume that $\Bl_p X$ contains an irreducible curve $C\neq E$ such that $C^2<0$. Then $\Bl_p X$ is a MDS if and only if there exists an effective divisor $D$ on $\Bl_p X$ such that $D\cdot C=0$ and the linear system $|D|$ does not contain $C$ as a fixed component. Equivalently, there exists a curve $\bar D$ on $X$ that intersects the image $\bar C$ of $C$ in $X$ only at $p$ and with multiplicity one.
\end{lemma}
\begin{rmk}\label{artin}
Assume the situation in Lemma \ref{cutkosky} and $\ch \kk>0$.
If $X$ and $p$ can be defined over the algebraic closure of a finite field, then a divisor $D$ as in the Lemma always exists. This follows from \cite{Artin} if $X$ is smooth. In general, one can consider the desingularization of $X$ and the same conclusion holds.
\end{rmk}
\section{Blow-ups of weighted projective planes}\label{section wpp}
Let $a, b, c>0$ be pairwise coprime integers and consider the weighted projective space
$$\PP=\PP(a,b,c)=\Proj S,$$
where $S=\kk[x,y,z]$ and $x,y,z$ have degrees
$$\deg(x)=a,\quad \deg(y)=b,\quad \deg(z)=c.$$
Then $\PP$ is a toric, projective, $\QQ$-factorial surface with Picard number one.
Note that $\PP$ is smooth outside the three torus invariant points, but singular at some of these points if $(a,b,c)\neq(1,1,1)$.
If $D_1, D_2, D_3$ are the torus invariant (Weil) divisors, let
$$H=m_1 D_1+m_2 D_2+m_3 D_3,$$
for some integers $m_1, m_2, m_3$ such that $m_1a+m_2b+m_3c=1$. Then
$$\Cl(\PP)=\ZZ\{H\},\quad \Pic(\PP)=\ZZ\{abc H\},$$
$$H^2=\frac{1}{abc}.$$
Moreover, $\cO_{\Proj S}(d)\cong\cO(dH)$ for all $d\in\ZZ$ and $\HH^0(\PP,\cO(d))$ can be identified with the degree $d$ part $S_d$ of $S$.
If $\pi: \Bl_e\PP\ra\PP$ is the blow-up map, let $E=\pi^{-1}(e)$. We abuse notation and denote by $H$ the pull-back $\pi^*H$ (note that $e$ does not belong to the support of $H$). We have $\Cl(\Bl_e\PP)=\ZZ\{H, E\}$ and hence a Cox ring of $\Bl_e\PP$ is
$$\Cox(\Bl_e\PP)=\bigoplus_{d,l\in\ZZ}\HH^0(X, \cO(dH-lE)). $$
It was observed by Cutkosky \cite{Cutkosky} that finite generation of $\Cox(\Bl_e\PP)$ is equivalent to the finite generation of the
symbolic Rees algebra $R_s(\gp)$ of the prime ideal $\gp\subset S$ defining the point $e$. Here $\gp$ is a \emph{monomial prime}, i.e.,
the kernel of the $\kk$-algebra homomorphism:
$$\phi: \kk[x, y, z]\rightarrow \kk[t], \quad\phi(x)=t^a,\quad\phi(y)=t^b,\quad\phi(z)=t^c.$$
The \emph{symbolic Rees algebra} of a prime ideal $\gp$ in a ring $R$, is the ring
$$R_s(\gp):=\bigoplus_{l\geq0}\gp^{(l)},\quad \text{ where }\quad \gp^{(l)}=\gp^lR_{\gp}\cap R.$$
In our situation, the symbolic Rees algebra $R_s(\gp)$ can be identified with the following subalgebra of $\Cox(\Bl_e\PP)$:
$$\bigoplus_{d,l\in\ZZ_{\geq0}}\HH^0(X, \cO(dH-lE)),$$
which is clearly finitely generated if and only if $\Cox(\Bl_e\PP)$ is finitely generated (or equivalently Noetherian).
The study of the symbolic Rees algebras $R_s(\gp)$ for monomial primes has a long history: \cite{Huneke1}, \cite{Huneke2}, \cite{Cutkosky}, \cite{GNS}, \cite{GNS2}, \cite{Srinivasan}, \cite{GM}, \cite{GNW}, \cite{KuranoMatsuoka}, \cite{CutkoskyKurano}, \cite{GK}.
Prior to \cite{GK}, the only non-finitely generated examples known were the following:
\begin{thm}\cite[Cor. 1.2, Rmk. 4.5]{GNW}\label{GNW} Assume $(a,b,c)$ is one of the following:
\bi
\item $(7m-3, 5m^2-2m, 8m-3)$, with $m\geq 4$ and $3\nmid m$,
\item $(7m-10, 5m^2-7m+1, 8m-3)$, with $m\geq 5$, $3\nmid 7m-10$ and $m\not\equiv -7(\textrm{mod } 59)$.
\ei
Then $\Bl_e\PP(a,b,c)$ is not a MDS when $\ch \kk=0$.
\end{thm}
The original proof of Theorem \ref{GNW} involved a reduction to positive characteristic. Using methods of toric geometry, Gonzalez and Karu \cite{GK} gave a different proof of Theorem \ref{GNW}, which allows for many more examples of toric surfaces $X$ with Picard number one for which $\Bl_e X$ is not a MDS in characteristic zero (Thm. \ref{GK} - to be discussed in detail in Section \ref{toricPicard1}). In particular:
\begin{thm}\cite{GK}\label{GK particular}
If $\ch \kk=0$, then $\Bl_e\PP(a,b,c)$ is not a MDS if $(a,b,c)$ is one of the following:
$$(7, 15, 26),\quad (7,17, 22),\quad (10,13, 21),\quad (11, 13, 19),\quad (12, 13, 17).$$
\end{thm}
The above are all the triples $(a,b,c)$ with $a+b+c\leq50$ that satisfy the conditions in Thm. \ref{GK}.
Key in all the examples in \cite{GK} is that $\Bl_e \PP$ has a negative curve, other than $E$ (hence, Lemma \ref{cutkosky} applies).
\begin{ques}\label{irrational}
Are there any triples $(a,b,c)$ for which $\sqrt{abc}\notin\ZZ$ and $\Bl_e\PP(a,b,c)$ contains no curves $C\neq E$ with $C^2<0$?
\end{ques}
As explained in Section \ref{Picard1}, if $\sqrt{abc}\notin\ZZ$ and $\Bl_e\PP(a,b,c)$ has no negative curves, then $\Bl_e\PP$ is not a MDS (in any characteristic), as $\NE(\Bl_e \PP)$ and $\Nef(\Bl_e \PP)$ have an irrational extremal ray generated by $H-\frac{1}{\sqrt{abc}}E$. In particular, the Seshadri constant $\epsilon(H,e)$ is irrational. Furthermore, if $\kk=\CC$, the Nagata conjecture for $\PP^2$ and $abc$ points holds \cite[Prop. 5.2.]{CutkoskyKurano}.
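The irrationality of this extremal ray is a direct intersection computation, using $H^2=\frac{1}{abc}$, $H\cdot E=0$ and $E^2=-1$ on $\Bl_e\PP$:

```latex
\[
\Big(H-\tfrac{1}{\sqrt{abc}}E\Big)^2 \;=\; \frac{1}{abc}\;-\;\frac{1}{abc} \;=\; 0,
\]
% so in the notation of Section \ref{Picard1} we are in the case R^2 = 0,
% with Seshadri constant
\[
\epsilon(H,e)\;=\;\sqrt{w}\;=\;\frac{1}{\sqrt{abc}},
\]
% irrational whenever abc is not a perfect square.
```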
\medskip
If $\ch \kk>0$ and $\Bl_e\PP$ is not a MDS, then $\Bl_e\PP$ has no negative curve,
other than $E$ (see Rmk. \ref{artin}). In particular, either $\sqrt{abc}\notin\ZZ$ or $H-\frac{1}{\sqrt{abc}}E$ is not semiample.
If $\Bl_e\PP(a,b,c)$ has no negative curve in characteristic $p$, then by standard reduction mod $p$ methods it follows that $\Bl_e\PP(a,b,c)$ has no negative curves in characteristic zero.
\medskip
\begin{ques}\label{9,10,13}\cite{KuranoMatsuoka}
Does $\Bl_e\PP(9,10,13)$ contain a curve $C\neq E$ with $C^2<0$?
\end{ques}
In Section \ref{toricPicard1} we discuss an approach (for $\ch \kk=0$) to the classification problem \ref{classify} by reducing the question to an interpolation problem. In particular, Question \ref{9,10,13} has a negative answer (in $\ch \kk=0$, hence, also in $\ch \kk=p$ for all but finitely many primes $p$) if and
only if there is an affirmative answer to the following:
\begin{ques}(Question \ref{interpolation_9,10,13})
Let $\De$ be the polytope in $\RR^2$ with vertices $(0,0)$, $(10,40)$, $(36,27)$. For every $q\geq 1$, let
$$m_q=\lfloor q\sqrt{1170}\rfloor+1.$$
Is it true that for every $q\geq1$ and any point $(i,j)\in q\De\cap\ZZ^2$, there exists a curve $C\subset\RR^2$ of degree $m_q$ passing through all the points
$(i',j')\neq(i,j)$ in $q\De\cap\ZZ^2$, but not $(i,j)$?
\end{ques}
Computer calculations show that the answer is affirmative for $q\leq5$.
\
Most known affirmative results are covered by the following:
\begin{thm}\cite{Cutkosky}\label{-K big}
If the anticanonical divisor of $\Bl_e\PP(a,b,c)$
$$-K=(a+b+c)H-E$$
is big, then $\Bl_e\PP(a,b,c)$ is a MDS. In particular, if $(-K)^2>0$, i.e., if
$$a+b+c>\sqrt{abc},$$
then $\Bl_e\PP(a,b,c)$ is a MDS.
\end{thm}
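The numerical equivalence in the theorem is immediate from the intersection numbers on $\Bl_e\PP$ ($H^2=\frac{1}{abc}$, $H\cdot E=0$, $E^2=-1$):

```latex
\[
(-K)^2 \;=\; \big((a+b+c)H-E\big)^2 \;=\; \frac{(a+b+c)^2}{abc}\;-\;1,
\]
% which is positive if and only if (a+b+c)^2 > abc,
% i.e. a+b+c > sqrt(abc).
```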
Note that if $(a,b,c)\neq(1,1,1)$ and $-K$ is big, $\Bl_e\PP(a,b,c)$ has a negative curve, other than $E$.
Several particular cases of Thm. \ref{-K big} were proved previously by algebraic methods \cite{Huneke1}, \cite{Huneke2}. Srinivasan \cite{Srinivasan} gave examples of triples $(a,b,c)$ for which $\Bl_e\PP(a,b,c)$ is a MDS, but $-K$ is not always big:
\bi
\item[(a) ] $(6,b,c)$, for any $b, c$;
\item[(b) ] $(5,77, 101)$ (in this case $\kappa(-K)=-\infty$).
\ei
A particular case of Theorem \ref{-K big} is when one of $a,b,c$ is $\leq4$. As noted in \cite{Cutkosky}, when compared with (b) above, this raises the question whether $\Bl_e\PP(5,b,c)$ is always a MDS.
\section{Blow-ups of higher dimensional toric varieties}\label{higher}
Recall that a toric variety $X$ corresponds to the data $(N,\Si)$, where $N$ is a lattice (a finitely generated free $\ZZ$-module) and $\Si\subset N_{\RR}$ is a fan. Then $X=X(N,\Si)$ is $\QQ$-factorial if and only if the fan $\Si$ is simplicial. Two toric varieties $X=X(N,\Si)$ and $X'=X(N',\Si')$ are isomorphic in codimension one if and only if $\Si$ and $\Si'$ have the same rays. To reduce dimensions when considering Question \ref{Jenia}, one has the following result:
\begin{prop}\cite[Prop. 3.1]{CT4}\label{toric}
Let $\pi:\,N\to N'$ be a surjective map of lattices
with kernel of rank $1$ spanned by a vector $v_0\in N$.
Let $\Ga$ be a finite set of rays in $N_{\RR}$ spanned by elements of $N$,
such that the rays $\pm{R_0}$ spanned by $\pm{v_0}$ are not in~$\Gamma$. Let $\Si'\subset N'_\RR$ be a complete simplicial fan with rays given by
$\pi(\Ga)$.
Suppose that the corresponding toric variety $X'$ is projective. Then
\bi
\item[(1) ] There exists a complete simplicial fan $\Si\subset N_\RR$
with rays given by $\Ga\cup\{\pm R_0\}$ and such that the corresponding toric variety $X$ is projective and $\pi$ induces a surjective morphism
$p:\,X\ra X'$.
\item[(2) ] There exists an SQM $Z$ of $\Bl_eX$ such that the rational map $Z\dashrightarrow\Bl_eX'$ induced by $p$ is regular. In particular, if $\Bl_eX$ is a MDS then $\Bl_eX'$ is a MDS.
\ei
\end{prop}
\begin{cor}\label{project}
Assume $X=X(N,\Si)$ is a toric variety of dimension $n$. Assume there
exists a saturated sublattice $$N'\subset N,\quad \rk N'=n-2$$ with the following properties:
\bi
\item[(1) ] The vector space $N'\otimes\QQ$ is generated by rays $R$ of $\Si$ with the property that $-R$ is also a ray of $\Si$.
\item[(2) ] There exist three rays of $\Si$ with primitive generators $u, v, w$ whose images generate
$N/N'$ and such that $$au + bv + cw = 0 \quad (\text{mod } N')$$ for some pairwise coprime integers $a,b,c > 0$.
\ei
Then there exists a rational map $\Bl_e X\dra\Bl_e\PP(a,b,c)$ which is a composition of SQMs and surjective morphisms between normal, projective, $\QQ$-factorial varieties. In particular, if $\Bl_e X$ is a MDS, then $\Bl_e\PP(a,b,c)$ is a MDS.
\end{cor}
\say{\bf Losev-Manin spaces.}\label{LM}
Let $\LM_n$ be the Losev-Manin space \cite{LM}. The space $\LM_n$ can also be described as the blow-up of $\PP^{n-3}$ at points $p_1,\ldots, p_{n-2}$ in linearly general position and the proper transforms of all the linear subspaces spanned by the points, in order of increasing dimension.
The space $\LM_n$ is a toric variety and its fan $\Si$ is the barycentric subdivision of the fan of $\PP^{n-3}$. It has lattice
$$N=\ZZ\{e_1,\ldots,e_{n-2}\}/\ZZ\{e_1+\ldots+e_{n-2}\},$$
and rays generated by the primitive lattice vectors
$$\sum_{i\in I}e_i,\quad \text{for all } I\subset \{1,\ldots, n-2\}, \quad 1\le \#I\le n-3.$$
Notice that rays of this fan come in opposite pairs. To construct, for all $n$, a sublattice $N'\subset N$ satisfying the conditions in Cor. \ref{project}, we can proceed as follows: we partition
$$ \{1,\ldots, n-2\}=S_1\coprod S_2\coprod S_3$$
into subsets of size $a+2, b+2, c+2$ (so $n=a+b+c+8$).
We also fix some indices $n_i\in S_i$, for $i=1,2,3$.
Let $N'\subset N$ be the sublattice generated by the following vectors:
$$e_{n_i}+e_r\quad\hbox{\rm for}\quad r\in S_i\setminus\{n_i\},\ i=1,2,3.$$
If $\pi: N\ra N/N'$ is the projection map, then we have the following:
\begin{enumerate}
\item the quotient $N/N'$ is a lattice generated by the vectors $\pi(e_{n_i})$, for $i=1,2,3$;
\item $a\pi(e_{n_1})+b\pi(e_{n_2})+c\pi(e_{n_3})=0$.
\end{enumerate}
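For convenience, here is the elementary verification of (2), implicit in the construction: in $N$ we have $\sum_{i=1}^{n-2}e_i=0$, and modulo $N'$ each $e_r$ with $r\in S_i\setminus\{n_i\}$ is identified with $-e_{n_i}$, so

```latex
\[
0 \;=\; \pi\Big(\sum_{r=1}^{n-2} e_r\Big)
\;=\; \sum_{i=1}^{3}\Big(\pi(e_{n_i})-(\#S_i-1)\,\pi(e_{n_i})\Big)
\;=\; -a\,\pi(e_{n_1})-b\,\pi(e_{n_2})-c\,\pi(e_{n_3}),
\]
% using #S_1 = a+2, #S_2 = b+2, #S_3 = c+2.
```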
\begin{cor}\label{sum}
Let $n=a+b+c+8$, where $a,b,c$ are positive pairwise coprime integers.
If $\Bl_e\LM_n$ is a MDS then $\Bl_e\PP(a,b,c)$ is a MDS.
\end{cor}
Cor. \ref{sum} and Theorems \ref{GNW} and \ref{GK particular} give examples of integers $n$ for which $\Bl_e\LM_n$ is not a MDS (for $n\geq134$ if one uses
Theorem \ref{GNW} and $n\geq50$ if one uses Theorem \ref{GK particular}). A smaller
$n$ for which $\Bl_e\LM_n$ is not a MDS was subsequently obtained in \cite{GK}, using Cor. \ref{project} and projecting from a sublattice $N'$ different from the one used in the proof of Cor. \ref{sum}:
\begin{thm}\cite{GK}\label{n=13}
If $\ch \kk=0$, $\Bl_e\LM_{13}$ is not a MDS.
\end{thm}
Cor. \ref{project} is used to prove that if $\Bl_e\LM_{13}$ is a MDS, then $\Bl_e\PP(7,15,26)$ is a MDS.
However, $\Bl_e\PP(7,15,26)$ is not a MDS (Thm. \ref{GK particular}).
The smallest known $n$ (as of the time of this writing) for which $\Bl_e\LM_n$ is not a MDS was recently obtained in \cite{HKL} by again using Cor. \ref{project} and projecting from a yet different sublattice:
\begin{add}\cite{HKL}\label{n=10}
If $\ch \kk=0$, $\Bl_e\LM_{10}$ is not a MDS.
\end{add}
Cor. \ref{project} is used to prove that if $\Bl_e\LM_{10}$ is a MDS, then $\Bl_e\PP(12,13,17)$ is a MDS.
However, $\Bl_e\PP(12,13,17)$ is not a MDS (Thm. \ref{GK particular}).
\begin{lemma}
If $\Bl_e\LM_{n+1}$ is a MDS, then $\Bl_e\LM_{n}$ is a MDS.
\end{lemma}
\bp
Note that although there exist forgetful maps $\LM_{n+1}\ra\LM_{n}$, in general it is not clear whether one can resolve the rational map
$$\Bl_e\LM_{n+1}\dra\Bl_e\LM_{n}$$ by an SQM followed by a surjective morphism. However, if $\Bl_e\LM_{n+1}$ is a MDS, this is always the case, and we are done by \cite{Okawa}.
\ep
As $\Bl_e\LM_{6}$ is a MDS in any characteristic (follows from \cite{C} - see \ref{M0n}; moreover, it is a threefold of Fano type), we are left with:
\begin{ques}
Is $\Bl_e\LM_{n}$ a MDS for $7\leq n\leq9$, $\ch \kk=0$?
\end{ques}
\
\say{\bf Losev-Manin spaces and the moduli spaces $\M_{0,n}$}\label{M0n}
There is a close connection between the blow-ups $\Bl_e\LM_{n}$ of the Losev-Manin spaces and the moduli spaces $\M_{0,n}$ of stable, $n$-pointed
rational curves. By Kapranov \cite{Kapranov}, $\M_{0,n}$ is the blow-up of $\PP^{n-3}$ at points $p_1,\ldots, p_{n-1}$ in linearly general position and the proper transforms of all the linear subspaces spanned by the points, in order of increasing dimension.
Up to changing coordinates, we may assume that
$$p_1=[1,0,0,\ldots,0],p_2=[0,1,0,\ldots,0],\ldots,p_{n-2}=[0,0,0,\ldots,1],$$
$$p_{n-1}=e=[1,1,1,\ldots,1].$$
Note that $p_{n-1}$ is the identity of the open torus in $\LM_n$. Moreover, $\M_{0,n}$ is the blow-up of $\LM_n$ along $e$, and the (proper transforms of the) linear subspaces spanned by $e$ and $\{p_i\}_{i\in I}$, for all the subsets $I$ of $\{1,\ldots, n-2\}$ with $1\leq\#I\leq n-5$. In particular, there is a projective birational morphism $\M_{0,n}\ra\Bl_e\LM_{n}$.
\begin{thm}\cite{CT4}\label{main CT}
\bi
\item[(1) ] If $\M_{0,n}$ is a MDS, then $\Bl_e\LM_{n}$ is a MDS;
\item[(2) ] If $\Bl_e\LM_{n+1}$ is a MDS, then $\M_{0,n}$ is a MDS.
\ei
\end{thm}
The existence of forgetful maps $\M_{0,n+1}\ra\M_{0,n}$ implies that if $\M_{0,n+1}$ is a MDS, then $\M_{0,n}$ is a MDS.
Combined with Cor. \ref{sum} and the results in \ref{LM}, Thm. \ref{main CT} gives a negative answer to the question of Hu and Keel \cite{HK} whether
$\M_{0,n}$ is a MDS.
\begin{thm}\cite{CT4},\cite{GK},\cite{HKL}
If $n\geq10$, $\M_{0,n}$ is not a MDS in characteristic $0$.
\end{thm}
Note that $\M_{0,6}$ is a MDS in any characteristic \cite{C} (moreover, it is a threefold of Fano type). The range $7\leq n\leq9$ is still open.
Part (1) of Thm. \ref{main CT} follows from \cite{HK} (see Rmk. \ref{rmk2}). Part (2) follows from:
\begin{thm}\cite{CT4}\label{Xn}
Let $X_n$ be the toric variety which is the blow-up of $\PP^{n-3}$ along points $p_1,\ldots,p_{n-2}$ and (all the proper transforms of) the linear subspaces of codimension at least $3$ spanned by the points $p_1,\ldots,p_{n-2}$. Then $\Bl_e X_{n+1}$ is an SQM of a $\PP^1$-bundle over $\M_{0,n}$ which is the projectivization of a direct sum of line bundles.
\end{thm}
Hence, $\M_{0,n}$ is a MDS if and only if $\Bl_eX_{n+1}$ is a MDS. In particular:
\bi
\item If $n\geq11$, then $\Bl_eX_n$ is not a MDS if $\ch \kk=0$;
\item If $n\leq7$, then $\Bl_eX_n$ is a MDS.
\ei
\say{\bf Further questions.}
\begin{enumerate}
\item Are there other examples of toric varieties besides Losev-Manin spaces, to which Cor. \ref{project} applies?
\item What are the simplest smooth toric varieties $X$ for which $\Bl_e X$ is not a MDS? Any smooth Fano varieties?
\end{enumerate}
If $X$ is a projective, $\QQ$-factorial toric variety such that all the torus invariant divisors are not movable, then $\Bl_e X$ is not toric.
It may or may not be a MDS (for example, when $X$ is $\LM_6$ or $\LM_n$ with $n\geq10$).
If some of the torus invariant divisors are movable, then $\Bl_e X$ may be toric (for example when $X=\PP^n$), but may not even be a MDS (for example, when $X=X_n$ from Thm. \ref{Xn}). It would be interesting to find a geometric criterion for $\Bl_e X$ to not be a MDS.
\section{Blow-ups of toric surfaces}\label{toricPicard1}
In this section we assume
$$\ch\kk=0.$$
Let $(X_{\De}, H)$ be a polarized toric projective surface with $H$ an ample $\QQ$-Cartier divisor on $X_{\De}$ corresponding to the rational polytope $\De\subset N^*_{\RR}=\RR^2$. If $X_{\De}$ has Picard number $\rho$, then $\De$ is a rational polytope with $\rho+2$ vertices. If $d>0$ is an integer such that $d\De$ has integer coordinates, then global sections of $\cO_{X_{\De}}(dH)$ can be identified with Laurent polynomials (considered as regular functions on the open torus):
$$f=\sum_{(i,j)\in d\De\cap\ZZ^2}a_{(i,j)}x^iy^j\in \HH^0(X,\cO(dH)).$$
The vertices of $\De$ correspond to the $\rho+2$ torus invariant points of $X$. A section $f$ vanishes at a torus invariant point if and only if the coefficient $a_{ij}$ of the corresponding vertex in $d\De$ is zero. We fix a vertex $(x_1,y_1)$ of $\De$ and let $p_1$ be the corresponding torus invariant point.
For simplicity, we assume this is the ``leftmost lowest" point of $\De$.
We now translate into linear algebra the condition that a global section of $\cO_{X_{\De}}(dH)$ has a certain multiplicity at the point $e$. Let $N_d$ be the number of lattice points $(i,j)\in d\De\cap\ZZ^2$ and let $R_m$ be the number of derivatives $\de^a_x\de^b_y$ of order $\leq m-1$ in two variables:
$$R_m=1+2+\ldots+m=\frac{m(m+1)}{2}.$$
\begin{defn}
We order the pairs $(i,j)$ and the pairs $(a,b)$ lexicographically (so the first $(i,j)$ corresponds to the leftmost point $(dx_1,dy_1)$ of $d\De$). We define two $N_d\times R_m$ matrices $A=A_{d,m}$ and $B=B_{d,m}$, whose entries for the pairs $(i,j)$ and $(a,b)$ are given as follows:
$$A_{(i,j),(a,b)}=\de^a_x\de^b_y(x^iy^j)(1,1)=a!{i\choose{a}}b!{j\choose{b}},$$
$$B_{(i,j),(a,b)}=i^aj^b.$$
where
we denote for any integers $n,k$ ($k\geq0$, but $n$ possibly negative)
$$ {n\choose{k}}=\frac{n(n-1)(n-2)\ldots(n-k+1)}{k!}.$$
\end{defn}
We write $N=N_d$, $R=R_m$, $A=A_{d,m}$, $B=B_{d,m}$, whenever there is no risk of confusion.
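As a computational aside (not part of the text), the generalized binomial coefficient above is easy to evaluate in exact arithmetic; the Python sketch below is ours:

```python
from fractions import Fraction

def binom(n, k):
    # n(n-1)...(n-k+1)/k! for any integer n (possibly negative) and k >= 0
    out = Fraction(1)
    for t in range(k):
        out *= Fraction(n - t, t + 1)
    return out

print(binom(5, 2))   # 10
print(binom(-3, 2))  # (-3)(-4)/2 = 6
print(binom(-1, 3))  # (-1)(-2)(-3)/6 = -1
```

With this convention the entries of $A_{d,m}$ are products $a!\binom{i}{a}\,b!\binom{j}{b}$ of falling factorials.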
\begin{lemma}\label{AB}
The matrix $B_{d,m}$ can be obtained from $A_{d,m}$ by a sequence of reversible column operations.
\end{lemma}
\bp
We claim that for every column $(a,b)$ of $A$, starting from left to right, we can do (reversible) column operations on $A$ involving only previous columns, and end up with the column that has entries $i^aj^b$ for every row $(i,j)$. For simplicity, we may first ignore the $j$'s and consider the situation when one matrix has entries $a!{i\choose{a}}$ and the other $i^a$ (with rows indexed by $i$ and columns by $a$). It is easy to see that one can do reversible column operations from one matrix to the other: use induction on $a$ and expand the product
$$i(i-1)(i-2)\ldots(i-a+1).$$
The general case is similar.
\ep
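In the one-variable step of the proof, the inverse column operations have a classical closed form that is not needed in the text but makes the claim easy to check: $i^a=\sum_{b=0}^{a}S(a,b)\,b!\binom{i}{b}$, where $S(a,b)$ are the Stirling numbers of the second kind. A Python sketch (ours):

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(a, b):
    # Stirling numbers of the second kind: S(a,b) = b*S(a-1,b) + S(a-1,b-1)
    if a == 0 and b == 0:
        return 1
    if a == 0 or b == 0 or b > a:
        return 0
    return b * stirling2(a - 1, b) + stirling2(a - 1, b - 1)

# each power column i^a is a fixed linear combination of the falling-factorial
# columns b!*C(i,b), so the two matrices are related by reversible column operations
for i in range(10):
    for a in range(6):
        assert i**a == sum(stirling2(a, b) * factorial(b) * comb(i, b) for b in range(a + 1))
print("column-operation identity verified")
```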
\begin{lemma}\label{linear algebra}
Let $A$ be an $N\times R$ matrix with entries in $\QQ$. The following are equivalent:
\bi
\item[(a) ] Any linear combination $\sum \al_j R_j$ of the rows $R_j$ of the matrix $A$ that is zero, must have $\al_i=0$,
\item[(b) ] There exists a linear combination of the columns of $A$ that equals the vector $e_i=(0,0,\ldots,1,0\ldots 0)\in\RR^{N_d}$.
\ei
In particular, $A$ has rank $N$ if and only if for every $1\leq i\leq N$, there exists a linear combination of the columns of $A$ which equals
$e_i$.
\end{lemma}
\bp
We may assume $i=1$. Consider the pairing
$(,): V\times W\ra\QQ$ with $V$ a $\QQ$-vector space with basis $e_1,\ldots, e_N$ and $W$ a $\QQ$-vector space with basis $f_1,\ldots, f_R$ and
$(e_u,f_v)=a_{uv}$. Let $$\phi: V\ra W^*,\quad \phi^*:W\ra V^*$$ be the induced linear maps. Condition (b) is equivalent to the dual vector $e_1^*\in V^*$ being in the image of the map $\phi^*$. Condition (a) is equivalent to the kernel $K$ of the map $\phi$ being contained in the span of $e_2,\ldots, e_N$. Let
$I=Im(\phi)\subset W^*$. Hence, there is an exact sequence $$0\ra K\ra V\ra I\ra 0.$$
Consider the inclusion map $u: K\ra V$. Dualizing, it follows that $Im(\phi^*)=I^*=ker(u^*)$. Hence, $e_1^*\in Im(\phi^*)$ if and only if $u^*(e_1^*)=0$. As $u^*(e_1^*)$ is the linear functional $K\ra \QQ$ given by $k\mapsto e_1^*(k)$, for $k\in K$, it follows that $u^*(e_1^*)=0$ if and only if $e_1^*(k)=0$, for all $k\in K$, or equivalently, $K$ is contained in the span of $e_2,\ldots, e_N$.
\ep
\begin{lemma} Let $\Bl_e X_{\De}$ be the blow-up of $X_{\De}$ at the identity point $e$ and let $E$ denote the exceptional divisor.
The following are equivalent:
\bi
\item[(i) ] The linear system $|dH-mE|$ is empty,
\item[(ii) ] The matrix $A_{d,m}$ has linearly independent rows,
\item[(iii) ] The matrix $B_{d,m}$ has linearly independent rows,
\item[(iv) ] For every $(i,j)\in d\De\cap\ZZ^2$, there exists a polynomial $f(x,y)\in\QQ[x,y]$ of degree $\leq m-1$, such that $ f(i,j)\neq0$ and
$$f(i',j')=0 \text{ for all } (i',j')\in d\De\cap\ZZ^2,\ (i',j')\neq(i,j).$$
\ei
\end{lemma}
Equivalently, condition (iv) says that one can separate any lattice point in $d\De$ from the rest by plane curves of degree at most $m-1$.
\bp
A non-zero section of $\cO_{X_{\De}}(dH)$ has multiplicity $m$ at the point $e$ if and only if there exists a non-zero linear combination of the rows of $A_{d,m}$ which is zero. Hence, (i) is clearly equivalent to (ii). By Lemma \ref{AB}, (ii) is equivalent to (iii). By Lemma \ref{linear algebra}, (iii) is equivalent to (iv).
\ep
\begin{lemma}\label{separate leftmost}
Let $\Bl_e X_{\De}$ be the blow-up of $X_{\De}$ at the identity point $e$ and let $E$ denote the exceptional divisor.
The following are equivalent:
\bi
\item[(i) ] All non-zero sections of the linear system $|dH-mE|$ (if any) define curves that pass through the torus invariant point $p_1$,
\item[(ii) ] There exists a linear combination of the columns of the matrix $A_{d,m}$ that equals the vector $(1,0\ldots, 0)\in\RR^{N_d}$,
\item[(iii) ] There exists a linear combination of the columns of the matrix $B_{d,m}$ that equals the vector $(1,0\ldots, 0)\in\RR^{N_d}$,
\item[(iv) ] There exists a polynomial $f(x,y)\in\QQ[x,y]$ of degree $\leq m-1$, such that $f(dx_1,dy_1)\neq0$ and
$$f(i,j)=0 \text{ for all } (i,j)\in d\De\cap\ZZ^2,\ (i,j)\neq(dx_1,dy_1).$$
\ei
\end{lemma}
Equivalently, condition (iv) says that there exists a plane curve of degree $\leq m-1$ that passes through all the lattice points in $d\De$, except the leftmost point.
\bp
Condition (i) is equivalent to the fact that any non-zero section of $\cO_{X_{\De}}(dH)$ which has multiplicity $m$ at the point $e$ must have the coefficient
$a_{(dx_1,dy_1)}$ equal to zero. Equivalently, any linear combination $\sum \al_i R_i$ of rows $R_i$ of the matrix $A$ that is zero, must have $\al_1=0$. By Lemma \ref{linear algebra} this is equivalent to condition (ii). Lemma \ref{AB} implies that (ii) and (iii) are equivalent. Condition (iv) is just a reformulation of (iii).
\ep
\
Consider now the situation when $\rho(X_{\De})=1$ (i.e., $\De$ is a triangle) and $\Bl_e(X_{\De})$ has a curve $C\neq E$ with $C^2<0$. As in \cite{GK},
we assume that the point $(0,0)$ is one vertex of $\De$, the point $(0,1)$ lies in the interior of a non-adjacent edge, and moreover, $C$ is the proper transform of the closure $\bar C$ of the curve defined by the section $1-y$ of $\cO_{X_{\De}}(H)$. Then $\bar C=H$ in $\Cl(X_{\De})$ and
$$C=H-E\quad \text{in }\Cl(\Bl_eX_{\De}).$$
The condition $C^2<0$ is equivalent to $$w:=H^2=2(\text{Area}(\De))<1.$$
Denote by $(x_1,y_1)$ the leftmost point of $\De$ and by $(x_2,y_2)$ the rightmost point of $\De$.
Let $p_1$, respectively $p_2$, be the corresponding torus invariant points.
Note that $\bar C$ contains $p_1$ and $p_2$. Moreover, $w=H^2=x_2-x_1$ is the width of $\De$.
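As a sanity check on this normalization, the identity $w=H^2=2(\mathrm{Area}(\De))=x_2-x_1$ can be verified with the shoelace formula. The triangle below, with a vertex at the origin and the opposite edge passing through $(0,1)$, is an illustrative choice of ours, not one taken from the text:

```python
from fractions import Fraction as F

def twice_area(p, q, r):
    # shoelace formula: twice the area of the triangle pqr
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

# vertex at the origin; the edge from (-1, 1/2) to (2, 2) passes through (0, 1)
p, q, r = (F(0), F(0)), (F(-1), F(1, 2)), (F(2), F(2))
width = max(x for x, _ in (p, q, r)) - min(x for x, _ in (p, q, r))
print(twice_area(p, q, r), width)  # both equal 3
```

The condition that the edge opposite the origin passes through $(0,1)$ is exactly what forces twice the area to equal the width.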
\medskip
The main theorem in \cite{GK} becomes an instance of the following more general statement, which shows that the question of $\Bl_e X_{\De}$ not being a MDS is equivalent to solving an interpolation problem for points in the (usual) affine plane.
\begin{prop}\label{GK general}
Let $(X_{\De},H)$ be a polarized projective toric surface with $\rho(X_{\De})=1$ corresponding to a triangle $\De$ as above. Assume
$$w=H^2<1.$$
Then $\Bl_e X_{\De}$ is not a MDS if and only if for any sufficiently divisible integer $d>0$ such that $d\De$ has integer coordinates, there exists a curve $C\subset\AA^2$ of degree $dw-1$ that passes through all the lattice points $d\De\cap\ZZ^2$ except the point $(dx_1,dy_1)$.
\end{prop}
\bp
By Lemma \ref{cutkosky}, $\Bl_e X_{\De}$ is not a MDS if and only if any non-zero effective divisor $D$ with class $dH-dwE$ ($d>0$)
contains $C$ in its fixed locus, or equivalently, the image $\bar D$ of $D$ in $X_{\De}$ contains some point of $\bar C$ other than $e$ (for example $p_1$).
Hence, $\Bl_e X_{\De}$ is not a MDS if and only if for any sufficiently large and divisible $d$, any element of the linear system $|dH-dw E|$ contains $p_1$. The result now follows from Lemma \ref{separate leftmost}.
\ep
The difficult part is of course to solve the interpolation problem posed in Prop. \ref{GK general}. We claim that the main theorem in \cite{GK} gives sufficient (but not necessary) conditions for this.
\begin{thm}\cite[Thm. 1.2]{GK}\label{GK}
Let $(X_{\De},H)$ be a polarized projective toric surface with Picard number one,
corresponding to a triangle $\De$ as above and assume $$w=H^2<1.$$
If $s_1<s_2<s_3$ are the slopes defining the triangle $\De$, let
$$n=\#([s_1,s_2]\cap\ZZ).$$
Assume that $$\#((n-1)[s_2,s_3]\cap\ZZ)=n,\quad \text{and}\quad ns_2\notin\ZZ.$$
Then for any integer $d>0$ such that $d\De$ has integer coordinates, there exists a curve $C\subset\AA^2$ of degree $dw-1$ that passes through all the lattice points $d\De\cap\ZZ^2$ except the leftmost point $(dx_1,dy_1)$. In particular, $\Bl_e X_{\De}$ is not a MDS by Proposition \ref{GK general}.
\end{thm}
As mentioned in \cite{GK}, $\#([s_1,s_2]\cap\ZZ)$ represents the number of points in $d\De\cap\ZZ^2$ (for any $d$ such that $d\De$ has integer coordinates) lying in the second column from the left, i.e., the column with $x$ coordinate $dx_1+1$. Similarly, for any $k\geq1$,
the number
$$\#((k-1)[s_2,s_3]\cap\ZZ)$$ is the number of points in $d\De\cap\ZZ^2$ lying in the $k$-th column from the right, i.e., the column with $x$ coordinate
$dx_2-(k-1)$. None of these numbers depend on $d$. The condition $ns_2\notin\ZZ$ is equivalent to the $(n+1)$-th column from the right not containing a lattice point on the top edge (see Rmk. \ref{ss3}).
\bp[Proof of Theorem \ref{GK}]
As in \cite{GK}, we first transform the triangle $d\De$ by integral translations and shear transformations $(i,j)\mapsto(i,j+ai)$ for $a\in\ZZ$. Clearly, the assumptions still hold for the new triangle. To see that the conclusion is also not affected, recall that the conclusion is equivalent to the fact that any section $f$ of $\HH^0(X_{\De},dH)$
$$f(x,y)=\sum_{(i,j)\in d\De\cap\ZZ^2} a_{(i,j)}x^iy^j$$ that vanishes to order $dw$ at $e=(1,1)$ has the coefficient $a_{(dx_1,dy_1)}=0$ (i.e., $f$ vanishes at the torus invariant point $p_1$). The translation operation multiplies $f$ by a monomial, and the shear transformation performs a change of variables on the torus. The two operations do not affect the order of vanishing of $f$ at $e$ or whether $f$ vanishes at $p_1$.
We first apply a shear transformation, so that $-2<s_2<-1$ (possible since $s_2\notin\ZZ$). We then translate the triangle so that the leftmost point moves to a point with $x$-coordinate $-1$ and the rightmost point moves to a point on the $x$-axis. As there are precisely $n$ lattice points in the $n$-th column from the right, it follows from $-2<s_2<-1$ that the $n$ points are, in coordinates,
$$(\al,0),\quad (\al,1),\quad (\al,2),\quad \ldots, (\al,n-1),\quad \text{for some } \al\geq0,$$
and, moreover, that $0\leq s_3$. Note that $ns_2\notin\ZZ$ implies $\al>0$\footnote{We may also take $\al>0$ at the expense of proving the statement only for sufficiently large and divisible $d$.}.
It also follows that for all $0\leq i\leq n-1$, the column in $d\De$ with $x$-coordinate $\al+i$ has exactly $n-i$ lattice points:
$$(\al+i,0),\quad (\al+i,1),\quad (\al+i,2),\quad \ldots, (\al+i,n-1-i).$$
We denote these points $\{Q_j\}$ (a total of $\frac{n^2+n}{2}$ points). Let the $n$ lattice points in $d\De$ in the second column from the left be
$$P_0=(0,\be),\quad P_1=(0,\be+1),\quad \ldots,\quad P_{n-1}=(0,\be+n-1),$$
for some $\be\geq0$. As $-2<s_2<-1$, the leftmost point must be
$$L=(-1,\be+n+1).$$
As the width of $d\De$ is $dw$, the integers $\al,\be$ are related to $w, s_2$ by
$$\al=dw-n, \quad \be=-s_2(dw)-n-1,\quad -s_2=\frac{\be+n+1}{\al+n}$$
\begin{lemma}\label{interpolation}
There is a unique curve $C$ of degree $\leq n$ passing through the $\frac{n^2+3n}{2}$ points $\{P_i\}$ and $\{Q_i\}$. The curve $C$ passes through the point $L$ if and only if $n\be=(n+1)\al$ (or, equivalently, $-s_2=1+\frac{1}{n}$).
\end{lemma}
\begin{rmk}\label{ss3}
It is not hard to see that the condition $ns_2\notin\ZZ$ is equivalent to the $(n+1)$-th column from the right not containing a lattice point on the top edge, and that it implies $-s_2\neq1+\frac{1}{n}$.
\end{rmk}
Assuming Lemma \ref{interpolation}, Theorem \ref{GK} follows by considering the union $C'$ of the curve $C$ with all the vertical lines
$$x=1,\quad x=2,\quad\ldots,\quad x=\al-1.$$
Note that the degree of $C'$ equals $dw-1$. Clearly, if $ns_2\notin\ZZ$, Lemma \ref{interpolation} implies that $C'$ does not pass through $L$.
\bp[Proof of Lemma \ref{interpolation}]
We first write down a basis $G_0,\ldots, G_n$ for the vector space of polynomials in $\QQ[x,y]$ of degree $\leq n$ that vanish at the points $\{Q_j\}$ as follows. For all $0\leq i\leq n$, let
$$G_i(x,y)={x-\al\choose{i}}{y\choose{n-i}}.$$
Consider now the equation of a curve $C$ that passes through $\{Q_j\}$:
$$f(x,y)=\sum_{i=0}^n c_{i}G_i(x,y),\quad c_{i}\in\QQ.$$
Let $M$ be the $(n+1)\times (n+1)$ matrix with rows indexed by points $P_0, P_1,\ldots, P_{n-1}, L$ (hence, the last row corresponds to $L$) and columns indexed by $G_0,\ldots, G_n$, such that the entry corresponding to the row $P_i$ (resp. $L$) and column $G_j$ is $G_j(P_i)$ (resp. $G_j(L)$), i.e.,
$$M_{P_i,G_j}=G_j(0,\be+i)={-\al\choose{j}}{\be+i\choose{n-j}},\quad 0\leq i\leq n-1,$$
$$M_{L,G_j}=G_j(-1,\be+n+1)={-1-\al\choose{j}}{\be+n+1\choose{n-j}}.$$
Let $M'$ be the $n\times (n+1)$ matrix obtained by taking the first $n$ rows of $M$. Clearly, there is a unique curve $C$ passing through $\{Q_j\}$ and $\{P_j\}$ if and only if there is a unique solution ${\bf c}=(c_i)$ (up to scaling) to the linear system $M'\cdot{\bf c}={\bf 0}$, i.e.,
$$\rk M'=n.$$
To prove this, successively subtract row $P_{n-2}$ from row $P_{n-1}$, row $P_{n-3}$ from row $P_{n-2}$, etc., row $P_0$ from row $P_1$. The result is that the last column of $M'$ has the last $(n-1)$ entries $0$. Subtracting row $P_{n-2}$ from row $P_{n-1}$, row $P_{n-3}$ from row $P_{n-2}$, etc., row $P_1$ from row $P_2$ leaves the next-to-last column of $M'$ with the last $(n-2)$ entries $0$. Continuing in the same fashion (and using the relation
${k+1\choose{l+1}}={k\choose{l+1}}+{k\choose{l}}$) we obtain an ``upper diagonal" $n\times (n+1)$ matrix $M''$ with entries:
$$
M''_{P_i,G_j} = \left\{
\begin{array}{ll}
{-\al\choose{j}}{\be\choose{n-i-j}} & \quad\text{if} \quad i+j\leq n \\
0 & \quad \text{if} \quad i+j> n
\end{array}
\right.
$$
Hence, $\rk M'=\rk M''=n$.
We now prove that $\det M=0$ if and only if $n\be=(n+1)\al$. Clearly, the curve $C$ passes through the point $L$ if and only if $\det M=0$, hence, this would finish the proof. Let $\tilde{M}$ be the matrix obtained by adding to the matrix $M''$ the last row of $M$, i.e.,
$$
\tilde{M}_{P_i,G_j} = \left\{
\begin{array}{ll}
{-\al\choose{j}}{\be\choose{n-i-j}} & \quad\text{if} \quad i+j\leq n \\
0 & \quad \text{if} \quad i+j> n,
\end{array}
\right.
$$
$$\tilde{M}_{L,G_j} = {-1-\al\choose{j}}{\be+n+1\choose{n-j}}.$$
Clearly, $\det M=\det\tilde{M}$. Let $\tilde{M}^{(1)}$ be the matrix obtained from $\tilde{M}$ by first dividing the column corresponding to $G_j$ by ${-\al\choose{j}}$ (for every $j$) and multiplying the last row by $\al$. Using that ${-1-\al\choose{j}}={-\al\choose{j}}\frac{\al+j}{\al}$, the entries of $\tilde{M}^{(1)}$ are given by
$$
\tilde{M}^{(1)}_{P_i,G_j} = \left\{
\begin{array}{ll}
{\be\choose{n-i-j}} & \quad\text{if} \quad i+j\leq n \\
0 & \quad \text{if} \quad i+j> n,
\end{array}
\right.
$$
$$\tilde{M}^{(1)}_{L,G_j} = (\al+j){\be+n+1\choose{n-j}}.$$
Let $\tilde{M}^{(2)}$ be the matrix obtained from $\tilde{M}^{(1)}$ by first multiplying the last row by $(-1)$, then adding to the last row the sum of rows:
$${n+1\choose{0}}(\text{row } P_0)+{n+1\choose{1}}(\text{row } P_1)+\ldots+{n+1\choose{n}}(\text{row } P_{n-1}),$$
then finally dividing the last row by $\frac{1}{\be+n+1}$ (that is, multiplying it by $\be+n+1$).
Using the identities
$$\sum_{i=0}^{n-j}{\be\choose{n-i-j}}{n+1\choose{i}}={\be+n+1\choose{n-j}},\quad l{n\choose{l}}=n{n-1\choose{l-1}}$$
it follows that the entries in the last row of $\tilde{M}^{(2)}$ are:
$$\tilde{M}^{(2)}_{L,G_j} = {\be+n\choose{n-j-1}},\quad 1\leq j\leq n,$$
$$\tilde{M}^{(2)}_{L,G_0} = {\be+n\choose{n-1}}-\frac{(\al+n)(n+1)}{(\be+n+1)}.$$
Finally, let $\tilde{M}^{(3)}$ be the matrix obtained from $\tilde{M}^{(2)}$ by
subtracting from the last row the following sum of rows:
$${n\choose{0}}(\text{row } P_1)+{n\choose{1}}(\text{row } P_2)+\ldots+{n\choose{n-2}}(\text{row } P_{n-1}).$$
The matrix $\tilde{M}^{(3)}$ has entries
$$\tilde{M}^{(3)}_{L,G_j} =0,\quad 1\leq j\leq n,$$
$$\tilde{M}^{(3)}_{L,G_0} =n-\frac{(\al+n)(n+1)}{(\be+n+1)}.$$
Note that $\tilde{M}^{(3)}_{L,G_0}=0$ if and only if $n\be=(n+1)\al$.
As $\tilde{M}^{(3)}$ is an upper triangular matrix with $\det\tilde{M}^{(3)}=\tilde{M}^{(3)}_{L,G_0}$, the result follows.
\ep
\ep
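The determinant criterion of Lemma \ref{interpolation} can also be tested directly, bypassing the row reductions above: build $M$ from its defining entries and expand the determinant in exact arithmetic. The Python sketch below is ours and is only meant for small $n$:

```python
from fractions import Fraction
from itertools import permutations

def binom(n, k):
    # generalized binomial coefficient, n possibly negative
    out = Fraction(1)
    for t in range(k):
        out *= Fraction(n - t, t + 1)
    return out

def det(M):
    # Leibniz expansion; adequate for the small matrices used here
    size = len(M)
    total = Fraction(0)
    for p in permutations(range(size)):
        inv = sum(p[a] > p[b] for a in range(size) for b in range(a + 1, size))
        term = Fraction(1)
        for i in range(size):
            term *= M[i][p[i]]
        total += (-1) ** inv * term
    return total

def M_matrix(n, alpha, beta):
    # rows P_0, ..., P_{n-1}, L; columns G_0, ..., G_n
    rows = [[binom(-alpha, j) * binom(beta + i, n - j) for j in range(n + 1)]
            for i in range(n)]
    rows.append([binom(-1 - alpha, j) * binom(beta + n + 1, n - j) for j in range(n + 1)])
    return rows

# det M vanishes exactly when n*beta == (n+1)*alpha
print(det(M_matrix(1, 3, 6)))  # 1*6 == 2*3  -> 0
print(det(M_matrix(2, 2, 3)))  # 2*3 == 3*2  -> 0
print(det(M_matrix(2, 2, 4)))  # 2*4 != 3*2  -> nonzero
```

The three evaluations illustrate that $\det M$ vanishes precisely on the locus $n\be=(n+1)\al$.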
There are other possible applications of Prop. \ref{GK general} that are not covered by Theorem \ref{GK} towards the classification problem \ref{classify} (see also \cite{He}). For toric surfaces of higher Picard number, we expect that solving an interpolation problem analogous to the one posed in Prop. \ref{GK general}
will lead to examples of non Mori Dream Spaces. An interesting question is whether there is a higher-dimensional version of Prop. \ref{GK general}.
\newpage
\section*{References}
\begin{biblist}
TITLE: Expected Value and Standard Deviation for sample
QUESTION [1 upvotes]: Randomly selecting 50 people from a population, 45% say 'YES' and 55% say 'NO'. Assuming that the true percentage of people in the population who say 'YES' is 48%, what is the expected value and standard deviation for the random variable "survey percentage who say 'YES'"?
REPLY [2 votes]: Before you started the survey, the expected percentage saying YES was $48\%$ and the standard deviation from a sample of $50$ would have been $\sqrt{\dfrac{p(1-p)}{n}} \approx 7\%$.
After the survey, you "know" the percentage of the sample saying YES was $45\%$. This suggests that $22.5$ people said YES and $27.5$ people said NO.
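A quick numerical check of the two quantities (a sketch of ours, not part of the original answer):

```python
from math import sqrt

p, n = 0.48, 50                  # assumed true proportion and sample size
expected = p                     # expected sample proportion: 48%
std_dev = sqrt(p * (1 - p) / n)  # standard error of the sample proportion
print(f"{expected:.0%} +/- {std_dev:.1%}")  # 48% +/- 7.1%
```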
TITLE: How to diagonalize a Hermitian matrix using a quasi-unitary matrix?
QUESTION [1 upvotes]: I met a problem requiring the diagonalization of a $2n\times 2n$ Hermitian matrix $H$ in the following way:
$U^{*} HU=D$,
where $D$ is diagonal, $U^*$ is the transpose conjugate of $U$. The matrix $U$ is restricted by
$UJU^*=J$,
here $J=\mathrm{diag}(I_n,-I_n)$, where $I_n$ is the $n\times n$ identity matrix. Namely, $U$ is quasi-unitary.
I know how to find $D$, by diagonalizing $HJ$ using a similarity transformation, one will find
$V^{-1}HJV=JD$.
But I cannot obtain $U$ from the above $V$. How can we find the matrix $U$?
REPLY [0 votes]: Why can't you multiply by $J^{-1}$? Is $JV$ not quasi-unitary?
What is the matrix V? What properties does it have? How did you get to this factorization?
TITLE: Why is the transfer map Tate-dual to restriction ?
QUESTION [5 upvotes]: In one of their papers (before Theorem 7.2), Benson and Carlson state that the transfer map is Tate-dual to the restriction homomorphisms (also see Remark 1.3 of this recent paper).
More precisely: If $H \le G$ are finite groups and $k$ is a field whose characteristic divides the order of $H$, then there should be a commutative diagram
$$\begin{array}{ccc}
\hat{H}^{-s-1}(G,k) & \cong & \text{Hom}_k(\hat{H}^s(G,k),k) \newline
res^G_H \downarrow & & \downarrow (tr^G_H)^\ast \newline
\hat{H}^{-s-1}(H,k) & \cong & \text{Hom}_k(\hat{H}^s(H,k),k)
\end{array}$$
where the horizontal isomorphisms are Tate duality.
Does anyone know a reference with a proof, or can provide a proof of this statement? Thanks in advance.
REPLY [4 votes]: I don't know of a reference, but the duality in question can be proved by results from Brown's book on group cohomology. I'll show the case $s\ge 0$. First note that for each integer $j$ there is an isomorphism
$$\psi: \hat{H}^j(G,k) \xrightarrow{\sim} \text{Hom}_k(\hat{H}_j(G,k),k)$$
(Brown, VI.7.2) and for $s\ge 0$ there is an isomorphism $$\varphi: \hat{H}_{-s-1}(G,k) \xrightarrow{\sim} H^s(G,k)$$
(Brown, VI.4). Denote the $k$-dual of a vector space or of a homomorphism by $(-)^\ast$. Tate duality is then the composition
$$t = (\varphi^{-1})^\ast\circ \psi: \hat{H}^{-s-1}(G,k) \xrightarrow{\sim} \hat{H}_{-s-1}(G,k)^\ast \xrightarrow{\sim} H^s(G,k)^\ast.$$
Hence we have to show the commutativity of the diagramm
$$\begin{array}{ccccc}
\hat{H}^{-s-1}(G,k) & \xrightarrow{\psi} & \hat{H}_{-s-1}(G,k)^\ast & \xleftarrow{\varphi^\ast} & H^s(G,k)^\ast \newline
{\scriptstyle \widehat{res}} \downarrow & & \downarrow \scriptstyle res^\ast & & \downarrow \scriptstyle tr^\ast\newline
\hat{H}^{-s-1}(H,k) & \xrightarrow{\psi} & \hat{H}_t(H,k)^\ast & \xleftarrow{\varphi^\ast} & H^s(H,k)^\ast
\end{array}$$
($t$ stands for $-s-1$ which the editor doesn't accept!?) The commutativity of the left hand square follows right from the definition of the maps and the right hand square commutes if we can show the commutativity of the following square:
$$\begin{array}{ccc}
\hat{H}_{-s-1}(G,k) & \xrightarrow{\varphi} & H^s(G,k)\newline
{\scriptstyle \widehat{res}} \uparrow & & \uparrow \scriptstyle tr^G_H \newline
\hat{H}_{-s-1}(H,k) & \xrightarrow{\varphi} & H^s(H,k)
\end{array}\tag{1}$$
In order to describe $\varphi$ on chain level, let $P \to k$ be a projective resolution over $kG$ and let $F$ be a complete resolution such that $F_i=P_i$ and $F_{-i-1} = P_i^\ast$ for $i \ge 0$. Then $\varphi$ is induced by the composition
$$\varphi: F_{-i-1}\otimes_{kG}k=P_i^\ast \otimes_{kG}k \xrightarrow{\alpha\otimes id} \text{Hom}_{kG}(P_i,kG) \otimes_{kG} k\xrightarrow{\beta}\text{Hom}_{kG}(P_i,k)$$
where $\alpha(f)(x)=\sum_{g \in G}f(g^{-1}x)g$ (Brown, VI.3.4) and $\beta(f \otimes a)(x)=f(x)a$ (Brown, I.8.3). Hence
$$\varphi(f \otimes a)(x)=\sum_{g \in G}f(g^{-1}x)(ga)=\sum_{g \in G}f(g^{-1}x)a=tr^G_E(f)(x)a\tag{2}$$
where $f \in P_i^\ast, a \in k, x \in P_i$ and $E=\{1\}$.
On chain level $(1)$ is given by the diagram
$$\begin{array}{ccc}
P_i^\ast \otimes_{kG} k & \xrightarrow{\varphi_G} & \text{Hom}_{kG}(P_i,k) \newline
{\scriptstyle \kappa} \uparrow & & \uparrow \scriptstyle tr^G_H \newline
P_i^\ast \otimes_{kH} k & \xrightarrow[\varphi_H]{} & \text{Hom}_{kH}(P_i,k) \newline
\end{array}\tag{3}$$
where $\kappa(f \otimes_H a)=f \otimes_G a$. With $f,a,x$ as above, we obtain
$$(tr^G_H \circ \varphi_H)(f \otimes_H a)(x)=\sum_{g \in G/H}\varphi_H(f\otimes_H a)(g^{-1}x)
\overset{(2)}{=}\sum_{g \in G/H}tr^H_E(f)(g^{-1}x)a$$
$$\qquad=tr^G_H(tr^H_E(f))(x)a=tr^G_E(f)(x)a$$
$$\qquad\qquad=\varphi_G(f \otimes_G a)(x)=(\varphi_G \circ res)(f \otimes_H a)(x)$$
Thus the commutativity of $(3)$ is shown. QED
TITLE: If $f\circ f\circ g\circ g\circ f\circ f$ is invertible, so is $g$
QUESTION [0 upvotes]: Let $f\circ f\circ g\circ g\circ f\circ f$ be invertible (i.e., have left and right inverse functions).
Prove $g$ is invertible as well.
I would appreciate it if you helped me.
REPLY [2 votes]: In general and not difficult to prove: if $u\circ v$ is a bijection then $u$ is surjective and
$v$ is injective.
Applying that here we find that $f$ is surjective and injective,
hence is a bijection.
Then $g\circ g=f^{-1}\circ f^{-1}\circ f\circ f\circ g\circ g\circ f\circ f\circ f^{-1}\circ f^{-1}$
is - as composition of bijections - also a bijection, and again we
can apply the rule to find this time that $g$ is bijective.
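For self-maps of a finite set the statement can also be confirmed by brute force; the Python sketch below (ours) checks every pair $(f,g)$ on a $3$-element set:

```python
from itertools import product

S = range(3)
maps = list(product(S, repeat=3))  # all self-maps of {0,1,2}, encoded as value tuples

def comp(u, v):
    # composition u o v: x |-> u(v(x))
    return tuple(u[v[x]] for x in S)

def bijective(u):
    return len(set(u)) == len(u)

for f, g in product(maps, repeat=2):
    h = comp(comp(comp(comp(comp(f, f), g), g), f), f)  # f o f o g o g o f o f
    if bijective(h):
        assert bijective(g) and bijective(f)
print("checked all", len(maps) ** 2, "pairs")
```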
TITLE: Does this modified harmonic series converge?
QUESTION [0 upvotes]: Let $0 < \alpha < 1$.
Can you choose $c > 0$ so that this modified harmonic series
$$\sum_{k=1}^\infty \frac{1}{k^\alpha} \exp(-c k^{1-\alpha})$$
converges?
REPLY [1 votes]: It converges for any $c>0$.
Consider that for arbitrary $x>0$, we have $e^{x}>x^{2}$.
So for any $c>0$, $e^{ck^{1-\alpha}}>c^{2}k^{2-2\alpha}$, then $0<\frac{1}{k^\alpha} \exp(-c k^{1-\alpha})<\frac{1}{c^{2}}\frac{1}{k^{2-\alpha}}$.
When $\alpha<1$, $\sum_{k=1}^{\infty}\frac{1}{k^{2-\alpha}}$ converges, so $\sum_{k=1}^\infty \frac{1}{k^\alpha} \exp(-c k^{1-\alpha})$ converges.
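The comparison bound is easy to check numerically; in the Python sketch below (ours) the choices $\alpha=0.5$ and $c=0.1$ are arbitrary sample values:

```python
from math import exp

alpha, c = 0.5, 0.1
for k in range(1, 2001):
    term = k**(-alpha) * exp(-c * k**(1 - alpha))
    bound = (1 / c**2) * k**(-(2 - alpha))
    assert 0 < term < bound  # from e^x > x^2 for x > 0, with x = c*k^(1-alpha)
partial = sum(k**(-alpha) * exp(-c * k**(1 - alpha)) for k in range(1, 2001))
print(f"partial sum up to k=2000: {partial:.4f}")
```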
TITLE: If $[G,[G,G]]$ is trivial, why is $G/Z(G)$ abelian?
QUESTION [0 upvotes]: Why, if $[G,[G,G]]=\{e\}$, is $G/Z(G)$ abelian? ($Z(G)$ is the center of the group $G$.)
Thanks in advance
REPLY [4 votes]: The condition $[G,[G,G]]=\{e\}$ says that every commutator in $G$ is central, that is, that $[G,G]\leq Z(G)$. Therefore, $G/Z(G)$ is a homomorphic image of $G/[G,G]$, which is abelian, so $G/Z(G)$ is abelian.
TITLE: Why is this continuous representation differentiable?
QUESTION [2 upvotes]: Let $V$ be a real or complex finite-dimensional vector space and $\pi$ a continuous representation of the additive group $\mathbb{R}$ on $V$:
$$
\pi (t+s) = \pi (t) \pi (s), \ t,s\in\mathbb{R}, \ \pi(0)=I.
$$
Prove that $\pi : \mathbb{R} \to L(V)$ is differentiable.
I have a hint:
Note that from the continuity of $\pi$ we have
$$
\lim_{t\to 0} \frac{1}{t} \int_0^t \! \pi(s) \,\mathrm{d} s = \pi(0) = I.
$$
I don't see how this is obvious. What troubles me is the fact that there is an integral sign in the above equation. Where did it come from? And because $\pi(t)$ is a linear map, how do I integrate it?
REPLY [2 votes]: The integral sign comes by fiat. The fundamental theorem of calculus tells us that the integral of a continuous function is (continuously) differentiable, introducing an integration gives us a more regular function to work with. While we only know that $\pi$ is continuous, we know that the integral of $\pi$ is differentiable, and therefore we introduce the integral. That is later used to deduce the differentiability of $\pi$.
And because $\pi(t)$ is a linear map, how do I integrate it?
Choose a basis $B$ of $V$, and identify $\pi(t)$ with its matrix representation with respect to $B$. Then take the componentwise integral: the entry in the $i^{\text{th}}$ row and $j^{\text{th}}$ column of $\int_0^t \pi(s)\,ds$ is the integral of the corresponding component function.
We can define the integral of $L(V)$-valued functions more abstractly, and when $V$ is infinite-dimensional, that is necessary, but in finite dimensions, it is easier to just use a matrix representation.
Finally, for continuous real- or complex-valued functions $f$ the fact that
$$\lim_{t\to 0} \frac{1}{t}\int_0^t f(s)\,ds = f(0)$$
is probably familiar. And since that holds for all components of the matrix function, it holds for the entire matrix function.
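As a concrete illustration (our choice, not part of the original answer), $\pi(t)=\begin{pmatrix}1 & t\\ 0 & 1\end{pmatrix}$ is a continuous representation of $(\mathbb{R},+)$, and the averaged integral visibly tends to $\pi(0)=I$ as $t\to0$:

```python
def pi(t):
    # pi(t+s) = pi(t) pi(s) and pi(0) = I, so this is a representation of (R, +)
    return [[1.0, t], [0.0, 1.0]]

def average(t, steps=10_000):
    # (1/t) * integral_0^t pi(s) ds, computed entrywise by the midpoint rule
    h = t / steps
    acc = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(steps):
        m = pi((k + 0.5) * h)
        for i in range(2):
            for j in range(2):
                acc[i][j] += m[i][j] * h
    return [[acc[i][j] / t for j in range(2)] for i in range(2)]

# here the average equals [[1, t/2], [0, 1]], which tends to the identity
for t in (1.0, 0.1, 0.01):
    print(t, average(t))
```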
\begin{document}
\begin{abstract}
We characterize certain noncommutative domains in terms of
noncommutative holomorphic equivalence via a pseudometric
that we define in purely algebraic terms. We prove some
properties of this pseudometric and provide an
application to free probability.
\end{abstract}
\maketitle
\section{Introduction}
Noncommutative functions are (countable) families of functions defined
on matrices of increasing dimension over a base set, usually with some
structure (vector space, operator space, $C^*$-algebra, von Neumann
algebra etc) which satisfy certain compatibility conditions, to be
described below. We exploit these conditions to describe
metric/geometric properties of noncommutative domains in purely
algebraic terms and to study properties of noncommutative maps
of such domains. Our results seem to be relevant to the study
of certain classical several complex variables maps, in the spirit
of \cite{A,JLMS}.
\section{Noncommutative domains, functions and kernels}
\subsection{Noncommutative functions}
Noncommutative functions originate in Joseph L. Taylor's work
\cite{taylor0,taylor} on spectral theory and functional calculus
for $k$-tuples of non-commuting operators.
We largely follow \cite{ncfound} in our presentation
of noncommutative sets and functions. We refer to \cite{ncfound}
for details on, and proofs of, the statements below.
Let us introduce the following notation: if $S$ is
a nonempty set, we denote by $S^{m\times n}$ the
set of all matrices with $m$ rows and $n$ columns
having entries from $S$. If $S=\mathbb F$ is a field,
then we use the standard notation $GL_n(\mathbb F)$ for
the group of matrices $X$ in $\mathbb F^{n\times n}$ which
are invertible (that is, there exists $X^{-1}\in\mathbb F^{n\times n}$
such that $XX^{-1}=X^{-1}X=I_n$, where $I_n$ is the diagonal
matrix having the multiplicative unit of $\mathbb F$ on the
diagonal and zero elsewhere). We will work almost exclusively
with subsets of operator spaces and operator systems (linear subspaces of the
algebra $B(\mathcal H)$ of bounded operators over a Hilbert
space $\mathcal H$ -- which we assume to be separable --
which contain the unit $1$ of $B(\mathcal H)$, are norm-closed and self-adjoint -- see \cite{ER}). However
some of our definitions hold in much broader generality.
Given a complex vector space $\mathcal V$,
a {\em noncommutative set} is a family
$\Omega_{\rm nc}:=(\Omega_n)_{n\in\mathbb N}$ such that
\begin{enumerate}
\item[(a)] for each $n\in\mathbb N$, $\Omega_n\subseteq\mathcal
V^{n\times n};$
\item[(b)] for each $m,n\in\mathbb N$, we have $\Omega_m\oplus
\Omega_n\subseteq\Omega_{m+n}$.
\end{enumerate}
The noncommutative set $\Omega_{\rm nc}$ is called {\em right
admissible} if in addition the condition (c) below is satisfied:
\begin{enumerate}
\item[(c)] for each $m,n\in\mathbb N$ and $a\in\Omega_m,b\in
\Omega_n,w\in\mathcal V^{m\times n}$, there is an $\epsilon>0$
such that $\begin{bmatrix}
a & zw\\
0 & b\end{bmatrix}\in\Omega_{m+n}$ for all $z\in\mathbb C,
|z|<\epsilon$.
\end{enumerate}
Left admissible sets are defined similarly, except that $zw$
appears in the lower left corner of the matrix.
Given complex vector spaces $\mathcal{V,W}$ and a noncommutative set
$\Omega_{\rm nc}\subseteq\coprod_{n=1}^\infty\mathcal V^{n\times n}$, a
{\em noncommutative function} is a family $f:=(f_n)_{n\in\mathbb N}$
such that $f_n\colon\Omega_n\to\mathcal W^{n\times n}$ and
\begin{enumerate}
\item $f_m(a)\oplus f_n(b)=f_{m+n}(a\oplus b)$ for all
$m,n\in\mathbb N$, $a\in\Omega_m,b\in\Omega_n$;
\item for all $n\in\mathbb N$, $f_n(T^{-1}aT)=T^{-1}f_n(a)T$ whenever
$a\in\Omega_n$ and $T\in GL_n(\mathbb C)$ are such that $T^{-1}aT$
belongs to the domain of definition of $f_n$.
\end{enumerate}
These two conditions are equivalent to the requirement that $f$ respects
intertwinings by scalar matrices:
\begin{enumerate}
\item[(I)] For all $m,n\in\mathbb N$, $a\in\Omega_m,b\in\Omega_n$,
$S\in\mathbb C^{m\times n}$, we have
\begin{equation}\label{inter}
aS=Sb\implies f_m(a)S=Sf_n(b).
\end{equation}
\end{enumerate}
If $\mathcal{V,W}$ are operator spaces,
it is shown in \cite[Theorem 7.2]{ncfound} that, under very
mild openness conditions on $\Omega_{\rm nc}$, local boundedness for
$f$ implies each $f_n$ is analytic as a map between Banach spaces. More
specifically, if $\Omega_{\rm nc}$ is finitely open (that is, for all $n\in
\mathbb N$, the intersection of $\Omega_n$ with any finite
dimensional complex subspace is open) and $f$ is locally
bounded on slices (that is, for every
$n\in\mathbb N$, for every $a\in\Omega_n$ and
$b\in\mathcal V^{n\times n}$, there exists an
$\varepsilon>0$ such that the set $\{f_n(a+zb)\colon
z\in\mathbb C,|z|<\varepsilon\}$ is bounded in $\mathcal W^{n\times n}$),
then each $f_n$ is G\^{a}teaux complex differentiable on $\Omega_n$
(see Section \ref{top} below).
Indeed, this is a consequence of the following
essential property of noncommutative functions: if $\Omega_{\rm nc}$ is
admissible, $a\in \Omega_n, c\in\Omega_m, b\in\mathcal V^{n\times m}$
such that $\begin{bmatrix}
a & b \\
0 & c
\end{bmatrix}\in\Omega_{n+m}$, then there exists a linear map
$\Delta f_{n,m}(a,c)\colon\mathcal V^{n\times m}\to
\mathcal W^{n\times m}$ such that
\begin{equation}\label{FDQ}
f_{n+m}\left(\begin{bmatrix}
a & b \\
0 & c
\end{bmatrix}\right)=\begin{bmatrix}
f_n(a) & \Delta f_{n,m}(a,c)(b) \\
0 & f_m(c)
\end{bmatrix}.
\end{equation}
This implies in particular that $f_{n+m}$ extends to
the set of all elements $\begin{bmatrix}
a & b \\
0 & c
\end{bmatrix}$ such that $a\in \Omega_n, c\in\Omega_m,
b\in\mathcal V^{n\times m}$ (see \cite[Section 2.2]{ncfound}).
Two properties of this operator that are important for us are
\begin{equation}\label{FDC}
\Delta f_{n,n}(a,c)(a-c)=f(a)-f(c)=\Delta f_{n,n}(c,a)(a-c),\quad
\Delta f_{n,n}(a,a)(b)=f_n'(a)(b),
\end{equation}
the derivative of $f_n$ at $a$ applied to the element
$b\in\mathcal V^{n\times n}$. Moreover, $\Delta f(a,c)$
as functions of $a$ and $c$, respectively, satisfy
properties similar to the ones described in items
(1), (2) above -- see \cite[Sections 2.3--2.5]{ncfound}
for details (for convenience, from now on we shall suppress the
indices denoting the level for noncommutative functions, as
it will almost always be obvious from the context).
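As a simple illustration of \eqref{FDQ} and \eqref{FDC}, consider the noncommutative squaring map $f(a)=a^2$. A direct block computation gives
$$
\begin{bmatrix}
a & b \\
0 & c
\end{bmatrix}^2=\begin{bmatrix}
a^2 & ab+bc \\
0 & c^2
\end{bmatrix},
$$
so that $\Delta f(a,c)(b)=ab+bc$, which is indeed linear in $b$. Consistently with \eqref{FDC}, we have $\Delta f(a,c)(a-c)=a(a-c)+(a-c)c=a^2-c^2$ and $\Delta f(a,a)(b)=ab+ba=f'(a)(b)$.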
\begin{ex}\label{example21}
There are many examples of noncommutative functions. We provide here three.
\begin{enumerate}
\item The best known is provided by the classical
theory of analytic functions of one complex variable: if $D$ is a simply connected domain in $\mathbb C$
and $f\colon D\to\mathbb C$ is analytic, then $f$ is the first level of an nc map
$f\colon\coprod_{n=1}^\infty\{A\in\mathbb C^{n\times n}\colon\sigma(A)\subset D\}\to
\coprod_{n=1}^\infty\mathbb C^{n\times n}$ given by the classical analytic functional calculus:
$f_n(A)=(2\pi i)^{-1}\int_\gamma(A-\zeta I_n)^{-1}f(\zeta)\,{\rm d}\zeta$, for some simple closed
curve $\gamma$ which surrounds once counterclockwise the spectrum $\sigma(A)$ of $A$.
\item If $P(X_1,\dots,X_k)$ is a polynomial in $k$ non-commuting indeterminates $X_1,\dots,X_k$
and $\mathcal A$ is a $C^*$-algebra, then the evaluation $P(a_1,\dots,a_k)$,
$a_j\in\mathcal A^{n\times n}$, $n\in\mathbb N$, is an nc function. More
generally, this can be extended to power series $P$ with (finite or infinite)
radius of convergence (see, for instance, \cite{Popescu0}).
\item If $\mathcal A$ is a unital $C^*$-algebra and $B\subseteq\mathcal A$ is an inclusion of
$C^*$-algebras which share the same unit, assume that $E\colon\mathcal A\to B$ is a unit-preserving
conditional expectation. If $X=X^*\in\mathcal A$, then the map
$G_X$ defined by $G_{X,n}(b)=(E\otimes{\rm Id}_{\mathbb C^{n\times n}})
\left[(b-X\otimes I_n)^{-1}\right],$ $b\in B^{n\times n}$, is an nc function (see \cite{V2,V1}). Its domain
is the set of all $b$ such that $b-X\otimes I_n$ is invertible. The {\em noncommutative upper
half-plane} $\coprod_{n=1}^\infty\{b\in B^{n\times n}\colon(b-b^*)/2i>0\}$ is a natural nc
subdomain on which $G_X$ is defined.
\end{enumerate}
\end{ex}
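For item (2) of Example \ref{example21}, the intertwining property \eqref{inter} can be verified directly: if $a_jS=Sb_j$ for $1\le j\le k$, then every word $w$ in the indeterminates satisfies $w(a_1,\dots,a_k)S=Sw(b_1,\dots,b_k)$ (by induction on the length of $w$), and hence $P(a_1,\dots,a_k)S=SP(b_1,\dots,b_k)$ by linearity.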
\subsection{Noncommutative kernels}
This section follows mostly \cite{BMV}. Let $\Omega_{\rm nc}$
be a noncommutative subset of the operator space $\mathcal V$.
Consider two other operator spaces $\mathcal V_0$ and $\mathcal V_1$.
Denote by $\mathcal L(\mathcal V_0,\mathcal V_1)$ the space of
linear operators from $\mathcal V_0$ to $\mathcal V_1$. A {\em
global kernel} on $\Omega_{\rm nc}$ is a function
$K\colon\Omega_{\rm nc}\times\Omega_{\rm nc}\to\mathcal L(\mathcal V_0,\mathcal V_1)_{\rm nc}$
such that
\begin{eqnarray}
& & a\in\Omega_m,c\in\Omega_n\implies K(a,c)\in\mathcal L(\mathcal V_0^{m\times n},\mathcal V_1^{m\times n})\label{quatro}\\
& & K\left(\begin{bmatrix}
a & 0\\
0 & \tilde{a}
\end{bmatrix},\begin{bmatrix}
c & 0\\
0 & \tilde{c}
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{1,1} & P_{1,2}\\
P_{2,1} & P_{2,2}
\end{bmatrix}\right)=\begin{bmatrix}
K(a,c)(P_{1,1}) & K(a,\tilde{c})(P_{1,2})\\
K(\tilde{a},c)(P_{2,1}) & K(\tilde{a},\tilde{c})(P_{2,2})
\end{bmatrix},\label{cinco}
\end{eqnarray}
for any $m,\tilde{m},n,\tilde{n}\in\mathbb N$, $a\in\Omega_m,\tilde{a}\in\Omega_{\tilde{m}},
c\in\Omega_n,\tilde{c}\in\Omega_{\tilde{n}},$ $P_{1,1}\in\mathcal V_0^{m\times n},
P_{1,2}\in\mathcal V_0^{m\times\tilde{n}},P_{2,1}\in\mathcal V_0^{\tilde{m}\times n},
P_{2,2}\in\mathcal V_0^{\tilde{m}\times\tilde{n}}$ (that is, $\begin{bmatrix}
P_{1,1} & P_{1,2}\\
P_{2,1} & P_{2,2}
\end{bmatrix}\in\mathcal V_0^{(m+\tilde{m})\times(n+\tilde{n})}$).
Obviously, condition \eqref{cinco} can be extended to evaluations of $K$ in diagonal matrices with
arbitrarily many blocks on the diagonal. The kernel $K$ is called an
{\em affine noncommutative kernel} if in addition to condition
\eqref{quatro}, it respects intertwinings:
\begin{eqnarray}
& & a\in\Omega_m,\tilde{a}\in\Omega_{\tilde{m}},S\in\mathbb C^{\tilde{m}\times m}\text{ are such that }
Sa=\tilde{a}S,\nonumber\\
& & c\in\Omega_n,\tilde{c}\in\Omega_{\tilde{n}},T\in\mathbb C^{n\times\tilde{n}}\text{ are such that }
cT=T\tilde{c},\nonumber\\
& & P\in\mathcal V_0^{m\times n}\implies SK(a,c)(P)T=K(\tilde{a},\tilde{c})(SPT).\label{sei}
\end{eqnarray}
Conditions \eqref{quatro} and \eqref{sei} are equivalent to
conditions \eqref{quatro}, \eqref{cinco} and
\begin{eqnarray}
& & a,\tilde{a}\in\Omega_m,S\in GL_m(\mathbb C)\text{ are such that }SaS^{-1}=\tilde{a},\nonumber\\
& & c,\tilde{c}\in\Omega_n,T\in GL_n(\mathbb C)\text{ are such that }T^{-1}cT=\tilde{c},\nonumber\\
& & P\in\mathcal V_0^{m\times n}\implies K(\tilde{a},\tilde{c})(P)=SK(a,c)(S^{-1}PT^{-1})T.\label{sette}
\end{eqnarray}
If $f\colon\Omega_{\rm nc}\to\mathcal W_{\rm nc}$ is a noncommutative
map, then $\Omega_{\rm nc}\times\Omega_{\rm nc}\ni(a,c)\mapsto
\Delta f(a,c)\in\mathcal L(\mathcal V,\mathcal W)_{\rm nc}$ satisfies
the above conditions (see \cite[Proposition 2.15]{ncfound}).
We call $K$ a {\em noncommutative (nc) kernel} if $K$ satisfies
\eqref{quatro} and respects intertwinings in the following sense:
\begin{eqnarray}
& & a\in\Omega_m,\tilde{a}\in\Omega_{\tilde{m}},S\in\mathbb C^{\tilde{m}\times m}\text{ are such that }
Sa=\tilde{a}S,\nonumber\\
& & c\in\Omega_n,\tilde{c}\in\Omega_{\tilde{n}},T\in\mathbb C^{\tilde{n}\times n}\text{ are such that }
Tc=\tilde{c}T,\nonumber\\
& & P\in\mathcal V_0^{m\times n}\implies SK(a,c)(P)T^*=K(\tilde{a},\tilde{c})(SPT^*).\label{otto}
\end{eqnarray}
Conditions \eqref{quatro} and \eqref{otto} are equivalent to
conditions \eqref{quatro}, \eqref{cinco} and
\begin{eqnarray}
& & a,\tilde{a}\in\Omega_m,S\in GL_m(\mathbb C)\text{ are such that }SaS^{-1}=\tilde{a},\nonumber\\
& & c,\tilde{c}\in\Omega_n,T\in GL_n(\mathbb C)\text{ are such that }TcT^{-1}=\tilde{c},\nonumber\\
& & P\in\mathcal V_0^{m\times n}\implies K(\tilde{a},\tilde{c})(P)=SK(a,c)(S^{-1}P(T^{-1})^*)T^*.\label{nove}
\end{eqnarray}
Observe that if $K$ is an affine nc kernel,
then $(a,c)\mapsto K(a,c^*)$ is an nc kernel.
We say that a noncommutative kernel $K$ is a {\em completely
positive noncommutative (cp nc) kernel} if in addition
\begin{equation}\label{dieci}
a\in\Omega_m,P\ge0\text{ in }\mathcal V_0^{m\times m}\implies K(a,a)(P)\ge0\text{ in }\mathcal
V_1^{m\times m}\text{ for all }m\in\mathbb N.
\end{equation}
If $\mathcal V_0,\mathcal V_1$ are $C^*$-algebras,
then \eqref{dieci} is equivalent to requiring that for all $N\in\mathbb N$, $m_1,m_2,\dots,m_N\in
\mathbb N$,
\begin{equation}\label{undici}
a^{(j)}\in\Omega_{m_j},P_j\in\mathcal V_0^{N\times m_j},b_j\in\mathcal V_1^{m_j},1\le j\le N\implies
\sum_{i,j=1}^Nb_i^*K(a^{(i)},a^{(j)})(P_i^*P_j)b_j\ge0
\end{equation}
(see \cite[Proposition 2.2]{BMV}). If $K(a,a)$ is completely positive,
then it is also completely bounded and $\|K(a,a)\|=\|K(a,a)\|_{\rm cb}
=\|K(a,a)(1)\|$.
\begin{ex}
Let $\mathcal A$ be a $C^*$-algebra.
The simplest non-constant nc kernel is $\mathcal A_{\rm nc}\times\mathcal A_{\rm nc}\ni(a,c)
\mapsto a\cdot c^*\in\mathcal L(\mathcal A,\mathcal A)_{\rm nc}.$ That is, for $m,n\in
\mathbb N$, $a\in\mathcal A^{m\times m},c\in\mathcal A^{n\times n}$ and $P\in
\mathcal A^{m\times n},$ we have $(a,c)\mapsto(P\mapsto aPc^*)$. More generally, if
$G,H$ are nc functions from $\Omega_{\rm nc}\subseteq\mathcal V_{\rm nc}$ to $\mathcal A_{\rm nc}$,
then $(a,c)\mapsto G(a)\cdot H(c)^*$ is an nc kernel. One can further pre-compose this
kernel with a completely bounded map $\Psi\colon\mathcal A\to\mathcal A$:
$$
\Omega_m\times\Omega_n\ni(a,c)\mapsto\left[\mathcal A^{m\times n}\ni P
\mapsto G(a)({\rm Id}_{\mathbb C^{m\times n}}\otimes\Psi)(P)H(c)^*\in\mathcal A^{m\times n}\right]
$$
is an nc kernel. If $G=H$ and $\Psi$ is completely positive, then this is a cp nc kernel. In a certain
sense, {\em all} nc kernels are of this form (we refer to \cite[Theorem 3.1]{BMV} for the precise statement).
Note also that $(a,c)\mapsto[P
\mapsto G(a)({\rm Id}_{\mathbb C^{m\times n}}\otimes\Psi)(P)H(c^*)^*]$ is an affine nc kernel.
\end{ex}
\begin{ex}\label{kernel-ex}
One of the main objectives of this paper is to analyze certain metric properties of
noncommutative sets. An important class of such sets is given precisely by noncommutative kernels.
Let $\mathcal A$ be a $C^*$-algebra, $\mathcal V$ be an operator space and $\Omega_{\rm nc}\subset
\mathcal V_{\rm nc}$ be an nc set. Assume that $K\colon\Omega_{\rm nc}\times\Omega_{\rm nc}
\to\mathcal L(\mathcal A)_{\rm nc}$ is a noncommutative kernel. We may define the set
$$
\mathcal D_K:=\coprod_{n=1}^\infty\underbrace{\{a\in\Omega_n\colon K(a,a)(I_n)>0\}}_{\mathcal D_n}.
$$
Observe that if $K$ were assumed instead to be an affine
nc kernel, then the above definition would change to
$\mathcal D_n=\{a\in\Omega_n\colon K(a,a^*)(I_n)>0\}$.
Clearly $\mathcal D_K$ may be empty or equal to $\Omega_{\rm nc}$.
If $a\in\Omega_m,\tilde{a}\in\Omega_{\tilde{m}}$, then, by
\eqref{quatro} and \eqref{cinco}, $K(a\oplus\tilde{a},a\oplus\tilde{a})
\in\mathcal L(\mathcal A^{(m+\tilde{m})\times(m+\tilde{m})})$ and
\begin{eqnarray*}
K(a\oplus\tilde{a},a\oplus\tilde{a})(I_{m+\tilde{m}})& = &
\begin{bmatrix}
K(a,a)(I_m) & K(a,\tilde{a})(0)\\
K(\tilde{a},a)(0) & K(\tilde{a},\tilde{a})(I_{\tilde{m}})
\end{bmatrix}\\
& = & \begin{bmatrix}
K(a,a)(I_m)& 0 \\
0 & K(\tilde{a},\tilde{a})(I_{\tilde{m}})
\end{bmatrix}>0.
\end{eqnarray*}
Thus, under the weaker assumption that $K$ is merely a global kernel,
we are guaranteed that $\mathcal D_K$ is a noncommutative set.
Under our assumption that $K$ is a noncommutative kernel, we
have in addition that for any $S\in GL_m(\mathbb C)$,
$$
K(SaS^{-1},(S^{-1})^*aS^*)(I_m)=SK(a,a)(S^{-1}I_mS)S^{-1}=SK(a,a)(I_m)S^{-1}.
$$
Thus, if $S$ is unitary (that is, $S^*=S^{-1}$), then $K(SaS^{*},SaS^*)(I_m)>0$
whenever $K(a,a)(I_m)>0$. We conclude that {\em if $K$ is an nc kernel
on $\Omega_{\rm nc}$, then $\mathcal D_K$ is a noncommutative set
which is invariant with respect to conjugation by scalar unitary matrices.}
Some of the best-known examples of noncommutative sets arise from nc kernels:
\begin{enumerate}
\item[(i)] The noncommutative upper half-plane
$H^+(\mathcal A)=\coprod_{n=1}^\infty H^+(\mathcal A^{n\times n})$,
where $H^+(\mathcal A^{n\times n})=\{a\in\mathcal A^{n\times n}\colon\Im a>0\}$
(we remind the reader that $\Im b=(b-b^*)/2i,\Re b=(b+b^*)/2$, so that
$b=\Re b+i\Im b$). The kernel in this case is $K(a,c)(P)=(aP-(cP^*)^*)/2i$,
$a\in\mathcal A^{m\times m},c\in\mathcal A^{n\times n}$,
$P\in\mathcal A^{m\times n}$. It is easy to verify that this is a
globally defined nc kernel. This set is important in free probability
(see \cite{V2,V1}).
\item[(ii)] The unit ball $B_1(\mathcal A)=\coprod_{n=1}^\infty B_1(\mathcal A^{n\times n})$,
where $B_1(\mathcal A^{n\times n})=\{a\in\mathcal A^{n\times n}\colon\|a\|<1\}$ (the
norm considered being the $C^*$-norm on $\mathcal A^{n\times n}$).
Here the kernel is even simpler: $K(a,c)(P)=1-aPc^*.$
\item[(iii)] More generally, if $G$ is a noncommutative
function with values in $\mathcal A$,
we could define $H^+(\mathcal A)_G$
by using the kernel $K(a,c)(P)=(G(a)P-(G(c)P^*)^*)/2i$ and
$B_1(\mathcal A)_G$ by using the kernel $K(a,c)(P)=1-G(a)PG(c)^*$.
\end{enumerate}
However, some are not:
\begin{enumerate}
\item[(iv)]
Consider $\mathcal N(\mathcal A)=\coprod_{n=1}^\infty\{a\in\mathcal A^{n\times n}\colon
a^n=0\}$. Clearly $\mathcal N(\mathcal A)$ is closed under direct sums, and, moreover,
if $S\in GL_n(\mathbb C)$ and $a\in\{a\in\mathcal A^{n\times n}\colon
a^n=0\}$, then $(SaS^{-1})^n=Sa^nS^{-1}=0$. So this set is in fact
invariant under conjugation by {\em all} of $GL_n(\mathbb C)$, not
just by the unitary group. This is because $\mathcal N(\mathcal A)$ is
``thin,'' in the sense that it has empty interior in all the natural topologies
on nc sets (see below). Thus, one cannot expect that $\mathcal N(\mathcal A)$
is of the form $\mathcal D_K$ for an nc kernel $K$.
\end{enumerate}
\end{ex}
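\begin{remark}
For the kernels in items (i) and (ii) of Example \ref{kernel-ex}, the set $\mathcal D_K$ recovers precisely the indicated domains. Indeed,
$$
K(a,a)(I_n)=\frac{a-a^*}{2i}=\Im a\quad\text{in case (i)},\qquad
K(a,a)(I_n)=1-aI_na^*=1-aa^*\quad\text{in case (ii)},
$$
and $1-aa^*>0$ if and only if $\|a\|<1$.
\end{remark}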
\subsection{Three topologies on noncommutative sets}\label{top}
As already stated, operator spaces constitute
the natural framework for noncommutative
function theory. We recall that (see, for instance,
\cite{ER}) if $\mathcal V$
is an operator space, then
$$
\|a\oplus\tilde{a}\|_{m+\tilde{m}}=\max\{\|a\|_m,\|\tilde{a}\|_{\tilde{m}}\},\quad
m,\tilde{m}\in\mathbb N,a\in\mathcal V^{m\times m},\tilde{a}\in
\mathcal V^{\tilde{m}\times\tilde{m}},
$$
and
$$
\|SaT\|_n\leq\|S\|\|a\|_m\|T\|,\quad m,n\in\mathbb N,a\in\mathcal V^{m\times m},
S\in\mathbb C^{n\times m},T\in\mathbb C^{m\times n}.
$$
A topology naturally compatible with these norm conditions is the
{\em uniformly-open topology}. Its basis consists of the balls defined as
follows: if $c\in\mathcal V^{s\times s}$ and $r\in(0,+\infty)$, then
$$
B_{\rm nc}(c,r)=\coprod_{n=1}^\infty\left\{a\in\mathcal V^{sn\times sn}\colon
\left\|a-\oplus_{j=1}^nc\right\|_{sn}<r\right\}.
$$
This topology is not Hausdorff.
A noncommutative function $f$ defined on a noncommutative
set $\Omega_{\rm nc}\subseteq\mathcal V_{\rm nc}$ with values
in an operator space is said to be {\em uniformly analytic} if
$\Omega_{\rm nc}$ is uniformly open, and $f$ is uniformly
locally bounded and complex differentiable at
each level. It is shown in \cite[Corollary 7.28]{ncfound}
that $f$ is analytic if and only if it is uniformly
locally bounded (that is, the requirement of complex
differentiability at each level is automatically satisfied by
an nc function which is uniformly locally bounded on
a uniformly open nc set).
The second important topology (already mentioned above)
is the {\em finitely open topology}: a set
$\Omega_{\rm nc}\subseteq\mathcal V_{\rm nc}$ is called
finitely open if for any $n\in\mathbb N$, the intersection
of $\Omega_n$ with any finite dimensional subspace
$\mathcal X$ of $\mathcal V^{n\times n}$ is open in
the Euclidean topology of $\mathcal X$. It is shown in
\cite[Theorem 7.2]{ncfound} that if $f$ is a noncommutative
function defined on $\Omega_{\rm nc}$ which is locally
bounded on slices, then $f$ is analytic on slices, in the
sense that for any $n\in\mathbb N$ and any finite
dimensional subspace $\mathcal X$ of
$\mathcal V^{n\times n}$, $f|_{\mathcal X}$ is
analytic as a function of several complex variables.
Finally, one can also consider the topology in which
a set $\Omega_{\rm nc}$ is open in $\mathcal V_{\rm nc}$
if and only if $\Omega_n$ is open in the topological vector space topology of
$\mathcal V^{n\times n}$ for all $n\in\mathbb N$. Observe
that such a set is also finitely open. We refer to it as the
{\em level topology.}
\section{A (pseudo)distance on noncommutative sets}\label{pseudodistance}
Let $\mathcal V$ be a complex topological vector space. As we
progress through the paper, we put more and more structure on $\mathcal V$,
but for our first definition, we need nothing more than the axioms
of a complex topological vector space. For now we endow
$\mathcal V^{n\times m}$, $n,m\in\mathbb N$, with the usual (product)
topology. Let $\mathcal D$ be a noncommutative subset of
$\mathcal V_{\rm nc}$ and consider the following properties:
\begin{enumerate}
\item For any $n\in\mathbb N$, $\mathcal D_n$ is open in
$\mathcal V^{n\times n}$;\label{propr1set}
\item If $U$ is a unitary $n\times n$ complex matrix and $a\in
\mathcal D_n$, then $UaU^*\in\mathcal D_n$;\label{propr2set}
\item If $a\in\mathcal V^{n\times n},c\in\mathcal V^{m\times m}$
are such that $\begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$, then $a\in\mathcal D_n$, $c\in
\mathcal D_m$. (Note that this is a sort of ``converse'' of
part (b) of the definition of noncommutative sets.)\label{propr3set}
\end{enumerate}
Let $\mathcal S_{n,m}=\{g\colon\mathcal V^{n\times m}\to[0,+\infty]
\colon g(tb)=tg(b)\ \forall\, t\ge0\}$ (with the convention $0\times(+
\infty)=+\infty$), and define
$\mathcal S=\displaystyle\coprod_{n,m\in\mathbb N}\mathcal S_{n,m}$.
Define a function $\delta_\mathcal D\colon\mathcal D\times\mathcal D\to
\mathcal S$ such that $\delta_\mathcal D(a,c)\in\mathcal
S_{n,m}$ whenever $a\in\mathcal D_n,c\in\mathcal D_m$, by
\begin{equation}\label{InfDistance1}
\delta_\mathcal D(a,c)(b)=\left[\sup\left\{t\in[0,+\infty]\colon
\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}\text{ for all }s\in[0,t]\right\}
\right]^{-1},
\end{equation}
with the convention $1/0=+\infty$.
Observe first that $\delta_\mathcal D(a,c)$ is indeed well-defined
because noncommutative sets respect direct sums:
$\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ at least for $s=0$. Second,
$\delta_\mathcal D(a,c)(b)=0\iff\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $s\in[0,+\infty)$.
Third, if $s_0\in(0,+\infty)$ is given, then, as indicated
in the definition, $\delta_\mathcal D(a,c)(s_0b)=s_0
\delta_\mathcal D(a,c)(b)$. Indeed, if $\delta_\mathcal D(a,c)(b)=0$
or $+\infty$, then the statement is obvious. Else, if
$\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $s\in[0,
\delta_\mathcal D(a,c)(b)^{-1})$, then
$\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}=\begin{bmatrix}
a & \frac{s}{s_0}s_0b\\
0 & c
\end{bmatrix}$, so that $\begin{bmatrix}
a & r(s_0b)\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $r\in
[0,(s_0\delta_\mathcal D(a,c)(b))^{-1})$, which shows
that $s_0\delta_\mathcal D(a,c)(b)=\delta_\mathcal D(a,c)(s_0b)$.
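As a concrete example, let $\mathcal D=H^+(\mathbb C)$ be the noncommutative upper half-plane of Example \ref{kernel-ex}(i) over $\mathcal A=\mathbb C$, and take $a=c=i\in H^+(\mathbb C)$, $b=1$. For $s\ge0$,
$$
\Im\begin{bmatrix}
i & s \\
0 & i
\end{bmatrix}=\begin{bmatrix}
1 & -is/2 \\
is/2 & 1
\end{bmatrix},
$$
which is positive if and only if its determinant $1-s^2/4$ is positive, that is, if and only if $s<2$. Thus $\delta_{H^+(\mathbb C)}(i,i)(1)=1/2$.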
\begin{remark}\label{rem-cont}
Given a complex vector space $\mathcal V$ endowed with a topology for which the multiplication with
positive scalars is continuous (a requirement automatically satisfied by a topological vector space),
the quantity $\delta$ is upper semicontinuous in its three variables whenever it is defined on an nc
set which satisfies property \eqref{propr1set} above. Indeed, consider such an nc set
$\Omega\subseteq\mathcal V_{\rm nc}$. It is enough to prove the statement
at level one. Thus, consider three nets $\{a_\iota\}_{\iota\in I},
\{c_\iota\}_{\iota\in I}$, and $\{b_\iota\}_{\iota\in I}$ converging to
$a,c\in\Omega_1$ and $b\in\mathcal V$, respectively. Let
$t\in(0,+\infty)$ be chosen so that $\begin{bmatrix}
a & sb \\
0 & c
\end{bmatrix}\in\Omega_2$ for all $s\leq t$. Since $\Omega_2$ is open in
the topology of $\mathcal V^{2\times 2}$, there exists an $\iota_0\in I$
such that $\begin{bmatrix}
a_\iota & [0,t]b_\iota \\
0 & c_\iota
\end{bmatrix}\subset\Omega_2$ for all $\iota\ge\iota_0$ (we have used
here the compactness of $[0,t]$). Thus, $t^{-1}>\delta(a,c)(b)$
implies that $t^{-1}>\delta(a_\iota,c_\iota)(b_\iota)$ for
all $\iota$ large enough. This implies that
\begin{equation}\label{limsup}
\limsup_{\iota\in I}\delta(a_\iota,c_\iota)(b_\iota)\leq\delta(a,c)(b),\quad a,c\in\Omega_1,b\in\mathcal V.
\end{equation}
This shows that $\delta$
is upper semicontinuous on nc sets that satisfy property \eqref{propr1set} under very
mild conditions on the topology of the underlying vector space.
Remarkably, under the supplementary hypothesis that the intersection
$\partial\Omega_{2k}\cap\begin{bmatrix}
a & \mathbb R_+b \\
0 & c
\end{bmatrix}$ is discrete for all $b\in\mathcal V^{k\times k}$,
$a,c\in\Omega_k$, the exact same argument applied to the
complement of $\Omega$ shows that $\delta$
is lower semicontinuous, and thus continuous.
\end{remark}
The following proposition is straightforward,
but, unless some of the hypotheses \eqref{propr1set} --
\eqref{propr3set} from above are assumed, it may well
be vacuous.
\begin{prop}\label{contr}
Let $\mathcal V,\mathcal W$ be two complex topological vector
spaces and let $\mathcal D$ and $\mathcal E$ be two noncommutative
subsets of $\mathcal V_{\rm nc}$ and $\mathcal W_{\rm nc},$
respectively. Assume that $f\colon\mathcal D\to\mathcal E$
is a function such that
\begin{enumerate}
\item[(a)] for any $a\in\mathcal D_n$, we have $f(a)\in\mathcal E_n$;
\item[(b)] $f$ respects direct sums;
\item[(c)] if $a\in\mathcal D_n,c\in\mathcal D_m$ and $b\in
\mathcal V^{n\times m}$ are such that $\begin{bmatrix}
a & b\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m},$ then there exists
a function of three variables denoted $\Delta f(a,c)(b)$
such that $\Delta f(a,c)(tb)=t\Delta f(a,c)(b)$ for all $t\in[0,+\infty)$
with the property that $tb$ is in the domain of $\Delta f(a,c)(\cdot)$,
and $f$ satisfies
$$
f\left(\begin{bmatrix}
a & b\\
0 & c
\end{bmatrix}\right)=\begin{bmatrix}
f(a) & \Delta f(a,c)(b)\\
0 & f(c)
\end{bmatrix}.
$$
\end{enumerate}
Then
$$
\delta_{\mathcal D}(a,c)(b)\ge\delta_{\mathcal E}(f(a),f(c))
(\Delta f(a,c)(b)),\quad a\in\mathcal D_n,c\in\mathcal D_m,
b\in\mathcal V^{n\times m}.
$$
\end{prop}
Note that the hypothesis on the homogeneity of $\Delta f(a,c)(b)$ in
$b$ is meaningful only if there exists some interval $(t,r)$ such that
$\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $s\in(t,r)$. Otherwise,
one can simply define $\Delta f(a,c)(sb)$ as $s\Delta f(a,c)(b)$.
\begin{proof}
The statement is tautological: consider $a\in\mathcal D_n,c\in
\mathcal D_m$ and $b\in\mathcal V^{n\times m}$ such that
$\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $s\in[0,t_0)$. If
$t_0=+\infty$, then $\delta_{\mathcal D}(a,c)(b)=0$ and
$$
f\left(\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\right)=\begin{bmatrix}
f(a) & \Delta f(a,c)(sb)\\
0 & f(c)
\end{bmatrix}=\begin{bmatrix}
f(a) & s\Delta f(a,c)(b)\\
0 & f(c)
\end{bmatrix}
$$
for all $s\in[0,+\infty)$, so that $\delta_{\mathcal E}(f(a),f(c))
(\Delta f(a,c)(b))=0.$
If $t_0=0$ (i.e. $\delta_{\mathcal D}(a,c)(b)=+\infty$), then the inequality
$\delta_{\mathcal D}(a,c)(b)\ge\delta_{\mathcal E}(f(a),f(c))
(\Delta f(a,c)(b))$ is obvious thanks to hypothesis (b). Finally, if $t_0=
\delta_{\mathcal D}(a,c)(b)^{-1}\in(0,+\infty)$, then
$$
f\left(\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\right)=\begin{bmatrix}
f(a) & \Delta f(a,c)(sb)\\
0 & f(c)
\end{bmatrix}=\begin{bmatrix}
f(a) & s\Delta f(a,c)(b)\\
0 & f(c)
\end{bmatrix}\in\mathcal E_{n+m}
$$
for all $s\in[0,t_0)$, which implies
$t_0=\delta_{\mathcal D}(a,c)(b)^{-1}\leq\delta_{\mathcal E}
(f(a),f(c))(\Delta f(a,c)(b))^{-1}$. This concludes the proof.
\end{proof}
\begin{remark}
\begin{enumerate}
\item
If we assume hypotheses \eqref{propr1set} for $\mathcal D$,
then for any $a\in\mathcal D_n,c\in\mathcal D_m,b\in\mathcal V^{n
\times m}$ we are guaranteed that there exists a $t_0\in(0,+\infty]$
such that $\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $s\in[0,t_0)$. Thus,
under a very mild assumption of openness in a complex topological
vector space, we are guaranteed that $\delta_{\mathcal D}(a,c)(b)$
is finite (possibly zero).
\item Assumption \eqref{propr2set} on $\mathcal D$ is sufficient
(although not necessary) in order to guarantee that $\begin{bmatrix}
a & zb\\
0 & c
\end{bmatrix}\in\mathcal D_{n+m}$ for all $z\in\mathbb C,$
$|z|<\delta_{\mathcal D}(a,c)(b)^{-1}$. Indeed, one simply
conjugates $\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}$ with the unitary $\begin{bmatrix}
e^{i\theta/2}I_n & 0\\
0 & e^{-i\theta/2}I_m
\end{bmatrix}\in\mathbb C^{(n+m)\times(n+m)},$ where $\theta$ is the
argument of $z$.
\item If, in Proposition \ref{contr}, the sets $\mathcal D$
and $\mathcal E$ are assumed to satisfy hypotheses
\eqref{propr1set} and \eqref{propr2set}, and in addition
$b\mapsto\Delta f(a,c)(b)$ satisfies $\Delta f(a,c)(zb)=
z\Delta f(a,c)(b)$, then we are guaranteed that the statement of
the proposition is not vacuous. In particular, we obtain the following corollary.
\end{enumerate}
\end{remark}
\begin{cor}\label{trentatre}
If $f\colon\mathcal D\to\mathcal E$ is a
locally bounded noncommutative function on a finitely
open subset, then $f$ satisfies
$\delta_{\mathcal E}(f(a),f(c))
(\Delta f(a,c)(b))\le\delta_{\mathcal D}(a,c)(b),$ $a\in\mathcal D_n,c\in\mathcal D_m,
b\in\mathcal V^{n\times m},m,n\in\mathbb N.$
\end{cor}
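Taking $m=n$ and $c=a$ in Corollary \ref{trentatre}, and recalling from \eqref{FDC} that $\Delta f(a,a)(b)=f'(a)(b)$, we obtain the infinitesimal inequality
$$
\delta_{\mathcal E}(f(a),f(a))(f'(a)(b))\le\delta_{\mathcal D}(a,a)(b),\quad a\in\mathcal D_n,\,b\in\mathcal V^{n\times n},
$$
reminiscent of the classical Schwarz--Pick inequality for invariant infinitesimal metrics.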
Next, we study some of the properties of $\delta_\mathcal D$
in more detail.
\begin{lemma}\label{lemma3.3}
Assume that the noncommutative subset $\mathcal D$ of $\mathcal V_{\rm nc}$ satisfies
properties \eqref{propr2set} and \eqref{propr3set}.
For any unitary matrices $U\in\mathbb C^{n\times n},V\in\mathbb
C^{m\times m}$, $a_1,a_2\in\mathcal D_n,c_1,c_2\in\mathcal D_m$,
$b_{11}\in\mathcal V^{n\times n},b_{12}\in\mathcal V^{n\times m},
b_{21}\in\mathcal V^{m\times n},b_{22}\in\mathcal V^{m\times m}$,
we have
\begin{equation}\label{unit}
\delta_\mathcal D(Ua_1U^*,Vc_2V^*)(Ub_{12}V^*)=\delta_\mathcal D(a_1,
c_2)(b_{12})
\end{equation}
\begin{equation}\label{diagonal}
\delta_\mathcal D\left(\begin{bmatrix}
a_1 & 0\\
0 & c_1\end{bmatrix},\begin{bmatrix}
a_2 & 0\\
0 & c_2\end{bmatrix}\right)\begin{bmatrix}
b_{11} & 0\\
0 & b_{22}\end{bmatrix} = \max\{\delta_\mathcal D(a_1,c_1)(b_{11}),
\delta_\mathcal D(a_2,c_2)(b_{22})\},
\end{equation}
\begin{equation}\label{counterdiagonal}
\delta_\mathcal D\left(\begin{bmatrix}
a_1 & 0\\
0 & c_1\end{bmatrix},\begin{bmatrix}
a_2 & 0\\
0 & c_2\end{bmatrix}\right)\begin{bmatrix}
0 & b_{12}\\
b_{21}& 0\end{bmatrix} = \max\{\delta_\mathcal D(a_1,c_2)(b_{12}),
\delta_\mathcal D(c_1,a_2)(b_{21})\}.
\end{equation}
\end{lemma}
\begin{proof}
Relation \eqref{unit} follows trivially from hypothesis
\eqref{propr2set}: for $s\ge0$, we have the chain of equivalences
\begin{eqnarray*}
\begin{bmatrix}
a_1 & sb_{12}\\
0 & c_2\end{bmatrix}\in\mathcal D_{n+m} & \iff &
\begin{bmatrix}
U & 0\\
0 & V\end{bmatrix}\begin{bmatrix}
a_1 & sb_{12}\\
0 & c_2\end{bmatrix}\begin{bmatrix}
U^* & 0\\
0 & V^*\end{bmatrix}\in\mathcal D_{n+m}\\
& \iff & \begin{bmatrix}
Ua_1U^* & sUb_{12}V^*\\
0 & Vc_2V^*\end{bmatrix}\in\mathcal D_{n+m}.
\end{eqnarray*}
A slight variation of this trick proves \eqref{diagonal} and
\eqref{counterdiagonal}.
Let
$$
U_0=\begin{bmatrix}
0_{m\times n} & I_m & 0_{m\times n} & 0_{m\times m}\\
0_{n\times n} & 0_{n\times m} & I_n & 0_{n\times m} \\
I_n & 0_{n\times m} & 0_{n\times n} & 0_{n\times m} \\
0_{m\times n} & 0_{m\times m} & 0_{m\times n} & I_m\end{bmatrix},
$$
a complex $(2n+2m)\times(2n+2m)$ unitary matrix. For any $s\ge0$, we have
\begin{eqnarray*}
\begin{bmatrix}
a_1 & 0 & 0 & sb_{12}\\
0 & c_1 & sb_{21} & 0 \\
0 & 0 & a_2 & 0 \\
0 & 0 & 0 & c_2\end{bmatrix}\in\mathcal D_{2(n+m)} & \iff &
U_0\begin{bmatrix}
a_1 & 0 & 0 & sb_{12}\\
0 & c_1 & sb_{21} & 0 \\
0 & 0 & a_2 & 0 \\
0 & 0 & 0 & c_2\end{bmatrix}U_0^*\in\mathcal D_{2(n+m)}\\
& \iff & \begin{bmatrix}
c_1 & sb_{21} & 0 & 0\\
0 & a_2 & 0 & 0\\
0 & 0 & a_1 & sb_{12}\\
0 & 0 & 0 & c_2\end{bmatrix}\in\mathcal D_{m+n+n+m}\\
& \iff & \begin{bmatrix}
c_1 & sb_{21}\\
0 & a_2\end{bmatrix},\begin{bmatrix}
a_1 & sb_{12}\\
0 & c_2\end{bmatrix}\in\mathcal D_{n+m},
\end{eqnarray*}
where we have
used property \eqref{propr3set} in the last equivalence
and property \eqref{propr2set} in the first.
This proves \eqref{counterdiagonal}. Relation
\eqref{diagonal} is proved the same way.
\end{proof}
The next lemma shows that, in a certain sense, $\delta_\mathcal D$
is itself a sort of noncommutative function.
\begin{lemma}\label{lemma3.4}
Assume that $\mathcal D\subseteq\mathcal V_{\rm nc}$ satisfies properties
\eqref{propr2set} and \eqref{propr3set}. For any $n,m\in\mathbb N$,
$a\in\mathcal D_n,c\in\mathcal D_m,b\in\mathcal V^{n\times m}$,
and any $k\in\mathbb N$, we have
$$
\delta_\mathcal D(I_k\otimes a,I_k\otimes c)(Z\otimes b)=
\delta_\mathcal D(a,c)(b)\|ZZ^*\|^\frac12,\quad Z\in\mathbb C^{k\times
k}.
$$
\end{lemma}
\begin{proof}
We shall prove this lemma in two steps. In the first step, we assume
that $a=c$ (and implicitly $m=n$). Consider unitary
matrices $U,V^*\in\mathbb C^{k\times k}$ which diagonalize $Z$:
$UZV^*=\mathrm{diag}(\lambda_1,\dots,\lambda_k)$, where
$0\le\lambda_1\le\dots\le\lambda_k=\|Z^*Z\|^\frac12$ are
the singular values of $Z$.
Then
$$
\begin{bmatrix}
U\otimes 1 & 0\\
0 & V\otimes 1\end{bmatrix}
\begin{bmatrix}
I_k\otimes a & Z\otimes b\\
0 & I_k\otimes a\end{bmatrix}
\begin{bmatrix}
U^*\otimes 1 & 0\\
0 & V^*\otimes 1\end{bmatrix}=
\begin{bmatrix}
I_k\otimes a & UZV^*\otimes b\\
0 & I_k\otimes a\end{bmatrix}.
$$
Thus, by property \eqref{propr2set}, $\begin{bmatrix}
I_k\otimes a & sZ\otimes b\\
0 & I_k\otimes a\end{bmatrix}\in\mathcal D_{2kn}$ if and only if
the matrix
$$
\begin{bmatrix}
a & 0 & \cdots & 0 & s\lambda_1b & 0 & \cdots & 0\\
0 & a & \cdots & 0 & 0 & s\lambda_2b & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & \cdots & a & 0 & 0 & \cdots & s\lambda_kb\\
0 & 0 & \cdots & 0 & a & 0 & \cdots & 0\\
0 & 0 & \cdots & 0 & 0 & a & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 0 & 0 & \cdots & a
\end{bmatrix}\in\mathcal D_{2kn}.
$$
Successive permutations transform this into the condition
$$
\begin{bmatrix}
a & s\lambda_1b & \cdots & 0 & 0 & \cdots & 0 & 0\\
0 & a & \cdots & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & \cdots & a & s\lambda_jb & \cdots & 0 & 0\\
0 & 0 & \cdots & 0 & a & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & \cdots & 0 & 0 & \cdots & a & s\lambda_kb\\
0 & 0 & 0 & \cdots & 0 & \cdots & 0 & a
\end{bmatrix}\in\mathcal D_{2kn},
$$
i.e.
$$
\mathrm{diag}\left(
\begin{bmatrix}
a & s\lambda_1b\\
0 & a
\end{bmatrix},\dots,
\begin{bmatrix}
a & s\lambda_kb\\
0 & a
\end{bmatrix}
\right)\in\mathcal D_{2kn}.
$$
By property \eqref{propr3set} we have that this happens if
and only if each block
$\begin{bmatrix}
a & s\lambda_jb\\
0 & a
\end{bmatrix}
$ belongs to $\mathcal D_{2n}$. Since the largest singular
value $\lambda_k$ of $Z$
equals $\|Z^*Z\|^\frac12,$ the first step is proved.
In order to prove the second step, we use equation \eqref{counterdiagonal}
of Lemma \ref{lemma3.3}, which guarantees that
$$
\delta_\mathcal D(I_k\otimes a,I_k\otimes c)(Z\otimes b)=
\delta_\mathcal D\left(\begin{bmatrix}
I_k\otimes a & 0\\
0 & I_k\otimes c
\end{bmatrix},\begin{bmatrix}
I_k\otimes a & 0\\
0 & I_k\otimes c
\end{bmatrix}
\right)\left(\begin{bmatrix}
0 & Z\otimes b\\
0 & 0
\end{bmatrix}\right).
$$
By conjugating with a permutation matrix, it follows, again
via Lemma \ref{lemma3.3} and the first step, that
\begin{eqnarray*}
\lefteqn{\delta_\mathcal D\left(\begin{bmatrix}
I_k\otimes a & 0\\
0 & I_k\otimes c
\end{bmatrix},\begin{bmatrix}
I_k\otimes a & 0\\
0 & I_k\otimes c
\end{bmatrix}
\right)\left(\begin{bmatrix}
0 & Z\otimes b\\
0 & 0
\end{bmatrix}\right)}\\
& = & \delta_\mathcal D\left(I_k\otimes\begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix},I_k\otimes \begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix}
\right)\left( Z\otimes\begin{bmatrix}
0 & b\\
0 & 0
\end{bmatrix}\right)\\
& = & \delta_\mathcal D\left(\begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix},\begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix}
\right)\left(\begin{bmatrix}
0 & b\\
0 & 0
\end{bmatrix}\right)\|Z^*Z\|^\frac12\\
& = & \delta_\mathcal D\left(a,c
\right)(b)\|Z^*Z\|^\frac12.
\end{eqnarray*}
\end{proof}
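The linear-algebra identity underlying the first step of the above proof can be sanity-checked numerically. The following sketch (illustrative only, not part of the argument; the dimensions $k=3$, $n=2$ are arbitrary) verifies with random matrices that conjugation by the unitaries coming from a singular value decomposition carries $Z\otimes b$ to $\mathrm{diag}(\lambda_1,\dots,\lambda_k)\otimes b$, and that the largest singular value of $Z$ equals $\|Z^*Z\|^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 2
Z = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
b = rng.standard_normal((n, n))

# SVD: Z = U diag(s) Vh, hence (U^* (x) I)(Z (x) b)(Vh^* (x) I) = diag(s) (x) b
U, s, Vh = np.linalg.svd(Z)
lhs = np.kron(U.conj().T, np.eye(n)) @ np.kron(Z, b) @ np.kron(Vh.conj().T, np.eye(n))
rhs = np.kron(np.diag(s), b)
assert np.allclose(lhs, rhs)

# the largest singular value of Z is ||Z^* Z||^(1/2) (spectral norm)
assert np.isclose(s.max(), np.sqrt(np.linalg.norm(Z.conj().T @ Z, 2)))
```

(Numpy lists the singular values in decreasing order, while the proof indexes them increasingly; this does not affect the check.)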
With these two lemmas, we can now prove the main result of
this section. For simplicity, denote
$$
\tilde\delta_\mathcal D(a,c):=\delta_\mathcal D(a,c)(a-c).
$$
\begin{thm}\label{sep}
Assume that $\mathcal D\subseteq\mathcal V_{\rm nc}$ satisfies properties
\eqref{propr2set} and \eqref{propr3set}.
The following statements are equivalent for any $m,n\in\mathbb N$:
\begin{enumerate}
\item[(i)] For any $a\in\mathcal D_n,c\in\mathcal D_m$, $\delta_\mathcal D(a,c)(b)=0\implies b=0$;
\item[(ii)] For any $a\in\mathcal D_n$, $\delta_\mathcal D(a,a)(b)=0\implies b=0$;
\item[(iii)] For any $a,c\in\mathcal D_n$, $\tilde\delta_\mathcal D(a,c)=0\implies a=c$;
\item[(iv)] For any $k\in\mathbb N$, there exists no non-constant
noncommutative function $f\colon\mathbb C_{\rm nc}\to(\mathcal D_k)_{\rm nc}$.
\end{enumerate}
By $(\mathcal D_k)_{\rm nc}$ we denote all the levels of $\mathcal D$
which are multiples of $k$.
\end{thm}
\begin{proof}
Implications (i)$\implies$(ii) and (i)$\implies$(iii) are obvious.
If there exists a noncommutative function $f\colon\mathbb C_{\rm nc}\to(\mathcal
D_k)_{\rm nc}$ for some $k\in\mathbb N$, then, by Proposition
\ref{contr}, it follows that for any $n\in\mathbb N$,
$Z,W\in\mathbb C^{n\times n}$,
$\tilde\delta_\mathcal D(f(Z),f(W))\leq\tilde\delta_{\mathbb C_{\rm nc}}(Z,W)=0$.
Thus, if $f$ is not constant, (iii) is violated. Hence
(iii)$\implies$(iv).
The implication (ii)$\implies$(i) follows from equation
\eqref{counterdiagonal} of Lemma \ref{lemma3.3}
by writing
$$
\delta_\mathcal D\left(\begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix},\begin{bmatrix}
a & 0\\
0 & c
\end{bmatrix}
\right)\left(\begin{bmatrix}
0 & b\\
0 & 0
\end{bmatrix}\right)= \delta_\mathcal D\left(a,c\right)(b).
$$
Finally, to prove (iv)$\implies$(ii), assume that we found $a_0\in
\mathcal D_{n_0}$ and $b_0\in\mathcal V^{n_0\times n_0}\setminus\{0\}$
such that $\delta_\mathcal D(a_0,a_0)(b_0)=0$.
We build the linear noncommutative function
$f(Z)=\begin{bmatrix}
a_0\otimes1_p & 0\\
0 & a_0\otimes1_p
\end{bmatrix}+Z\otimes\begin{bmatrix}
0 & b_0\\
0 & 0
\end{bmatrix}$, $Z\in\mathbb C^{p\times p}$.
By a conjugation with a permutation matrix and an application of
Lemma \ref{lemma3.4}, we conclude that $f$ takes values in
$\mathcal D_{2pn_0}$, so that (iv) does not hold. This completes
the proof.
\end{proof}
The function $\tilde\delta_\mathcal D$ allows us to define a distance
(possibly degenerate) on $\mathcal D$, by mimicking the
definition of the Kobayashi distance, with $\tilde\delta_\mathcal D$
playing the role of Lempert function.
\begin{defn}\label{Kobayashi}
If $\mathcal D$ is a noncommutative set in an operator space
satisfying assumptions \eqref{propr2set} and \eqref{propr3set},
then for any $n\in\mathbb N,a,c\in\mathcal D_n$,
$$
\tilde{d}_\mathcal D(a,c)=\inf\left\{\sum_{j=1}^N\tilde\delta_\mathcal D
(a_{j-1},a_j)\colon a_j\in\mathcal D_n,0\le j\le N,a_0=a,a_N=c,N\in
\mathbb N\right\}.
$$
We shall call such a finite sequence $a=a_0,a_1,\dots,a_N=c$
a {\em division} of $\tilde{d}_\mathcal D(a,c)$.
\end{defn}
The function $\tilde{d}_\mathcal D\colon\mathcal D\times\mathcal D\to[0,+
\infty]$ fails to separate the points of $\mathcal D$ if one (and hence all)
of the conditions of Theorem \ref{sep} is violated.
It is quite easy to show that $\tilde{d}_\mathcal D$ is a (possibly degenerate) distance.
Indeed, since $\tilde\delta_\mathcal D(a,c)=\tilde\delta_\mathcal D(c,a)$,
it follows that $\tilde{d}_\mathcal D(a,c)=\tilde{d}_\mathcal D(c,a).$
So only the triangle inequality remains to be proved.
Let $a,c,v\in\mathcal D_n$. If
$a_0=a,a_1,\dots,a_N=c$ and $a_{N}=c,\dots,a_{N+p-1},
a_{N+p}=v$ are divisions for $\tilde{d}_\mathcal D(a,c)$ and
$\tilde{d}_\mathcal D(c,v)$, respectively, then
$a_0=a,a_1,\dots,a_N,a_{N+1},\dots,
a_{N+p}=v$ is a division for $\tilde{d}_\mathcal D(a,v)$.
In particular,
\begin{eqnarray*}
\lefteqn{\sum_{j=1}^{N}\tilde\delta_\mathcal D
(a_{j-1},a_j)+\sum_{j=N+1}^{N+p}\tilde\delta_\mathcal D
(a_{j-1},a_j)}\\
& \ge & \inf\left\{\sum_{j=1}^M\tilde\delta_\mathcal D
(d_{j-1},d_j)\colon d_0,\dots,d_M\text{ division of }\tilde{d}_\mathcal D(a,v)
\right\}
\end{eqnarray*}
for all divisions $a_0=a,a_1,\dots,a_N=c$ for $\tilde{d}_\mathcal D(a,c)$
and $a_{N}=c,\dots,a_{N+p-1},a_{N+p}=v$ for
$\tilde{d}_\mathcal D(c,v)$. Taking the infimum over the two divisions
separately yields
$$
\tilde{d}_\mathcal D(a,c)+\tilde{d}_\mathcal D(c,v)\ge \tilde{d}_\mathcal D(a,v).
$$
The most general version of the Schwarz-Pick Lemma tells us that an
analytic map between two hyperbolic domains is a contraction with
respect to the corresponding Kobayashi metrics. The following
corollary is a direct consequence of Proposition \ref{contr} and
the above definition.
\begin{cor}
Let $\mathcal D,\mathcal E$ be two noncommutative
sets satisfying assumptions \eqref{propr2set} and \eqref{propr3set}.
Let $f\colon\mathcal D\to\mathcal E$ be a noncommutative
function. Then $f$ is a contraction with respect to the above-defined
metric:
$$
\tilde{d}_{\mathcal E}(f(a),f(c))\leq\tilde{ d}_{\mathcal D}(a,c),\quad
a,c\in\mathcal D_n, n\in\mathbb N.
$$
\end{cor}
Note that assuming also hypothesis \eqref{propr1set} in the above
corollary guarantees that the two sides of the inequality above are
both finite (possibly zero).
Until now we have made no assumptions on the openness of $\mathcal D$.
As seen in Remark \ref{rem-cont}, hypotheses
\eqref{propr1set} --- \eqref{propr3set} guarantee
that $\delta_\mathcal D$ is upper semicontinuous in its three variables,
and in particular so is $\tilde{\delta}_\mathcal D$.
Thus, we may define an infinitesimal version of $\tilde{d}_\mathcal D$.
\begin{defn}\label{inf-Kobayashi}
If $\mathcal D$ is a noncommutative set in an operator space
satisfying assumptions \eqref{propr1set}---\eqref{propr3set},
then for any $n\in\mathbb N,a,c\in\mathcal D_n$,
\begin{eqnarray*}
\lefteqn{
{d}_\mathcal D(a,c)=\inf\left\{\int_{[0,1]}\delta_\mathcal D({\bf a}(t),{\bf a}
(t))({\bf a}'(t))\,{\rm d}t:\right.}\\
& & \left.\frac{}{}{\bf a}\colon[0,1]\to\mathcal D_n\text{ continuously
differentiable, }{\bf a}(0)=a,{\bf a}(1)=c
\right\}.
\end{eqnarray*}
\end{defn}
Note that the openness of $\mathcal D_n$ implies that $\delta_\mathcal D({\bf a}(t),{\bf a}
(t))({\bf a}'(t))$ is finite for all $t\in[0,1]$. Since upper semicontinuous functions
attain their supremum on compact sets, this shows that $\{\delta_\mathcal D({\bf a}(t),{\bf a}
(t))({\bf a}'(t))\colon t\in[0,1]\}$ is a bounded set, and the integrals defining $d_\mathcal D$
are necessarily finite, so that $d_\mathcal D$ is well-defined and finite (possibly zero).
The fact that $d_\mathcal D$ is a (possibly degenerate) metric follows easily: as before,
it is only the triangle inequality that needs to be verified. If $a,v,c\in\mathcal D_n$, then
the above infimum over all paths from $a$ to $c$ is necessarily no greater than the
infimum over all paths from $a$ to $c$ which go through $v$. Since $\delta_\mathcal D$
is continuous and paths which are continuous and differentiable everywhere except at one point
can be approximated arbitrarily well by paths which are differentiable everywhere, it follows immediately
that ${d}_\mathcal D(a,c)\leq{d}_\mathcal D(a,v)+{d}_\mathcal D(v,c)$.
Another application of Proposition \ref{contr} shows that noncommutative functions are contractions also
with respect to ${d}_\mathcal D$. We record this fact below.
\begin{cor}
Let $\mathcal D,\mathcal E$ be two noncommutative
sets satisfying assumptions \eqref{propr1set}---\eqref{propr3set}.
Let $f\colon\mathcal D\to\mathcal E$ be a noncommutative
function. Then $f$ is a contraction with respect to the above-defined
metric:
$$
{d}_{\mathcal E}(f(a),f(c))\leq{ d}_{\mathcal D}(a,c),\quad
a,c\in\mathcal D_n, n\in\mathbb N.
$$
\end{cor}
We establish next the relation between $\tilde{d}_\mathcal D$ and $d_\mathcal D$ under the assumptions
\eqref{propr1set} --- \eqref{propr3set}. As an immediate consequence of the upper semicontinuity of
$\delta$ (Remark \ref{rem-cont}), we obtain for any differentiable path ${\bf a}$ defined on $[0,1]$
and any $t\in[0,1]$ the relation
$$
\limsup_{h\to0}\delta_\mathcal D({\bf a}(t),{\bf a}(t+h))\left(\frac{{\bf a}(t+h)-{\bf a}(t)}{h}\right)\leq
\delta_\mathcal D({\bf a}(t),{\bf a}(t))({\bf a}'(t)).
$$
(When $t=0$ or $t=1$, the limit should of course be taken one-sided.)
In particular, given an arbitrary path ${\bf a}$, a division of
$[0,1]$ translates into a division of $\tilde{d}_\mathcal D(a,c)$. Given $\varepsilon>0$,
for any $t\in[0,1]$ there exists $\eta_{t,\varepsilon}>0$ such that
$\delta_\mathcal D({\bf a}(t),{\bf a}(t+h))\left(\frac{{\bf a}(t+h)-{\bf a}(t)}{h}\right)<
\delta_\mathcal D({\bf a}(t),{\bf a}(t))({\bf a}'(t))+\varepsilon$ for any $|h|<\eta_{t,\varepsilon}$.
The family $\{(t-\eta_{t,\varepsilon},t+\eta_{t,\varepsilon})\}_{0\le t\le1}$
is an open cover of $[0,1]$, so that we may extract a finite subcover
$(t_1-\eta_{t_1,\varepsilon},t_1+\eta_{t_1,\varepsilon}),\dots,
(t_N-\eta_{t_N,\varepsilon},t_N+\eta_{t_N,\varepsilon})$, $t_1<\cdots<t_N$. Let
$t_0=0,t_{N+1}=1$.
By choosing the smallest among $\eta_{t_j,\varepsilon}$, $1\le j\le N$, and increasing the number of
points $t_j$ if necessary, we may assume that $\eta_{t_1,\varepsilon}=\cdots=\eta_{t_N,\varepsilon}
=\eta_\varepsilon>0$ and $t_j\in(t_{j-1}-\eta_\varepsilon,t_{j-1}+\eta_\varepsilon)\cap
(t_{j+1}-\eta_\varepsilon,t_{j+1}+\eta_\varepsilon)$. Then
\begin{eqnarray}
\tilde{d}_{\mathcal D}(a,c)& \leq & \sum_{j=0}^N\tilde{\delta}_{\mathcal D}({\bf a}(t_j),{\bf a}(t_{j+1}))
\label{17}\\
& = & \sum_{j=0}^N(t_{j+1}-t_j)\nonumber
{\delta}_{\mathcal D}({\bf a}(t_j),{\bf a}(t_{j+1}))\left(\frac{{\bf a}(t_{j+1})-{\bf a}(t_j)}{t_{j+1}-t_j}\right)\\
& < & \sum_{j=0}^N(t_{j+1}-t_j)\delta_\mathcal D({\bf a}(s_j),{\bf a}(s_j))({\bf a}'(s_j))+\varepsilon\quad
\ (s_j\in[t_j,t_{j+1}])\nonumber\\
& \le & \sum_{j=0}^N\int_{[t_j,t_{j+1}]}\delta_\mathcal D({\bf a}(t),{\bf a}(t))({\bf a}'(t))\,{\rm
d}t+\varepsilon\label{18}\\
& = & \int_{[0,1]}\delta_\mathcal D({\bf a}(t),{\bf a}
(t))({\bf a}'(t))\,{\rm d}t+\varepsilon.\nonumber
\end{eqnarray}
We have used in \eqref{17} the definition of $\tilde{d}_\mathcal D$, and in relation \eqref{18}
the fact that we may choose $s_j$ arbitrarily in $[t_j,t_{j+1}]$, and we decide to choose
an $s_j$ such that
$$
\delta_\mathcal D({\bf a}(s_j),{\bf a}(s_j))({\bf a}'(s_j))\leq
\frac{1}{t_{j+1}-t_j}\int_{[t_j,t_{j+1}]}\delta_\mathcal D({\bf a}(t),{\bf a}(t))({\bf a}'(t))\,{\rm
d}t.
$$
Since ${\bf a}$ and $\varepsilon>0$ have been chosen arbitrarily, it follows that $\tilde{d}_\mathcal D(a,c)\leq
d_\mathcal D(a,c)$ for all $a,c$ belonging to the same level of $\mathcal D$. Thus,
\begin{equation}\label{ds}
\tilde{d}_\mathcal D\leq
d_\mathcal D\quad \text{for all }\mathcal D\subset\mathcal V_{\rm nc}\text{ satisfying hypotheses }
\eqref{propr1set}\text{--}\eqref{propr3set}.
\end{equation}
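Relation \eqref{ds} can be watched numerically in the simplest scalar situation. The sketch below (an illustration only, not part of the argument) assumes the level-one disc specializations $\tilde\delta(a,c)=|a-c|/\sqrt{(1-|a|^2)(1-|c|^2)}$ and $\delta(a,a)(b)=|b|/(1-|a|^2)$, which one obtains from the kernel $H(a,c)(P)=1-aPc$ of Remark \ref{generalizeds} below; the division sums of Definition \ref{Kobayashi} then decrease towards the integral of Definition \ref{inf-Kobayashi}:

```python
import numpy as np

def delta_tilde(a, c):
    # assumed scalar disc specialization: |a-c| / sqrt((1-|a|^2)(1-|c|^2))
    return abs(a - c) / np.sqrt((1 - abs(a) ** 2) * (1 - abs(c) ** 2))

def division_sum(a, c, N):
    # sum of delta_tilde over the uniform N-point division of the segment [a, c]
    pts = [a + (c - a) * j / N for j in range(N + 1)]
    return sum(delta_tilde(pts[j], pts[j + 1]) for j in range(N))

a, c = 0.0, 0.5
# along the real segment, delta(x, x)(1) = 1/(1 - x^2), so the integral is arctanh
integral = np.arctanh(c) - np.arctanh(a)
sums = [division_sum(a, c, N) for N in (1, 4, 32, 256)]

assert sums[0] > sums[-1]              # refining the division lowers the sum
assert abs(sums[-1] - integral) < 1e-3 # ... towards the infinitesimal integral
```

Here the one-step division gives $\tilde\delta(0,0.5)\approx0.577$, while the fine divisions approach $\operatorname{arctanh}(0.5)\approx0.549$, consistent with $\tilde{d}_\mathcal D\leq d_\mathcal D$.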
Since we have shown that $\delta$ and $\tilde\delta$ generate distances, it is
natural to ask what topology one may expect those distances to determine on the original space.
In the most general case, we are able to make only the following statement:
\begin{prop}\label{dreiwolf}
Assume that $\mathcal D$ is a noncommutative subset of a topological vector space
$\mathcal V$ which satisfies assumptions \eqref{propr1set}---\eqref{propr3set}.
If $n\in\mathbb N$ is
given and a subset $A$ of $\mathcal D_n$ is open in the topology generated by $\tilde{d}_\mathcal D$,
then it is open in the product topology induced by $\mathcal V$ on $\mathcal V^{n\times n}$.
\end{prop}
\begin{proof}
Assume that $a\in\mathcal D_n$ is given. For any net $\{a_\iota\}_{\iota\in I}\subseteq\mathcal D_n$
which converges to $a$ in the product topology of $\mathcal V^{n\times n}$, we have by Remark
\ref{rem-cont} that
$$
0=\tilde\delta_\mathcal D(a,a)\geq\limsup_{\iota\in I}\tilde\delta_\mathcal D(a_\iota,a)
\geq\limsup_{\iota\in I}\tilde{d}_\mathcal D(a_\iota,a)\ge0.
$$
Thus, $\lim_{\iota\in I}\tilde{d}_\mathcal D(a_\iota,a)=0$ whenever $\{a_\iota\}_{\iota\in I}\subseteq
\mathcal D_n$ converges to $a$ in the product topology of $\mathcal V^{n\times n}$. This completes
our proof.
\end{proof}
\section{A smooth (pseudo)metric}
We have seen above that simple properties of noncommutative
sets allow us to define a distance which is often nondegenerate,
and with respect to which analytic noncommutative functions
are natural contractions. These results have a ``metric space''
flavour. In this section we consider the case when the distance
defined has a ``differential geometry'' flavour.
\subsection{Hypotheses}\label{SecHyp}
Let $\mathcal V$ be an operator space and $\mathcal{J,K}$ be
${C}^*$-algebras. Let $\mathcal O_{\rm nc}
\subseteq\mathcal V_{\rm nc}$ be a noncommutative set,
and assume that $G\colon\mathcal O_{\rm nc}\times\mathcal O_{\rm nc}
\to\mathcal L(\mathcal J , \mathcal K)$ is an affine noncommutative
kernel. Recall that if $(a,c)\mapsto G(a,c)$ is an affine nc kernel,
then $(a,c)\mapsto G(a,c^*)$ is an nc kernel. We prefer to work
with the affine kernel $G$ because we will often need to take its
derivative (or, rather, difference-differential) in both the first and the second coordinate
(which we denote by ${}_0\Delta G(a;a',c)$ and ${}_1\Delta G(a,c;c')$,
respectively). Consider the following properties:
\begin{enumerate}
\item $\mathcal O_{\rm nc}$ is uniformly open and $G$ is locally uniformly
bounded. Thus, $G$ is uniformly analytic in each of its two variables.
\item $\mathcal O_{\rm nc}$ is finitely open and $G$ is locally
bounded on slices. Thus, $G$ is analytic on slices in each of its two variables.
\item $\mathcal O_{\rm nc}$ is open in the level topology and $G$ is locally
bounded on slices. Thus, $G$ is analytic on slices in each of its two variables.
\item For any $n\in\mathbb N$ and $a,c
\in\mathcal O_n$ such that $a^*,c^*\in\mathcal O_n$, we have
$G(a,c)(v)^*=G(c^*,a^*)(v)$ for $v=v^*\in\mathcal J^{n\times n}$;
\item $\{a\in \mathcal O_{\rm nc}\colon G(a,a^*)(1)>0\}\neq
\varnothing.$
\item At each level at which the set $\{a\in \mathcal O_{\rm nc}
\colon G(a,a^*)(1)>0\}$ is nonempty, we have $\|G(a,a^*)(1)^{-1}\|\to+
\infty$ as $a$ tends to the norm-topology boundary of $\{a\in
\mathcal O_{\rm nc}\colon G(a,a^*)(1)>0\}$.
\item Let $\Omega$ be a connected component of $\{a\in\mathcal O_{\rm
nc}\colon G(a,a^*)(1)>0\}$. For any given $a\in\Omega_{n}$, $c\in
\overline{\Omega}_n$, we have $G(a,c^*)(1)$ invertible as an element
in the ${C}^*$-algebra $\mathcal K^{n\times n}$.
\item The function $G$ is analytic on a neighbourhood of $\Omega_n
\times\Omega_n$ for each $n\in\mathbb N$.
\end{enumerate}
In our results below, we will assume various subsets of the
above hypotheses. We would like to emphasize at this moment already
that they are not very restrictive, and important families of kernels
satisfy all of them.
Pick a point $a_0\in\{a\in\mathcal O_{\rm nc}\colon G(a,a^*)(1)>0\}$
at the first nonempty level. Let $\mathcal D_{G,\rm nc}$ be the
connected component of $a_0$ (i.e. at each multiple $k$ of the level
in which $a_0$ occurs, we consider the connected component of
$a_0\otimes 1_k$). In all applications we are currently aware of,
the set $\mathcal O_{\rm nc}$ is considerably bigger than
$\mathcal D_{G,\rm nc}$. It seems in fact that at the present level
of knowledge in this field, analyticity of $G$ on the boundary of
$\mathcal D_{G,\rm nc}$ is necessary in order to obtain powerful
results about arbitrary functions defined on it. Given the case of
single-variable analytic functions, that is probably not so
surprising. However, for the purposes of the next section, this
hypothesis is not needed.
We would like to emphasize that if $G(a,a^*)$ is completely
positive, then the condition $G(a,a^*)(1)>0$ can be replaced by
the condition $G(a,a^*)(x)>0$ for any $x>0$.
Indeed, one implication is obvious. Conversely, if $x>0$, then it
is invertible and $x\ge\|x^{-1}\|^{-1}1>0$, so that
$0<\|x^{-1}\|^{-1}G(a,a^*)(1)\le G(a,a^*)(x)$.
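The inequality $x\ge\|x^{-1}\|^{-1}1$ used above simply records that the smallest eigenvalue of a positive invertible $x$ equals $\|x^{-1}\|^{-1}$. A quick numerical check (illustrative only; the $4\times4$ size and the $0.1$ shift are arbitrary choices ensuring positivity):

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.standard_normal((4, 4))
x = m @ m.T + 0.1 * np.eye(4)                    # a positive definite matrix

lam_min = np.linalg.eigvalsh(x).min()
inv_norm = np.linalg.norm(np.linalg.inv(x), 2)   # operator norm of x^{-1}
assert np.isclose(lam_min, 1.0 / inv_norm)

# hence x - ||x^{-1}||^{-1} * 1 >= 0
assert np.linalg.eigvalsh(x - (1.0 / inv_norm) * np.eye(4)).min() > -1e-12
```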
For our purposes, completely positive kernels are ``bad'':
they generate a degenerate pseudometric. However,
in the following, to the extent possible, we shall perform our
computations in such a way as to be able to draw conclusions for both
the case $G(a,a^*)$ completely positive and $G(a,a^*)(1)>0$ (without
the assumption that $G(a,a^*)$ is positive).
\subsection{The smooth pseudometric}
The following proposition gives a noncommutative version of
a hyperbolic pseudometric. This version is given in terms of the
defining functions of the domains in question and its definition
is purely algebraic. It is clear that noncommutative domains admit
hyperbolic pseudometrics level-by-level. However, there would be
a priori no reason to think that they are related to the pseudometric
we define here. We will see later that in some cases our pseudometric
indeed generates the Kobayashi metric, while in others it does not.
As in Theorem \ref{sep}, and as in the classical theory of several
complex variables, for the pseudometric to be nondegenerate,
it is necessary that the domains do not contain holomorphic
images of complex lines (i.e. copies of $\mathbb C$) at any level.
Consider $G$ satisfying properties [(1), (2) or (3)], (4), and (5), and define
$\mathcal D_{G,\rm nc}$ as above. Without loss of generality, we assume
that $\mathcal D_{G,1}\neq\varnothing.$ Recall that the spectrum of
an operator $V$ on a Hilbert space is denoted by $\sigma(V)$. For
$a\in\mathcal D_{G,n},c\in\mathcal D_{G,m},b\in\mathcal
V^{n\times m}$, we have
\begin{eqnarray}
\lefteqn{\delta_{\mathcal D_{G,\rm nc}}(a,c)(b)}\nonumber\\
& = & \max\left\{0,\ \sup\sigma\left(G(a,a^*)(1)^{-1/2}\left[
{}_0\Delta G(a;c,c^*)(b,1)
\left[G(c,c^*)(1)\right]^{-1}\right.\right.\right.\nonumber\\
& & \left.\left.\left.\frac{}{}\mbox{}\times{}_1\Delta G(c,c^*;a^*)
(1,b^*)-{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)(b,1,b^*)
\right]G(a,a^*)(1)^{-1/2}\right)\right\}^\frac12,
\label{Infinitesimal}
\end{eqnarray}
and, when $m=n$,
\begin{equation}
\label{Distance}
\tilde\delta_{\mathcal D_{G,\rm nc}}(a,c)=\delta_{\mathcal D_{G,\rm nc}}
(a,c)(a-c).
\end{equation}
It will be seen below that
\begin{eqnarray*}
\tilde\delta_{\mathcal D_{G,\rm nc}}(a,c) & = & \max\left\{0,\
\sup\sigma\left(G(a,a^*)(1)^{-\frac12}G(a,c^*)
(1)G(c,c^*)(1)^{-1}\right.\right.\\
& & \left.\left.\mbox{}\times
G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}-1\right)
\right\}^\frac12.\nonumber
\end{eqnarray*}
It will be apparent that these two objects coincide
with the ones defined in Section \ref{pseudodistance}
for the particular case of domains defined via
inequalities of the type described in hypothesis
(5) above.
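As a scalar illustration (a sketch, not needed in the sequel), take the level-one disc kernel $H(a,c)(P)=1-aPc$ discussed in Remark \ref{generalizeds} below. Since $|1-a\bar c|^2-(1-|a|^2)(1-|c|^2)=|a-c|^2$, the closed form above specializes to a pseudo-hyperbolic-type quantity:

```latex
% scalar (n=1) specialization of the closed form for the kernel
% H(a,c)(P) = 1 - aPc on the unit disc:
\tilde\delta_{\mathcal D_{H,\rm nc}}(a,c)
  =\left(\frac{|1-a\bar c|^2}{(1-|a|^2)(1-|c|^2)}-1\right)^{\frac12}
  =\frac{|a-c|}{\sqrt{(1-|a|^2)(1-|c|^2)}},\qquad |a|,|c|<1.
```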
Consider another function $H$, defined on some
noncommutative subset of $\mathcal W_{\rm nc}$,
which satisfies the same properties as $G$. We
define $\mathcal D_{H,\rm nc}$ the same
way as $\mathcal D_{G,\rm nc}$.
\begin{prop}\label{prop:3.1}
Let $\mathcal D_{G,\rm nc},\mathcal D_{H,\rm nc}$ be two
domains defined as above. Let $f\colon\mathcal D_{G,\rm nc}\to
\mathcal D_{H,\rm nc}$ be a noncommutative map.
For any $n,m\in\mathbb N$, $a\in
\mathcal D_{G,n},c\in\mathcal D_{G,m},b\in\mathcal
V^{n\times m}$, we have
\begin{equation}\label{deriv}
\delta_{\mathcal D_{H,\rm nc}}(f(a),f(c))(\Delta f(a,c)(b))
\leq\delta_{\mathcal D_{G,\rm nc}}(a,c)(b).
\end{equation}
If $m=n$, then
$$
\tilde\delta_{\mathcal D_{H,\rm nc}}(f(a),f(c))
\leq\tilde\delta_{\mathcal D_{G,\rm nc}}(a,c)
$$
and
\begin{eqnarray}
\lefteqn{\left\|H(f(a),f(a)^*)(1)^{-\frac12}
H(f(a),f(c)^*)(1)H(f(c),f(c)^*)(1)^{-1}H(f(c),f(a)^*)(1)\right.}
\nonumber\\
& & \left.\mbox{}\times H(f(a),f(a)^*)(1)^{-\frac12}\right\|\nonumber\\
& \leq & \left\|G(a,a^*)(1)^{-\frac12}G(a,c^*)
(1)G(c,c^*)(1)^{-1}G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}\right\|
\label{basic3}.
\end{eqnarray}
In addition,
\begin{eqnarray}
\lefteqn{H(f(a),f(c)^*)(1)H(f(c),
f(c)^*)(1)^{-1}H(f(c),f(a)^*)(1)-H(f(a),f(a)^*)(1)}\nonumber\\
& \leq & H(f(a),f(a)^*)(1)\times\nonumber\\
& & \mbox{}\left\|G(a,a^*)(1)^{-\frac12}G(a,c^*)
(1)G(c,c^*)(1)^{-1}G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}-1\right\|
\label{basic2}.
\end{eqnarray}
\end{prop}
\begin{remark}\label{rmk3.3}
If in addition $H(u,v^*)(1)H(v,v^*)(1)^{-1}H(v,u^*)(1)-H(u,u^*)
(1)\ge0$ for all $u,v\in\mathcal D_{H,{\rm nc}}$, then
relation \eqref{basic2} is equivalent to
\begin{eqnarray}
\lefteqn{\left\|H(f(a),f(a)^*)(1)^{-\frac12}H(f(a),f(c)^*)(1)
\right.}\nonumber\\
& & \left.\mbox{}\times H(f(c),f(c)^*)(1)^{-1}H(f(c),f(a)^*)(1)
H(f(a),f(a)^*)(1)^{-\frac12}-1\right\|\label{basicnorm}\\
& \leq & \mbox{}\left\|G(a,a^*)(1)^{-\frac12}G(a,c^*)(1)
G(c,c^*)(1)^{-1}G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}-1\right\|
.\nonumber
\end{eqnarray}
\end{remark}
\begin{remark}\label{generalizeds}
The condition $H(u,v^*)(1)H(v,v^*)(1)^{-1}H(v,u^*)(1)-H(u,u^*)
(1)\ge0$ for all $u,v\in\mathcal D_{H,{\rm nc}}$ is satisfied by
a large and important class of noncommutative domains. In
particular, it is satisfied by generalized noncommutative
half-planes and generalized noncommutative balls. Indeed,
consider the upper half-plane from Example \ref{kernel-ex}(i),
$H^+(\mathcal A^{n\times n})=\{b\in\mathcal A^{n\times n}
\colon\Im b>0\}$. It is given by the affine kernel
$H(a,c)(P)=(2i)^{-1}(aP-Pc)$. Then the inequality reduces to
$$
\frac{a-c^*}{2i}\left(\frac{c-c^*}{2i}\right)^{-1}\frac{c-a^*}{2i}-
\frac{a-a^*}{2i}\ge0.
$$
But
$$
\frac{a-c^*}{2i}\left(\frac{c-c^*}{2i}\right)^{-1}\frac{c-a^*}{2i}-
\frac{a-a^*}{2i}=\frac14(a-c)\left(\frac{c-c^*}{2i}\right)^{-1}(a-c)^*
\ge0
$$
for all $a,c$ in the upper half-plane. One can generalize this
to kernels of the form $H(a,c)(P)=(2i)^{-1}(h(a)P-Ph(c^*)^*)$, for
some noncommutative function $h$.
A better-known class of kernels is given by the formula
$H(a,c)(P)=1-h(a)Ph(c^*)^*$ with $h$ a noncommutative function.
Then $H(a,c^*)(1)=1-h(a)h(c)^*$, so from the point of view of
the above inequality it is enough to consider the case when $h$
is the identity function. If $H(a,c)(P)=1-aPc$, this comes down to:
\begin{eqnarray*}
\lefteqn{(1-ac^*)(1-cc^*)^{-1}(1-ca^*)-(1-aa^*)}\\
& = &1-ca^*+(c-a)c^*(1-cc^*)^{-1}
c(c-a)^*+(c-a)c^*-1+aa^*\\
& = & (c-a)c^*(1-cc^*)^{-1}
c(c-a)^*+cc^*+aa^*-ac^*-ca^*\\
& = & (c-a)c^*(1-cc^*)^{-1}
c(c-a)^*+(c-a)(c-a)^*\ge0
\end{eqnarray*}
for all $a,c$ with $\|a\|,\|c\|<1$.
Note that this also proves
that
$$
\left\|H(a,a^*)(1)^{-\frac12}H(a,c^*)(1)H(c,c^*)(1)^{-1}
H(c,a^*)(1)H(a,a^*)(1)^{-\frac12}-1\right\|=0
$$
for $H(a,c)(P)=1-h(a)Ph(c^*)^*$ if and only if $h(a)=h(c)$.
So the pseudodistance defined by this formula separates
points if and only if $h$ is injective. The same
fact holds for the generalized half-plane.
\end{remark}
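The matrix identity computed in the remark for the ball kernel is easy to sanity-check numerically. The sketch below (illustrative only; the dimension $3$ and the scaling factor $0.4$ are arbitrary choices keeping the spectral norms below one) verifies both the identity and the resulting positivity for random strict contractions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def strict_contraction():
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return 0.4 * m / np.linalg.norm(m, 2)        # spectral norm 0.4 < 1

a, c = strict_contraction(), strict_contraction()
I = np.eye(n)
ah, ch = a.conj().T, c.conj().T

# (1 - ac*)(1 - cc*)^{-1}(1 - ca*) - (1 - aa*)
lhs = (I - a @ ch) @ np.linalg.inv(I - c @ ch) @ (I - c @ ah) - (I - a @ ah)
d = c - a
# (c - a)c*(1 - cc*)^{-1}c(c - a)* + (c - a)(c - a)*
rhs = d @ ch @ np.linalg.inv(I - c @ ch) @ c @ d.conj().T + d @ d.conj().T

assert np.allclose(lhs, rhs)                                 # the identity
assert np.linalg.eigvalsh((lhs + lhs.conj().T) / 2).min() > -1e-10  # lhs >= 0
```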
\begin{proof}[Proof of Proposition \ref{prop:3.1}]
The proof is based on showing that formula \eqref{Infinitesimal}
for $\delta_{\mathcal D_{G,\rm nc}}$ coincides with the definition
\eqref{InfDistance1} in the particular case of a domain defined by a
noncommutative kernel as in assumption (5). An application of
Proposition \ref{contr} will allow us to conclude. On the way to proving
\eqref{deriv}, we will obtain formulas allowing us to argue that
\eqref{basic3} and \eqref{basic2} hold by applying the same
principle as in the proof of Proposition \ref{contr}. In some cases,
for future reference, we will perform computations which are
slightly more involved than absolutely necessary.
Thus, let us start by evaluating $G$ on elements
$\begin{bmatrix}
a & {b}\\
0 & c
\end{bmatrix}$, $a\in\mathcal D_{G, n},c\in\mathcal D_{G,m}$. We have
$
G\left(\begin{bmatrix}
a & {b}\\
0 & c
\end{bmatrix},\begin{bmatrix}
a^* & 0\\
{b}^* & c^*
\end{bmatrix}\right)\left(I_{n+m}\right)>0
$
whenever $\begin{bmatrix}
a & {b}\\
0 & c
\end{bmatrix}\in\mathcal D_{G,n+m}.$ As $a\in\mathcal
D_{G,n},c\in\mathcal D_{G,m}$, we can use the properties of
nc functions/kernels to write explicitly the entries of this matrix. For
future reference, we consider the general case, with $P_{11}\in
\mathcal J^{n\times n},P_{12}\in\mathcal J^{n\times m},
P_{21}\in\mathcal J^{m\times n},P_{22}\in\mathcal J^{m\times m}$,
$a_1,a_2\in\mathcal D_{G,n},c_1,c_2\in\mathcal D_{G,m},
b_1\in\mathcal V^{n\times m},b_2\in\mathcal V^{m\times n}$.
According to condition \eqref{sette} in the definition of affine nc kernels,
\begin{eqnarray*}
\lefteqn{G\left(\begin{bmatrix}
a_1 & {b}_1\\
0_{m\times n} & c_1
\end{bmatrix},\begin{bmatrix}
a_2 & 0_{n\times m}\\
{b}_2 & c_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{11} & P_{12}\\
P_{21} & P_{22}
\end{bmatrix}\right)}\\
& = & G\left(\begin{bmatrix}
a_1 & {b}_1\\
0 & c_1
\end{bmatrix},\begin{bmatrix}
0 & 1_n\\
1_m & 0
\end{bmatrix}
\begin{bmatrix}
c_2 & {b}_2\\
0_{n\times m} & a_2
\end{bmatrix}\begin{bmatrix}
0 & 1_m\\
1_n & 0
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}\\
P_{22} & P_{21}
\end{bmatrix}\begin{bmatrix}
0 & 1_m\\
1_n & 0
\end{bmatrix}\right)\\
& = & G\left(\begin{bmatrix}
a_1 & {b}_1\\
0 & c_1
\end{bmatrix},\begin{bmatrix}
c_2 & {b}_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}\\
P_{22} & P_{21}
\end{bmatrix}\right)\begin{bmatrix}
0 & 1_m\\
1_n & 0
\end{bmatrix}.
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\lefteqn{G\left(\begin{bmatrix}
a_1 & {b}_1\\
0 & c_1
\end{bmatrix},\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}\\
P_{22} & P_{21}
\end{bmatrix}\right)=}\\
& & \begin{bmatrix}
G\left(a_1,\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}
\end{bmatrix}
\right)+{}_0\Delta G\left(a_1;c_1,\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(b_1,\begin{bmatrix}
P_{22} & P_{21}
\end{bmatrix}
\right)\\
G\left(c_1,\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{22} & P_{21}
\end{bmatrix}
\right)
\end{bmatrix}.
\end{eqnarray*}
We identify each of the two components of this column vector.
\begin{eqnarray*}
\lefteqn{G\left(a_1,\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}
\end{bmatrix}
\right)}\\
& = & \begin{bmatrix}
G(a_1,c_2)([P_{12}]) & {}_1\Delta G(a_1,c_2;a_2)([P_{12}],b_2)+
G(a_1,a_2)([P_{11}])
\end{bmatrix},
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{G\left(c_1,\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{22} & P_{21}
\end{bmatrix}
\right)}\\
& = & \begin{bmatrix}
G(c_1,c_2)([P_{22}]) & {}_1\Delta G(c_1,c_2;a_2)([P_{22}],b_2)+
G(c_1,a_2)([P_{21}])
\end{bmatrix}.
\end{eqnarray*}
Finally,
\begin{eqnarray*}
\lefteqn{{}_0\Delta G\left(a_1;c_1,\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(b_1,\begin{bmatrix}
P_{22} & P_{21}
\end{bmatrix}
\right)= }\\
& & \left[
\begin{array}{ll}
{}_0\Delta G\left(a_1;c_1,c_2\right)(b_1,[P_{22}]) &
{}_1\Delta{}_0\Delta G(a_1;c_1,c_2;a_2)(b_1,[P_{22}],b_2)\\
& \ \quad +{}_0\Delta G(a_1;c_1,a_2)(b_1,[P_{21}])
\end{array}\right],
\end{eqnarray*}
a row vector with two components. Collecting these results, if
$$
G\left(\begin{bmatrix}
a_1 & {b}_1\\
0 & c_1
\end{bmatrix},\begin{bmatrix}
c_2 & b_2\\
0 & a_2
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}\\
P_{22} & P_{21}
\end{bmatrix}\right)=\begin{bmatrix}
G_{11} & G_{12}\\
G_{21} & G_{22}
\end{bmatrix},
$$
then
\begin{equation}\label{G11}
G_{11}=G(a_1,c_2)([P_{12}])+{}_0\Delta G\left(a_1;c_1,c_2\right)(b_1,
[P_{22}]),
\end{equation}
\begin{eqnarray}\label{G12}
G_{12}& = & {}_1\Delta G(a_1,c_2;a_2)([P_{12}],b_2)\\
& & \mbox{}+
G(a_1,a_2)([P_{11}])+{}_1\Delta{}_0\Delta G(a_1;c_1,c_2;a_2)
(b_1,[P_{22}],b_2)\nonumber\\
& & \mbox{}+{}_0\Delta G(a_1;c_1,a_2)(b_1,[P_{21}]),\nonumber
\end{eqnarray}
\begin{equation}\label{G21}
G_{21}=G(c_1,c_2)([P_{22}]),
\end{equation}
\begin{equation}\label{G22}
G_{22}={}_1\Delta G(c_1,c_2;a_2)([P_{22}],b_2)+
G(c_1,a_2)([P_{21}]).
\end{equation}
For any C${}^*$-algebra $\mathcal A$,
$\begin{bmatrix}
u & v\\
v^* & w
\end{bmatrix}\in\mathcal A^{(n+m)\times(n+m)}$ is strictly positive
if and only if $u>0,w>0$ and $v^*u^{-1}v<w$ (or, equivalently,
$u>vw^{-1}v^*$; see \cite[Chapter 3]{Paulsen}).
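This Schur-complement criterion can be illustrated numerically (a sketch only; the $5\times5$ size, the $0.2$ shift, and the $2{+}3$ block split are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.standard_normal((5, 5))
block = m @ m.T + 0.2 * np.eye(5)        # a strictly positive 5x5 matrix
u, v, w = block[:2, :2], block[:2, 2:], block[2:, 2:]

# criterion: [u v; v^T w] > 0  iff  u > 0, w > 0, and v^T u^{-1} v < w
assert np.linalg.eigvalsh(u).min() > 0
assert np.linalg.eigvalsh(w).min() > 0
schur = w - v.T @ np.linalg.inv(u) @ v   # Schur complement of u in the block
assert np.linalg.eigvalsh(schur).min() > 0
```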
Thus, $\begin{bmatrix}
a & b\\
0 & c
\end{bmatrix}\in\mathcal D_{G,n+m}$ if and only if\footnote{
If the requirement in the definition of $\mathcal
D_{G,\rm nc}$
were that
$G\left(\begin{bmatrix}
a & {b}\\
0 & c
\end{bmatrix},\begin{bmatrix}
a^* & 0\\
{b}^* & c^*
\end{bmatrix}\right)$ is completely positive (equivalently,
$G\left(\begin{bmatrix}
a & {b}\\
0 & c
\end{bmatrix},\begin{bmatrix}
c^* & {b}^*\\
0 & a^*
\end{bmatrix}\right)\left(\begin{bmatrix}
P_{12} & P_{11}\\
P_{22} & P_{21}
\end{bmatrix}\right)\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}>0$ for all
$\begin{bmatrix}
P_{11} & P_{12}\\
P_{21} & P_{22}
\end{bmatrix}>0$
in $\mathcal J^{(n+m)\times(n+m)}$ -- so $P_{12}
=P_{21}^*$) for all $n,m$, then, according to the above formula applied
to $a_1=a,a_2=a^*,c_1=c,c_2=c^*,b_1=b,b_2=b^*$, the requirement
$\begin{bmatrix}
G_{12} & G_{11}\\
G_{22} & G_{21}
\end{bmatrix}>0$, would become
\begin{eqnarray*}
0& < & G(c,c^*)([P_{22}]),\\
0 & < & G(a,a^*)([P_{11}])+{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)(b,[P_{22}],b^*)\\
& & \mbox{}+{}_1\Delta G(a,c^*;a^*)([P_{12}],b^*)
+{}_0\Delta G(a;c,a^*)(b,[P_{21}]),
\end{eqnarray*}
and
\begin{eqnarray}
\lefteqn{\left[G(a,c^*)([P_{12}])+{}_0\Delta G(a;c,c^*)(b,[P_{22}])
\right]\left[G(c,c^*)([P_{22}])\right]^{-1}}\nonumber\\
& & \mbox{}\times\left[{}_1\Delta G(c,c^*;a^*)([P_{22}],b^*)+G
(c,a^*)([P_{21}])\right]\nonumber\\
& < &
G(a,a^*)([P_{11}])+{}_0\Delta G(a;c,a^*)(b,[P_{21}])
+{}_1\Delta G(a,c^*;a^*)([P_{12}],b^*)\nonumber\\
& & \mbox{}+{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)(b,[P_{22}],b^*).
\label{long}
\end{eqnarray}
}
$$
a\in\mathcal D_{G,n},c\in\mathcal D_{G,m}\ \text{ and } \
G\left(\begin{bmatrix}
a & {b}\\
0 & c
\end{bmatrix},\begin{bmatrix}
c^* & {b}^*\\
0 & a^*
\end{bmatrix}\right)\left(\begin{bmatrix}
0 & 1_n\\
1_m & 0
\end{bmatrix}\right)\begin{bmatrix}
0 & 1_m \\
1_n & 0
\end{bmatrix}>0.
$$
The requirement of positivity for $G$ applied to a block-diagonal
$P=\begin{bmatrix}
P_{11} & P_{12}\\
P_{21} & P_{22}
\end{bmatrix}=\begin{bmatrix}
P_{11} & 0\\
0 & P_{22}
\end{bmatrix}$, means
\begin{eqnarray*}
0& < & G(c,c^*)([P_{22}]),\\
0 & < & G(a,a^*)([P_{11}])+{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)
(b,[P_{22}],b^*),
\end{eqnarray*}
and
\begin{eqnarray}
\lefteqn{{}_0\Delta G(a;c,c^*)(b,[P_{22}])
\left[G(c,c^*)([P_{22}])\right]^{-1}{}_1\Delta G(c,c^*;a^*)
([P_{22}],b^*)}\nonumber\\
& < &
G(a,a^*)([P_{11}])+{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)
(b,[P_{22}],b^*).\label{useful}
\end{eqnarray}
(Note that if $G_1(x,x^*)$ were cp, by letting $P_{11}$ go to zero in
the above, we would
conclude that the map $P_{22}\mapsto{}_1\Delta {}_0\Delta G_1
(a,c,c^*,a^*)(b,[P_{22}],b^*)$ is necessarily a completely positive
map whenever $b$ is so that
$\begin{bmatrix}
a & b\\
0 & c
\end{bmatrix}\in\mathcal D_{G,n+m}$.)
Given $a,c$ as above, by the openness of
$\mathcal O_{\rm nc}$, which is a consequence of
condition (5) and of the analyticity of $G$, we
know that there is an $\epsilon>0$ depending on
$a,c$ so that $\begin{bmatrix}
a & b\\
0 & c
\end{bmatrix}\in\mathcal D_{G,n+m}$ for all $b\in
\mathcal V^{n\times m}$ with $\|b\|<\epsilon$. Fix a direction
$b_0\in\mathcal V^{n\times m}$. Then, recalling the definition \eqref{InfDistance1} for $\delta$,
$$
\varepsilon_0:=\left[\delta_{\mathcal D_{G,\rm nc}}(a,c)(b_0)\right]^{-1}=\sup\left\{t\in(0,+\infty)\colon
\begin{bmatrix}
a & sb_0\\
0 & c
\end{bmatrix}\in\mathcal D_{G,n+m}\text{ for all }s<t\right\}\in(0,+\infty].
$$
Observe that if
$$
{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)(b_0,1,b_0^*)
\ge{}_0\Delta G(a;c,c^*)(b_0,1)
G(c,c^*)(1)^{-1}{}_1\Delta G(c,c^*;a^*)(1,b_0^*)
$$
then $\mathcal D_{G,n+m}$ contains a complex
line. Indeed, replacing $b$ by $zb_0$ in \eqref{useful}
and dividing both sides by $|z|^2$ shows that the inequality holds for every $z\in\mathbb C$.
We argue that $\delta_{\mathcal D_{G,\rm nc}}(a,c)(b_0)$
is indeed given in this case by formula \eqref{Infinitesimal}.
If $\varepsilon_0<+\infty$, then it can be written as
\begin{eqnarray*}
\lefteqn{\delta_{\mathcal D_{G,\rm nc}}(a,c)(b_0)^2=\varepsilon_0^{-2}=}\\
& & \sup\left\{\varphi\left(G(a,a^*)(1)^{-1/2}\left[
{}_0\Delta G(a;c,c^*)(b_0,1)
\left[G(c,c^*)(1)\right]^{-1}{}_1\Delta G(c,c^*;a^*)
(1,b_0^*)\right.\right.\right.\\
& & \quad\ \ \mbox{}-{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)
(b_0,1,b_0^*)\left.\left.\left.\frac{}{}
\right]G(a,a^*)(1)^{-1/2}\right)
\colon\varphi\colon\mathcal K^{n\times n}\to\mathbb C\text{ state}
\right\}.
\end{eqnarray*}
Thus, formula \eqref{Infinitesimal} holds. By Proposition \ref{contr}, we conclude
relation \eqref{deriv}. If $\varepsilon_0=+\infty$, there is nothing to prove.
Consider now the case $b_0=\epsilon(a-c)$ for some arbitrary
$\epsilon\in\mathbb C$. We apply Equation \eqref{FDC} to write
\begin{itemize}
\item ${}_0\Delta G(a;c,a^*)(\epsilon(a-c),[P_{21}])=
G(a,a^*)([\epsilon P_{21}])-G(c,a^*)([\epsilon P_{21}])$;
\item ${}_1\Delta G(a,c^*;a^*)([P_{12}],\overline{\epsilon}(a-c)^*)
=G(a,a^*)([\overline{\epsilon}P_{12}])-G(a,c^*)
([\overline{\epsilon}P_{12}]);$
\item ${}_0\Delta G(a;c,c^*)(\epsilon(a-c),[P_{22}])=
G(a,c^*)([\epsilon P_{22}])-G(c,c^*)([\epsilon P_{22}])$;
\item ${}_1\Delta G(c,c^*;a^*)
([P_{22}],\overline{\epsilon}(a-c)^*)=G(c,a^*)([\overline{\epsilon}
P_{22}])-G(c,c^*)([\overline{\epsilon}P_{22}]);$
\item $
{}_1\Delta {}_0\Delta G(a;c,c^*;a^*)
(\epsilon(a-c),[P_{22}],\overline{\epsilon}(a-c)^*)=$
\centerline{$
G(a,a^*)([|\epsilon|^2P_{22}])-G(c,a^*)([|\epsilon|^2P_{22}])
-G(a,c^*)([|\epsilon|^2P_{22}])+G(c,c^*)([|\epsilon|^2P_{22}]).$}
\end{itemize}
We record for future reference the expressions for $G_{ij}$
corresponding to $b_0=\epsilon(a-c)$.
\begin{eqnarray}
G_{11} & = & G(a,c^*)([P_{12}])+\epsilon(G(a,c^*)([P_{22}])-
G(c,c^*)([P_{22}]))\nonumber\\
G_{12} & = & G(a,a^*)([\epsilon P_{21}]+[\overline{\epsilon}P_{12}])
-G(a,c^*)([\overline{\epsilon}P_{12}])-G(c,a^*)([\epsilon P_{21}])
\nonumber\\
& & \mbox{}+G(a,a^*)([P_{11}])\label{a-c}\\
& & \mbox{}+|\epsilon|^2(G(a,a^*)([P_{22}])+
G(c,c^*)([P_{22}])-G(a,c^*)([P_{22}])-G(c,a^*)([P_{22}]))
\nonumber\\
G_{21} & = & G(c,c^*)([P_{22}])\nonumber\\
G_{22} & = & G(c,a^*)([P_{21}])+\overline{\epsilon}(G(c,a^*)([P_{22}])
-G(c,c^*)([P_{22}]))\nonumber
\end{eqnarray}
For $\epsilon=1$, we obtain that for any state $\psi$ on $\mathcal K^{n
\times n}$ and $\varepsilon>0$, there is a state $\varphi$ on
$\mathcal K^{n\times n}$ depending on $\varepsilon$ such
that
\begin{eqnarray}
\lefteqn{\psi\left(H(f(a),f(a)^*)(1)^{-\frac12}H(f(a),f(c)^*)(1)
\left[H(f(c),f(c)^*)(1)\right]^{-1}\right.}\nonumber\\
& & \mbox{}\times H(f(c),f(a)^*)(1)\left.
H(f(a),f(a)^*)(1)^{-\frac12}-1\right)-\varepsilon\nonumber\\
& \leq &
\varphi\left(G(a,a^*)(1)^{-\frac12}G(a,c^*)(1)
\left[G(c,c^*)(1)\right]^{-1}G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}-1
\right).
\nonumber
\end{eqnarray}
Recall that $\psi,\varphi$ are states, so that $\varphi(1)=
\psi(1)=1$, which implies that
\begin{eqnarray}
\lefteqn{\psi\left(H(f(a),f(a)^*)(1)^{-\frac12}H(f(a),f(c)^*)(1)
\left[H(f(c),f(c)^*)(1)\right]^{-1}\right.}\nonumber\\
& & \mbox{}\times H(f(c),f(a)^*)(1)\left.
H(f(a),f(a)^*)(1)^{-\frac12}\right)-\varepsilon\nonumber\\
& \leq &
\varphi\left(G(a,a^*)(1)^{-\frac12}G(a,c^*)(1)
\left[G(c,c^*)(1)\right]^{-1}G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}
\right).
\nonumber
\end{eqnarray}
Clearly the elements under the states above are nonnegative, so this
reduces to
\begin{eqnarray}
\lefteqn{\left\|H(f(a),f(a)^*)(1)^{-\frac12}H(f(a),f(c)^*)(1)
\left[H(f(c),f(c)^*)(1)\right]^{-1}\right.}\nonumber\\
& & \mbox{}\times H(f(c),f(a)^*)(1)\left.
H(f(a),f(a)^*)(1)^{-\frac12}\right\|\nonumber\\
& \leq &
\left\|G(a,a^*)(1)^{-\frac12}G(a,c^*)(1)
\left[G(c,c^*)(1)\right]^{-1}G(c,a^*)(1)G(a,a^*)(1)^{-\frac12}
\right\|.
\nonumber
\end{eqnarray}
The last inequality of our proposition, \eqref{basic2}, is
a trivial consequence of the selfadjointness of the elements
involved, together with the previous results.
\end{proof}
We cannot automatically conclude the norm inequality
\eqref{basicnorm} only because the norm of the left-hand
side might be achieved at the lower bound of the spectrum.
However, the hypothesis of Remark \ref{rmk3.3} guarantees
that this is not the case, and it is not unreasonable to expect
this hypothesis to be satisfied in most cases of interest. We
next discuss three points related to it.
\begin{remark}
First, not surprisingly, the inequality
$G(a,a^*)(1)-G(a,c^*)(1)G(c,c^*)(1)^{-1}G(c,a^*)(1)\geq0$,
opposite to the one introduced in Remark \ref{rmk3.3}, cannot hold
under the assumption of no complex lines in $\mathcal D_{G,\rm nc}$.
Indeed, if we put $b=\epsilon(a-c)$ in formulas
\eqref{G11}, \eqref{G12}, \eqref{G21} and \eqref{G22} for $a_1=
a_2^*=a,c_1=c_2^*=c$, we obtain, according to \eqref{a-c} with
$\epsilon>0,P_{11}=P_{22}=1,P_{12}=P_{21}=0$, the matrix inequality
{\footnotesize
$$
\begin{bmatrix}
G(a,a^*)(1+\epsilon^2)-\epsilon^2G(a,c^*)(1)-\epsilon^2G(c,a^*)(1)+
\epsilon^2G(c,c^*)(1) & \epsilon[G(a,c^*)(1)-G(c,c^*)(1)]\\
\epsilon[G(c,a^*)(1)-G(c,c^*)(1)] & G(c,c^*)(1)
\end{bmatrix}>0.
$$
}
Multiplying on the left by $\begin{bmatrix}
1 & \epsilon\\
0 & 1
\end{bmatrix}$ and on the right by its adjoint does not change the
positivity of the matrix, so that
\begin{equation}\label{gac}
\begin{bmatrix}
(1+\epsilon^2)G(a,a^*)(1) & \epsilon G(a,c^*)(1)\\
\epsilon G(c,a^*)(1) & G(c,c^*)(1)
\end{bmatrix}>0,
\end{equation}
for any $\epsilon>0$ such that $\begin{bmatrix}
a & \epsilon(a-c)\\
0 & c \end{bmatrix}\in\mathcal D_{G,2n}.$
Since $\mathcal D_{G,2n}$ contains no complex line, it follows that
there is an $\epsilon_0(a,c)>0$ maximal beyond which the matrix
inequality above fails. Thus, necessarily
$G(a,a^*)(1)-G(a,c^*)(1)G(c,c^*)(1)^{-1}G(c,a^*)(1)\not\geq0$.
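For a scalar illustration, take the disk kernel $G(x,y)=1-xy$ (used here only as an example);
then for $a,c\in\mathbb D$ one has the exact identity
$$
G(a,a^*)(1)-\frac{G(a,c^*)(1)G(c,a^*)(1)}{G(c,c^*)(1)}=(1-|a|^2)-\frac{|1-a\overline{c}|^2}{1-|c|^2}
=-\frac{|a-c|^2}{1-|c|^2}\le0,
$$
with equality precisely when $a=c$, in agreement with the conclusion above.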
\end{remark}
\begin{remark}
Second, we observe that certain obvious transformations of
$G$ have similarly obvious effects on $\mathcal D_{G,\rm nc}.$ For
example, composing $G$ with a completely positive
unital map increases $\mathcal D_{G,\rm nc}$. Indeed, if
$\Phi$ is such a map, then $G(a,a^*)(1)>\varepsilon1\implies
\Phi(G(a,a^*)(1))>\varepsilon\Phi(1)=\varepsilon1$, so that
$\mathcal D_{G,\rm nc}\subseteq\mathcal D_{\Phi\circ G,\rm nc}$.
Subtracting a positive multiple of 1 from $G$ decreases
$\mathcal D_{G,\rm nc}$, adding increases it. However,
if for any $t\in\mathbb R\setminus\sigma(G(c,c^*)(1))$ we let
$$
f(t)=[G(a,c^*)(1)-t1]
\left[G(c,c^*)(1)-t1\right]^{-1}[G(c,a^*)(1)-t1]-[G(a,a^*)(1)-t1],
$$
then
\begin{eqnarray*}
\lefteqn{f'(t)=
\left[(G(c,c^*)(1)-t1)^{-1}(G(c,a^*)(1)-t1)-1\right]}\\
& & \quad\quad\mbox{}\times\left[(G(a,c^*)(1)-t1)(G(c,c^*)(1)-t1)^{-1}-1\right]\ge0,
\end{eqnarray*}
(recall hypothesis (3) which states that $G(a,c)(1)^*=G(c^*,a^*)(1))$,
and
\begin{eqnarray*}
\lefteqn{f''(t)=2\left[1-(G(a,c^*)(1)-t1)(G(c,c^*)(1)-t1)^{-1}\right]}\\
& & \mbox{}\times(G(c,c^*)(1)-t1)^{-1}
\left[1-(G(c,c^*)(1)-t1)^{-1}(G(c,a^*)(1)-t1)\right]\ge0,
\end{eqnarray*}
for all $t\in\mathbb R\setminus\sigma(G(c,c^*)(1))$. This means
that for any state $\varphi$, the map $t\mapsto\varphi(f(t))$ is
convex and increasing on each connected component of
$\mathbb R\setminus\sigma(G(c,c^*)(1))$. Clearly,
$\lim_{t\to\pm\infty}\|f(t)-(G(a,c^*)(1)-G(c,c^*)(1)+G(c,a^*)(1)-G(a,a^*)(1))\|=0$,
so if $a$ is such that $G(a,c^*)(1)-G(c,c^*)(1)+G(c,a^*)(1)-G(a,a^*)(1)\ge0$
(for example, $a=c\in\mathcal D_{G,n}$), then
$f(t)\ge0$ for all real $t$ in the connected component of $-\infty$. Conversely,
if $G(a,c^*)(1)-G(c,c^*)(1)+G(c,a^*)(1)-G(a,a^*)(1)\not\ge0$, then
$f(t)\not\ge0$ for all real $t$ in the connected component of $+\infty$.
\end{remark}
\begin{remark}\label{MatrixConvexity}
Finally, the function $\delta$ has been defined in terms of the length of a ``ray''
in a given direction. In this remark we look at the whole set of points $b$ for which the upper
triangular matrix $\begin{bmatrix}
a & b\\
0 & c\end{bmatrix}$ belongs to the chosen noncommutative set.
Consider a nc set $\mathcal D$ which satisfies property \eqref{propr2set}.
Fix $m,n\in\mathbb N$ and $a\in\mathcal D_n,c\in\mathcal D_m$. Let
$$
\daleth(a,c)_{\rm nc}=\coprod_{k\in\mathbb N}\left\{b\in\mathcal V^{nk\times mk}\colon
\begin{bmatrix}
I_k\otimes a & b\\
0 & I_k\otimes c\end{bmatrix}\in\mathcal D_{km+kn}\right\}.
$$
It is quite easy to see that this set is noncommutative: if $b=b_1\oplus b_2$ with $b_j\in
\daleth(a,c)_{k_j}$, $j=1,2$, then
$$
\begin{bmatrix}
I_{k_1+k_2}\otimes a & b\\
0 & I_{k_1+k_2}\otimes c\end{bmatrix}=
\begin{bmatrix}
I_{k_1}\otimes a & 0 & b_1 & 0 \\
0 & I_{k_2}\otimes a & 0 & b_2 \\
0 & 0 & I_{k_1}\otimes c & 0 \\
0 & 0 & 0 & I_{k_2}\otimes c\end{bmatrix}.
$$
By permuting rows 2 and 3 and columns 2 and 3 (which amounts to conjugation by a
scalar permutation matrix), we obtain
$$
\begin{bmatrix}
I_{k_1}\otimes a & b_1 & 0 & 0 \\
0 & I_{k_1}\otimes c & 0 & 0 \\
0 & 0 & I_{k_2}\otimes a & b_2 \\
0 & 0 & 0 & I_{k_2}\otimes c\end{bmatrix},
$$
which belongs to $\mathcal D_{(n+m)(k_1+k_2)}$ because $\mathcal D$ is a noncommutative set.
Similarly, $
\daleth(a,c)_{\rm nc}$ is invariant by conjugation with scalar unitary matrices: if $b\in
(\mathcal V^{n\times m})^{k\times k}$, then for any unitary matrix $U\in\mathbb C^{k\times k}$,
$$
\begin{bmatrix}
I_k\otimes a & UbU^*\\
0 & I_k\otimes c\end{bmatrix}=\begin{bmatrix}
U\otimes I_n & 0\\
0 & U\otimes I_m\end{bmatrix}\begin{bmatrix}
I_k\otimes a & b\\
0 & I_k\otimes c\end{bmatrix}\begin{bmatrix}
U^*\otimes I_n & 0\\
0 & U^*\otimes I_m\end{bmatrix},
$$
which belongs to $\mathcal D_{k(m+n)}$ by assumption \eqref{propr2set} on the set $\mathcal D$.
Given the unitary invariance of the set $\daleth(a,c)_{\rm nc}$ and Lemma
\ref{lemma3.4}, one is justified in asking whether $\daleth(a,c)_{\rm nc}$
is in fact matrix convex. That turns out to be false in general.
Let us recall Wittstock's definition of a matrix convex set (see \cite[Section 3]{EW}): a matrix convex set
is a noncommutative set $K=\coprod_nK_n$ such that for any $S\in\mathbb C^{r\times n}$
satisfying $S^*S=I_n$, we have $S^*K_rS\subseteq K_n$. Since $\daleth(a,c)_{\rm nc}$
is invariant by conjugation with scalar unitary matrices, matrix convexity of $\daleth(a,c)_{\rm nc}$
is equivalent to the following statement: for any $k<k'\in\mathbb N$ and $b\in\daleth(a,c)_{k'}$,
we have $\begin{bmatrix}
I_k & 0\end{bmatrix} b\begin{bmatrix}
I_k\\
0\end{bmatrix}\in\daleth(a,c)_{k},$ i.e. the upper left $k\times k$ corner of $b$ is
an element of $\daleth(a,c)_{\rm nc}$ whenever $b$ is. There is a simple counterexample
to this statement: consider the unit disk $\mathbb D$ in the complex plane, and the noncommutative
set
$$
\mathcal D=\coprod_{k\in\mathbb N}\{A\in\mathbb C^{k\times k}\colon \sigma(A)\subset\mathbb D,
\|A\|<k\}.
$$
This is clearly a noncommutative set (if $A_j\in\mathcal D_{k_j}$, then $\|A_1\oplus A_2\|=\max
\{\|A_1\|,\|A_2\|\}<\max\{k_1,k_2\}<k_1+k_2$) which is unitarily invariant ($\|U^*AU\|=\|A\|$).
However,
$$
\begin{bmatrix}
0 & 0 & 3 & 0\\
0 & 0 & 0 & \frac12\\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}\in\mathcal D_4,\quad\text{while}\quad \begin{bmatrix}
0 & 3\\
0 & 0
\end{bmatrix}\not\in\mathcal D_2,
$$
which means that $\begin{bmatrix}
3 & 0\\
0 & \frac12
\end{bmatrix}\in\daleth(0,0)_{2}$, while $3\not\in\daleth(0,0)_{1}$.
However, there are important classes of nc sets for which the set $\daleth$ is matrix convex.
One such example is the class of generalized half-planes (see Remark \ref{generalizeds}).
Consider an injective nc map $h\colon\mathcal V_{\rm nc}\to\mathcal A_{\rm nc}$ for some
unital $C^*$-algebra $\mathcal A$. Recall that a generalized half-plane is
$$
H^+_h(\mathcal V)=\coprod_{n=1}^\infty\{a\in\mathcal V^{n\times n}\colon h(a)+h(a)^*>0\}.
$$
Then elements $b\in\daleth(a,c)_{\rm nc}$ must satisfy
$$
(\Re h(a))^{-1/2}
\Delta h(a,c)(b)(\Re h(c))^{-1}\Delta h(a,c)(b)^*(\Re h(a))^{-1/2}<4\cdot1.
$$
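To indicate where this comes from: since $h$ is a noncommutative function, it maps
upper triangular arguments to upper triangular values,
$$
h\left(\begin{bmatrix}
a & b\\
0 & c\end{bmatrix}\right)=\begin{bmatrix}
h(a) & \Delta h(a,c)(b)\\
0 & h(c)\end{bmatrix},
$$
so membership in $H^+_h(\mathcal V)$ reads
$$
\begin{bmatrix}
2\Re h(a) & \Delta h(a,c)(b)\\
\Delta h(a,c)(b)^* & 2\Re h(c)\end{bmatrix}>0,
$$
which, by the Schur complement criterion for strict positivity, is equivalent to
$\Delta h(a,c)(b)\left(2\Re h(c)\right)^{-1}\Delta h(a,c)(b)^*<2\Re h(a)$; conjugating by
$(\Re h(a))^{-1/2}$ produces the inequality above, the factor $4$ coming from the two
factors of $2$.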
That is, for any $k'\in\mathbb N$,
\begin{eqnarray*}
\lefteqn{(I_{k'}\otimes\Re h(a))^{-1/2}\Delta h(I_{k'}\otimes a,I_{k'}\otimes c)(b)
(I_{k'}\otimes\Re h(c))^{-1}}\\
& & \mbox{}\times\Delta h(I_{k'}\otimes a,I_{k'}\otimes c)(b)^*(I_{k'}\otimes\Re h(a))^{-1/2}\\
& = &
\left[\sum_{l=1}^{k'}(\Re h(a))^{-1/2}\Delta h(a,c)(b_{il})(\Re h(c))^{-1}
\Delta h(a,c)(b_{jl})^*(\Re h(a))^{-1/2}\right]_{1\le i,j\le k'}\\
&<&4I_{k'}\otimes1.
\end{eqnarray*}
If one fixes such a $k'>1$ in $\mathbb N$ and a $b\in\daleth(a,c)_{k'}$, proving matrix convexity
comes to proving that the upper left $k\times k$ corner of $b$ is in $\daleth(a,c)_k$ for all $0<k<k'$.
That is,
$$
\left[\sum_{l=1}^{k}(\Re h(a))^{-1/2}\Delta h(a,c)(b_{il})(\Re h(c))^{-1}
\Delta h(a,c)(b_{jl})^*(\Re h(a))^{-1/2}\right]_{1\le i,j\le k}<4I_k\otimes1.
$$
Denoting $P_k$ the projection onto the first $k$ coordinates of $\mathbb C^{k'\times k'}$,
the above relation is equivalent to
\begin{eqnarray*}
\lefteqn{(P_kI_{k'}\otimes\Re h(a))^{-1/2}\Delta h(I_{k'}\otimes a,I_{k'}\otimes c)(b)
(P_kI_{k'}\otimes\Re h(c))^{-1}}\\
& & \mbox{}\times \Delta h(I_{k'}\otimes a,I_{k'}\otimes c)(b)^*(P_kI_{k'}\otimes\Re
h(a))^{-1/2}<4P_kI_{k'}\otimes1=4I_k\otimes 1.
\end{eqnarray*}
This is implied by the general fact that $AA^*<4I_{k'}\implies PAPA^*P\le4P=4I_k.$ Indeed,
clearly $AA^*<4I_{k'}\implies PAA^*P\le4P=4I_k$, and $P\leq I_{k'}\implies(PAP)(PAP)^*=
PAPA^*P\le PAA^*P\le4I_k.$ Thus, $P_kbP_k\in\daleth(a,c)_k$ for all $0<k<k'$.
Clearly this proof applies as well to generalized balls.
\end{remark}
As mentioned in Section \ref{pseudodistance}, the definition of $\tilde{d}_\mathcal D$
is similar to the definition of the Kobayashi distance. We show next that in fact there
are large families of sets $\mathcal D$ on which ${d}_\mathcal D$ actually
coincides with the Kobayashi distance.
\begin{prop}
Let $\mathcal V$ be an operator system and consider an injective noncommutative function
$h$ defined on $\mathcal V_{\rm nc}$ with values in a unital $C^*$-algebra
$\mathcal A$. Define the kernel $H(a,c)=1-h(a)\cdot h(c^*)^*$ and the set
$$
\mathcal D_{H,\rm nc}=\coprod_{n=1}^\infty\{a\in\mathcal V^{n\times n}\colon
H(a,a^*)(I_n)>0\}.
$$
Then ${d}_{\mathcal D_{H,\rm nc}}$ coincides level-by-level with the Kobayashi distance on
$\mathcal D_{H,\rm nc}$.
\end{prop}
\begin{proof}
Let us start by noting that, according to relation \eqref{NCdisk}, the infinitesimal
Poincar\'e (or hyperbolic) metric on the unit disk $\mathbb D$,
$\kappa_\mathbb D(z,v)$, coincides with $\delta_\mathbb D(z,z)(v)$:
they both equal $\frac{|v|}{1-\overline{z}z}$. Thus, the metric
generated by $\delta_\mathbb D(z,z)(v)$ coincides with the one
generated by $\kappa_\mathbb D(z,v)$, the Poincar\'e metric. We denote
by $k_A(\cdot,\cdot)$ the Kobayashi distance on the set $A$. Recall that the
Kobayashi metric is the largest metric (with the given normalization) that
is decreasing under holomorphic mappings. For any $n\in\mathbb N$ and $f\colon
\mathbb D\to\mathcal D_{H,n}$, we have
$$
k_{\mathcal D_{H,n}}(f(z),f(w))\leq k_\mathbb D(z,w)=d_{\mathbb D}(z,w),\quad z,w\in\mathbb D.
$$
Thus, $d_{\mathcal D_{H,n}}\leq k_{\mathcal D_{H,n}}$.
Let $x\in\mathcal D_{H,n},b\in\mathcal V^{n\times n}$. We claim that
\begin{eqnarray*}
\lefteqn{\delta_{\mathcal D_{H,n}}(x,x)(b)=}\\
& & \left[\sup\left\{t\ge0\colon\exists f\colon\mathbb D_{\rm nc}
\to\left(\mathcal D_{H,2n}\right)_{\rm nc},
f(0)=I_2\otimes x,\Delta f(0,0)(1)=t\begin{bmatrix}0&b\\0&0\end{bmatrix}\right\}\right]^{-1}.
\end{eqnarray*}
By $\left(\mathcal D_{H,n}\right)_{\rm nc}$ we of course denote the subset of
$\mathcal D_{H,\rm nc}$ formed of all levels which are multiples of $n$.
Indeed, inequality $\leq$ follows easily: as shown in Proposition \ref{contr},
and Lemma \ref{lemma3.3}, if $f$ is as in the right-hand side of the above relation, then
\begin{eqnarray*}
t\delta_{\mathcal D_{H,n}}(x,x)(b) & = & \delta_{\mathcal D_{H,n}}(x,x)(tb)\\
&=&\delta_{\mathcal D_{H,2n}}\left(\begin{bmatrix}x&0\\0&x\end{bmatrix},\begin{bmatrix}x&0\\0&x\end{bmatrix}\right)
\left(\begin{bmatrix}0&tb\\0&0\end{bmatrix}\right)\\
& = & \delta_{\mathcal D_{H,2n}}(f(0),f(0))(\Delta f(0,0)(1))\leq\delta_\mathbb D(0,0)(1)=1.
\end{eqnarray*}
We show the reverse inequality by finding an ``extremal'' function. Let $\iota=
\frac{1}{\delta_{\mathcal D_{H,n}}(x,x)(b)}$.
Consider the nc function $f=\{f_p\}_{p\in\mathbb N},$ $f_p(Z)=\begin{bmatrix}
I_p\otimes x & 0 \\
0 & I_p\otimes x\end{bmatrix}+Z\otimes\begin{bmatrix}
0 & \iota b \\
0 & 0\end{bmatrix}$, $Z\in\mathbb C^{p\times p}$, $\|Z\|<1$, $p\in\mathbb N$.
According to Lemmas \ref{lemma3.4} and \ref{lemma3.3}, we have
\begin{eqnarray*}
\delta_{\mathcal D_{H,n}}(x,x)(b) & > & \delta_{\mathcal D_{H,n}}(x,x)(b)\|Z\|\\
& = & \delta_{\mathcal D_{H,n}}\left(\begin{bmatrix}
I_p\otimes x & 0 \\
0 & I_p\otimes x\end{bmatrix},\begin{bmatrix}
I_p\otimes x & 0 \\
0 & I_p\otimes x\end{bmatrix}\right)\left(Z\otimes\begin{bmatrix}
0 & b \\
0 & 0\end{bmatrix}\right),
\end{eqnarray*}
whenever $Z$ is a contraction. Thus, $f$ takes values in $\left(\mathcal D_{H,2n}\right)_{\rm nc}$
and for $Z=1\in\mathbb C,$ we actually reach the supremum in the above equality.
Recall that $h$ is a noncommutative map by hypothesis.
The quantity $\delta_{\mathcal D_{H,n}}(x,x)(b)$ has an explicit formulation in terms of $h$:
$$
\delta_{\mathcal D_{H,n}}(x,x)(b)=\left\|(1-h(x)h(x)^*)^{-1/2}\Delta h(x,x)(b)
(1-h(x)^*h(x))^{-1/2}\right\|.
$$
The composition $h\circ f\colon
\mathbb D_{\rm nc}\to\mathcal A_{\rm nc}$ is then an nc map going from the nc unit ball
of $\mathbb C$ to the nc unit ball of the unital $C^*$-algebra $\mathcal A$, and we have
$\delta_\mathcal A((h\circ f)(0),(h\circ f)(0))(\Delta(h\circ f)(0,0)(1))=
\delta_\mathcal D(f(0),f(0))(\Delta f(0,0)(1))$. Thus, the two infinitesimal metrics are equal,
so, since on a ball in a $C^*$-algebra the Kobayashi distance is obtained by integrating
the (infinitesimal) metric, we conclude that $d_{\mathcal D_{H,n}}=k_{\mathcal D_{H,n}}$.
\end{proof}
\section{A classification of noncommutative domains of
holomorphy}
We turn now towards a classification of noncommutative domains (with respect to the level topology)
which contain no complex lines at any level, up to noncommutative holomorphic
equivalence, in terms of $\tilde\delta$. In this section we assume that
$\mathcal V$ is a Banach space, and when dealing with domains defined
by kernels, we further assume that $\mathcal V$ is an operator space.
Observe that if $f\colon\mathcal D\to
\mathcal E$ is a noncommutative automorphism
(i.e. a map which is bijective at each level, with analytic inverse), then the inequality
stated in Corollary \ref{trentatre} must hold in both directions
(for $f$ and $f^{\langle-1\rangle}$), so they must become equalities.
That is,
\begin{eqnarray}\label{conformal}
\tilde\delta_\mathcal D(a,c)=\tilde\delta_\mathcal E(f(a),f(c)),\quad a,c\in\mathcal D_n,n\in\mathbb N.
\end{eqnarray}
Conversely, assume that there is a function $f$ as above such that
equality \eqref{conformal} holds for all $a,c\in
\mathcal D_{n}$, $n\in\mathbb N$.
Then it follows trivially that $f$ is injective. Indeed, if not, there would be an $n\in\mathbb N$
and points $a\neq c\in\mathcal D_n$ such that $f(a)=f(c)$. Then we would have
$0=\tilde\delta_\mathcal E(f(a),f(c))=\tilde\delta_\mathcal D(a,c)$, which, by Theorem \ref{sep},
contradicts the hypothesis that $\mathcal D$ contains no
complex lines.
Proving the surjectivity of $f$ as a consequence of equality \eqref{conformal} is not possible in full
generality. We make the following assumption about our domains:
Given a noncommutative set $\mathcal D$ in the noncommutative extension $\mathcal V_{\rm nc}$ of a
Banach space $\mathcal V$, which is invariant under conjugation with scalar matrices,
$$
\text{For any }n\in\mathbb N\text{ and }a\in\mathcal D_n,\text{ if }\{c_k\}_{k\in\mathbb N}
\subset\mathcal D_n\text{ satisfies }\lim_{k\to\infty}\inf_{x\in\mathcal D_n^{\rm c}}\|x-c_k\|=0,
\text{ then }
$$
\vskip-0.5truecm\begin{equation}\label{hypo}
\lim_{k\to\infty}\tilde{\delta}_\mathcal D(a,c_k)=+\infty.
\end{equation}
This hypothesis does not exclude the possibility that $\tilde{\delta}_\mathcal D\equiv+\infty$.
\begin{thm}\label{classif}
Consider two noncommutative domains $\mathcal D$ and $\mathcal E$ in a given space
$\mathcal V_{\rm nc}$ which are invariant under conjugation by unitary scalar matrices and contain
no complex lines, and a noncommutative function $f\colon\mathcal D\to\mathcal E$. Assume that both
$\mathcal D$ and $\mathcal E$ satisfy hypothesis \eqref{hypo}. Then the following are equivalent:
\begin{enumerate}
\item $f$ satisfies $\tilde{\delta}_\mathcal D(a,c)=\tilde{\delta}_\mathcal E(f(a),f(c)),a,c\in\mathcal D.$
\item $f$ is a bijective noncommutative map, with
noncommutative inverse.
\end{enumerate}
\end{thm}
The reader might worry about a trivial counterexample: the map from the nc disk to the
nc bidisk sending $z$ to $(z,0)$. However, we excluded this possibility by the way we formulated our
statement: in this case, the nc disk is equal to its boundary in the ``environment'' in which the bidisk
lives, so according to \eqref{hypo}, its $\tilde\delta$ would have to be constantly equal to infinity.
\begin{proof} (2)$\implies$(1): This implication is trivial. We know that
$\tilde{\delta}_\mathcal D(a,c)\ge\tilde{\delta}_\mathcal E(f(a),f(c))$. If $a',c'\in\mathcal E$,
then by (2) there exist $a,c\in\mathcal D$ such that $f(a)=a',f(c)=c'$,
which means $f^{\langle-1\rangle}(a')=a,f^{\langle-1\rangle}(c')=c$. By Proposition \ref{contr},
$$
\tilde{\delta}_\mathcal E(f(a),f(c))=\tilde{\delta}_\mathcal E(a',c')\ge\tilde{\delta}_\mathcal D
\left(f^{\langle-1\rangle}(a'),f^{\langle-1\rangle}(c')\right)=\tilde{\delta}_\mathcal D(a,c).
$$
(1)$\implies$(2): We have already seen that under condition (1), $f$ is injective. Thus, we
need to show that $f$ is also surjective. Once that is established, the noncommutativity of
the correspondence $a'\mapsto f^{\langle-1\rangle}(a')$ allows us to conclude.
The essential part of the proof is in the following quite obvious lemma,
which we nevertheless state separately, since it might be of independent
interest.
\begin{lemma}\label{equality}
Consider a noncommutative domain $\mathcal D$ and a noncommutative subset $\mathcal D'
\subseteq\mathcal D$. Assume that both $\mathcal D$ and $\mathcal D'$ are invariant under
conjugation by scalar unitary matrices and satisfy hypothesis \eqref{hypo}. If $\tilde{\delta}_\mathcal D
(a,c)=\tilde{\delta}_{\mathcal D'}(a,c)$ for all $a,c\in\mathcal D'$, then $\mathcal D=\mathcal D'$.
\end{lemma}
\begin{proof}
The proof of this lemma is utterly trivial: assume towards contradiction that there exist points
in $\mathcal D\setminus\mathcal D'$. Pick a point $x\in\mathcal D\cap\partial\mathcal D'$ (by
$\partial\mathcal D'$ we understand the boundary of the set $\mathcal D'$ at the corresponding level
$n$ in the norm topology of the Banach space $\mathcal V^{n\times n}$) and a point $a\in\mathcal D'$.
By the definition of the boundary, there exists a sequence $\{c_k\}_{k\in\mathbb N}\subset\mathcal D'$
converging to $x$ in norm. In particular, $\{c_k\}_{k\in\mathbb N}$ satisfies the condition of hypothesis
\eqref{hypo}, so that $\tilde{\delta}_{\mathcal D'}(a,c_k)\to+\infty$ as $k\to\infty$. By Remark
\ref{rem-cont}, we have
$$
\infty>\tilde{\delta}_{\mathcal D}(a,x)\ge\limsup_{k\to\infty}\tilde{\delta}_{\mathcal D}(a,c_k)=
\limsup_{k\to\infty}\tilde{\delta}_{\mathcal D'}(a,c_k)=\infty,
$$
an obvious contradiction. Thus, $\mathcal D'=\mathcal D$, as claimed.
\end{proof}
Consider the set
$f(\mathcal D)\subset\mathcal E$. For any $x,y\in f(\mathcal D)$, there exist
unique $a,c\in\mathcal D$ such that $f(a)=x,f(c)=y$.
It follows from Proposition \ref{contr} that $\delta_{f(\mathcal D)}(x,y)(x-y)\le\delta_\mathcal D(a,c)(a-c)$.
Since $f(\mathcal D)\subset\mathcal E,$ we necessarily have $\tilde\delta_{f(\mathcal D)}(x,y)
\ge\tilde\delta_\mathcal E(x,y)$. Together with the hypothesis of (1), we obtain
$$
\tilde\delta_\mathcal D(a,c)=\tilde\delta_\mathcal E(f(a),f(c))=
\tilde\delta_\mathcal E(x,y)\le\tilde\delta_{f(\mathcal D)}(x,y)\le\tilde\delta_\mathcal D(a,c),
$$
so that $\tilde\delta_{f(\mathcal D)}(x,y)=\tilde\delta_\mathcal E(x,y)$ for all $x,y\in
f(\mathcal D)$. By Lemma \ref{equality}, we conclude that $f(\mathcal D)=\mathcal E$.
\end{proof}
\begin{remark}\label{bdryblow-up}
It turns out that in Theorem \ref{classif} we cannot dispense with the requirement
that $\tilde\delta$ blows up at the boundary. The following counterexample, similar to the
one in Remark \ref{MatrixConvexity}, shows what goes wrong if this requirement is dropped.
Consider a domain $D\subseteq\frac12\mathbb D\subset\mathbb C$ and define the
nc set
$$
\mathcal D=\coprod_{n=1}^\infty\{A\in\mathbb C^{n\times n}\colon\sigma(A)\subset D,\|A\|<1\}.
$$
The proof from Remark \ref{MatrixConvexity} applies to show that $\mathcal D$ is a unitarily
invariant noncommutative set which is open at each level. However, a direct computation shows that
$\left\|\begin{bmatrix}
a & b\\
0 & c\end{bmatrix}\right\|<1$ if and only if $aa^*+bb^*<1,cc^*<1$, and
$bc^*(1-cc^*)^{-1}cb^*<1-aa^*-bb^*$ (this holds in an arbitrary $C^*$-algebra).
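Indeed, writing $T=\begin{bmatrix}
a & b\\
0 & c\end{bmatrix}$, the condition $\|T\|<1$ means $1-TT^*>0$, that is,
$$
\begin{bmatrix}
1-aa^*-bb^* & -bc^*\\
-cb^* & 1-cc^*\end{bmatrix}>0,
$$
and the stated inequalities are precisely the Schur complement conditions for this block matrix.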
Since the other restriction in the definition of $\mathcal D$ is on the spectrum of the
matrix $A$, it only affects $a$ and $c$; there is no other restriction on $b$.
The last inequality is equivalent to
$b((1-c^*c)^{-1}-1)b^*<1-aa^*-bb^*$, which is in its own turn equivalent to
\begin{equation}\label{NCBall}
(1-aa^*)^{-\frac12}b(1-c^*c)^{-1}b^*(1-aa^*)^{-\frac12}<1.
\end{equation}
Thus,
\begin{equation}\label{NCdisk}
\delta_\mathcal D(a,c)(b)=\left\|(1-aa^*)^{-\frac12}b(1-c^*c)^{-\frac12}\right\|.
\end{equation}
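In more detail: replacing $b$ by $sb$, $s>0$, in \eqref{NCBall} and writing
$X=(1-aa^*)^{-\frac12}b(1-c^*c)^{-\frac12}$, the membership condition becomes $s^2XX^*<1$,
that is, $s<\|X\|^{-1}$; since the spectral restriction does not involve $b$, the supremum of
the admissible $s$ is exactly $\|X\|^{-1}$, which gives \eqref{NCdisk}.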
However, for any choice of selfadjoint elements $a$ and $c$, we have $\tilde{\delta}_\mathcal D(a,c)
\leq \frac43$. Thus, $\tilde{\delta}$ stays bounded (by $4/3$) on the intersection of the
selfadjoint elements with $\mathcal D$.
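This follows directly from \eqref{NCdisk}: a selfadjoint $a$ with $\sigma(a)\subset D
\subseteq\frac12\mathbb D$ satisfies $\|a\|\le\frac12$ and $1-a^2\ge\frac34$, so for
selfadjoint $a,c\in\mathcal D_n$,
$$
\tilde{\delta}_\mathcal D(a,c)=\left\|(1-a^2)^{-\frac12}(a-c)(1-c^2)^{-\frac12}\right\|\le
\sqrt{\tfrac43}\cdot\|a-c\|\cdot\sqrt{\tfrac43}\le\frac43,
$$
since $\|a-c\|\le\|a\|+\|c\|\le1$.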
On the other hand, we have $\mathcal D\subsetneq B_1(\mathbb C)$, the nc unit ball of $\mathbb C$,
and $\delta_{B_1(\mathbb C)}|_\mathcal D=\delta_\mathcal D$.
\end{remark}
In the context of Lemma \ref{equality}, we record here the ``opposite'' case: we show that, under
certain conditions, strict inclusion of domains leads to strict inequalities between the associated
distances.
\begin{prop}\label{ka}
Consider an operator space $\mathcal V$. Let $\mathcal D'\subset\mathcal D$ be an inclusion of
noncommutative domains in $\mathcal V_{\rm nc}$. Assume that
\begin{enumerate}
\item $M:=\sup_{n\in\mathbb N}\sup_{x\in\mathcal D'_n}\|x\|<+\infty$;
\item $m:=\inf_{n\in\mathbb N}
\inf\{\|x-w\|\colon x\in\mathcal D'_n,w\in\mathcal V^{n\times n}\setminus\mathcal D_n\}>0.$
\end{enumerate}
Then there exists a constant $k\in[0,1)$ such that
$k\tilde\delta_{\mathcal D'}\ge\tilde\delta_\mathcal D$.
\end{prop}
\begin{proof}
Let $n$ be a fixed level, and pick $a,c\in\mathcal D'_n$. By definition,
$$
\tilde\delta_{\mathcal D'}(a,c)^{-1}=\sup\left\{t>0\colon\begin{bmatrix}
a & s(c-a)\\
0 & c
\end{bmatrix}\in\mathcal D'_{2n} \text{ for all }s<t\right\},
$$
$$
\tilde\delta_{\mathcal D}(a,c)^{-1}=\sup\left\{t>0\colon\begin{bmatrix}
a & r(c-a)\\
0 & c
\end{bmatrix}\in\mathcal D_{2n} \text{ for all }r<t\right\}.
$$
We know that the distance from $\mathcal D'_{2n}$ to $\mathcal V^{2n\times 2n}\setminus\mathcal
D_{2n}$ is at least $m$, so that
$$
(\tilde\delta_{\mathcal D}(a,c)^{-1}-s)\|c-a\|=\left\|\begin{bmatrix}
a & s(c-a)\\
0 & c
\end{bmatrix}-\begin{bmatrix}
a & \tilde\delta_{\mathcal D}(a,c)^{-1}(c-a)\\
0 & c
\end{bmatrix}\right\|\ge m
$$
whenever $\begin{bmatrix}
a & s(c-a)\\
0 & c
\end{bmatrix}\in\mathcal D'_{2n}$,
and thus, $\tilde\delta_{\mathcal D}(a,c)^{-1}-\tilde\delta_{\mathcal D'}(a,c)^{-1}\ge
\frac{m}{\|c-a\|}$ for any $a,c\in\mathcal D'_{n},a\neq c$. It follows that
$$
\frac{\tilde\delta_{\mathcal D'}(a,c)}{\tilde\delta_{\mathcal D}(a,c)}\ge
1+\frac{m\tilde\delta_{\mathcal D'}(a,c)}{\|a-c\|}=1+m
\delta_{\mathcal D'}(a,c)\left(\frac{a-c}{\|a-c\|}\right).
$$
We bound from below $\delta_{\mathcal D'}(a,c)(b)$ when $\|b\|=1$ and $a,c\in\mathcal D'_{n}$.
We have $\delta_{\mathcal D'}(a,c)(b)>\xi\iff \delta_{\mathcal D'}(a,c)(b)^{-1}<\xi^{-1}$; but
any element in $\mathcal D'_{2n}$ has norm bounded from above by $M$, so $\begin{bmatrix}
a & sb\\
0 & c
\end{bmatrix}\in\mathcal D'_{2n}$ implies $|s|=\|sb\|\leq M$. Thus, $\delta_{\mathcal D'}(a,c)(b)\ge
M^{-1}.$ We obtain
$
\frac{\tilde\delta_{\mathcal D'}(a,c)}{\tilde\delta_{\mathcal D}(a,c)}\ge
1+\frac{m}{M}=\frac1k,
$
so the conclusion holds with the constant $k=\frac{M}{m+M}<1$.
\end{proof}
For the distance $\tilde{d}$, we have
\begin{cor}\label{kar}
Under the assumptions, and with the notations, of Proposition \ref{ka},
we have $k\tilde{d}_{\mathcal D'}\ge\tilde{d}_{\mathcal D}.$
\end{cor}
\begin{proof}
For any $n\in\mathbb N$, $a,c\in\mathcal D'_{n}$, and division $a=a_0,a_1,\dots,a_N=c\in
\mathcal D'_n$, we have
$$
k\sum_{j=1}^N\tilde\delta_{\mathcal D'}(a_{j-1},a_j)\ge\sum_{j=1}^N\tilde\delta_{\mathcal D}(a_{j-1},a_j).
$$
The right-hand side dominates $\tilde{d}_{\mathcal D}(a,c)$, because every division in
$\mathcal D'_n$ is in particular a division in $\mathcal D_n$; taking the infimum over
all divisions on the left-hand side then yields $k\tilde{d}_{\mathcal D'}
(a,c)\ge\tilde{d}_{\mathcal D}(a,c)$.
\end{proof}
As a side benefit, we obtain from the proof of Proposition \ref{ka} that on bounded domains in
operator spaces, $\tilde\delta$ and the norm are locally equivalent. We have already seen in
Proposition \ref{dreiwolf} that if $\|a_k-a\|\to0$, then $\tilde\delta_\mathcal D(a_k,a)\to0$
and thus $\tilde{d}_\mathcal D(a_k,a)\to0$. Now assume that in a bounded domain $\mathcal D$
we have a sequence $\{a_k\}_{k\in\mathbb N}\subset\mathcal D$ and a point $a\in\mathcal D_n$
so that $\tilde{d}_\mathcal D(a_k,a)\to0$ as $k\to\infty$. We have seen in the proof of
Proposition \ref{ka} that $\delta_\mathcal D(a,c)(b)\ge M^{-1}$ if $\mathcal D$ is included
in a norm-ball of radius $M$, uniformly in $a,c\in\mathcal D_n,b\in\mathcal V^{n\times n}$, $\|b\|=1$,
$n\in\mathbb N$. Thus, $\tilde\delta_{\mathcal D}(a,c)\ge M^{-1}\|a-c\|$, so that
for any division $a=a_0,a_1,\dots,a_N=c$ in $\mathcal D_n$, we have
$\sum_{j=1}^N\tilde\delta_{\mathcal D}(a_{j-1},a_j)\ge M^{-1}\sum_{j=1}^N\|a_j-a_{j-1}\|
\ge M^{-1}\|a-c\|.$ Thus, $\tilde{d}_\mathcal D(a,c)\ge M^{-1}\|a-c\|$. Applying this to
$c=a_k$ yields $\lim_{k\to\infty}\|a-a_k\|=0$. We have proved
\begin{prop}\label{coincidence}
If $\mathcal D$ is a bounded nc domain in an operator space $\mathcal V$ and $n\in\mathbb N$, then
on any subset $A\subset\mathcal D_n$ which is at a positive distance from $\mathcal D_n^c$, the
topologies induced by $\tilde{d}_\mathcal D$ and the norm of $\mathcal V^{n\times n}$ coincide.
\end{prop}
\begin{remark}
A very similar proof shows that the result stated in Proposition \ref{coincidence} holds also
for bounded strict subsets of half-planes.
\end{remark}
\section{An application to a problem in free probability}
In this section, we use some of the tools introduced before in order to study a problem in
free probability. We consider a $C^*$-noncommutative probability space $(M, E,B)$, where
$B\subseteq M$ is a unital inclusion of $C^*$-algebras and $E\colon M\to B$
is a unit-preserving conditional expectation. Elements in $M$ are called operator-valued
(or, sometimes, $B$-valued) random variables. If $X=X^*\in M$, we define the
{\em distribution of $X$ with respect to $E$} to be the collection of multilinear maps
$$
\mu_X=\{m_{n,X},n\in\mathbb N\},
$$
where $m_{0,X}=1\in B\subseteq M$, $m_{1,X}=E[X]\in B$, and
$$
m_{n,X}\colon\underbrace{B\times\cdots\times B}_{n-1\textrm{ times}}\to B,
\ m_{n,X}(b_1,\dots,b_{n-1})=E[Xb_1Xb_2\cdots Xb_{n-1}X],n>1.
$$
Such distributions are encoded analytically by the noncommutative Cauchy-Stieltjes
transform (see Example \ref{example21}(3)):
$$
G_{X,n}(b)=E\left[(b-I_n\otimes X)^{-1}\right],\quad n\in\mathbb N,b\in B^{n\times n},\Im b>0.
$$
This is a noncommutative function mapping the noncommutative upper half-plane
of $B$ into the noncommutative lower half-plane (see, for instance, \cite{V2}).
It has several good properties, including the fact that $\Im G_{X,n}(b)<0$, so that
$F_{X,n}(b):=G_{X,n}(b)^{-1}$ exists and maps elements of positive imaginary part
into elements of positive imaginary part. Moreover, it has been shown in
\cite{BPV1} that $\Im F_{X,n}(b)\ge\Im b$, so that $h_{X,n}(b):=F_{X,n}(b)-b,$
$\Im b>0$, takes values in the set of elements of nonnegative imaginary part.
It has been shown in \cite{ABFN} that for any given selfadjoint $X\in M$
and completely positive map $\rho\colon B\to B$ such that $\rho-
\mathrm{Id}_B$ is still completely positive on $B$, there exists
a selfadjoint $X_\rho$ in a possibly larger $C^*$-algebra
containing $M$ such that $E$ extends to this possibly
larger algebra and the following relations hold:
\begin{equation}\label{semigroup}
G_{X_\rho,n}(b)=G_{X,n}(\omega_\rho(b)),\quad\omega_\rho(b)
=b+(\rho-
\mathrm{Id}_B)h_{X,n}(\omega_\rho(b)),\quad\Im b>0,n\in\mathbb N.
\end{equation}
In terms of the free probability significance of $X_\rho$, we only
mention that $\mu_{X_\rho}=\mu_X^{\boxplus\rho}$, and refer
the interested reader to \cite{ABFN} for details. We wish to
mention, however, that, thanks to a trick due to Hari Bercovici,
understanding free convolution powers indexed by
completely positive maps suffices in order to
understand free additive convolutions of operator-valued
distributions, so, in a certain sense, $\{\mu_X^{\boxplus\rho}\colon
\rho\textrm{ and }\rho-
\mathrm{Id}_B\textrm{ completely positive}\}$ is the most
general object to understand in the context of free convolutions
of operator-valued distributions.
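As a purely illustrative scalar sanity check of \eqref{semigroup} (a hedged numerical sketch, not part of the operator-valued theory): take $B=\mathbb C$, let $X$ be a standard semicircular variable, and let $\rho=t\,\mathrm{Id}$ with $t\ge1$; the subordination function $\omega_\rho$ can then be computed by direct fixed-point iteration. The function names below are ad hoc.

```python
import cmath

def G(b):
    """Cauchy transform of the standard semicircle law on [-2, 2]."""
    s = cmath.sqrt(b * b - 4)
    if s.imag < 0:          # choose the branch with Im sqrt(b^2-4) > 0 on C^+
        s = -s
    return (b - s) / 2      # maps the upper half-plane into the lower one

def h(b):
    """h = F - b with F = 1/G (for the semicircle, h = -G)."""
    return 1 / G(b) - b

def omega(b, t, n_iter=200):
    """Iterate w -> b + (t-1) h(w); the attracting fixed point is omega_rho(b)."""
    w = b
    for _ in range(n_iter):
        w = b + (t - 1) * h(w)
    return w
```

For $b$ in the upper half-plane, $G(\omega(b,t))$ agrees with the Cauchy transform of the semicircle law of variance $t$, as predicted by $\mu_X^{\boxplus t}$.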
All of the above has been done for selfadjoint
operators that belong to $M$, that is, bounded
selfadjoint operators. We will apply our results in
order to show that, under certain hypotheses,
this can be also done for unbounded
operators $X=X^*$ affiliated to $M$, making a step in the direction
of a full generalization of the results of \cite{BVIUMJ}. Our
hypotheses will be the following:
\begin{enumerate}
\item[(H1)] $B$ and $X$ generate an algebra of (unbounded)
operators $B\langle X\rangle$, such that the spectral
projections of any selfadjoint element of $B\langle X\rangle$
belong to $M$. In particular, the distribution of any selfadjoint
element from $B\langle X\rangle$ with respect to any continuous
linear functional on $M$ must be a probability measure;
\item[(H2)] $E\left[\Im(b-X)^{-1}\right]<0$ whenever $\Im b>0$ in $B$.
\end{enumerate}
Hypothesis (H1) is very natural, in the sense that otherwise there
would hardly be a way to conceive a $B$-valued distribution of $X$.
It is clearly satisfied under the assumption that $M$ is a finite factor.
Hypothesis (H2) deserves a few more comments. It is natural in that it
allows the analytic function tools (including the $R$-transform
of Voiculescu - see \cite{V*,VFAQ2}) to be deployed. But it can also be
viewed as a measure of nondegeneracy of $E$: indeed, let $b=u+iv,
u=u^*,v>0$. Then
\begin{eqnarray*}
E\left[\Im(b-X)^{-1}\right] & = & E\left[\Im ((u-X)+iv)^{-1}\right]\\
& = & -v^{-\frac12}
E\left[\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}\right]
v^{-\frac12},
\end{eqnarray*}
so that $E\left[\Im(b-X)^{-1}\right]<0$ if and only if
$E\left[\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}\right]>0$.
It is clear that, since $X$ is unbounded, $0=\min
\sigma\left(\left(\left(v^{-1/2}(u-X)v^{-1/2}\right)^2+1\right)^{-1}\right)$. Also,
$k:=\left\|\left(\left(v^{-1/2}(u-X)v^{-1/2}\right)^2+1\right)^{-1}\right\|\leq1$.
Thus, non-invertibility of $E\left[\Im(b-X)^{-1}\right]$ becomes equivalent
to the equality
$$
\left\|E\left[\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}-k\right]\right\|
=\left\|\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}-k\right\|.
$$
That is, $E$ is isometric on an element which is not in $B$. Thinking in terms of the
duals of $M$ and $B$, respectively, this tells us that there exists an element $\varphi$
of norm one in the dual of $B$ such that
$\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}-k$
reaches its norm on $\varphi\circ E$. Thus, hypothesis (H2) is implied
by the requirement that elements in $M$ but not in $B$ do not reach
their norms on $B^*\circ E$. It may be worth mentioning that in the case of
a tracial $W^*$-probability space with normal faithful trace state $\tau$
which is left invariant by $E$, hypothesis (H2) comes to stating that
$\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}-k$
does not reach its norm on $L^2(B,\tau)$, and in the case when $B$ is
finite dimensional, (H2) is equivalent to not allowing algebraic relations
between $X$ and elements in $B$.
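The underlying operator identity (with $E$ removed; the conditional expectation commutes with multiplication by $v^{\pm\frac12}\in B$) can be tested numerically in a finite-dimensional toy model. This is a hedged illustration only; the matrices and names below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def herm(n):
    """A random hermitian n x n matrix."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

u, X = herm(n), herm(n)
p = herm(n)
v = p @ p + np.eye(n)                 # a positive definite "imaginary part"

# v^{-1/2} via the spectral theorem:
ev, Q = np.linalg.eigh(v)
v_inv_half = Q @ np.diag(ev ** -0.5) @ Q.conj().T

R = np.linalg.inv((u - X) + 1j * v)   # the resolvent (b - X)^{-1}, b = u + iv
im_R = (R - R.conj().T) / 2j          # its operator imaginary part
s = v_inv_half @ (u - X) @ v_inv_half
rhs = -v_inv_half @ np.linalg.inv(s @ s + np.eye(n)) @ v_inv_half
```

The check confirms that $\Im(b-X)^{-1}=-v^{-\frac12}\left(\left(v^{-\frac12}(u-X)v^{-\frac12}\right)^2+1\right)^{-1}v^{-\frac12}$ is strictly negative, as used above.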
In this section, we shall show that
the fixed point equation \eqref{semigroup}
has a nontrivial solution also when $X$ is unbounded, but
still satisfies hypotheses (H1) and (H2) above. Unfortunately,
that is not exactly sufficient in order to characterize the
distribution of $X_\rho$ for all possible unbounded
random variables $X$ as above, as shown in \cite{Wil}. However,
it does cover a significant number of special cases, including those
of many unbounded operators with no moments.
\begin{thm}\label{semigr}
Consider a noncommutative function $h\colon H^+(B)\to B$ such that $\Im h(b)\ge0$
and $\lim_{y\to+\infty}\frac{\Im h(\Re b+iy\Im b)}{y}=0$ in the {\rm wo}-topology for all $b\in H^+(B)$.
For any given $b>0$, the map $w\mapsto b+h(w)$ has a unique attracting
fixed point in $H^+(B)$, to be denoted by $\omega(b)$, and the correspondence
$b\mapsto\omega(b)$ is a noncommutative self-map of $H^+(B)$, hence,
in particular, analytic.
\end{thm}
\begin{proof}
It is clearly enough to prove the theorem at level 1. Thus, fix
$b_0\in B$ such that $\Im b_0>\varepsilon_01>0$. For any $n\ge1$,
the map $h_0\colon w\mapsto b_0\otimes I_n+h(w)$ sends $H^+(B^{n\times n})$
into $H^+(B^{n\times n})+i\varepsilon_01,$ so that, as a
noncommutative map, it sends $H^+(B)$ to
$H^+(B)+i\varepsilon_01.$ We re-write the proof of Corollary \ref{trentatre}
for this context: if $\Im\begin{bmatrix} a & b \\ 0 & c\end{bmatrix}
\in H^+(B^{2\times 2})$, then $\Im h_0\left(\begin{bmatrix}
a & b \\ 0 & c\end{bmatrix}\right)
\in H^+(B^{2\times 2})+i\varepsilon_01.$ That means
$(\Im h_0(a)-\varepsilon_01)^{-1/2}\Delta h_0(a,c)(b)
(\Im h_0(c)-\varepsilon_01)^{-1}\Delta h_0(a,c)(b)^*
(\Im h_0(a)-\varepsilon_01)^{-1/2}\leq\|(\Im a)^{-1/2}b(\Im c)^{-1/2}\|^2\cdot1$
for all $a,c\in H^+(B)$, $b\in B$. We re-write this as
$$
\Delta h_0(a,c)(b)
(\Im h_0(c)-\varepsilon_01)^{-1}\Delta h_0(a,c)(b)^*\leq
\|(\Im a)^{-\frac12}b(\Im c)^{-\frac12}\|^2(\Im h_0(a)-\varepsilon_01).
$$
Multiplying left and right by $(\Im h_0(a))^{-1/2}$, we obtain
\begin{eqnarray*}
\lefteqn{(\Im h_0(a))^{-1/2}\Delta h_0(a,c)(b)
(\Im h_0(c)-\varepsilon_01)^{-1}\Delta h_0(a,c)(b)^*(\Im h_0(a))^{-1/2}}\\
& \leq & \|(\Im a)^{-\frac12}b(\Im c)^{-\frac12}\|^2(1-\varepsilon_0(\Im h_0(a))^{-1})\\
& \leq & \|(\Im a)^{-\frac12}b(\Im c)^{-\frac12}\|^2\|1-\varepsilon_0(\Im h_0(a))^{-1}\|\\
& = & \|(\Im a)^{-\frac12}b(\Im c)^{-\frac12}\|^2(1-\varepsilon_0\|\Im h_0(a)\|^{-1}).
\quad\quad\quad\quad\quad\quad\quad\quad\quad
\end{eqnarray*}
Since $xx^*\leq M\cdot1\iff x^*x\leq M\cdot1$, we immediately
obtain
\begin{eqnarray}
\lefteqn{(\Im h_0(a))^{-1/2}\Delta h_0(a,c)(b)
(\Im h_0(c))^{-1}\Delta h_0(a,c)(b)^*(\Im h_0(a))^{-1/2}}\nonumber\\
& \leq & \|(\Im a)^{-\frac12}b(\Im c)^{-\frac12}\|^2
(1-\varepsilon_0\|\Im h_0(a)\|^{-1})
(1-\varepsilon_0\|\Im h_0(c)\|^{-1}).\label{strict-deriv}
\end{eqnarray}
Applying this to $b=a-c$ yields
\begin{eqnarray}
\lefteqn{(\Im h_0(a))^{-1/2}(h_0(a)-h_0(c))
(\Im h_0(c))^{-1}(h_0(a)-h_0(c))^*(\Im h_0(a))^{-1/2}}\nonumber\\
& \leq & \|(\Im a)^{-\frac12}(a-c)(\Im c)^{-\frac12}\|^2
(1-\varepsilon_0\|\Im h_0(a)\|^{-1})
(1-\varepsilon_0\|\Im h_0(c)\|^{-1}).\label{strict}
\end{eqnarray}
It thus follows that if $\omega(b_0)\in H^+(B)+i\varepsilon_01$
is a fixed point for $h_0$, then it must be the unique and
attracting fixed point of $h_0$. Indeed, for an arbitrary
$a\in H^+(B)$, if we let
$r=\|(\Im a)^{-1/2}(a-\omega(b_0))(\Im\omega(b_0))^{-1/2}\|$,
it follows that
$$
h_0(B(\omega(b_0),2r))\subset B(\omega(b_0),2r),
$$
where, as in \cite[Proposition 3.2]{JLMS}, we denote
\begin{equation}\label{pseudoball}
B(c,t)=\{a\in B\colon\|(\Im a)^{-1/2}(a-c)(\Im c)^{-1/2}\|\leq t\}.
\end{equation}
It has been shown in \cite[Proposition 3.2]{JLMS} that
$B(\omega(b_0),2r)$ is bounded in norm in the sense that
$$
\|d\|\leq\|\Re\omega(b_0)\|+\|\Im\omega(b_0)\|\left(
2r^2+1+2r\sqrt{r^2+1}+2r\sqrt{2r^2+1+2r\sqrt{r^2+1}}\right),
\quad d\in B(\omega(b_0),2r),
$$
and that it is bounded away from the boundary of $H^+(B)$ in the sense that
$$
\Im d\ge\frac{1}{2+4r^2}\Im\omega(b_0), \quad d\in B(\omega(b_0),2r).
$$
Thus, for any $N\in\mathbb N$, we have, by an iteration of \eqref{strict},
\begin{eqnarray}
\lefteqn{\left\|(\Im h_0^{\circ N}(a))^{-\frac12}(h_0^{\circ N}(a)-\omega(b_0))
(\Im\omega(b_0))^{-\frac12}\right\|^2}\nonumber\\
& = & \left\|(\Im h_0^{\circ N}(a))^{-\frac12}(h_0^{\circ N}(a)-h_0^{\circ N}(\omega(b_0)))
(\Im h_0^{\circ N}(\omega(b_0)))^{-\frac12}\right\|^2\nonumber\\
& \leq & \|(\Im a)^{-\frac12}(a-\omega(b_0))(\Im\omega(b_0))^{-\frac12}\|^2\nonumber\\
& & \mbox{}\times\prod_{j=1}^N(1-\varepsilon_0\|\Im h_0^{\circ j}(a)\|^{-1})
(1-\varepsilon_0\|\Im h_0^{\circ j}(\omega(b_0))\|^{-1})\nonumber\\
& = & \|(\Im a)^{-\frac12}(a-\omega(b_0))(\Im\omega(b_0))^{-\frac12}\|^2\nonumber\\
& & \mbox{}\times(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})^N
\prod_{j=1}^N(1-\varepsilon_0\|\Im h_0^{\circ j}(a)\|^{-1}).\label{iter}
\end{eqnarray}
Letting $N$ go to infinity sends $(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})^N$ to
zero, so that
$$
\lim_{N\to\infty}\left\|(\Im h_0^{\circ N}(a))^{-\frac12}(h_0^{\circ N}(a)-\omega(b_0))
(\Im\omega(b_0))^{-\frac12}\right\|^2=0.
$$
Recall that $x^{-1}\ge\frac{1}{\|x\|}$ for any positive invertible operator $x$. Since
$$
\frac{1}{2+4r^2}\Im\omega(b_0)\leq\Im h_0^{\circ N}(a)\leq
\|\Re\omega(b_0)\|+\|\Im\omega(b_0)\|\left(4r+1\right)^2,
$$
we have
\begin{eqnarray*}
\lefteqn{\left\|(\Im h_0^{\circ N}(a))^{-\frac12}(h_0^{\circ N}(a)-\omega(b_0))
(\Im\omega(b_0))^{-\frac12}\right\|^2}\\
& \geq &
\frac{\|h_0^{\circ N}(a)-\omega(b_0)\|^2}{\|\Im\omega(b_0)\|\|\Im h_0^{\circ N}(a)\|}\\
& \ge & \frac{\|h_0^{\circ N}(a)-\omega(b_0)\|^2}{\|\Im\omega(b_0)\|
\|\Re\omega(b_0)\|+\|\Im\omega(b_0)\|^2\left(4r+1\right)^2},
\end{eqnarray*}
which allows us to conclude that
$$
\lim_{N\to\infty}\left\|h_0^{\circ N}(a)-\omega(b_0)\right\|=0,
$$
uniformly on bounded sets which are at strictly positive
norm-distance from the complement of $H^+(B)$.
Iterating in relation \eqref{strict-deriv} for $a=c=\omega(b_0)$
yields
\begin{eqnarray*}
\frac{\|[h'_0(\omega(b_0))]^{\circ N}(b)\|}{\|\Im\omega(b_0)\|} & \leq &
\|(\Im\omega(b_0))^{-\frac12}[h'_0(\omega(b_0))]^{\circ N}(b)(\Im\omega(b_0))^{-\frac12}\|\\
& \leq &
\|(\Im\omega(b_0))^{-\frac12}b(\Im\omega(b_0))^{-\frac12}\|
(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})^N\\
& \leq & \|(\Im\omega(b_0))^{-1}\|(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})^N\|b\|,
\end{eqnarray*}
which implies that
$$
\|[h'_0(\omega(b_0))]^{\circ N}(b)\|\leq\|\Im\omega(b_0)\|
\|(\Im\omega(b_0))^{-1}\|(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})^N\|b\|
$$
for all $b\in B$, so that
$$
\|[h'_0(\omega(b_0))]^{\circ N}\|\leq\|\Im\omega(b_0)\|
\|(\Im\omega(b_0))^{-1}\|(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})^N,
$$
the norm of $[h'_0(\omega(b_0))]^{\circ N}$ being the norm
of a bounded linear map on the $C^*$-algebra $B$. Thus, for
$N>\frac{\log(\|\Im\omega(b_0)\|
\|(\Im\omega(b_0))^{-1}\|)}{-\log(1-\varepsilon_0\|\Im\omega(b_0)\|^{-1})}$,
we have $\|[h'_0(\omega(b_0))]^{\circ N}\|<1$. In general, if a linear
operator $T$ on a Banach space $B$ satisfies $\|T^N\|<1$, we may write
$\sum_{j=0}^{kN-1}T^j=(1+T+\cdots+T^{N-1})+
T^N(1+T+\cdots+T^{N-1})+T^{2N}(1+T+\cdots+T^{N-1})
+\cdots+T^{(k-1)N}(1+T+\cdots+T^{N-1})=
(1+T+\cdots+T^{N-1})\sum_{j=0}^{k-1}(T^N)^j,$
which tends to $(1+T+\cdots+T^{N-1})(1-T^N)^{-1}$ as $k\to\infty$.
Since $N$ is fixed, it follows easily that $\sum_{j=0}^{k}T^j$ converges to the same limit.
A simple algebraic manipulation shows that $(1+T+\cdots+T^{N-1})(1-T^N)^{-1}=
(1-T)^{-1}.$ Thus, $\mathrm{Id}_B-h'_0(\omega(b_0))$ is invertible as
a linear self-map of the Banach space $B$. By the implicit function theorem
for analytic maps on Banach spaces, it follows that $\omega$ depends
analytically on $b_0$. This result, together with the properties of
fixed points for noncommutative maps proved in \cite{AKV}, allows us to
conclude that $\omega$ is a noncommutative map on a noncommutative
neighbourhood of $b_0$.
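The invertibility argument above is purely Banach-algebraic, and it is easy to test numerically (a hedged sketch with an arbitrarily chosen matrix, not part of the proof): a matrix may have norm larger than $1$ while some power has norm smaller than $1$, and the full Neumann series still inverts $1-T$.

```python
import numpy as np

# A matrix T with ||T|| > 1, yet ||T^N|| < 1 for some N (its only eigenvalue is 0.5):
T = np.array([[0.5, 10.0],
              [0.0, 0.5]])

# Find the first N with ||T^N|| < 1 in operator (spectral) norm:
N = next(k for k in range(1, 100)
         if np.linalg.norm(np.linalg.matrix_power(T, k), 2) < 1)

# The Neumann series sum_j T^j still converges to (1 - T)^{-1}:
S = sum(np.linalg.matrix_power(T, j) for j in range(500))
target = np.linalg.inv(np.eye(2) - T)
```

Here $T$ plays the role of $h'_0(\omega(b_0))$: although $\|T\|>1$, the truncated sums of powers converge to the inverse of $1-T$.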
All of the above has been established under the assumption that a fixed
point $\omega(b_0)$ exists. We have not proved its existence, though.
Relation \eqref{strict} would allow us easily to prove such an existence
along the lines of the above proof if we could somehow guarantee
the boundedness of the iterates $\{h_0^{\circ N}(a)\}_{N\in\mathbb N}$
for some given $a\in H^+(B)$. Unfortunately, this does not seem possible
to do in a direct way. Thus, we show the existence of the fixed point
$\omega(b_0)$ by a perturbative argument, most of which is
contained in the following proposition, which, we believe, might be
of independent interest. Define
\begin{equation}\label{kazero}
k_0(a)=-h_0(-a^{-1})^{-1},\quad a\in H^+(B).
\end{equation}
As $\Im h_0(a)>\varepsilon_01$ for $a\in H^+(B)$, it follows that
$k_0(H^+(B))\subseteq\{w\colon\|w-i(2\varepsilon_0)^{-1}1\|<
(2\varepsilon_0)^{-1}\},$ the noncommutative ball centered at
an imaginary multiple of the identity.
\begin{prop}\label{horro}
For any $a\in H^+(B)\cup\{0\}$, the fixed-point equation $x=a+k_0(x)$
has a unique solution $x(a)$ in $H^+(B)$. $x(a)$ is a noncommutative
function of $a$ whenever $a\in H^+(B)$, and $x(0_m\oplus0_n)=
x(0_m)\oplus x(0_n)$ for all $m,n\in\mathbb N$.
\end{prop}
\begin{proof}
Note that the set $a+k_0(H^+(B))$ is bounded and bounded
away from the complement of $H^+(B)$. Thus, the argument
used above allows us to conclude the existence, uniqueness
and analyticity of $x$ on $H^+(B)$. The existence of $x(0)$
in $H^+(B)$ is the only difficult part of the proof. For this,
we shall use some results from \cite{B-CAOT}, specifically
Proposition 3.1, Remark 3.2(2), and Corollary 3.3, together with the definition
of a noncommutative version of horodisks in the noncommutative
upper half-plane (see \cite[Relation (22)]{B-CAOT}). These results
have been formulated for functions of a slightly different nature,
but it is very easy to see that all elements of the proofs involved
adapt to bounded functions like $k_0$ which satisfy $k_0(a^*)^*=k_0(a)$.
We claim that $x(H^+(B))=\{m+in\colon m=m^*,n>\Im k_0(m+in)\}$.
Since $x(a)=a+k_0(x(a))$, the inclusion $\subseteq$ is quite obvious.
To prove $\supseteq$, recall that the map $B^{\rm sa}\ni p\mapsto
\Re x(p+iq)\in B^{\rm sa}$ is a bijection for any given $q>0$ (see
\cite[Corollary 3.3]{B-CAOT}). We also know that there exists a
smooth function $g_q\colon B^{\rm sa}\to\{b\in B\colon\Im b>0\}$
such that $g_q(\Re x(p+iq))=\Im x(p+iq)$. In particular, for any
$m\in B^{\rm sa}$, there exists a unique $n>0$ such that $g_q(m)
=n$: we have
$$
m+in=x(p+iq)=p+iq+k_0(x(p+iq))=p+iq+k_0(m+in),
$$
so that $p=m-\Re k_0(m+in), q=n-\Im k_0(m+in)$. This proves $\supseteq$.
Since $k_0(H^+(B))$ is bounded, it follows that for any pair
$m=m^*,n>0$, we have $yn>\Im k_0(m+iyn)$ for all
sufficiently large $y\in(0,+\infty)$. Thus, we may define
$$
0\leq t_{m,n}=\inf\{y>0\colon sn>\Im k_0(m+isn)\text{ for all }s>y\}.
$$
We argue that for all $s>t_{m,n}$, we have $sn>\Im k_0(m+isn)$, and
for all $0<s\le t_{m,n}$, we have $sn\not>\Im k_0(m+isn)$.
The argument is virtually identical to the one in \cite[Lemma 5.8]{BC}
and is based on related works in the case of scalar, classical distributions
by Biane \cite{Biane} and by Huang \cite{Huang}, so we will only sketch it. We consider the map
$\mathbb C^+\ni z\mapsto\varphi(m+zn-k_0(m+zn))\in\mathbb C$
for an arbitrary state $\varphi$. If $H(z)=\frac{\varphi(m)}{\varphi(n)}
+z-\frac{\varphi(k_0(m+zn))}{\varphi(n)}$, then $\lim_{y\to+\infty}
\frac{H(iy)}{iy}=1$ and $\Im H(z)\leq\Im z$. Then Huang's version
\cite[Section 3]{Huang} of Biane's results \cite[Lemmas 2 and 4]{Biane}
applies to $H$ to guarantee that if $\frac{\Im\varphi(k_0(m+iy_0n))}{\varphi(n)}
\ge y_0$, then $\frac{\Im\varphi(k_0(m+iyn))}{\varphi(n)}\ge y$ for all
$y\in(0,y_0]$. Since this holds for any state $\varphi$, our claim follows.
Obviously, there are two possibilities: either $t_{m,n}>0$ or $t_{m,n}=0$.
Consider first the case when $t_{m,n}=0$. Pick a state $\varphi$ on $B$
and $n'>0$. We have
$$
\left\|(yn)^{-\frac12}(yn-yn')(yn')^{-\frac12}\right\|^2\ge
\frac{|\varphi(k_0(m+iyn))-\varphi(k_0(m+iyn'))|^2}{\Im\varphi(k_0(m+iyn))\Im\varphi(k_0(m+iyn'))},
$$
which in turn implies
$$
\left\|n^{-\frac12}(n-n')(n')^{-\frac12}\right\|^2\ge
\left|\sqrt{\frac{\Im\varphi(k_0(m+iyn))}{\Im\varphi(k_0(m+iyn'))}}-
\sqrt{\frac{\Im\varphi(k_0(m+iyn'))}{\Im\varphi(k_0(m+iyn))}}\right|^2.
$$
As $t_{m,n}=0$, we have $\frac{\varphi(\Im k_0(m+iyn))}{y}<\varphi(n)$
for all $y>0$, so that necessarily
\begin{eqnarray*}
\lefteqn{0\leq\liminf_{y\to0}\frac{\Im\varphi(k_0(m+iyn'))}{y}}\\
& \leq &
\varphi(n)\frac{2+\left\|n^{-\frac12}(n-n')(n')^{-\frac12}\right\|^2
+\sqrt{\left(2+\left\|n^{-\frac12}(n-n')(n')^{-\frac12}\right\|^2\right)^2-4}}{2}.
\end{eqnarray*}
As this holds for any $n'>0$ and any state $\varphi$ on $B$,
we conclude that $k_0$ satisfies the hypotheses of \cite[Theorem 2.3]{JLMS}.
Thus, if there exists a pair $m=m^*,n>0$ such that $t_{m,n}=0$, then for any $n'>0$,
$$
\lim_{y\to0}k_0(m+iyn')=\alpha=\alpha^*
$$
exists in the norm topology. However, observe that since
$k_0(H^+(B))\subseteq\{w\colon\|w-i(2\varepsilon_0)^{-1}1\|<
(2\varepsilon_0)^{-1}\},$ and the limit is in norm, we must have
$\alpha=0$.
Now consider the case when $t_{m,n}>0$. As seen above, for any $y>0$,
there exist $p_y=m-\Re k_0(m+iyn),q_y=yn-\Im k_0(m+iyn)$ such that
$x(p_y+iq_y)=p_y+iq_y+k_0(x(p_y+iq_y))=p_y+iq_y+k_0(m+iyn)$ (in particular, $q_y>0$).
In particular, $\lim_{y\to t_{m,n}}x(p_y+iq_y)=m+it_{m,n}n\in H^+(B)$.
Simple continuity guarantees that $x(p_{t_{m,n}}+iq_{t_{m,n}})=
p_{t_{m,n}}+iq_{t_{m,n}}+k_0(x(p_{t_{m,n}}+iq_{t_{m,n}}))$. Thus,
$H^+(B)\ni w\mapsto p_{t_{m,n}}+iq_{t_{m,n}}+k_0(w)\in H^+(B)$
has a fixed point in $H^+(B)$. Since the range of this map is bounded
in the unbounded set $H^+(B)$, the fixed point is necessarily unique
and attracting (indeed, one can apply the argument from the first part of the proof
of Theorem \ref{semigr}, for example, to the map
$\{w+p_{t_{m,n}}+iq_{t_{m,n}}\colon\|w-i\varepsilon_0^{-1}1\|<
\varepsilon_0^{-1}\}\ni w\mapsto p_{t_{m,n}}+iq_{t_{m,n}}+k_0(w)\in
\{w\colon\|w-i(2\varepsilon_0)^{-1}1\|<(2\varepsilon_0)^{-1}\}$
to conclude uniqueness and norm-convergence of iterates to the
fixed point, or one can appeal to Proposition \ref{ka}). Thus, $x$ extends to a norm-neighbourhood
of $p_{t_{m,n}}+iq_{t_{m,n}}$.
To summarize: either $t_{m,n}=0$, and then $k_0$ has a Julia-Carath\'eodory
derivative at $m$, and $\lim_{y\to0}k_0(m+iyn')=0$ in norm for all $n'>0$,
or $t_{m,n}>0$, and then $x$ extends analytically around
$p_{t_{m,n}}+iq_{t_{m,n}}=m-\Re k_0(m+it_{m,n}n)+i
(t_{m,n}n-\Im k_0(m+it_{m,n}n))$. We apply this to $m=0$.
Assume towards contradiction that $t_{0,n}=0$ for some $n>0$.
Recall from \cite[Relation (22)]{B-CAOT} the definition of the
pseudo-horodisks at zero in ``direction'' $n$ (with $n$
normalized so that $\|n\|=1$):
\begin{eqnarray*}
\mathcal H(0,n) & = & \{w\in H^+(B)\colon(w-0)^*(\Im w)^{-1}(w-0)\leq n\}\\
& = & \{w\in H^+(B)\colon n^{-1/2}\Im wn^{-1/2}+n^{-1/2}\Re w(\Im w)^{-1}\Re wn^{-1/2}\leq1\},
\end{eqnarray*}
and
\begin{eqnarray*}
\mathring{\mathcal H}(0,n) & = & \{w\in H^+(B)\colon(w-0)^*(\Im w)^{-1}(w-0)<n\}\\
& = & \{w\in H^+(B)\colon n^{-1/2}\Im wn^{-1/2}+n^{-1/2}\Re w(\Im w)^{-1}\Re wn^{-1/2}<1\}.
\end{eqnarray*}
Note that the only selfadjoint element in $\mathcal H(0,n)$ is zero. Indeed,
by definition, if $w\in\mathcal H(0,n),$ then $n\ge\Re w(\Im w)^{-1}\Re w+\Im w$, so that if
$\|\Im w\|\to0$, then necessarily $\|\Re w\|\to0$ (in fact one can easily obtain the estimate
$\Im w>\Re w n^{-1}\Re w\ge\frac{(\Re w)^2}{\|n\|}=(\Re w)^2$). Consider
$B(iyn,y^{-1/2}),y>0$, with $B$ defined in relation \eqref{pseudoball}. We have:
$$
\mathring{\mathcal H}(0,n)\subseteq\bigcap_{0<t<1}\bigcup_{0<y<t}B(iyn,y^{-1/2})\subseteq
{\mathcal H}(0,n).
$$
This has been shown in \cite{B-CAOT}, but we will provide a sketch of the proof below.
Thus, assume towards contradiction that $a\in\mathring{\mathcal H}(0,n)$, but
$a\not\in\bigcap_{0<t<1}\bigcup_{0<y<t}B(iyn,y^{-1/2})$. Then there exists
a $t_0\in(0,1)$ such that $a\not\in B(iyn,y^{-1/2})$ for any $y\in(0,t_0)$. That is,
$(a-iyn)^*(\Im a)^{-1}(a-iyn)\not\leq n$ for all $y\in(0,t_0)$. At the same time,
there exists an $\epsilon_{a,n}\in(0,+\infty)$ such that $a^*(\Im a)^{-1}a\leq n-
\epsilon_{a,n}\cdot1$. However, $(a-iyn)^*(\Im a)^{-1}(a-iyn)=a^*(\Im a)^{-1}a
+y(in(\Im a)^{-1}a-ia^*(\Im a)^{-1}n+yn(\Im a)^{-1}n)\leq a^*(\Im a)^{-1}a
+y(2\|n\|\|(\Im a)^{-1}\|\|a\|+y\|n\|^2\|(\Im a)^{-1}\|)<a^*(\Im a)^{-1}a+
\epsilon_{a,n}\cdot1\leq n$ for all $y\in(0,\sqrt{\|a\|^2+\epsilon_{a,n}\|(\Im a)^{-1}\|^{-1}}
-\|a\|)$ (recall that $\|n\|=1$). This is a contradiction. Thus the first inclusion holds.
The second inclusion is equally simple: $a\in B(iy_jn,y_j^{-1/2})$ for some sequence
$y_j$ decreasing to zero is equivalent to
$a^*(\Im a)^{-1}a+y_j(in(\Im a)^{-1}a-ia^*(\Im a)^{-1}n+y_jn(\Im a)^{-1}n)
\leq n$ for all $j\in\mathbb N$, which implies $a^*(\Im a)^{-1}a\leq n$, that is,
$a\in\mathcal H(0,n)$. We have
\begin{equation}\label{something}
k_0\left(\mathring{\mathcal H}(0,n)\right)\subseteq
k_0\left(\bigcap_{0<t<1}\bigcup_{0<y<t}B(iyn,y^{-1/2})\right)
\end{equation}
$$
\subseteq\bigcap_{0<t<1}k_0\left(\bigcup_{0<y<t}B(iyn,y^{-1/2})\right)
=\bigcap_{0<t<1}\bigcup_{0<y<t}k_0(B(iyn,y^{-1/2})).
$$
Recall that $iyn=x(p_y+iq_y)=p_y+iq_y+k_0(x(p_y+iq_y))=p_y+iq_y+k_0(iyn)$,
that is, $iyn=x(p_y+iq_y)$ is a fixed point for $w\mapsto p_y+iq_y+k_0(w)$.
Thus,
\begin{equation}\label{someotherthing}
k_0(B(iyn,y^{-1/2}))\subseteq B(iyn,y^{-1/2})-(p_y+iq_y).
\end{equation}
We have seen that $p_y=-\Re k_0(iyn)$ tends to zero in norm (in fact
$\|p_y/y\|$ is bounded as $y\to0$), and $q_y=yn-\Im k_0(iyn)\to0$ in norm
as $y\to0$ (in fact, $\|q_y/y\|$ is uniformly bounded for $y\in(0,1)$).
We claim that
$$
\bigcap_{0<t<1}\bigcup_{0<y<t}(B(iyn,y^{-1/2})-(p_y+iq_y))\subseteq\mathcal H(0,n).
$$
Assume that is not the case. Then there exists
$$
a_0\in\bigcap_{0<t<1}\bigcup_{0<y<t}(B(iyn,y^{-1/2})-(p_y+iq_y))\setminus\mathcal H(0,n),
$$
that is, for all $t\in(0,1)$, there exists $0<y<t$ such that $a_0\in B(iyn,y^{-1/2})-(p_y+iq_y)$,
and yet $a^*_0(\Im a_0)^{-1}a_0\not\leq n$. So (representing $B$ on a Hilbert space
via the GNS construction), there exists a unit vector $\xi$ and a number $\eta>0$
such that
\begin{equation}\label{et}
\langle(\Im a_0)^{-1}a_0\xi,a_0\xi\rangle>\langle n\xi,\xi\rangle+\eta
\end{equation}
and $a_0=\alpha_0-p_y-iq_y$, where $(\alpha_0-iyn)^*(\Im\alpha_0)^{-1}(\alpha_0-iyn)
\leq n$. Thus, we found a sequence $\{y_j\}_{j\in\mathbb N}$ decreasing to zero
such that
$$
(a_0+p_{y_j}+iq_{y_j}-iy_jn)^*(\Im a_0+q_{y_j})^{-1}(a_0+p_{y_j}+iq_{y_j}-iy_jn)\leq n;
$$
in particular,
$$
\left\langle (\Im a_0+q_{y_j})^{-1}(a_0+p_{y_j}+iq_{y_j}-iy_jn)\xi,
(a_0+p_{y_j}+iq_{y_j}-iy_jn)\xi\right\rangle\leq\langle n\xi,\xi\rangle.
$$
Expanding, we obtain
\begin{eqnarray}
\lefteqn{\left\langle (\Im a_0+q_{y_j})^{-1}a_0\xi,a_0\xi\right\rangle+2\Re\left\langle
(\Im a_0+q_{y_j})^{-1}a_0\xi,(p_{y_j}+iq_{y_j}-iy_jn)\xi\right\rangle}\nonumber\\
& & \mbox{}+
\left\langle(\Im a_0+q_{y_j})^{-1}(p_{y_j}+iq_{y_j}-iy_jn)\xi,(p_{y_j}+iq_{y_j}-iy_jn)\xi\right\rangle
\nonumber\\
& \leq & \langle n\xi,\xi\rangle.\label{lang}
\end{eqnarray}
From \eqref{et} and \eqref{lang} together we obtain (by cancelling $\langle n\xi,\xi\rangle$)
\begin{eqnarray}
\lefteqn{\left\langle(\Im a_0)^{-1}a_0\xi,a_0\xi\right\rangle-\eta}\nonumber\\
& > & \left\langle (\Im a_0+q_{y_j})^{-1}a_0\xi,a_0\xi\right\rangle+2\Re\left\langle
(\Im a_0+q_{y_j})^{-1}a_0\xi,(p_{y_j}+iq_{y_j}-iy_jn)\xi\right\rangle\nonumber\\
& & \mbox{}+
\left\langle(\Im a_0+q_{y_j})^{-1}(p_{y_j}+iq_{y_j}-iy_jn)\xi,(p_{y_j}+iq_{y_j}-iy_jn)\xi\right\rangle.
\nonumber
\end{eqnarray}
We re-arrange this relation to get
\begin{eqnarray}
\lefteqn{\left\langle(\Im a_0)^{-1}a_0\xi,a_0\xi\right\rangle-
\left\langle (\Im a_0+q_{y_j})^{-1}a_0\xi,a_0\xi\right\rangle-\eta}\nonumber\\
& = & \left\langle (\Im a_0+q_{y_j})^{-1}q_{y_j}(\Im a_0)^{-1}a_0\xi,a_0\xi\right\rangle-\eta\nonumber\\
& > & 2\Re\left\langle
(\Im a_0+q_{y_j})^{-1}a_0\xi,(p_{y_j}+iq_{y_j}-iy_jn)\xi\right\rangle\nonumber\\
& & \mbox{}+\left\langle(\Im a_0+q_{y_j})^{-1}(p_{y_j}+iq_{y_j}-iy_jn)\xi,(p_{y_j}+iq_{y_j}-iy_jn)\xi
\right\rangle.
\nonumber
\end{eqnarray}
Since $\lim_{j\to\infty}\|q_{y_j}\|=\lim_{j\to\infty}\|p_{y_j}\|=\lim_{j\to\infty}y_j=0$, when we take
the limit as $j\to\infty$ in the above inequality, we obtain $-\eta\ge0$, a contradiction. Thus,
$$
\bigcap_{0<t<1}\bigcup_{0<y<t}(B(iyn,y^{-1/2})-(p_y+iq_y))\subseteq\mathcal H(0,n).
$$
Combining this with relations \eqref{something} and \eqref{someotherthing},
we obtain
$$
k_0(\mathcal H(0,n))\subseteq\mathcal H(0,n).
$$
Quite trivially,
\begin{eqnarray*}
& & a\in\mathcal H(0,n)\iff n^{-1/2}a^*(\Im a)^{-1}an^{-1/2}\le1\\
& & \iff n^{-1/2}\Im an^{-1/2}+n^{-1/2}\Re a(\Im a)^{-1}\Re an^{-1/2}\le 1\\
& & \iff (\Im a+\Re a(\Im a)^{-1}\Re a)^{-1}\ge n^{-1}\\
& & \iff \Im(-a^{-1})\ge n^{-1}.
\end{eqnarray*}
Thus, $\mathcal H(0,n)$ is mapped bijectively (and as a noncommutative set)
onto the set $\{a\in H^+(B)\colon \Im a\ge n^{-1}\}$ by the involutive correspondence
$a\mapsto -a^{-1}.$ By the definition of $k_0$ (see \eqref{kazero}), it follows that
$h(\{a\in H^+(B)\colon \Im a\ge n^{-1}\})\subseteq\{a\in H^+(B)\colon \Im a\ge n^{-1}\}$.
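For instance, in the scalar case $B=\mathbb C$ and for $n>0$, a point $a=x+iy\in\mathbb C^+$ satisfies
$$
a\in\mathcal H(0,n)\iff\frac{x^2+y^2}{y}\leq n\iff\Im\left(-\frac{1}{a}\right)=\frac{y}{x^2+y^2}\ge\frac{1}{n},
$$
and $\{x+iy\colon x^2+y^2\leq ny\}$ is the closed disc of radius $n/2$ centered at $in/2$, tangent to the real line at $0$, that is, the classical horodisk at the boundary point $0$.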
However, our hypothesis on $h$ states that $\lim_{y\to\infty}\frac{\langle\Im h(\Re a+iy\Im a)\xi,\xi
$\rangle}{y}=0$ for any $a\in H^+(B)$ and unit vector $\xi$. This means that given $u=u^*,v>0$, there
exists a $y>0$ depending on $u$, $v$ and $\xi$ such that $\langle\Im h(u+iyv)\xi,\xi\rangle<y\langle
v\xi,\xi\rangle/2.$ But $\Im h(u+iyv)\ge\Im(u+iyv)=yv$, a contradiction. This concludes the proof of our
proposition.
\end{proof}
Proposition \ref{horro} shows that the map $w\mapsto b_0+h(w)$ has an attracting fixed point
in $H^+(B)$. The results of \cite{AKV} allow us to conclude the proof of Theorem \ref{semigr}.
\end{proof}
In order to argue that Theorem \ref{semigr} solves to a certain extent the problem of
defining free convolution powers of unbounded selfadjoint random variables, let
us show that if $X=X^*\in M$, then $h_X(b)=E[(b-X)^{-1}]^{-1}-b$ satisfies the
hypothesis of Theorem \ref{semigr}. Fix $b=u+iv$, $u=u^*,v>0$.
Then
\begin{eqnarray*}
h_X(u+zv) & = & E\left[(u-X+zv)^{-1}\right]^{-1}-u-zv\\
& = & v^{1/2}E\left[\left(z+v^{-1/2}(u-X)v^{-1/2}\right)^{-1}\right]^{-1}v^{1/2}-u-zv\\
& = & v^\frac12\left\{E\left[\left(z+v^{-\frac12}(u-X)v^{-\frac12}\right)^{-1}\right]^{-1}-z-
v^{-1/2}uv^{-1/2}\right\} v^\frac12.
\end{eqnarray*}
We argue that $h_X$ satisfies the hypothesis of Theorem \ref{semigr}. Via
a polarization argument, it suffices to show that
$\lim_{y\to+\infty}\frac{\langle\Im h_X(u+iyv)\xi,\xi\rangle}{y}=0$.
First, since $\Im (m+in)^{-1}=-(mn^{-1}m+n)^{-1},$ $\Re(m+in)^{-1}
=n^{-1}m(mn^{-1}m+n)^{-1}$, we have
$$
\Im E\left[\frac{1}{z-Y}\right]=-E\left[\frac{y}{y^2+(x-Y)^2}\right]<0,\quad
\Re E\left[\frac{1}{z-Y}\right]=E\left[\frac{x-Y}{y^2+(x-Y)^2}\right],
$$
where $z=x+iy$. Thus,
\begin{eqnarray*}
\lefteqn{\Im E\left[\left(z-Y\right)^{-1}\right]^{-1}=}\\
& & \left\{E\left[\frac{x-Y}{y^2+(x-Y)^2}\right]E\left[\frac{y}{y^2+(x-Y)^2}\right]^{-1}
E\left[\frac{x-Y}{y^2+(x-Y)^2}\right]\right.\\
& & \left.\mbox{}+E\left[\frac{y}{y^2+(x-Y)^2}\right]\right\}^{-1}\\
& \leq & E\left[\frac{y}{y^2+(x-Y)^2}\right]^{-1},
\end{eqnarray*}
which makes
$$
\Im E\left[\left(z-Y\right)^{-1}\right]^{-1}-y
\leq y\left(E\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}-1\right).
$$
Dividing by $y$ provides us with the majorizing term $E\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}-1$.
This, as a function of $y$, is decreasing, as can be seen by taking the (classical) derivative
with respect to $y$:
\begin{eqnarray*}
\lefteqn{
\partial_yE\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}=}\\
& & \mbox{}-E\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}
E\left[\frac{2y(x-Y)^2}{(y^2+(x-Y)^2)^2}\right]E\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}\leq0.
\end{eqnarray*}
Thus, $E\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}-1$ is a decreasing function of $y$.
If it does not decrease to zero, then there exists a positive operator $0\neq c\ge0$
which belongs to the universal enveloping von Neumann algebra of $B$
such that $\lim_{y\to\infty}E\left[\frac{y^2}{y^2+(x-Y)^2}\right]^{-1}-1=c$
in the weak operator topology.
Multiplying left and right by $(1+c)^{-1/2}$ allows us to conclude that
$(1+c)^{1/2}E\left[\left(z-Y\right)^{-1}\right](1+c)^{1/2}$
belongs to the norm-ball of center $-i/(2y)$ and radius $1/(2y)$. Taking the imaginary part
and multiplying by $y$ yields
$$
\lim_{y\to\infty}(1+c)^{1/2}E\left[\frac{y^2}{y^2+(x-Y)^2}\right](1+c)^{1/2}=1
$$
in the wo-topology. Thus\footnote{We use here that if $0<b_j^{-1}$ decreases to 1,
then $0<b_j$ increases to 1; this can be seen by evaluating
$\langle(1-b_j)^{1/2}\xi,\xi\rangle^2=\langle b_j^{1/2}(b_j^{-1}-1)^{1/2}\xi,\xi\rangle^2
\leq\langle b_j\xi,\xi\rangle\langle(b_j^{-1}-1)\xi,\xi\rangle$.}
$$
\lim_{y\to\infty}E\left[\frac{y^2}{y^2+(x-Y)^2}\right]=(1+c)^{-1}.
$$
Composing this with any wo-continuous state $\varphi$ on the universal
enveloping algebra of $B$ provides us with a state $\varphi\circ E$ on $M$
with respect to which the distribution of $Y$ is not a probability, contradicting (H1).
\begin{cor}
Under hypotheses {\rm (H1)} and {\rm (H2)}, if the distribution of $X$ is encoded by the
restriction of $E[(b-X)^{-1}]$ to $H^+(B)$, then $\mu_X^{\boxplus\rho}$ is well-defined for all
cp maps $\rho\colon B\to B$ such that $\rho-{\rm Id}_B$ is still cp.
\end{cor}
\begin{proof}
Apply Theorem \ref{semigr} to $h(w)=(\rho-
\mathrm{Id}_B)h_{X,n}(w)$.
\end{proof}
Let us briefly comment on the Nevanlinna representation of
$h_X$. If $X\in M$, results of \cite{PV} guarantee the
existence of an extension of $B$ in which there exists a bounded
selfadjoint element $\mathcal X$ and of a completely positive map
$\rho\colon B\langle\mathcal X\rangle\to B$ such that
$h_X(b)=-E[X]+\rho\left[(\mathcal X-b)^{-1}\right]$, $b\in H^+(B)$.
As in the case of the classical Nevanlinna representation, for unbounded
operators $X$, $\rho$ is no longer the appropriate cp map. We
define $\eta\colon B\langle\mathcal X\rangle\to B$ by $\eta[a]=
\rho\left[(\mathcal X-i)^{-1}a(\mathcal X+i)^{-1}\right].$ The
correspondence becomes now
$$
h_X(b)=\Re h_X(i)+\eta\left[(\mathcal X-b)^{-1}+b+b(\mathcal X-b)^{-1}b\right],\quad\Im b>0.
$$
Observe that indeed $\Im h_X(i)=\eta[1]$. Rewriting this map as
$$
h_X(b)=
\Re h_X(i)+\eta\left[(\mathcal X-b)^{-1}-\mathcal X+\mathcal X(\mathcal X-b)^{-1}\mathcal X\right]
$$
makes it clear that it maps $H^+(B)$ into its closure.
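As a simple illustration in the scalar case, take $B=\mathbb{C}$ and let $X$ take the values $\pm1$ with probability $1/2$ each. Then
$$
E\left[(b-X)^{-1}\right]=\frac{1}{2}\left(\frac{1}{b-1}+\frac{1}{b+1}\right)=\frac{b}{b^{2}-1},
\qquad
h_X(b)=\frac{b^{2}-1}{b}-b=-\frac{1}{b},
$$
so for $b=u+iv$ with $v>0$ we have $\Im h_X(b)=v/(u^{2}+v^{2})>0$, and $h_X$ indeed maps the upper half-plane into its closure.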
TITLE: Choice function to obtain all elements of an infinite set
QUESTION [0 upvotes]: I'm following a proof of "Axiom of Choice implies Well Ordering Theorem" (from a not-well-known book we use for class), and the author uses this procedure:
First, he takes any non-empty set $X$ and defines the relation $\le$ so that for every $b \in X$, the following is true:
$b=f(X-\{ x\in X, x \neq b / x \le b \}) = \text{least element of }(X-\{ x \in X, x \neq b / x \le b \})$
Where $f$ is a choice function. In other words, the idea is to get the $f$ to select some element of $X$, and we call this the least element. Then we do the same for $X$ minus the elements we already ordered. And so on.
It appears like this procedure gets all the elements of a set, but the book then mentions that we can't guarantee this is the case for infinite sets. That's why the author uses the following method:
We consider subsets $B$ of $X$ whose elements have already been ordered by $f$, meaning $(B,\le_B)$ is a well-ordered set, such that for all $b\in B$:
$ b=f(X-\{ x \in B, x \neq b / x \le_B b \})$
($\le_B$ is the order relation induced in $B$ by $\le$)
We then get the union of all these sets and we get $(X,\le)$, using 2 more pages for this part and the complete proof in general.
My question is: why isn't it guaranteed that we will get all the elements of $X$ using the original procedure? Why should we create these subsets and then take their union?
REPLY [2 votes]: Taken literally, the question has an easy (but probably unsatisfying) answer: The subsets that the proof creates and forms the union of are, as you wrote, all the well-ordered sets $(B,\leq_B)$ such that each element $b$ of $B$ is the result of applying the choice function $f$ to the complement of the set of strict predecessors of $b$. One such $B$ is the empty set. Another is the singleton $\{a\}$ where $a=f(X)$. And there are (for reasonably large $X$) lots of other $B$'s that are nowhere near all of $X$.
Now let me take the question less literally, so as to address what you probably meant. My guess is that you intended $B$ to be not just any old well-ordered set of the sort described above, but rather the one you get by continuing as long as possible. That is, as long as what you've put into $B$ so far isn't all of $X$, you should apply $f$ to the rest of $X$ to select an element that you then put into $B$ as the next element. You stop only when your well-ordering has exhausted $X$.
That's the correct intuition, but it isn't mathematically precise. It uses concepts like "put into $B$", "so far", "continue"; these would need to be defined (and some of their basic properties would have to be proved) in order to convert the idea into a real proof. There are some serious problems in a direct attempt to make these concepts precise. For example, the concepts seem to involve a notion of time, but the cardinality of $X$ might exceed the cardinality of the real line, so this notion of time couldn't be the usual, physical one.
The idea of taking the union of all the possible $B$'s, including those that, like my examples in the first paragraph, are obviously "too short", is to provide a mathematically precise substitute for these temporal notions. The key is that, among all the possible $B$'s, the "right" one is the biggest one. Since all the possible $B$'s (the right one and the too-short ones) form a linearly ordered chain, you can get the biggest one by taking the union of them all.
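The informal "keep applying $f$ to what's left" recipe from the second paragraph really does work for finite sets, and can be sketched in a few lines (an illustrative addition; the whole difficulty discussed above is that this loop has no counterpart for arbitrary infinite $X$):

```python
def well_order(X, f):
    """Order the finite set X by repeatedly applying the choice function f
    to the not-yet-ordered remainder, as in the naive procedure above."""
    remaining = set(X)
    order = []
    while remaining:
        chosen = f(remaining)     # f selects the next "least element"
        order.append(chosen)
        remaining.remove(chosen)
    return order

# Any function picking an element of a nonempty set works as f;
# min is used here only to make the result reproducible.
assert well_order({3, 1, 2}, min) == [1, 2, 3]
```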
\section{Related Work}
\label{sec:related_work}
This work is rooted in the broader literature on surrogate methods for speeding up simulations and solutions of dynamical systems \citep{grzeszczuk1998neuroanimator,james2003precomputing,gorissen2010surrogate}. Unlike these approaches, we investigate a methodology that enables faster solutions in a downstream, online optimization problem, which may involve a mismatch with respect to the data seen during pre--training. We achieve this through the application of the \textit{hypersolver} \citep{poli2020hypersolvers} paradigm.
Modeling mismatches between approximate and nominal models is explored in \citep{saveriano2017residualdynamics} where residual dynamics are learned efficiently along with the control policy while \citep{fisac2018uncertainrobotics, taylor2019learningsafetycritical} model systems uncertainties in the context of safety--critical control. In contrast to previous work, we model uncertainties with the proposed multi--stage hypersolver approach by closely interacting with the underlying ODE base solvers and their residuals to improve solution accuracy.
The synergy between machine learning and optimal control continues a long line of research on introducing neural networks in optimal control \citep{hunt1992neuralautomatica}, applied to modeling \citep{lin1995new}, identification \citep{chu1990neural} or parametrization of the controller itself \citep{lin1991neural}. Existing surrogate methods for systems \citep{grzeszczuk1998neuroanimator,james2003precomputing} pay a computational cost upfront to accelerate downstream simulation. However, ensuring transfer from offline optimization to the online setting is still an open problem. In our approach, we investigate several strategies for an accurate offline--online transfer of a given hypersolver, depending on desiderata on its performance in terms of average residuals and error propagation on the online application. Beyond hypersolvers, our approach further leverages the latest advances in hardware and machine learning software \citep{paszke2019pytorch} by solving thousands of ODEs in parallel on \textit{graphics processing units (GPUs)}.
TITLE: Why does $\sum_{k=1}^\infty (ζ[2k+1]-1)=\frac{1}{4}$
QUESTION [2 upvotes]: Can someone explain why
$$\sum_{k=1}^\infty (ζ[2k+1]-1)=\frac{1}{4}?$$
REPLY [1 votes]: Another approach. Since:
$$\zeta(2k+1) = \int_{0}^{+\infty}\frac{x^{2k}}{(2k)!}\cdot\frac{1}{e^x-1}\,dx,\qquad 1=\int_{0}^{+\infty}\frac{x^{2k}}{(2k)!}\cdot\frac{1}{e^x}\,dx\tag{1}$$
we have:
$$ \begin{eqnarray*}\sum_{k\geq 1}\left(\zeta(2k+1)-1\right)&=&\int_{0}^{+\infty}\frac{1}{e^{2x}-e^{x}}\sum_{k\geq 1}\frac{x^{2k}}{(2k)!}\,dx \\&=&\int_{0}^{+\infty}\frac{\cosh(x)-1}{e^{2x}-e^x}\,dx\\&=&\frac{1}{2}\int_{0}^{+\infty}\left(e^{-x}-e^{-2x}\right)\,dx\\&=&\frac{1}{2}\cdot\left(1-\frac{1}{2}\right)=\color{red}{\frac{1}{4}}.\tag{2}\end{eqnarray*}$$
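A quick numerical sanity check (an addition): writing $\zeta(2k+1)-1=\sum_{n\ge2}n^{-(2k+1)}$ and swapping the order of summation gives $\sum_{n\ge2}\frac{1}{n(n^2-1)}$, a rapidly convergent series with the same value.

```python
# Partial sum of sum_{n>=2} 1/(n(n^2-1)), which equals
# sum_{k>=1} (zeta(2k+1) - 1); the tail beyond N is about 1/(2N^2).
total = sum(1.0 / (n * (n * n - 1)) for n in range(2, 10**5))
assert abs(total - 0.25) < 1e-9
```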
TITLE: Inequality Question
QUESTION [3 upvotes]: Assume that $a_1, \dots,a_n $ and $b_1, \dots,b_n$ are $2n$ non-negative real numbers.
We have $$\sum_{i=1}^na_i = \sum_{i=1}^nb_i$$
We're to prove that $$\sqrt2 \sum_{i=1}^n (\sqrt{a_i}-\sqrt {b_i})^2 \ge \sum_{i=1}^n|a_i-b_i|.$$ Can anyone help!
I encountered it while I was browsing the olympiad section of Art of Problem Solving and found it interesting. Since my olympiads are very near, I tried to solve this inequality but failed to do so.
I tried to apply the AM-GM-HM inequality, but it doesn't work here; I also tried Cauchy-Schwarz and Chebyshev's inequality, but with no success. I just can't figure out what to keep as variables in the formulae stated above.
REPLY [7 votes]: If $n=2$ and $a_1=b_2=100, a_2=b_1=121$, then the inequality becomes $2\sqrt{2}\ge 42$, which is false. So the inequality does not actually hold.
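The counterexample is easy to verify numerically (a quick added check, not part of the original answer):

```python
import math

# The claimed counterexample: n = 2, a = (100, 121), b = (121, 100).
a = [100, 121]
b = [121, 100]
assert sum(a) == sum(b)  # hypothesis: sum a_i = sum b_i

lhs = math.sqrt(2) * sum((math.sqrt(x) - math.sqrt(y)) ** 2 for x, y in zip(a, b))
rhs = sum(abs(x - y) for x, y in zip(a, b))

assert abs(lhs - 2 * math.sqrt(2)) < 1e-12  # lhs = 2*sqrt(2)
assert rhs == 42
assert lhs < rhs  # the proposed inequality fails here
```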
TITLE: Meaning of "Heat engine working between two Temperatures"
QUESTION [0 upvotes]: According to Carnot Theorem:
"Of all engines working between two given temperature, none is more efficient than a carnot engine."
I want to know what is actually meant by an engine working between two temperatures. Is it necessary that the engine follow the Carnot cycle (four processes), or would any other number of cyclic processes do the same?
My Answer:
"I think any engine would do the same because in the proof they do not use the fact that the engine (other than carnot) is not necessarily reversible!"
Now need a clarification!
REPLY [1 votes]: Heat flows from higher temperature regions to lower temperature regions. A heat engine is a device which extracts useful work from such a heat flow while it is flowing from one region to the other. This is what the "two given temperatures" refers to.
Carnot proved that regardless of the details of the engine's operation, the temperature difference between the hotter source and the colder sink establishes the maximum theoretical efficiency of the engine, which can never be exceeded in actual practice.
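For concreteness, the bound referred to above is the Carnot efficiency $\eta_{\max} = 1 - T_{\text{cold}}/T_{\text{hot}}$ with absolute temperatures; a tiny added illustration:

```python
# Carnot efficiency: the theoretical ceiling for any heat engine working
# between reservoirs at t_hot and t_cold (both in kelvin, t_hot > t_cold).
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    return 1.0 - t_cold / t_hot

# Between 600 K and 300 K, no engine -- whatever its cycle -- can convert
# more than half of the extracted heat into work.
assert abs(carnot_efficiency(600.0, 300.0) - 0.5) < 1e-12
```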
\begin{document}
\title[Graev metrics on free products]{Graev metrics on free products and HNN extensions}
\author{Konstantin Slutsky}
\thanks{Research supported in part by grant no. 10-082689/FNU from Denmark's Natural Sciences Research Council.}
\address{
Institut for Matematiske Fag\\
K\o benhavns Universitet\\
Universitetsparken 5\\
2100 K\o benhavn \O,\ Denmark \\}
\email{kslutsky@gmail.com}
\keywords{Graev metric, free product, HNN extension}
\begin{abstract}
We give a construction of two-sided invariant metrics on free products (possibly with
amalgamation) of groups with two-sided invariant metrics and, under certain conditions, on HNN
extensions of such groups. Our approach is similar to the Graev's construction of metrics on free
groups over pointed metric spaces.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:introduction}
\subsection{History}
\label{sec:history}
Back in the 40's in his seminal papers \cite{MR0004634,MR0012301} A. Markov came up with a notion of
the free topological group over a completely regular (\tychonoff) space. This notion gave birth to
a deep and important area in the general theory of topological groups. We highly recommend an
excellent overview of free topological groups by O. Sipacheva \cite{MR2056625transl}. Later
M. Graev \cite{MR0038357} gave another proof of the existence of free topological groups over
completely regular spaces. In his approach Graev starts with a pointed metric space
\( (X,x_{0},d) \) and defines in a canonical way a two-sided invariant metric on
\( F \big( X\setminus \{x_{0}\} \big) \) --- the free group with basis \( X \setminus \{x_{0}\} \).
Moreover, this metric extends the metric \( d \) on \( X \setminus \{x_{0}\} \). In modern terms,
Graev constructed a functor from the category of pointed metric spaces with \lipschitz maps to the
category of groups with two-sided invariant metrics and \lipschitz homomorphisms.
The topology given by the Graev metric on the free group \( F(X \setminus \{x_{0}\}) \) is, in
general, much weaker than the free topology on \( F \big( X \setminus \{x_{0}\} \big) \). Since the
early 40's a lot of work was done to understand the free topology on free groups, and some of this
work shed light on properties of the Graev metrics.
Graev metrics were used to construct exotic examples of Polish groups (see \cite{MR1288299,
MR2332614,MR2541347}). For example, the group completion of the free group
\( F(\mathbb{N}^{\mathbb{N}}) \) over the Baire space with the topology given by the Graev metric is
an example of a surjectively universal group in the class of Polish groups that admit compatible
two-sided invariant metrics (see \cite{MR1288299} for the proof).
Once the notion of a free topological group is available, the next step is to construct free
products. It was made by Graev himself in \cite{MR0036768}, where he proves the existence of free
products in the category of topological groups. For this he uses, in a clever and unexpected way,
Graev metrics on free groups. But this time his approach does not produce a canonical metric on the
free product out of metrics on factors.
In this paper we would like to try to push Graev's method from free groups to free products of
groups with and without amalgamation. As will be evident from the construction, the natural realm
for this approach is the category of groups with two-sided invariant metrics. To be precise, a
basic object for us will be an abstract group \( G \) with a two-sided invariant metric \( d \) on
it. We recall that \( G \) will then automatically be a topological group in the topology given by
\( d \). Topological groups that admit a compatible two-sided invariant metric form a very
restrictive subclass of the class of all the metrizable topological groups, but it
includes compact metrizable and abelian metrizable groups.
\subsection{Main results}
\label{sec:main-results}
The paper roughly consists of two parts. In the first part we show the existence of free products
of groups with two-sided invariant metrics. Here is a somewhat simplified version of the main
theorem.
\begin{theorem-nn}[Theorem \ref{sec:metrics-amalgams-MAIN-Graev-metric-on-products}]
Let \( (G_{1},d_{1}) \) and \( (G_{2},d_{2}) \) be groups with two-sided invariant metrics. If
\( A < G_{i} \) is a common closed subgroup and \( d_{1}|_{A} = d_{2}|_{A} \), then there is a
two-sided invariant metric \( \dist \) on the free product with amalgamation
\( G_{1} *_{A} G_{2} \) such that \( \dist |_{G_{i}} = d_{i} \). Moreover, if \( G_{1} \) and
\( G_{2} \) are separable, then so is \( G_{1} *_{A} G_{2} \).
\end{theorem-nn}
Next we address the question of when a two-sided invariant metric can be extended to an HNN
extension. We obtain the following results.
\begin{theorem-nn}[Theorem \ref{sec:hnn-extens-class-existence-of-hnn-extension}]
Let \( (G,d) \) be a tsi group, \( \phi : A \to B \) be a \( d \)-isometric isomorphism between
the closed subgroups \( A, B \). Let \( H \) be the HNN extension of \( (G,\phi) \) in the
abstract sense, and let \( t \) be the stable letter of the HNN extension. If
\( \diam{A} \le K \), then there is a tsi metric \( \dist \) on \( H \) such that
\( \dist|_{G} = d \) and \( \dist(t,e) = K \).
\end{theorem-nn}
\begin{theorem-nn}[Theorem \ref{sec:induc-conj-hnn-general-theorem}]
Let \( G \) be a SIN metrizable group. Let \( \phi : A \to B \) be a topological isomorphism
between two closed subgroups. There exist a SIN metrizable group \( H \) and an element
\( t \in H \) such that \( G < H \) is a topological subgroup and \( tat^{-1} = \phi(a) \) for all
\( a \in A \) if and only if there is a compatible tsi metric \( d \) on \( G \) such that
\( \phi \) becomes a \( d \)-isometric isomorphism.
\end{theorem-nn}
\subsection{Notations}
\label{sec:notations}
We use the following conventions. By an interval we always mean an interval of natural numbers; there will be no
intervals of reals in this paper. An interval \( \{m, m+1, \ldots, n\} \) is denoted by \( [m,n] \). For a finite set
\( F \) of natural numbers \( m(F) \) and \( M(F) \) denote its minimal and maximal elements respectively. For two sets
\( F_{1} \) and \( F_{2} \) if \( M(F_{1})<m(F_{2}) \), then we say that \( F_{1} \) is less than \( F_{2} \) and denote this
by \( F_{1} < F_{2} \).
A finite set \( F \) of natural numbers can be represented uniquely as a union of its maximal sub-intervals, i.e.,
there are intervals \( \{I_{k}\}_{k=1}^{n} \) such that
\begin{enumerate}[(i)]
\item \( F = \bigcup_{k} I_{k} \);
\item \( M(I_{k}) + 1 < m(I_{k+1}) \) for all \( k \in \seg{1}{n-1} \).
\end{enumerate}
We refer to such a decomposition of \( F \) as the \emph{family of maximal sub-intervals}.
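For instance, the finite set \( F = \{1,2,5,6,7,9\} \) has the family of maximal sub-intervals \( I_{1} = [1,2] \), \( I_{2} = [5,7] \), \( I_{3} = [9,9] \); note that indeed \( M(I_{k}) + 1 < m(I_{k+1}) \) for \( k = 1, 2 \).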
By a \emph{tree} we mean a directed graph that is connected as an undirected graph, has no undirected
cycles, and has a distinguished vertex, called the \emph{root} of the tree. For any tree \( T \) its root
will be denoted by \( \emptyset \). The \emph{height} on a tree \( T \) is a function \( H_{T} \)
that assigns to a vertex of the tree its graph-theoretic distance to the root. For example
\( H_{T}(\emptyset) = 0 \) and \( H_{T}(t) = 1 \) for all \( t \in T \setminus \{\emptyset \} \)
such that \( (t,\emptyset) \in E(T) \), where \( E(T) \) is the set of directed edges of \( T \).
We use the word \emph{node} as a synonym for the phrase \emph{vertex of a tree}.
We say that a node \( s \in T \) is a \emph{predecessor} of \( t \in T \), and denote this by \( s \prec t \), if there
are nodes \( s_{0}, \ldots, s_{m} \in T \) such that \( s_{0} = s \), \( s_{m} = t \) and \( (s_{i}, s_{i+1}) \in E(T) \) for all \( i \in \seg{0}{m-1} \).
For a metric space \( X \) its density character, i.e., the smallest cardinality of a dense subset, is denoted by
\( \chi(X) \).
\subsection{Acknowledgment}
\label{sec:acknowledgment}
The author wants to thank Christian Rosendal for his tireless support and numerous helpful and very inspiring
conversations. Part of this work was done during the author's participation in the program on ``Asymptotic geometric
analysis'' at the Fields Institute in the Fall, 2010 and during the trimester on ``Von Neumann algebras and ergodic
theory of group actions'' at the Institute of Henri Poincare in Spring, 2011. The author thanks sincerely the
organizers of these programs.
The author also thanks the anonymous referee for the valuable help in improving paper's writing.
\section{Trivial words in amalgams}
\label{sec:triv-words-amalg}
Let a family \( \{G_{\lambda}\}_{\lambda \in \Lambda} \) of groups be given, where \( \Lambda \) is
an index set. Suppose all of the groups contain a subgroup \( A \subseteq G_{\lambda} \), and
assume that \( G_{\lambda_{1}} \cap G_{\lambda_{2}} = A \) for all \( \lambda_{1} \ne \lambda_{2}
\). Let \( G = \bigcup_{\lambda \in \Lambda} G_{\lambda} \) denote the union of the
groups \( G_{\lambda} \). The identity element in any group is denoted by \( e \), the ambient group
will be evident from the context. Let \( 0 \) be a symbol not in \( \Lambda \). For
\( g_{1}, g_{2} \in G \) we set \( g_{1} \cong g_{2} \) to denote the existence of
\( \lambda \in \Lambda \) such that \( g_{1}, g_{2} \in G_{\lambda} \). If \( g_{1} \cong g_{2} \),
we say that \( g_{1} \) and \( g_{2} \) are \emph{\multipliable.} We also define a relation
on \( \Lambda \cup \{0\} \) by declaring that \( x, y \in \Lambda \cup \{0\} \) are in relation if and
only if either \( x = y \) or at least one of \( x, y \) is \( 0 \). This relation on
\( \Lambda \cup \{0\} \) is also denoted by \( \cong \).
The free product of the groups \( G_{\lambda} \) with amalgamation over the subgroup \( A \) is
denoted by \( \amalgam \). We carefully distinguish words over the alphabet \( G \) from elements
of the amalgam \( \amalgam \). For that we introduce the following notation. \( \words \) denotes
the set of finite nonempty words over the alphabet \( G \). The length of a word
\( \alpha \in \words \) is denoted by \( |\alpha| \), the concatenation of two words \( \alpha \)
and \( \beta \) is denoted by \( \alpha \concat \beta \), and the \( i^{th} \) letter of
\( \alpha \) is denoted by \( \alpha(i) \); in particular, for any \( \alpha \) \( \in \words \)
\[ \alpha = \alpha(1) \concat \alpha(2) \concat \cdots \concat \alpha(|\alpha|). \]
Two words \( \alpha, \beta \in \words \) are said to be \emph{\multipliable} if \( |\alpha| = |\beta| \)
and \( \alpha(i) \cong \beta(i) \) for all \( i \in \seg{1}{|\alpha|} \). For technical reasons (to
be concrete, for the induction argument in Proposition
\ref{sec:triv-words-amalg-structure-of-the-trivial-word}) we need the following notion of a labeled
word. A \emph{labeled word} is a pair \( (\alpha,l_{\alpha}) \), where \( \alpha \) is a word of
length \( n \), and \( l_{\alpha} : \seg{1}{n} \to \Lambda \cup \{0\} \) is a function, called the
label of \( \alpha \), such that
\[ \alpha(i) \in G_{\lambda} \setminus A \implies l_{\alpha}(i) = \lambda \]
for all \( i \in \seg{1}{n} \).
\begin{example}
\label{exm:canonical-labeling}
Let \( \alpha \in \words \) be any word. There is a canonical label for \( \alpha \) given by
\begin{displaymath}
l_{\alpha}(i) =
\begin{cases}
0& \textrm{if \( \alpha(i) \in A \)}; \\
\lambda & \textrm{if \( \alpha(i) \in G_{\lambda} \setminus A \)}.
\end{cases}
\end{displaymath}
In fact, everywhere except in the proof of Proposition \ref{sec:triv-words-amalg-structure-of-the-trivial-word}, we use
only this canonical labeling.
\end{example}
Let \( \alpha \) be a word of length \( n \). For a subset \( F \subseteq \seg{1}{n} \), with \( F =
\{i_{k}\}_{k=1}^{m} \), where \( i_{1} < i_{2} < \ldots < i_{m} \), set
\[ \alpha[F] = \alpha(i_{1}) \concat \alpha(i_{2}) \concat \cdots \concat \alpha(i_{m}). \]
We say that a subset \( F \subseteq \seg{1}{n} \) is \emph{\( \alpha \)-\multipliable} if \( \alpha(i) \cong \alpha(j) \) for all \( i, j \in F \).
There is a natural evaluation map from the set of words \( \words \) over the alphabet \( G \) to the amalgam
\( \amalgam \) given by the multiplication of letters in the group \( \amalgam \):
\[ \alpha \mapsto \alpha(1) \cdot \alpha(2) \cdots \alpha(|\alpha|). \]
This map is denoted by a hat
\[ \widehat{}\ : \words \to \amalgam. \]
Note that this map is obviously surjective. For a word \( \alpha \in \words \) and a subset
\( F \subseteq \seg{1}{|\alpha|} \) we write \( \hat{\alpha}[F] \) instead of
\( \widehat{\alpha[F]} \). We hope this will not confuse the reader too much. A word \( \alpha \)
is said to be \emph{trivial} if \( \hat{\alpha} = e \).
\subsection{Structure of trivial words}
\label{sec:struct-triv-words}
Elements of the group \( A \) will be special for us. Let \( \alpha \in \words\) be a word of
length \( n \). We say that its \( i^{th} \) letter is \emph{outside of \( A \)} if, as the name
suggests, \( \alpha(i) \not \in A \). The \emph{list of external letters} of \( \alpha \) is a,
possibly empty, sequence \( \{i_{k}\}_{k=1}^{m} \) such that
\begin{enumerate}[(i)]
\item \( i_{k} < i_{k+1} \) for all \( k \in \seg{1}{m-1} \);
\item \( \alpha(i_{k}) \not \in A \) for all \( k \in \seg{1}{m} \);
\item \( \alpha(i) \not \in A \) implies \( i = i_{k} \) for some \( k \in \seg{1}{m} \).
\end{enumerate}
In other words, this is just the increasing list of all the letters in \( \alpha \) that are outside
of \( A \).
\begin{definition}
\label{sec:triv-words-amalg-alternating-word}
Let \( \alpha \in \words \) be a word with the list of external letters \( \{i_{k}\}_{k=1}^{m} \).
The word \( \alpha \) is called \emph{alternating} if
\( \alpha(i_{k}) \not \cong \alpha(i_{k+1}) \) for all \( k \in \seg{1}{m-1} \). Note that a word
is always alternating if \( m \le 1 \). The word \( \alpha \) is said to be \emph{reduced} if
\( \alpha(i) \not \cong \alpha(i+1) \) for all \( i \in \seg{1}{|\alpha|-1}\), and it is called a
\emph{reduced form of \( f \in \amalgam \)} if additionally \( \hat{\alpha} = f \).
\end{definition}
The following is a basic fact about free products with amalgamation.
\begin{lemma}
\label{sec:triv-words-amalg-non-triviality-of-reduced-words}
Let \( \alpha \in \words \) be a reduced word. If \( \alpha \ne e \), then
\( \hat{\alpha} \ne e \).
\end{lemma}
It is worth mentioning that if \( A \ne \{e\} \), then an element \( f \in \amalgam \) has many
different reduced forms (unless \( f \in G \), in which case it has only one). But all these reduced forms
have the same length; therefore it is legitimate to talk about the length of an element \( f \)
itself.
\begin{lemma}
\label{sec:triv-words-amalg-reduced-forms}
Any element \( f \in \amalgam \) has a reduced form \( \alpha \in \words \). Moreover, if
\( \beta \in \words \) is another reduced form of \( f \), then \( |\alpha| = |\beta| \) and
\( A\alpha(i)A = A\beta(i)A \) for all \( i \in \seg{1}{|\alpha|} \).
\end{lemma}
\begin{proof}
The existence of a reduced form of \( f \in \amalgam \) is obvious. Suppose \( \alpha \) and
\( \beta \) are both reduced forms of \( f \). Set
\[ \zeta = \alpha(|\alpha|)^{-1} \concat \cdots \concat \alpha(1)^{-1} \concat \beta(1) \concat
\cdots \concat \beta(|\beta|). \]
Since \( \hat{\zeta} = e \) and \( \zeta \ne e \), by Lemma
\ref{sec:triv-words-amalg-non-triviality-of-reduced-words} \( \zeta \) is not reduced. By
assumption, \( \alpha \) and \( \beta \) were reduced, therefore \( \alpha(1) \cong \beta(1) \).
We claim that \( \alpha(1)^{-1} \beta(1) \in A \). Indeed, if
\( \alpha(1)^{-1}\beta(1) \not \in A \), then the word
\[ \xi = \alpha(|\alpha|)^{-1} \concat \cdots \concat \alpha(1)^{-1} \cdot \beta(1) \concat \cdots
\concat \beta(|\beta|) \]
is reduced, \( \hat{\xi} = e \), and \( \xi \ne e \), contradicting Lemma
\ref{sec:triv-words-amalg-non-triviality-of-reduced-words}. So \( \alpha(1)^{-1}\beta(1) \in A
\), and therefore \( \beta(1) = \alpha(1)a_{1} \) for some \( a_{1} \in A \) and
\( A\alpha(1)A = A\beta(1)A \). Now set
\[ \alpha_{1} = \alpha(2) \concat \cdots \concat \alpha(|\alpha|), \quad \beta_{1} = a_{1} \cdot
\beta(2) \concat \cdots \concat \beta(|\beta|). \]
Since \( \hat{\alpha}_{1} = \hat{\beta}_{1} \) and \( \alpha_{1}, \beta_{1} \) are reduced, we can
apply the same argument to get \( \alpha_{1}(1) = \beta_{1}(1)a_{2} \) for some \( a_{2} \in A \),
whence
\[ A\alpha(2)A = A\alpha_{1}(1)A = A\beta_{1}(1)A = A\beta(2)A. \]
And we proceed by induction on \( |\alpha| + |\beta| \).
\end{proof}
\begin{lemma}
\label{sec:triv-words-amalg-reduced-form-length}
Let \( f \in \amalgam \) and \( \alpha, \beta \in \words \) be given. If \( \alpha \) is a
reduced form of \( f \), \( |\alpha| = |\beta| \) and \( \hat{\alpha} = \hat{\beta} \), then
\( \beta \) is a reduced form of \( f \).
\end{lemma}
\begin{proof}
If \( \beta \) is not a reduced form of \( f \), we perform cancellations in \( \beta \) and get a
reduced word \( \beta_{1} \) such that \( \hat{\beta}_{1} = f \) and \( |\beta_{1}| < |\beta| \).
By Lemma \ref{sec:triv-words-amalg-reduced-forms} we have \( |\beta_{1}| = |\alpha| \),
contradicting \( |\beta| = |\alpha| \). Hence \( \beta \) is reduced.
\end{proof}
\begin{lemma}
\label{sec:triv-words-amalg-alternatin-word-non-trivial}
If \( \alpha \) is an alternating word with a nonempty list of external letters, then
\( \hat{\alpha} \ne e \).
\end{lemma}
\begin{proof}
Let \( \{i_{k}\}_{k=1}^{m} \) be the list of external letters of \( \alpha \). For
\( k \in \seg{2}{m-1} \) set
\[ \xi_{1} = \alpha(1) \cdots \alpha(i_{2}-1), \]
\[ \xi_{k} = \alpha(i_{k}) \cdot \alpha(i_{k}+1) \cdots \alpha(i_{k+1}-1), \]
\[ \xi_{m} = \alpha(i_{m}) \cdot \alpha(i_{m}+1) \cdots \alpha(n), \] where \( n = |\alpha| \), and put
\[ \xi = \xi_{1} \concat \cdots \concat \xi_{m}. \]
Then \( \hat{\xi} = \hat{\alpha} \), \( \xi \ne e \) (since \( \xi_{i} \ne e \) for all
\( i \in \seg{1}{m} \)), and, as one easily checks, \( \xi \) is reduced. An application of Lemma
\ref{sec:triv-words-amalg-non-triviality-of-reduced-words} finishes the proof.
\end{proof}
\begin{lemma}
\label{sec:triv-words-amalg-subword-of-trivial-word}
If \( \zeta \) is a trivial word of length \( n \) with a nonempty list of external letters, then
there is an interval \( I \subseteq \seg{1}{n} \) such that
\begin{enumerate}[(i)]
\item\label{lem:subword-trivial-word-item:evaluation} \( \hat{\zeta}[I] \in A \);
\item\label{lem:subword-trivial-word-item:congruence} \( I \) is \( \zeta \)-\multipliable;
\item\label{lem:subword-trivial-word-item:endpoints}
\( \zeta \big( m(I) \big), \zeta \big( M(I) \big) \not \in A \).
\end{enumerate}
\end{lemma}
\begin{proof}
Let \( \{i_{k}\}_{k=1}^{m} \) be the list of external letters. For all \( k \in \seg{1}{m} \) define \( m_{k} \) and
\( M_{k} \) by
\[ m_{k} = \min\{j \in \seg{1}{k}: \seg{i_{j}}{i_{k}}\ \textrm{is \( \zeta \)-\multipliable}\}, \]
\[ M_{k} = \max\{j \in \seg{k}{m}: \seg{i_{k}}{i_{j}}\ \textrm{is \( \zeta \)-\multipliable}\}. \]
Set \( I_{k} = \seg{m_{k}}{M_{k}} \), and note that for \( k, l \in \seg{1}{m} \)
\[ I_{k} \cap I_{l} \ne \emptyset \implies I_{l} = I_{k}. \]
Let \( I_{k_{1}}, \ldots, I_{k_{p}} \) be an enumeration of the distinct intervals among \( I_{1}, \ldots, I_{m} \). Then
\( \{I_{k_{i}}\}_{i=1}^{p} \) are pairwise disjoint. Note that each of \( I_{k_{i}} \) satisfies items
\eqref{lem:subword-trivial-word-item:congruence} and \eqref{lem:subword-trivial-word-item:endpoints}. To prove the
lemma it is enough to show that for some \( i \in \seg{1}{p} \) the corresponding \( I_{k_{i}} \) satisfies also item
\eqref{lem:subword-trivial-word-item:evaluation}. Suppose this is false and \( \hat{\zeta}[I_{k_{i}}] \not \in A \)
for all \( i \in \seg{1}{p}\). Set \( \xi_{i} = \hat{\zeta}[I_{k_{i}}] \) and
\begin{multline*}
\xi = \zeta(1) \concat \cdots \concat \zeta(m(I_{k_{1}})-1) \concat \xi_{1} \concat \zeta(M(I_{k_{1}})+1) \concat \cdots \\
\cdots \concat \zeta(m(I_{k_{2}}) - 1) \concat \xi_{2} \concat \zeta(M(I_{k_{2}})+1) \concat \cdots\\
\cdots \concat \zeta(m(I_{k_{p}})-1) \concat \xi_{p} \concat \zeta(M(I_{k_{p}})+1) \concat \cdots \concat \zeta(n).
\end{multline*}
Then, of course, \( \hat{\xi} = \hat{\zeta} = e \) and \( \xi \) is alternating by the choice of \( \{I_{k_{i}}\} \).
By Lemma \ref{sec:triv-words-amalg-alternatin-word-non-trivial} the word \( \xi \) is non-trivial, which is a
contradiction.
\end{proof}
\begin{lemma}
\label{sec:triv-words-amalg-congruent-interval-in-trivial-word}
If \( (\zeta,l_{\zeta}) \) is a trivial labeled word of length \( n \) with a nonempty list of external letters, then
there is an interval \( I \subseteq \seg{1}{n} \) such that
\begin{enumerate}[(i)]
\item\label{lem:congruent-interval-item:evaluation} \( \hat{\zeta}[I] \in A \);
\item\label{lem:congruent-interval-item:congruence} \( I \) is \( \zeta \)-\multipliable;
\item\label{lem:congruent-interval-item:non-triviality} \( \zeta(i) \not \in A \) for some
\( i \in I \);
\item\label{lem:congruent-interval-item:weak-maximality} if \( m(I)>1 \), then
\( l_{\zeta}(m(I)-1) \ne 0 \); if \( M(I)<n \), then \( l_{\zeta}(M(I)+1) \ne 0 \);
\item\label{lem:congruent-interval-item:endpoints} if \( \zeta(m(I)) \in A \), then
\( l_{\zeta}(m(I)) = 0 \); if \( \zeta(M(I)) \in A \), then \( l_{\zeta}(M(I)) = 0 \).
\end{enumerate}
\end{lemma}
\begin{proof}
We start by applying Lemma \ref{sec:triv-words-amalg-subword-of-trivial-word} to the word
\( \zeta \). This lemma yields an interval \( J \subseteq \seg{1}{n} \). We will now
enlarge this interval as follows. If \( l_{\zeta}(i) = 0 \) for all \( i \in \seg{1}{m(J)-1} \),
then set \( j_{l} = 1 \). If there is some \( i < m(J) \) such that \( l_{\zeta}(i) \ne 0 \),
then let \( j \in \seg{1}{m(J)-1} \) be maximal such that \( l_{\zeta}(j) \ne 0 \) and set
\( j_{l} = j+1 \). Similarly, if \( l_{\zeta}(i) = 0 \) for all \( i \in \seg{M(J)+1}{n} \), then
set \( j_{r} = n \). If there is some \( i > M(J) \) such that \( l_{\zeta}(i) \ne 0 \), then let
\( j \in \seg{M(J)+1}{n} \) be minimal such that \( l_{\zeta}(j) \ne 0 \) and set
\( j_{r} = j-1 \). Define
\[ I = J \cup \seg{j_{l}}{m(J)} \cup \seg{M(J)}{j_{r}} = \seg{j_{l}}{j_{r}}. \]
We claim that \( I \) satisfies the assumptions. Note that \( J \subseteq I \) and \( \zeta(i) \in A \) for all
\( i \in I \setminus J \), so \eqref{lem:congruent-interval-item:evaluation},
\eqref{lem:congruent-interval-item:congruence} and
\eqref{lem:congruent-interval-item:non-triviality} follow from items
\eqref{lem:subword-trivial-word-item:evaluation}, \eqref{lem:subword-trivial-word-item:congruence}
and \eqref{lem:subword-trivial-word-item:endpoints} of Lemma
\ref{sec:triv-words-amalg-subword-of-trivial-word}. Items
\eqref{lem:congruent-interval-item:weak-maximality} and
\eqref{lem:congruent-interval-item:endpoints} follow from the choice of \( j_{l} \) and
\( j_{r} \) and from item \eqref{lem:subword-trivial-word-item:endpoints} of Lemma
\ref{sec:triv-words-amalg-subword-of-trivial-word}.
\end{proof}
\begin{definition}
\label{sec:struct-triv-words-def-of-evaluation-tree}
Let \( (\zeta, l_{\zeta}) \) be a trivial labeled word of length \( n \), and let \( T \) be a
tree. Suppose that to each node \( t \in T \) an interval \( I_{t} \subseteq \seg{1}{n} \) is
assigned. Set \( R_{t} = I_{t} \setminus \bigcup_{t' \prec t} I_{t'} \). The tree \( T \)
together with the assignment \( t \mapsto I_{t} \) is called \emph{an evaluation tree for
\( (\zeta,l_{\zeta}) \)} if for all \( s, t \in T \) the following holds:
\begin{enumerate}[(i)]
\item\label{lem:structure-item:root} \( I_{\emptyset} = \seg{1}{n} \);
\item\label{lem:structure-item:evaluation} \( \hat{\zeta}[I_{t}] \in A \);
\item\label{lem:structure-item:endpoints} if \( t \ne \emptyset \) and \( \zeta(m(I_{t})) \in A \), then
\( l_{\zeta}(m(I_{t})) = 0 \); if \( t \ne \emptyset \) and \( \zeta(M(I_{t})) \in A \), then
\( l_{\zeta}(M(I_{t})) = 0 \);
\item\label{lem:structure-item:intervals-order} if \( H(t) \le H(s) \) and
\( I_{s} \cap I_{t} \ne \emptyset \), then \( s \prec t \) or \( s=t \);
\item\label{lem:structure-item:strict-inclusion} if \( s \prec t \) and \( t \ne \emptyset \),
then
\[ m(I_{t}) < m(I_{s}) \le M(I_{s}) < M(I_{t}); \]
\item\label{lem:structure-item:congruence} \( \zeta(i) \cong \zeta(j) \) for all
\( i, j \in R_{t} \);
\end{enumerate}
An evaluation tree \( T \) is called \emph{balanced} if additionally the following two conditions
hold:
\begin{enumerate}[(i)]
\setcounter{enumi}{6}
\item\label{lem:structure-item:non-trivial-interior} if \( T \ne \{\emptyset\} \), then
for any \( t \in T \) if \( R_{t} \) is written as a disjoint union of maximal
sub-intervals \(\{ \mathcal{I}_{j} \}_{j=1}^{k}\), then for any \( j \) there is
\( i \in \mathcal{I}_{j} \) such that \( l_{\zeta}(i) \ne 0 \);
\item\label{lem:structure-item:non-trivial-boundary} if \( s \prec t \), then
\[ m(I_{s}) - 1 \in R_{t} \implies l_{\zeta}(m(I_{s}) - 1) \ne 0; \]
\[ M(I_{s}) + 1 \in R_{t} \implies l_{\zeta}(M(I_{s}) + 1) \ne 0. \]
\end{enumerate}
\end{definition}
\begin{remark}
\label{sec:struct-triv-words-standard-label-vacuous-condition}
Note that if \( \zeta \in \words \) is a trivial word with the canonical label as in Example
\ref{exm:canonical-labeling}, then item \eqref{lem:structure-item:endpoints} in the definition of
an evaluation tree is vacuous.
\end{remark}
\begin{proposition}
\label{sec:triv-words-amalg-structure-of-the-trivial-word}
Any trivial labeled word \( (\zeta,l_{\zeta}) \) has a balanced evaluation tree.
\end{proposition}
\begin{proof}
We prove the proposition by induction on the cardinality of the list of external letters of
\( \zeta \). Suppose first that the list is empty, and \( \zeta(i) \in A \) for all
\( i \in \seg{1}{n} \). Set \( T_{\zeta} = \{\emptyset\} \) and \( I_{\emptyset} = \seg{1}{n} \).
It is easy to check that all the conditions are satisfied, and \( T_{\zeta} \) is a balanced
evaluation tree for \( (\zeta, l_{\zeta}) \).
From now on we assume there is \( i \in \seg{1}{n} \) such that \( \zeta(i) \not \in A \). Apply
Lemma \ref{sec:triv-words-amalg-congruent-interval-in-trivial-word} to \( (\zeta, l_{\zeta}) \)
and let \( I \) be the interval granted by this lemma. Set \( \lambda_{0} = l_{\zeta}(i) \) for
some (equivalently, any) \( i \in I \) such that \( \zeta(i) \not \in A \). Note that
\( \lambda_{0} \ne 0 \). Let \( p = |I| \) be the length of \( I \). If \( p = n \), then we set
\( T_{\zeta} = \{\emptyset\} \) and \( I_{\emptyset} = \seg{1}{n} \). As in the base case, this
tree is a balanced evaluation tree for \( (\zeta, l_{\zeta}) \). From now on we
assume that \( p < n \). We define the word \( \xi \) of length \( n-p+1 \) as follows. Set
\begin{displaymath}
\xi(i) =
\begin{cases}
\zeta(i) & \textrm{if \( i < m(I) \)}\\
\hat{\zeta}[I] & \textrm{if \( i = m(I) \)}\\
\zeta(i + p-1) & \textrm{if \( i > m(I) \)}.
\end{cases}
\end{displaymath}
Define the label for \( \xi \) to be
\begin{displaymath}
l_{\xi}(i) =
\begin{cases}
l_{\zeta}(i) & \textrm{if \( i < m(I) \)}\\
\lambda_{0} & \textrm{if \( i = m(I) \)}\\
l_{\zeta}(i + p-1) & \textrm{if \( i > m(I) \)}.
\end{cases}
\end{displaymath}
We claim that
\[ \big| \{i \in \seg{1}{|\xi|} : \xi(i) \not \in A\} \big| < \big| \{i \in \seg{1}{n} : \zeta(i)
\not \in A\} \big|. \]
Indeed, by construction \( \zeta[I] \) has at least one letter (in fact, at least two letters)
not from \( A \), and in \( \xi \) the whole block \( \zeta[I] \) is replaced by the single letter \( \hat{\zeta}[I] \in A \).
By inductive assumption applied to the labeled word \( (\xi,l_{\xi}) \), there is a balanced
evaluation tree \( T_{\xi} \) with intervals \( J_{t} \subseteq \seg{1}{|\xi|} \) for
\( t \in T_{\xi} \). Since \( J_{\emptyset} = \seg{1}{|\xi|} \), there is at least one
\( t \in T_{\xi}\) (namely \( t = \emptyset \)) such that the interval \( J_{t} \) contains
\( m(I) \). By item \eqref{lem:structure-item:intervals-order} there is the smallest node
\( t_{0} \in T_{\xi} \) such that \( m(I) \in J_{t_{0}} \).
We define \( T_{\zeta} \) to be \( T_{\xi} \cup \{s_{0}\} \), where \( s_{0} \) is a new
predecessor of \( t_{0} \), i.e., \( s_{0} \prec t_{0} \). For \( t \in T_{\xi} \) set
\begin{displaymath}
I_{t} =
\begin{cases}
\seg{m(J_{t})}{M(J_{t})} & \textrm{if \( M(J_{t}) < m(I) \)};\\
\seg{m(J_{t})}{M(J_{t})+p-1} & \textrm{if \( m(J_{t}) \le m(I) \le M(J_{t}) \)};\\
\seg{m(J_{t})+p-1}{M(J_{t})+p-1} & \textrm{if \( m(I)< m(J_{t}) \)};\\
\end{cases}
\end{displaymath}
and
\[ I_{s_{0}} = \seg{m(I)}{M(I)}. \]
We claim that such a tree \( T_{\zeta} \) with such an assignment of intervals \( I_{t} \) is a
balanced evaluation tree for \( (\zeta,l_{\zeta}) \).
\eqref{lem:structure-item:root} Since \( J_{\emptyset} = \seg{1}{|\xi|} \), it follows that
\( I_{\emptyset} = \seg{1}{n} \).
\eqref{lem:structure-item:evaluation} For any \( t \in T_{\xi} \) one has
\( \hat{\xi}[J_{t}] = \hat{\zeta}[I_{t}] \). Also, \( \hat{\zeta}[I_{s_{0}}] \in A \) by item
\eqref{lem:congruent-interval-item:evaluation} of Lemma
\ref{sec:triv-words-amalg-congruent-interval-in-trivial-word}.
\eqref{lem:structure-item:endpoints} Since \( \xi(m(I)) \in A \) and
\( l_{\xi}(m(I)) = \lambda_{0} \ne 0 \), by the inductive hypothesis \( m(J_{t}) \ne m(I) \) and
\( M(J_{t}) \ne m(I) \) for all \( t \in T_{\xi} \setminus \{\emptyset\} \). Therefore
\( l_{\xi}(m(J_{t})) = l_{\zeta}(m(I_{t})) \), \( l_{\xi}(M(J_{t})) = l_{\zeta}(M(I_{t})) \) for
all \( t \in T_{\xi} \setminus \{\emptyset\} \). Thus for \( t \ne s_{0} \) the item follows from
the inductive hypothesis, and for \( t = s_{0} \) it follows from item
\eqref{lem:congruent-interval-item:endpoints} of Lemma
\ref{sec:triv-words-amalg-congruent-interval-in-trivial-word}.
\eqref{lem:structure-item:intervals-order} Follows from the inductive hypothesis and the
definition of \( s_{0} \).
\eqref{lem:structure-item:strict-inclusion} It follows from the inductive hypothesis that this
item is satisfied for all \( s, t \in T_{\xi} \). We need to consider the case \( s = s_{0} \),
\( t = t_{0} \) only. By item \eqref{lem:structure-item:endpoints} of the definition of an
evaluation tree, and since \( l_{\xi}(m(I)) = \lambda_{0} \ne 0 \), it follows that if
\( t_{0} \ne \emptyset \), then \( m(I_{t_{0}}) < m(I_{s_{0}}) \) and
\( M(I_{s_{0}}) < M(I_{t_{0}}) \).
\eqref{lem:structure-item:congruence} Follows easily from the inductive hypothesis and item
\eqref{lem:congruent-interval-item:congruence} of Lemma
\ref{sec:triv-words-amalg-congruent-interval-in-trivial-word}.
Thus \( T_{\zeta} \) is an evaluation tree for \( (\zeta,l_{\zeta}) \). It remains to check that
it is balanced.
\eqref{lem:structure-item:non-trivial-interior} For \( t \in T_{\xi} \setminus \{t_{0}\} \) the
maximal sub-intervals of \( J_{t} \setminus \bigcup_{s \prec t} J_{s} \) naturally correspond to the
maximal sub-intervals of \( I_{t} \setminus \bigcup_{s \prec t} I_{s} \), and hence for such a
\( t \) the item follows from the inductive hypothesis. For \( t = s_{0} \) the item follows from
item \eqref{lem:congruent-interval-item:non-triviality} of Lemma
\ref{sec:triv-words-amalg-congruent-interval-in-trivial-word}. The remaining case \( t = t_{0} \)
follows from item \eqref{lem:congruent-interval-item:weak-maximality} of Lemma
\ref{sec:triv-words-amalg-congruent-interval-in-trivial-word}.
\eqref{lem:structure-item:non-trivial-boundary} Again, for \( s \ne s_{0} \) this item follows
from the inductive hypothesis and for \( s = s_{0} \), \( t =t_{0} \) follows from item
\eqref{lem:congruent-interval-item:weak-maximality} of Lemma
\ref{sec:triv-words-amalg-congruent-interval-in-trivial-word}.
\end{proof}
If \( \zeta \) is just a word with no labeling, then we canonically associate a label to it by
declaring \( l_{\zeta}(i) = 0 \) if and only if \( \zeta(i) \in A \) (as in Example
\ref{exm:canonical-labeling}).
\emph{From now on we view all trivial words as labeled words with the canonical labeling.}
\begin{definition}
\label{sec:triv-words-amalg-slim-trivial-word}
A trivial word \( \zeta \in \words \) of length \( n \) is called \emph{slim} if there exists an
evaluation tree \( T_{\zeta} \) such that \( \hat{\zeta}[I_{t}] = e \) for all
\( t \in T_{\zeta} \); such a tree is then called a \emph{slim} evaluation tree. We say that
\( \zeta \) is \emph{simple} if it is slim and \( \zeta(i) \in A \) implies \( \zeta(i) = e \) for
all \( i \in \seg{1}{n} \).
\end{definition}
\begin{definition}
\label{sec:metrics-amalgams-f-pair}
Let \( f \in \amalgam \). A pair of words \( (\alpha, \zeta) \) is called an \emph{\( f \)-pair} if
\( |\alpha| = |\zeta| \) and \( \hat{\alpha} = f \), \( \hat{\zeta} = e \). An \( f \)-pair \( (\alpha,\zeta) \) is
said to be a \emph{\multipliable} \( f \)-pair if \( \alpha \) and \( \zeta \) are \multipliable. An \( f \)-pair
\( (\alpha,\zeta) \) is called \emph{slim} if it is \multipliable and \( \zeta \) is slim. It is called \emph{simple}
if it is \multipliable and \( \zeta \) is simple.
\end{definition}
For a \multipliable pair \( (\alpha,\beta) \) of length \( n \) we define the notions of right and left
transfers. Let \( a \in A \) and \( i \in \seg{1}{n-1} \) be given. The \emph{right \( (a,i)
\)-transfer} of \( (\alpha,\beta) \) is the pair
\( \transferr{\alpha}{\beta}{a}{i} = (\gamma,\delta) \) defined as follows:
\begin{displaymath}
(\gamma(j),\delta(j)) =
\begin{cases}
(\alpha(j),\beta(j)) & \textrm{if \( j \not \in \{i,i+1\} \)};\\
(\alpha(i)a^{-1},\beta(i)a^{-1}) & \textrm{if \( j = i \)};\\
(a\alpha(i+1),a\beta(i+1)) & \textrm{if \( j = i+1 \)}.
\end{cases}
\end{displaymath}
For \( a \in A \) and \( i \in \seg{2}{n} \) the \emph{left \( (a,i) \)-transfer} of
\( (\alpha,\beta) \) is denoted by \( \transferl{\alpha}{\beta}{a}{i} = (\gamma,\delta) \) and is
defined as
\begin{displaymath}
(\gamma(j),\delta(j)) =
\begin{cases}
(\alpha(j),\beta(j)) & \textrm{if \( j \not \in \{i-1,i\} \)};\\
(a^{-1}\alpha(i), a^{-1}\beta(i)) & \textrm{if \( j = i \)};\\
(\alpha(i-1)a,\beta(i-1)a) & \textrm{if \( j = i-1 \)}.
\end{cases}
\end{displaymath}
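To see what a transfer does, here is a minimal worked example (ours, not part of the construction): a right \( (a,1) \)-transfer on a pair of length \( 2 \) moves the amalgam letter \( a \) one position to the right while inserting the cancelling pair \( a^{-1}a \), so the evaluations of both words are unchanged.

```latex
% A right (a,1)-transfer of a pair (\alpha,\beta) of length 2 produces
% (\gamma,\delta) = \transferr{\alpha}{\beta}{a}{1} with
%   \gamma = \alpha(1)a^{-1} \concat a\alpha(2),
%   \delta = \beta(1)a^{-1} \concat a\beta(2),
% and the evaluations do not change:
\[
  \hat{\gamma} = \alpha(1) a^{-1} \cdot a \alpha(2) = \hat{\alpha},
  \qquad
  \hat{\delta} = \beta(1) a^{-1} \cdot a \beta(2) = \hat{\beta}.
\]
```

This is the reason transfers preserve \( f \)-pairs.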
We will typically apply transfers along specific sequences of intervals, so it is convenient to make the following
definition. Let \( (\alpha,\zeta) \) be a \multipliable pair of words of length \( n \). In all the
applications \( \zeta \) will be a trivial word. Let \( \{I_{k}\}_{k=1}^{m} \) be a sequence of
intervals such that:
\begin{enumerate}
\item \( I_{k} \subseteq \seg{1}{n} \);
\item \( I_{k} < I_{k+1} \) for all \( k \in \seg{1}{m-1} \);
\item \( \hat{\zeta}[I_{k}] \in A \) for all \( k \in \seg{1}{m} \);
\item \( M(I_{m}) < n \).
\end{enumerate}
Such a sequence is called \emph{right transfer admissible}. If, together with items \( (1)-(3) \),
the following condition is satisfied
\begin{enumerate}
\item[\( (4') \)] \( m(I_{1}) > 1, \)
\end{enumerate}
then the sequence \( \{I_{k}\}_{k=1}^{m} \) is called \emph{left transfer admissible}.
Let \( \{I_{k}\}_{k=1}^{m} \) be a right transfer admissible sequence of intervals. Define
inductively words \( (\beta_{k},\xi_{k}) \) by setting \( (\beta_{0}, \xi_{0}) = (\alpha,\zeta) \)
and
\[ (\beta_{k+1}, \xi_{k+1}) =
\transferr{\beta_{k}}{\xi_{k}}{\hat{\xi}_{k}[I_{k+1}]}{M(I_{k+1})}. \]
We have to show that the right-hand side is well-defined, i.e., that
\( \hat{\xi}_{k}[I_{k+1}] \in A\). For the first step of the construction we have
\( \hat{\xi}_{0}[I_{1}] = \hat{\zeta}[I_{1}] \in A \), because the sequence is right transfer
admissible. Suppose we have proved that \( \hat{\xi}_{k-1}[I_{k}] \in A \). There are two cases:
either \( M(I_{k}) + 1 = m(I_{k+1}) \), and then
\[ \hat{\xi}_{k}[I_{k+1}] = (\hat{\xi}_{k-1}[I_{k}]) \cdot \hat{\zeta}[I_{k+1}], \]
or \( M(I_{k}) + 1 < m(I_{k+1})\), and then \( \hat{\xi}_{k}[I_{k+1}] = \hat{\zeta}[I_{k+1}] \). In
both cases we get \( \hat{\xi}_{k}[I_{k+1}] \in A \).
By definition, the right \( \{I_{k}\} \)-transfer of \( (\alpha,\zeta) \) is the pair
\( (\beta_{m},\xi_{m}) \).
The left transfer is defined similarly, but with one extra change: we apply left transfers in the
decreasing order from \( I_{m} \) to \( I_{1} \). Here is a formal definition. For a left
admissible sequence of intervals \( \{I_{k}\}_{k=1}^{m} \) set inductively
\( (\beta_{0},\xi_{0}) = (\alpha,\zeta) \) and
\[ (\beta_{k+1}, \xi_{k+1}) =
\transferl{\beta_{k}}{\xi_{k}}{\hat{\xi}_{k}[I_{m-k}]}{m(I_{m-k})}. \]
Similarly to the case of the right transfer one shows that the right-hand side in the above
construction is well-defined. By definition, the left \( \{I_{k}\} \)-transfer of
\( (\alpha,\zeta) \) is the pair \( (\beta_{m},\xi_{m}) \).
This notion of transfer, though a bit technical, will be crucial in some reductions in the next
section. The following lemma establishes basic properties of the transfer operation with respect to
the earlier notion of the evaluation tree.
\begin{lemma}
\label{sec:triv-words-amalg-transfer-preserves}
Let \( (\alpha,\zeta) \) be a \multipliable \( f \)-pair of length \( n \) and let \( T_{\zeta} \) be
a [balanced] evaluation tree for \( \zeta \). Let \( \{I_{k}\}_{k=1}^{m} \) be a right [left]
transfer admissible sequence of intervals. Let \( (\beta,\xi) \) be the right [left]
\( \{I_{k}\} \)-transfer of \( (\alpha,\zeta) \). Then
\begin{enumerate}[(i)]
\item\label{lem:transfer-preserves-item:same-length} \( |\beta| = n = |\xi| \);
\item\label{lem:transfer-preserves-item:congruent} \( (\beta,\xi) \) is a \multipliable \( f \)-pair;
\item\label{lem:transfer-preserves-item:same-tree} \( T_{\zeta} \) is a [balanced] evaluation tree
for \( \xi \);
\item\label{lem:transfer-preserves-item:indices-under-change} \( \xi(i) = \zeta(i) \) for all
\( i \not \in \{M(I_{k}), M(I_{k})+1 : k \in \seg{1}{m}\} \) for the right transfer and for all
\( i \not \in \{m(I_{k}), m(I_{k})-1 : k \in \seg{1}{m}\} \) in the case of the left
transfer;
\item\label{lem:transfer-preserves-item:triviality-of-transfer-intervals}
\( \hat{\xi}[I_{k}] = e \) for all \( k \in \seg{1}{m} \).
\end{enumerate}
\end{lemma}
\begin{proof}
Items \eqref{lem:transfer-preserves-item:same-length},
\eqref{lem:transfer-preserves-item:congruent}, and
\eqref{lem:transfer-preserves-item:indices-under-change} are trivial; item
\eqref{lem:transfer-preserves-item:same-tree} follows easily from the observation that
\( \xi(i) \in A \) if and only if \( \zeta(i) \in A\). For item
\eqref{lem:transfer-preserves-item:triviality-of-transfer-intervals} let \( \xi_{k} \) be as in
the definition of the \( \{I_{k}\} \)-transfer. Suppose for definiteness that we are in the case
of the right transfer. Then \( \hat{\xi}_{k}[I_{k}] = e \) by construction and also
\( \xi_{k+1}[I_{j}] = \xi_{k}[I_{j}] \) for all \( j \in \seg{1}{k} \). The lemma follows.
\end{proof}
We will later need one more operation on words, which we call symmetrization. Here is the definition.
\begin{definition}
\label{sec:metrics-amalgams-symmetrization-for-simple}
Let \( (\alpha,\zeta) \) be a slim \( f \)-pair with a slim evaluation tree \( T_{\zeta} \). Let
\( t \in T_{\zeta} \) and \( \{i_{k}\}_{k=1}^{m} \subseteq R_{t}\) be a list such that
\begin{enumerate}[(i)]
\item \( i_{k} < i_{k+1} \) for \( k \in \seg{1}{m-1} \);
\item if \( \zeta(i) \ne e \) for some \( i \in R_{t} \), then \( i = i_{k} \) for some
\( k \in \seg{1}{m} \);
\item \( \alpha(i_{k}) \cong \alpha(i_{l}) \) for all \( k, l \in \seg{1}{m} \).
\end{enumerate}
Such a list is called \emph{symmetrization admissible}. For \( j_{0} \in \{i_{k}\}_{k=1}^{m} \)
let \( k_{0} \) be such that \( j_{0} = i_{k_{0}} \) and define a \emph{symmetrization}
\( \symmet{\alpha}{\zeta}{j_{0}}{\{i_{k}\}_{k=1}^{m}} \) of \( \zeta \) to be the word \( \xi \)
such that
\begin{displaymath}
\xi(i) =
\begin{cases}
\zeta(i) & \textrm{if \( i \ne i_{p} \) for all \( p \in \seg{1}{m} \)};\\
\alpha(i) & \textrm{if \( i \in \{i_{k}\}_{k=1}^{m} \setminus \{j_{0}\} \)};\\
\alpha(i_{k_{0}-1})^{-1} \ldots \alpha(i_{1})^{-1} \cdot \alpha(i_{m})^{-1} \ldots
\alpha(i_{k_{0}+1})^{-1} & \textrm{if \( i = j_{0} \).}
\end{cases}
\end{displaymath}
If \( m = 1 \), the above definition does not make sense, and in this case we set
\( \symmet{\alpha}{\zeta}{i_{1}}{i_{1}} = \zeta \).
\end{definition}
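For \( m \ge 2 \) the inverse factors in the \( j_{0} \) slot are arranged precisely so that the product of \( \xi \) over the list telescopes to the identity; the following computation (spelled out here for convenience, with \( k_{0} \) such that \( j_{0} = i_{k_{0}} \)) verifies this:

```latex
\[
  \xi(i_{1}) \cdots \xi(i_{m})
  = \alpha(i_{1}) \cdots \alpha(i_{k_{0}-1})
    \cdot \alpha(i_{k_{0}-1})^{-1} \cdots \alpha(i_{1})^{-1}
    \cdot \alpha(i_{m})^{-1} \cdots \alpha(i_{k_{0}+1})^{-1}
    \cdot \alpha(i_{k_{0}+1}) \cdots \alpha(i_{m})
  = e.
\]
```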
\begin{lemma}
\label{sec:metrics-amalgams-symmetrization-properties}
Let \( (\alpha,\zeta) \) be a slim \( f \)-pair with a slim evaluation tree \( T_{\zeta} \). Let
\( t \in T_{\zeta} \), and let \( \{i_{k}\}_{k=1}^{m} \subseteq R_{t} \) be a symmetrization
admissible list. Fix some \( j_{0} \in \{i_{k}\}_{k=1}^{m} \). If \( \xi \) is the
symmetrization \( \symmet{\alpha}{\zeta}{j_{0}}{\{i_{k}\}_{k=1}^{m}} \) of \( \zeta \), then
\( (\alpha,\xi) \) is a slim \( f \)-pair and \( T_{\zeta} \) is a slim evaluation tree for
\( \xi \) with the same assignment of intervals \( s \mapsto I_{s} \).
\end{lemma}
\begin{proof}
The only non-trivial part of the lemma is to show that \( \hat{\xi}[I_{t}] = e \). This follows
from the facts that \( \hat{\zeta}[I_{s}] = e \) for all \( s \prec t \) (because \( T_{\zeta} \)
is slim) and that \( \zeta(i) = e \) for all \( i \in R_{t} \setminus \{i_{1}, \ldots, i_{m}\} \)
(by the definition of the symmetrization admissible list).
\end{proof}
\medskip
\section{Groups with two-sided invariant metrics}
\label{sec:tsi-groups}
In this section we would like to recall some facts from the theory of groups with two-sided invariant metrics. The
reader can consult \cite{MR2455198} for the details.
\begin{definition}
\label{sec:tsi-groups-tsi-group}
A metric \( d \) on a group \( G \) is called \emph{two-sided invariant} if
\[ d(gf_{1},gf_{2}) = d(f_{1},f_{2}) = d(f_{1}g,f_{2}g) \]
for all \( g, f_{1}, f_{2} \in G \). A tsi group is a pair \( (G,d) \), where \( G \) is a group and \( d \) is a
two-sided invariant metric on \( G \); tsi stands for two-sided invariant.
\end{definition}
\begin{proposition}
\label{sec:tsi-groups-tsi-topological-group}
If \( (G,d) \) is a tsi group, then \( G \) is a topological group in the topology of the metric \( d \).
\end{proposition}
\begin{proposition}
\label{sec:tsi-groups-tsi-criterion}
Let \( d \) be a left invariant metric on the group \( G \).
\begin{enumerate}[(i)]
\item If for all \( g_{1}, g_{2}, f_{1}, f_{2} \in G \)
\[ d(g_{1}g_{2},f_{1}f_{2}) \le d(g_{1},f_{1}) + d(g_{2},f_{2}), \] then \( d \) is two-sided invariant;
\item If \( d \) is two-sided invariant, then for all \( g_{1}, \ldots, g_{k}, f_{1}, \ldots ,f_{k} \in G \)
\[ d(g_{1} \cdots g_{k}, f_{1} \cdots f_{k}) \le \sum_{i=1}^{k}d(g_{i},f_{i}). \]
\end{enumerate}
\end{proposition}
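For the reader's convenience, here is the case \( k = 2 \) of item (ii); it follows from the triangle inequality together with right and left invariance, and the general case is then a straightforward induction:

```latex
\[
  d(g_{1}g_{2}, f_{1}f_{2})
  \le d(g_{1}g_{2}, f_{1}g_{2}) + d(f_{1}g_{2}, f_{1}f_{2})
  = d(g_{1}, f_{1}) + d(g_{2}, f_{2}).
\]
```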
Because of Proposition \ref{sec:tsi-groups-tsi-topological-group} we choose to speak not about topological groups that
admit a compatible two-sided invariant metric, but rather about abstract groups with a two-sided invariant metric. Note
that the class of metrizable groups that admit a compatible two-sided invariant metric is very small, but it includes
two important subclasses: abelian and compact metrizable groups.
The class of tsi groups is closed under taking factors by closed normal subgroups, and, moreover, there is a canonical
metric on the factor.
\begin{proposition}
\label{sec:tsi-groups-factor-metric}
If \( (G,d) \) is a tsi group and \( N < G \) is a closed normal subgroup, then the function
\[ d_{0}(g_{1}N,g_{2}N) = \inf\{ d(g_{1}h_{1}, g_{2}h_{2}) : h_{1}, h_{2} \in N\} \]
is a two-sided invariant metric on the factor group \( G/N \) and the factor map \( \pi : G \to G/N \) is a \( 1
\)-\lipschitz surjection from \( (G,d) \) onto \( (G/N,d_{0}) \).
\end{proposition}
The metric \( d_{0} \) is called the \emph{factor metric}.
\begin{proposition}
\label{sec:tsi-groups-completion-tsi-group}
Let \( (G,d) \) be a tsi group. Let \( (\overline{G},d) \) be the completion of \( G \) as a metric space; the
extension of the metric \( d \) on \( G \) to the completion \( \overline{G} \) is again denoted by \( d \). There is
a unique extension of the group operation from \( G \) to \( \overline{G} \). This extension turns \( (\overline{G},d) \)
into a tsi group.
\end{proposition}
This proposition states that for tsi groups the metric and the group completions coincide.
\section{Graev metric groups}
\label{sec:graev-metric-groups}
Before going into the details of the construction of Graev metrics on free products we would like to recall the
definition of the Graev metrics on free groups. The reader may consult \cite{MR0038357}, \cite{MR2332614},
\cite{MR2455198} or \cite{MR1288299} for the details and proofs.
Classically one starts with a pointed metric space \( (X,e,d) \), where \( d \) is a metric and \( e \in X \) is a
distinguished point. Take another copy of this space, denoted by \( (X^{-1},d) \), whose elements are the formal
inverses of the elements of \( X \), with the agreement \( e^{-1} = e \) and \( X \cap X^{-1} = \{e\}\). Then
\( X^{-1} \) is also a metric space and we can amalgamate \( (X,d) \) and \( (X^{-1},d) \) over the point \( e \).
Denote the resulting space by \( (\env{X},e, d) \). Equivalently, \( \env{X} = X \cup X^{-1}, \) and for all
\( x, y \in X \)
\[ d(x^{-1},y^{-1}) = d(x,y), \quad d(x,y^{-1}) = d(x,e) + d(e,y). \]
With the set \( \env{X} \) we associate two objects: the set of \emph{nonempty} words \( \word{\env{X}} \) over the
alphabet \( \env{X} \) and the free group \( F(X) \) over the basis \( X \). There is a small issue with the second
object. We want \( e \) to be the identity element of this group rather than an element of the basis. In other words,
we formally have to write \( F(X \setminus\{e\}) \), but we adopt the convention that given a pointed metric space
\( (X,e,d) \), in \( F(X) \) the letter \( e \in X \) is interpreted as the identity element. The inverse operation in
\( F(X) \) naturally extends the inverse operation on \( \env{X} \). We have a natural map
\[ \widehat{ }\ : \word{\env{X}} \to F(X), \]
where for \( u \in \word{\env{X}} \) the image \( \hat{u} \) is just the reduced form of \( u \). For a word
\( u \in \word{\env{X}} \) its length is denoted by \( |u| \) and its \( i^{th} \) letter is denoted by \( u(i) \). For
two words \( u, v \in \word{\env{X}} \) of the same length \( n \) we define a function
\[ \rho(u,v) = \sum_{i=1}^{n} d(u(i),v(i)). \] And finally, we define a metric \( \dist \) by
\[ \dist(f,g) = \inf\{\rho(u,v) : |u| = |v| \textrm{ and } \hat{u} = f, \hat{v} = g\}. \]
A theorem of Graev \cite{MR0038357} states that \( \dist \) is indeed a two-sided invariant metric on \( F(X) \), and
moreover, it extends the metric \( d \) on the amalgam \( \env{X} \). It is straightforward to see that \( \dist \) is
a two-sided invariant \emph{pseudo}-metric; the hard part of Graev's theorem is to show that it assigns a
non-zero distance to distinct elements. Graev showed this by establishing certain restrictions on the words \( u \) and
\( v \) over which the infimum in the definition of \( \dist \) is attained.
The effective formula for the Graev metric was first suggested by O. Sipacheva and V. Uspenskij in \cite{MR913066} and
later, but independently, a similar result was obtained in \cite{MR2332614} by L. Ding and S. Gao.
In our presentation we follow \cite{MR2332614}.
\begin{definition}
\label{sec:graev-metric-groups-match}
Let \( I \) be an interval of natural numbers. A bijection \( \theta : I \to I \) is called a \emph{match} if
\begin{enumerate}[(i)]
\item \( \theta \circ \theta = \id \);
\item there are no \( i, j \in I \) such that \( i < j < \theta(i) < \theta(j) \).
\end{enumerate}
\end{definition}
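To illustrate the definition with a small example of ours: on \( I = \seg{1}{4} \) the involution pairing \( 1 \) with \( 4 \) and \( 2 \) with \( 3 \) is a match, since the two pairs are nested, while the involution pairing \( 1 \) with \( 3 \) and \( 2 \) with \( 4 \) is not, since the pairs cross.

```latex
% \theta(1) = 4, \theta(2) = 3: a match, the pairs {1,4} and {2,3} are nested.
% \theta'(1) = 3, \theta'(2) = 4: not a match, since
% 1 < 2 < \theta'(1) = 3 < \theta'(2) = 4 violates condition (ii).
\[
  \theta = (1\,4)(2\,3) \ \textrm{is a match}, \qquad
  \theta' = (1\,3)(2\,4) \ \textrm{is not a match}.
\]
```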
\begin{definition}
\label{sec:graev-metric-groups-match-word}
Let \( w \in \word{\env{X}} \) be a word of length \( n \), and let \( \theta \) be a match on \( \seg{1}{n} \). The
word \( w^{\theta} \) of length \( n \) is defined by
\begin{displaymath}
w^{\theta}(i) =
\begin{cases}
e & \textrm{if \( \theta(i) = i \)}; \\
w(i) & \textrm{if \( \theta(i) > i \)};\\
w \big( \theta(i) \big)^{-1} & \textrm{if \( \theta(i) < i \)}.
\end{cases}
\end{displaymath}
\end{definition}
It is not hard to check that for any word \( w \) and any match \( \theta \) on \( \seg{1}{|w|} \) the word
\( w^{\theta} \) is trivial, i.e., \( \widehat{w^{\theta}} = e \).
\begin{theorem}[Sipacheva--Uspenskij, Ding--Gao]
\label{sec:graev-metric-groups-graev-metric-computation}
If \( f \in F(X)\) and \( w \in \word{\env{X}} \) is the reduced form of \( f \), then
\[ \dist(f,e) = \min\big\{\rho\big(w,w^{\theta}\big) : \textrm{\( \theta \) is a match on \( \seg{1}{|w|} \)}
\big\}. \]
\end{theorem}
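As a sanity check (a worked example of ours), take distinct \( x, y \in X \setminus \{e\} \) and \( f = xy^{-1} \) with reduced form \( w = x \concat y^{-1} \). There are exactly two matches on \( \seg{1}{2} \), and the theorem gives

```latex
% Identity match: w^{\theta}(i) = e, so
%   \rho(w, w^{\theta}) = d(x,e) + d(y^{-1},e) = d(x,e) + d(e,y).
% Transposition \theta(1) = 2: w^{\theta} = x \concat x^{-1}, so
%   \rho(w, w^{\theta}) = d(y^{-1}, x^{-1}) = d(x,y).
\[
  \dist(xy^{-1}, e) = \min\{\, d(x,y),\ d(x,e) + d(e,y) \,\} = d(x,y),
\]
% since d(x,y) \le d(x,e) + d(e,y) by the triangle inequality.
```

in accordance with the fact that \( \dist \) extends \( d \).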
Here are some of the properties of the Graev metrics. They are easy consequences of the definition of the Graev metric
and Theorem \ref{sec:graev-metric-groups-graev-metric-computation}.
\begin{proposition}
\label{sec:graev-metric-groups-properties}
Let \( (X,e,d) \) be a pointed metric space, and let \( \dist \) be the Graev metric on \( F(X) \).
\begin{enumerate}[(i)]
\item\label{prop:graev-metric-group-properties-item:extending-lipschitz} If \( (T,d_{T}) \) is a tsi group and
\( \phi : X \to T \) is a \( K \)-\lipschitz map such that \( \phi(e) = e \), then this map extends uniquely to a
\( K \)-\lipschitz homomorphism \( \phi : F(X) \to T \).
\item\label{prop:graev-metric-group-properties-item:induced-metric} If \( Y \subseteq X \) with \( e \in Y \) is a
pointed subspace of \( X \) endowed with the induced metric, then the natural embedding \( i : Y \to X \) extends uniquely to an
isometric embedding
\[ i : F(Y) \to F(X). \]
Moreover, if \( Y \) is closed in \( X \), then \( F(Y) \) is closed in \( F(X) \).
\item\label{prop:graev-metric-group-properties-item:maximality} If \( \delta \) is any tsi metric on \( F(X) \) that
extends \( d \), i.e., if \( d(x_{1},x_{2}) = \delta(x_{1},x_{2}) \) for all \( x_{1}, x_{2} \in X \), then
\( \delta(u_{1}, u_{2}) \le \dist(u_{1},u_{2}) \) for all \( u_{1}, u_{2} \in F(X) \). In other words, \( \dist \)
is maximal among all the tsi metrics that extend \( d \).
\item If \( X \ne \{e\} \), then
\[ \chi(F(X)) = \max \{\aleph_{0}, \chi(X)\}. \] In particular, if \( X \) is separable, then so is \( F(X) \).
\end{enumerate}
\end{proposition}
\subsection{Free groups over metric groups}
\label{sec:free-groups-over}
In this subsection we prove a technical result that will be used later in Section \ref{sec:prop-graev-metr}.
Suppose \( X \) is itself a group and \( e \in X \) is the identity element of that group. Let \( \circ \) denote the
multiplication operation on \( X \), and let \( {x}^{\dagger} \) denote the group inverse of an element \( x \in X \).
Suppose also that \( d \) is a two-sided invariant metric on \( X \). For \( u \in \word{\env{X}} \) define a word
\( u^{\sharp} \) by
\begin{displaymath}
u^{\sharp}(i) =
\begin{cases}
u(i) & \textrm{if \( u(i) \in X \)};\\
(u(i)^{-1})^{\dagger} & \textrm{if \( u(i) \in X^{-1} \)}.
\end{cases}
\end{displaymath}
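In words, \( u^{\sharp} \) replaces every formal inverse by the corresponding group inverse, so that \( u^{\sharp} \in \word{X} \). A small example of ours:

```latex
% For x, y, z \in X and u = x \concat y^{-1} \concat z we get
\[
  u^{\sharp} = x \concat y^{\dagger} \concat z,
\]
% since u(2) = y^{-1} \in X^{-1} and (u(2)^{-1})^{\dagger} = y^{\dagger}.
```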
For \( h \in F(X) \) let \( h^{\sharp} = \widehat{w^{\sharp}} \), where \( w \) is the reduced form of \( h \).
\begin{proposition}
\label{sec:free-groups-over-1}
Let \( f \in F(X) \), and let \( w \) be the reduced form of \( f \). If \( w \in \word{X} \), then for any
\( h \in F(X) \)
\[ \dist(fh,e) \ge \dist(fh^{\sharp},e). \]
\end{proposition}
\begin{proof}
Suppose \( w \in \word{X} \) and fix an \( h \in F(X) \). Let \( u \in \word{\env{X}} \) be the reduced form of
\( h \). It is enough to show that
\[ \rho \Big( w \concat u, \big( w \concat u \big)^{\theta} \Big) \ge \rho \Big( w \concat u^{\sharp}, \big( w \concat
u^{\sharp} \big)^{\theta} \Big) \]
for any match \( \theta \) on \( \seg{1}{|w|+|u|} \). This follows from the following inequalities:
\begin{itemize}
\item if \( x, y \in X^{-1} \), then by the two-sided invariance of the metric \( d \)
\begin{displaymath}
\begin{aligned}
d(x,y) = d(x^{-1},y^{-1}) = d\big((x^{-1})^{\dagger},(y^{-1})^{\dagger}\big);
\end{aligned}
\end{displaymath}
\item if \( x \in X^{-1} \) and \( y \in X \), then by the two-sided invariance of the metric \( d \)
\begin{displaymath}
\begin{aligned}
d(x,y) =&\ d(x,e) + d(e,y) = d(x^{-1},e) + d(e,y) = \\
&\ d\big((x^{-1})^{\dagger},e \big) + d(e,y) \ge d\big((x^{-1})^{\dagger},y \big).
\end{aligned}
\end{displaymath}
\end{itemize}
Thus \( \dist(fh,e) \ge \dist(fh^{\sharp},e) \).
\end{proof}
\section{Metrics on amalgams}
\label{sec:metrics-amalgams}
\subsection{Basic set up}
\label{sec:basic-set-up}
Let \( (G_{\lambda},d_{\lambda}) \), \( \lambda \in \Lambda \), be a family of \tsi groups, let \( A < G_{\lambda}\) be a common closed subgroup with
\( G_{\lambda_{1}} \cap G_{\lambda_{2}} = A \) for all distinct \( \lambda_{1}, \lambda_{2} \in \Lambda \), and assume additionally that the metrics \( \{d_{\lambda}\} \) agree on
\( A \):
\[ \quad d_{\lambda_{1}}(a_{1},a_{2}) = d_{\lambda_{2}}(a_{1},a_{2}) \quad \textrm{for all \( a_{1}, a_{2} \in A \) and
all \( \lambda_{1}, \lambda_{2} \in \Lambda \)}. \]
Our main goal is to define a metric on the free product of \( G_{\lambda} \) with amalgamation over \( A \) that extends
all the metrics \( d_{\lambda} \). It will be an analog of the Graev metrics on free groups.
First of all, let \( d \) denote the amalgam metric on \( G = \bigcup_{\lambda} G_{\lambda} \) given by
\begin{displaymath}
d(f_{1},f_{2}) =
\begin{cases}
d_{\lambda}(f_{1},f_{2}) & \textrm{if \( f_{1}, f_{2} \in G_{\lambda} \) for some \( \lambda \in \Lambda \);}\\
\inf\limits_{a \in A} \big\{ d_{\lambda_{1}}(f_{1},a) + d_{\lambda_{2}}(a,f_{2}) \big\} & \textrm{if
\( f_{1} \in G_{\lambda_{1}} \), \( f_{2} \in G_{\lambda_{2}} \) for \( \lambda_{1} \ne \lambda_{2} \)}.
\end{cases}
\end{displaymath}
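To see the two cases of this definition in action, here is a small numerical sketch. It is an illustration only: the toy choice of \( G_{1} = \mathbb{Z}/4\mathbb{Z} \), \( G_{2} = \mathbb{Z}/6\mathbb{Z} \) with their word-length metrics and the trivial amalgamated subgroup \( A = \{e\} \) is not taken from the text, and with \( A \) trivial the infimum runs over the single element \( e \).

```python
# Illustrative sketch only (toy example, not from the text): the amalgam
# metric d on G = G1 ∪ G2 for G1 = Z/4Z, G2 = Z/6Z with their two-sided
# invariant word-length metrics, amalgamated over the trivial subgroup A = {0}.

def cyclic_metric(n):
    # bi-invariant metric on Z/nZ: shortest distance around the cycle
    return lambda a, b: min((a - b) % n, (b - a) % n)

d1, d2 = cyclic_metric(4), cyclic_metric(6)
A = [0]  # the common closed subgroup (trivial here)

def amalgam_d(x, i, y, j):
    """x lies in factor number i, y in factor number j (each 1 or 2)."""
    dx = d1 if i == 1 else d2
    dy = d1 if j == 1 else d2
    if i == j:
        return dx(x, y)          # same factor: use that factor's metric
    # different factors: pass through the amalgamated subgroup A
    return min(dx(x, a) + dy(a, y) for a in A)

print(amalgam_d(3, 1, 2, 2))  # d1(3,0) + d2(0,2) = 1 + 2 = 3
```

With a larger subgroup \( A \) the last line would genuinely minimize over several intermediate points; here it simply adds the two distances to the identity.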
If \( \alpha_{1} \) and \( \alpha_{2} \) are two words in \( \words \) of the same length \( n \), then the value
\( \rho(\alpha_{1}, \alpha_{2}) \) is defined by
\[ \rho(\alpha_{1},\alpha_{2}) = \sum_{i=1}^{n} d\big(\alpha_{1}(i),\alpha_{2}(i)\big). \]
Finally, for elements \( f_{1}, f_{2} \in \amalgam \) the Graev metric on the free product with amalgamation
\( \amalgam \) is defined as
\[ \dist(f_{1},f_{2}) = \inf \big\{\rho(\alpha_{1},\alpha_{2}) : |\alpha_{1}| = |\alpha_{2}| \textrm{ and } \hat{\alpha}_{i} =
f_{i}\big\}. \]
\begin{lemma}
\label{sec:metrics-amalgams-tsi-pseudo-metric}
\( \dist \) is a \tsi pseudo-metric.
\end{lemma}
\begin{proof}
It is obvious that \( \dist \) is non-negative, symmetric and attains value zero on the diagonal. We show that it is
two-sided invariant. Let \( f_{1}, f_{2}, h \in \amalgam \) be given. Let \( \gamma \in \words \) be any word such
that \( \hat{\gamma} = h \). For any \( \alpha_{1}, \alpha_{2} \in \words \) that have the same length and are such
that \( \hat{\alpha}_{i} = f_{i} \) we get
\[ \rho(\alpha_{1}, \alpha_{2}) = \rho(\gamma \concat \alpha_{1}, \gamma \concat \alpha_{2}), \]
and therefore \( \dist(hf_{1}, hf_{2}) \le \dist(f_{1},f_{2}) \). But similarly, if \( \beta_{1}, \beta_{2} \) are of
the same length and \( \hat{\beta}_{i} = hf_{i} \), then
\[ \rho(\beta_{1}, \beta_{2}) = \rho(\gamma^{-1} {}\concat \beta_{1}, \gamma^{-1} {}\concat \beta_{2}), \]
where \( \gamma^{-1} = \gamma(|\gamma|)^{-1} \concat \ldots \concat \gamma(1)^{-1} \). Hence
\( \dist(f_{1}, f_{2}) = \dist(hf_{1},hf_{2}) \), i.e., \( \dist \) is left invariant. Right invariance is shown
similarly.
We also need to check the triangle inequality. By the two-sided invariance, the triangle inequality is equivalent to
\[ \dist(f_{1}f_{2},e) \le \dist(f_{1},e) + \dist(f_{2},e) \quad \textrm{for all \( f_{1}, f_{2} \in \amalgam \)}. \]
The latter follows immediately from the observation that if \( \hat{\alpha}_{i} = f_{i} \),
\( |\alpha_{i}| = |\zeta_{i}| \), and \( \hat{\zeta}_{1} = e = \hat{\zeta}_{2} \), then
\( \widehat{\alpha_{1} {}\concat \alpha_{2}} = f_{1}f_{2} \), \( \widehat{\zeta_{1} \concat \zeta_{2}} = e \), and
also
\[ \rho(\alpha_{1} \concat \alpha_{2}, \zeta_{1} \concat \zeta_{2}) = \rho(\alpha_{1}, \zeta_{1}) + \rho(\alpha_{2},
\zeta_{2}). \qedhere \]
\end{proof}
We will show eventually that, in fact, \( \dist \) is not only a pseudo-metric, but a genuine metric. This will take us
a while though.
It will be convenient for us to talk about norms rather than about metrics. For this we set \( \norm(f) = \dist(f,e)
\). Then \( \norm \) is a \tsi pseudo-norm on \( \amalgam \) (again, it will turn out to be a norm). Note that
\( \dist \) is a metric if and only if \( \norm \) is a norm, i.e., if and only if \( \norm(f) = 0 \) implies
\( f = e \).
\subsection{Reductions}
\label{sec:reductions}
We now begin a series of reductions that will gradually simplify the structure of the pairs \( (\alpha, \zeta) \) in
the definition of the pseudo-norm \( \norm \).
Using the notion of an \( f \)-pair the definition of \( \norm \) can be rewritten as
\[ \norm(f) = \inf \big\{\rho(\alpha,\zeta) : \textrm{\( (\alpha,\zeta) \) is an \( f \)-pair}\}. \]
\begin{lemma}
\label{sec:metrics-amalgams-congruent-reduction}
For all \( f \in \amalgam \)
\[ \norm(f) = \inf \big\{\rho(\alpha,\zeta) : (\alpha,\zeta) \textrm{ is a \multipliable \( f \)-pair}\}. \]
\end{lemma}
\begin{proof}
Fix an \( f \in \amalgam \). We need to show that for any \( f \)-pair \( (\alpha,\zeta) \) and for any
\( \epsilon > 0 \) there is a \multipliable \( f \)-pair \( (\beta, \xi) \) such that
\[ \rho(\beta,\xi) \le \rho(\alpha,\zeta) + \epsilon. \]
Take an \( f \)-pair \( (\alpha,\zeta) \) and fix an \( \epsilon > 0 \). Let \( n \) be the length of \( \alpha \).
For an \( i \in \seg{1}{n} \) we define a pair of words \( \beta_{i}, \xi_{i} \) as follows: if
\( \alpha(i) \cong \zeta(i) \), then \( \beta_{i} = \alpha(i) \), \( \xi_{i} = \zeta(i) \); if
\( \alpha(i) \not \cong \zeta(i) \), then \( \beta_{i} = \alpha(i) \concat e \) and
\( \xi_{i} = a_{i} \concat a_{i}^{-1}\zeta(i) \), where \( a_{i} \in A \) is any element such that
\[ d\big(\alpha(i),\zeta(i)\big) + \frac{\epsilon}{n} \ge d\big(\alpha(i),a_{i}\big) + d\big(a_{i},\zeta(i)\big), \]
which exists by the definition of the amalgam metric \( d \). Then
\[ \rho(\beta_{i},\xi_{i}) \le \rho\big(\alpha(i),\zeta(i)\big) + \frac{\epsilon}{n} \quad \textrm{for all \( i
\)}. \]
Set \( \beta = \beta_{1} \concat \ldots \concat \beta_{n} \), \( \xi = \xi_{1} \concat \ldots \concat \xi_{n} \). It
is now easy to see that \( (\beta,\xi) \) is a \multipliable \( f \)-pair and that indeed
\[ \rho(\beta,\xi) \le \rho(\alpha,\zeta) + \epsilon. \qedhere \]
\end{proof}
The next lemma follows immediately from the two-sided invariance of the metrics \( d_{\lambda} \).
\begin{lemma}
\label{sec:metrics-amalgams-transfer-isometry}
Let \( (\alpha,\zeta) \) be a \multipliable pair of length \( n \), and let \( \{I_{k}\}_{k=1}^{m} \) be a right [left]
transfer admissible sequence of intervals. If \( (\beta,\xi) \) is the right [left] \( \{I_{k}\}_{k=1}^{m} \)-transfer
of the pair \( (\alpha, \zeta) \), then
\[ \rho(\alpha,\zeta) = \rho(\beta,\xi). \]
\end{lemma}
\begin{lemma}
\label{sec:metrics-amalgams-slim-reduction}
Let \( (\alpha,\zeta) \) be a \multipliable \( f \)-pair, and let \( T_{\zeta} \) be an evaluation tree for \( \zeta \).
There is a slim \( f \)-pair \( (\beta,\xi) \) such that
\begin{enumerate}[(i)]
\item\label{lem:slim-reduction:item-same-length} \( |\alpha| = |\beta| \);
\item\label{lem:slim-reduction:item-same-rho} \( \rho(\alpha,\zeta) = \rho(\beta,\xi) \);
\item\label{lem:slim-reduction:item-same-eval-tree} \( T_{\zeta} \) is a slim evaluation tree for \( \xi \);
\item\label{lem:slim-reduction:item-still-balanced} if \( T_{\zeta} \) is a balanced evaluation tree for \( \zeta \),
then it is also balanced as an evaluation tree for \( \xi \).
\end{enumerate}
\end{lemma}
\begin{proof}
Let \( (\alpha,\zeta) \) be a \multipliable \( f \)-pair, let \( T_{\zeta} \) be an evaluation tree for \( \zeta \), and
let \( H_{T_{\zeta}} \) denote the height of the tree \( T_{\zeta} \). We do an inductive construction of words
\( (\beta_{k},\xi_{k}) \) for \( k = 0, \ldots, H_{T_{\zeta}} \) and claim that
\( (\beta_{H_{T_{\zeta}}},\xi_{H_{T_{\zeta}}}) \) is as desired. We start by setting
\( (\beta_{0},\xi_{0}) = (\alpha,\zeta) \).
Suppose the pair \( (\beta_{k},\xi_{k}) \) has been constructed. Let \( t_{1}, \ldots, t_{m} \in T \) be all the
nodes at the level \( H_{T_{\zeta}}-k \) listed in increasing order: \( M(I_{t_{i}}) < m(I_{t_{i+1}}) \). We
define a relation \( \sim \) on \( \seg{1}{m} \) by setting \( k_{1} \sim k_{2} \) if for any
\( i \in \seg{m(I_{t_{k_{1}}} \cup I_{t_{k_{2}}})}{M(I_{t_{k_{1}}} \cup I_{t_{k_{2}}})} \) there is
\( j \in \seg{1}{m} \) such that \( i \in I_{t_{j}} \). It is straightforward to check that \( \sim \) is an
equivalence relation on \( \seg{1}{m} \).
Note that any \( \sim \)-equivalence class is a sub-interval of \( \seg{1}{m} \). Let \( J_{1}, \ldots, J_{p} \) be
the increasing list of all the distinct equivalence classes, \( J_{1} < J_{2} < \ldots < J_{p} \).
\setcounter{case}{0}
\begin{case}
\label{sec:reductions-case-1}
\( p \ge 2 \). Set \( (\gamma,\omega) \) to be the right \( \{I_{t_{r}}\}_{r=1}^{M(J_{p-1})} \)-transfer of
\( (\beta_{k}, \xi_{k}) \), and define \( (\beta_{k+1}, \xi_{k+1}) \) to be the left
\( \{I_{t_{r}}\}_{r=m(J_{p})}^{m} \)-transfer of \( (\gamma,\omega) \).
\end{case}
\begin{case}
\label{sec:reductions-case-2}
\( p = 1 \). Suppose there is only one equivalence class. We have a trichotomy:
\begin{itemize}
\item if \( M(I_{M(J_{1})}) < n \), then set
\[ (\beta_{k+1},\xi_{k+1}) = \textrm{the right } \{I_{t_{r}}\}_{r=1}^{m} \textrm{-transfer of }
(\beta_{k},\xi_{k}); \]
\item if \( M(I_{M(J_{1})}) = n \), but \( m(I_{m(J_{1})}) > 1 \), then set
\[ (\beta_{k+1},\xi_{k+1}) = \textrm{the left } \{I_{t_{r}}\}_{r=1}^{m}\textrm{-transfer of }
(\beta_{k},\xi_{k}); \]
\item if \(m(I_{m(J_{1})}) = 1 \) and \( M(I_{M(J_{1})}) = n \), then set
\[ (\beta_{k+1},\xi_{k+1}) = \textrm{the right } \{I_{t_{r}}\}_{r=1}^{m-1} \textrm{-transfer of }
(\beta_{k},\xi_{k}). \]
Notice the difference from the first case: the last element of the transfer sequence is \( r = m-1 \), not \( m
\).
\end{itemize}
\end{case}
Denote \( (\beta_{H_{T_{\zeta}}}, \xi_{H_{T_{\zeta}}}) \) simply by \( (\beta,\xi) \). We claim that this pair
satisfies all the requirements. Since \( (\beta,\xi) \) is obtained by the sequence of transfers, items
\eqref{lem:slim-reduction:item-same-length} and \eqref{lem:slim-reduction:item-still-balanced} follow from Lemma
\ref{sec:triv-words-amalg-transfer-preserves}. Item \eqref{lem:slim-reduction:item-same-rho} is a consequence of
Lemma \ref{sec:metrics-amalgams-transfer-isometry}.
It remains to check that \( \hat{\xi}[I_{t}] = e \) for all \( t \in T_{\zeta} \). By item
\eqref{lem:transfer-preserves-item:triviality-of-transfer-intervals} of Lemma
\ref{sec:triv-words-amalg-transfer-preserves} \( \hat{\xi}_{k+1}[I_{t}] = e \) for all \( t \in T_{\zeta} \) such that
\( H_{T_{\zeta}}(t) = H_{T_{\zeta}} - k \). Therefore it is enough to show that
\( \hat{\xi}_{k+1}[I_{t}] = \hat{\xi}_{k}[I_{t}] \) for all \( t \in T_{\zeta} \) such that
\( H_{T_{\zeta}}(t) > H_{T_{\zeta}} - k \). This follows from item
\eqref{lem:transfer-preserves-item:indices-under-change} of Lemma \ref{sec:triv-words-amalg-transfer-preserves} and
item \eqref{lem:structure-item:strict-inclusion} of the definition of the evaluation tree.
\end{proof}
\begin{lemma}
\label{sec:metrics-amalgams-simple-reduction}
Let \( (\alpha,\zeta) \) be a slim \( f \)-pair, and let \( T_{\zeta} \) be a slim balanced evaluation tree for
\( \zeta \). There is a simple \( f \)-pair \( (\beta,\xi) \) such that
\begin{enumerate}[(i)]
\item\label{lem:simple-reduction:item-same-length} \( |\alpha| = |\beta| \);
\item\label{lem:simple-reduction:item-same-rho} \( \rho(\alpha,\zeta) = \rho(\beta,\xi) \);
\item\label{lem:simple-reduction:item-same-eval-tree} \( T_{\zeta} \) is a slim balanced evaluation tree for \( \xi
\).
\end{enumerate}
\end{lemma}
\begin{proof}
Let \( (\alpha,\zeta) \) be a slim \( f \)-pair of length \( n \), and let \( T_{\zeta} \) be a slim evaluation tree
for \( \zeta \). The sets \( \{R_{t}\}_{t \in T_{\zeta}} \) form a partition of \( \seg{1}{n} \). For
\( t \in T_{\zeta} \) let \( J^{t}_{1}, \ldots, J^{t}_{q_{t}} \) be the maximal sub-intervals of \( R_{t} \). Let
\( \{i_{k}\}_{k=1}^{m} \) be the list of external letters in \( \zeta \). Set
\[ F(J^{t}_{i}) = \{i_{k}\} \cap J^{t}_{i}. \]
Assume first that \( F(J_{i}^{t}) \ne \emptyset \) for all \( t \in T_{\zeta} \) and all \( i \in \seg{1}{q_{t}} \).
Note that by item \eqref{lem:structure-item:non-trivial-interior} of the definition of the balanced evaluation tree
this is the case whenever \( T_{\zeta} \ne \{\emptyset\} \). Set
\[ U = \Big( \bigcup_{t \in T_{\zeta}} \bigcup_{i = 1}^{q_{t}} \seg{m(J^{t}_{i})}{M(F(J^{t}_{i}))} \Big) \setminus
\{i_{k}\}_{k=1}^{m}, \]
\[ V = \Big( \bigcup_{t \in T_{\zeta}} \bigcup_{i=1}^{q_{t}} \seg{M(F(J^{t}_{i}))}{M(J^{t}_{i})} \Big) \setminus
\{i_{k}\}_{k=1}^{m}. \]
Now write \( U = \{u_{k}\}_{k=1}^{p_{u}} \), \( V = \{v_{k}\}_{k=1}^{p_{v}} \) as increasing sequences. Set
\( (\gamma,\omega) \) to be the right \( \{u_{k}\} \)-transfer of the pair \( (\alpha,\zeta) \) and \( (\beta,\xi) \)
to be the left \( \{v_{k}\} \)-transfer of \( (\gamma,\omega) \) (we view \( u_{k} \)'s and \( v_{k} \)'s as intervals
that consist of a single point). We claim that the pair \( (\beta,\xi) \) satisfies all the assumptions of the lemma.
Item \eqref{lem:simple-reduction:item-same-length} follows from item \eqref{lem:transfer-preserves-item:same-length}
of Lemma \ref{sec:triv-words-amalg-transfer-preserves}. The latter lemma also implies that \( T_{\zeta} \) is a
balanced evaluation tree for \( \xi \). Item \eqref{lem:simple-reduction:item-same-rho} follows from Lemma
\ref{sec:metrics-amalgams-transfer-isometry}.
\eqref{lem:simple-reduction:item-same-eval-tree}. We show that \( T_{\zeta} \) is a slim evaluation tree for
\( \xi \). Let \( t \in T_{\zeta} \). Since \( T_{\zeta} \) was slim for \( \zeta \), we have
\( \hat{\zeta}[I_{t}] = e \). Note that if \( u_{k} \in U \cap R_{t} \), then \( u_{k}+1 \in R_{t}\) (by the
construction of \( U \)). Similarly for \( v_{k} \in V \), \( v_{k} \in R_{t} \) implies \( v_{k}-1 \in R_{t}\). It
now follows from item \eqref{lem:transfer-preserves-item:indices-under-change} of Lemma
\ref{sec:triv-words-amalg-transfer-preserves} that \( \hat{\xi}[I_{t}] = \hat{\zeta}[I_{t}] = e \) and therefore
\( T_{\zeta} \) is slim.
Finally, the simplicity of \( (\beta,\xi) \) is a consequence of items
\eqref{lem:transfer-preserves-item:indices-under-change} and
\eqref{lem:transfer-preserves-item:triviality-of-transfer-intervals} of Lemma
\ref{sec:triv-words-amalg-transfer-preserves}.
We have thus proved the lemma under the assumption that \( F(J_{i}^{t}) \ne \emptyset \) for all \( t \in T_{\zeta} \)
and all \( i \in \seg{1}{q_{t}} \). Suppose now that this assumption fails. By item
\eqref{lem:structure-item:non-trivial-interior} of the definition of the balanced evaluation tree we get
\( T_{\zeta} = \{\emptyset\} \) and \( F(I_{\emptyset}) = \emptyset \). Therefore \( \zeta(i) \in A \) for all
\( i \). Set \( (\beta,\xi) \) to be the right \( (i)_{i=1}^{n-1} \)-transfer of \( (\alpha,\zeta) \). Then
\( \xi = e \concat \ldots \concat e \) and obviously \( (\beta,\xi) \) is a simple \( f \)-pair of the same length and
\( T_{\zeta} = \{\emptyset\} \) is a slim balanced evaluation tree for \( \xi \).
\end{proof}
\begin{lemma}
\label{sec:metrics-amalgams-symmet-rho-decreases}
Let \( (\alpha,\zeta) \) be a slim \( f \)-pair of length \( n \) with a slim evaluation tree \( T_{\zeta} \). Let
\( t \in T_{\zeta}\) be given and let \( \{i_{k}\}_{k=1}^{m} \subseteq R_{t} \) be a symmetrization admissible
list. If \( \xi = \symmet{\alpha}{\zeta}{i'}{\{i_{k}\}} \) for some \(i' \in \{i_{k}\}_{k=1}^{m} \), then
\[ \rho(\alpha,\zeta) \ge \rho(\alpha,\xi). \]
\end{lemma}
\begin{proof}
Since \( \zeta \) is slim, we have
\[ \zeta(i_{1}) \cdot \zeta(i_{2}) \cdots \zeta(i_{m}) = e, \]
and by Proposition \ref{sec:tsi-groups-tsi-criterion} we get
\[ d(\alpha(i_{1}) \cdots \alpha(i_{m}), e) = d(\alpha(i_{1}) \cdots \alpha(i_{m}), \zeta(i_{1}) \cdots \zeta(i_{m}))
\le \sum_{j=1}^{m} d(\alpha(i_{j}),\zeta(i_{j})). \] If \( i' = i_{k} \), then
\begin{displaymath}
\begin{aligned}
& \rho(\alpha,\zeta) - \rho(\alpha,\xi) = \\
& \sum_{j=1}^{m} d\big(\alpha(i_{j}),\zeta(i_{j})\big) - d\big(\alpha(i_{k}),
\alpha(i_{k-1})^{-1} \cdots \alpha(i_{1})^{-1} \cdot \alpha(i_{m})^{-1} \cdots \alpha(i_{k+1})^{-1}\big) = \\
& \sum_{j=1}^{m} d\big(\alpha(i_{j}),\zeta(i_{j})\big) - d\big(\alpha(i_{1}) \cdots \alpha(i_{m}), e\big) \ge 0.
\end{aligned}
\end{displaymath}
This proves the lemma.
\end{proof}
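The inequality from Proposition \ref{sec:tsi-groups-tsi-criterion} that drives this proof, namely
\( d(x_{1} \cdots x_{m}, e) \le \sum_{j} d(x_{j}, \zeta_{j}) \) whenever \( \zeta_{1} \cdots \zeta_{m} = e \), can be
checked by brute force on a small \tsi group. The following sketch is an illustration only: the group \( S_{3} \) with
the moved-points metric (which is two-sided invariant) is our own toy choice, and the check runs over all words of
length three.

```python
# Brute-force sanity check (toy example, not from the text): on S3 with the
# two-sided invariant metric d(a,b) = number of points moved by a*b^{-1},
# verify that z1*z2*z3 = e implies
#   d(x1*x2*x3, e) <= d(x1,z1) + d(x2,z2) + d(x3,z3).
from itertools import permutations, product

S3 = list(permutations(range(3)))   # elements of S3 as tuples
e = (0, 1, 2)

def mul(a, b):
    return tuple(a[b[i]] for i in range(3))   # composition a ∘ b

def inv(a):
    r = [0, 0, 0]
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def d(a, b):
    # tsi metric: number of non-fixed points of a*b^{-1}
    c = mul(a, inv(b))
    return sum(1 for i in range(3) if c[i] != i)

ok = True
for x1, x2, x3, z1, z2 in product(S3, repeat=5):
    z3 = inv(mul(z1, z2))           # force z1*z2*z3 = e
    lhs = d(mul(mul(x1, x2), x3), e)
    rhs = d(x1, z1) + d(x2, z2) + d(x3, z3)
    ok = ok and lhs <= rhs
print(ok)  # True
```

The check succeeds because \( d(x_{1}x_{2}x_{3}, e) = d(x_{1}x_{2}x_{3}, z_{1}z_{2}z_{3}) \) and two-sided invariance
lets one swap the letters \( x_{j} \) for \( z_{j} \) one at a time, paying \( d(x_{j}, z_{j}) \) per swap.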
\begin{definition}
\label{sec:metrics-amalgams-simple reduced}
A simple \( f \)-pair \( (\alpha,\zeta) \) is called \emph{simple reduced} if \( \alpha \) is a reduced form of
\( f \).
\end{definition}
\begin{lemma}
\label{sec:metrics-amalgams-simple reduced-reduction}
For any \( f \in \amalgam \)
\[ \norm(f) = \inf\{\rho(\alpha,\zeta) : \textrm{\( (\alpha,\zeta) \) is a simple reduced \( f \)-pair}\}. \]
\end{lemma}
\begin{proof}
In view of Lemmas \ref{sec:metrics-amalgams-congruent-reduction}, \ref{sec:metrics-amalgams-slim-reduction}, and
\ref{sec:metrics-amalgams-simple-reduction}, it is enough to show that for any simple \( f \)-pair
\( (\alpha,\zeta) \) there is a simple reduced \( f \)-pair \( (\beta, \xi) \) such that
\( \rho(\alpha,\zeta) \ge \rho(\beta,\xi) \). Let \( (\alpha,\zeta) \) be a simple \( f \)-pair. Let
\( (\gamma, \omega) \) be a simple \( f \)-pair of the smallest length among all simple \( f \)-pairs
\( (\gamma_{0},\omega_{0}) \) such that
\[ \rho(\alpha,\zeta) \ge \rho(\gamma_{0},\omega_{0}). \]
It is enough to show that \( \gamma \) is a reduced form of \( f \). If \( |\gamma| = 1 \) this is obvious. Suppose
\( |\gamma| = n \ge 2 \).
\textbf{Claim 1.} There is no \( j \in \seg{1}{n} \) such that \( \gamma(j) \in A \). Suppose this is false and there
is such a \( j \in \seg{1}{n} \).
\setcounter{case}{0}
\begin{case}
\label{sec:reductions-2}
\( \omega(j) \in A \). (In fact, since \( (\gamma,\omega) \) is simple, \( \omega(j) \in A \) implies
\( \omega(j) = e \), but this is not used here.) Suppose \( j < n \). Since \( \gamma(j) \in A \),
\( \omega(j) \in A \) and \( \gamma(j+1) \cong \omega(j+1) \), we have
\( \gamma(j) \cdot \gamma(j+1) \cong \omega(j) \cdot \omega(j+1) \). Define \( (\gamma_{1}, \omega_{1}) \) by
\begin{displaymath}
\gamma_{1}(i) =
\begin{cases}
\gamma(i) & \textrm{if \( i < j \)};\\
\gamma(j) \cdot \gamma(j+1) & \textrm{if \( i = j \)};\\
\gamma(i+1) & \textrm{if \( i > j \)};
\end{cases}
\end{displaymath}
\begin{displaymath}
\omega_{1}(i) =
\begin{cases}
\omega(i) & \textrm{if \( i < j \)};\\
\omega(j) \cdot \omega(j+1) & \textrm{if \( i = j \)};\\
\omega(i+1) & \textrm{if \( i > j \)}.
\end{cases}
\end{displaymath}
It is easy to see that \( |\gamma_{1}| = |\gamma| - 1 \) and \( (\gamma_{1}, \omega_{1}) \) is a \multipliable \( f
\)-pair. Moreover, since by the two-sided invariance
\[ d(\gamma(j)\gamma(j+1),\omega(j)\omega(j+1)) \le d(\gamma(j),\omega(j)) + d(\gamma(j+1),\omega(j+1)), \]
we also have \( \rho(\gamma,\omega) \ge \rho(\gamma_{1},\omega_{1}) \). Since \( (\gamma_{1}, \omega_{1}) \) is a
\multipliable \( f \)-pair, by Lemmas \ref{sec:metrics-amalgams-slim-reduction} and
\ref{sec:metrics-amalgams-simple-reduction} there is a simple \( f \)-pair \( (\gamma_{0}, \omega_{0}) \) such that
\( |\gamma_{0}| = |\gamma_{1}| = n-1 \) and \( \rho(\gamma_{0},\omega_{0}) = \rho(\gamma_{1},\omega_{1}) \). This
contradicts the choice of \( (\gamma,\omega) \).
If \( j = n \), define
\begin{displaymath}
\gamma_{1}(i) =
\begin{cases}
\gamma(i) & \textrm{if \( i < j-1 \)};\\
\gamma(j-1) \cdot \gamma(j) & \textrm{if \( i = j-1 \)};\\
\gamma(i+1) & \textrm{if \( i > j-1 \)};
\end{cases}
\end{displaymath}
\begin{displaymath}
\omega_{1}(i) =
\begin{cases}
\omega(i) & \textrm{if \( i < j-1 \)};\\
\omega(j-1) \cdot \omega(j) & \textrm{if \( i = j-1 \)};\\
\omega(i+1) & \textrm{if \( i > j-1 \)},
\end{cases}
\end{displaymath}
and proceed as before.
\end{case}
\begin{case}
\label{sec:reductions-1}
\( \omega(j) \not \in A \). Let \( T_{\omega} \) be a slim evaluation tree for \( \omega \). Let
\( t \in T_{\omega} \) be such that \( j \in R_{t} \). Let \( \{i_{k}\}_{k=1}^{m} \) be the list of external
letters in \( R_{t} \); this list is symmetrization admissible. Let \( j_{0} \in \{i_{k}\}_{k=1}^{m} \) be any index
such that \( j_{0} \ne j \), and set \( \omega_{2} = \symmet{\gamma}{\omega}{j_{0}}{\{i_{k}\}} \). By Lemma
\ref{sec:metrics-amalgams-symmetrization-properties} \( (\gamma, \omega_{2}) \) is a slim \( f \)-pair and
\( \omega_{2}(j) = \gamma(j) \in A \). We can then decrease the length of the pair \( (\gamma,\omega_{2}) \) as in
the previous case. This proves the case and the claim.
\end{case}
\medskip
\textbf{Claim 2.} There is no \( j \in \seg{1}{n-1} \) such that \( \gamma(j) \cong \gamma(j+1) \). Suppose this is
false and there is such a \( j \in \seg{1}{n-1} \). Note that by the previous claim \( \gamma(j) \not \in A \) and
\( \gamma(j+1) \not \in A \). Hence there is \( \lambda_{0} \in \Lambda \) such that
\[ \gamma(j),\ \gamma(j+1),\ \omega(j),\ \omega(j+1) \in G_{\lambda_{0}}. \]
Therefore \( \gamma(j) \cdot \gamma(j+1) \cong \omega(j) \cdot \omega(j+1) \). The rest of the proof is similar to
what we have done in the previous claim. Define \( (\gamma_{3}, \omega_{3}) \) by
\begin{displaymath}
\gamma_{3}(i) =
\begin{cases}
\gamma(i) & \textrm{if \( i < j \)};\\
\gamma(j) \cdot \gamma(j+1) & \textrm{if \( i = j \)};\\
\gamma(i+1) & \textrm{if \( i > j \)};
\end{cases}
\end{displaymath}
\begin{displaymath}
\omega_{3}(i) =
\begin{cases}
\omega(i) & \textrm{if \( i < j \)};\\
\omega(j) \cdot \omega(j+1) & \textrm{if \( i = j \)};\\
\omega(i+1) & \textrm{if \( i > j \)}.
\end{cases}
\end{displaymath}
Then \( |\gamma_{3}| = |\gamma| - 1 \), \( (\gamma_{3}, \omega_{3}) \) is a \multipliable \( f \)-pair, and
\( \rho(\gamma,\omega) \ge \rho(\gamma_{3},\omega_{3}) \). By Lemmas \ref{sec:metrics-amalgams-slim-reduction} and
\ref{sec:metrics-amalgams-simple-reduction} there is a simple \( f \)-pair \( (\gamma_{0},\omega_{0}) \) such that
\( |\gamma_{0}| = |\gamma_{3}| \) and \( \rho(\gamma_{3},\omega_{3}) = \rho(\gamma_{0},\omega_{0}) \), contradicting
the choice of \( (\gamma,\omega) \). The claim is proved.
\medskip
From the second claim it follows that \( \gamma(j) \not \cong \gamma(j+1) \) for any \( j \in \seg{1}{n-1} \) and
therefore \( \gamma \) is reduced.
\end{proof}
\begin{proposition}
\label{sec:metrics-amalgams-norm-lower-bound}
Let \( f \in \amalgam \) be an element of length \( n \). If \( \alpha \) is a reduced form of \( f \), then
\[ \norm(f) \ge \min\{d(\alpha(i),A) : i \in \seg{1}{n}\}. \]
\end{proposition}
\begin{proof}
Fix a reduced form \( \alpha \) of \( f \); the word \( \alpha \) has length \( n \). By Lemma
\ref{sec:metrics-amalgams-simple reduced-reduction} it remains to show that for any simple reduced \( f \)-pair
\( (\beta,\xi) \) we have
\[ \rho(\beta,\xi) \ge \min\{d(\alpha(i),A) : i \in \seg{1}{n}\}. \]
Let \( (\beta,\xi) \) be a simple reduced \( f \)-pair. Note that by Lemma \ref{sec:triv-words-amalg-reduced-forms}
the length of \( \beta \) is \( n \). Let \( T_{\xi} \) be a slim evaluation tree for \( \xi \), and let
\( t \in T_{\xi} \) be a leaf (i.e., a node with no predecessors). Since \( I_{t} \) is \( \xi \)-\multipliable and
\( (\beta,\xi) \) is a simple reduced pair, it follows that there is \( i_{0} \in I_{t} \) such that
\( \xi(i_{0}) = e \) (in fact, either \( \xi(m(I_{t})) = e \) or \( \xi(m(I_{t})+1) = e \)). By Lemma
\ref{sec:triv-words-amalg-reduced-forms} there are \( a_{1}, a_{2} \in A \) such that
\( a_{1} \alpha(i_{0}) a_{2} = \beta(i_{0}) \). By the two-sided invariance we get
\[ \rho(\beta,\xi) \ge d(\beta(i_{0}),e) = d(a_{1}\alpha(i_{0})a_{2},e) = d(\alpha(i_{0}),a_{1}^{-1}a_{2}^{-1}) \ge
d(\alpha(i_{0}),A). \qedhere \]
\end{proof}
We are now ready to prove that the pseudo-metric \( \dist \) is, in fact, a metric.
\begin{theorem}
\label{sec:metrics-amalgams-MAIN-Graev-metric-on-products}
If \( \dist \) is (as before) the pseudo-metric on \( \amalgam \) associated with the pseudo-norm \( \norm \),
\( \dist(f,e) = \norm(f) \), then
\begin{enumerate}[(i)]
\item\label{thm:main-item:metric} \( \dist \) is a two-sided invariant metric on \( \amalgam \);
\item\label{thm:main-item:extension} \( \dist \) extends \( d \).
\end{enumerate}
\end{theorem}
\begin{proof}
\eqref{thm:main-item:metric} By Lemma \ref{sec:metrics-amalgams-tsi-pseudo-metric} we know that \( \dist \) is a
\tsi pseudo-metric. It only remains to show that \( \dist(f,e) = 0 \) implies \( f = e \). Let \( f \in \amalgam \)
be such that \( \dist(f,e) = 0\), and let \( \alpha \) be a reduced form of \( f \). Suppose first that
\( |\alpha| \ge 2 \) and therefore \( \alpha(i) \not \in A \) for all \( i \) by the definition of the reduced form.
By Proposition \ref{sec:metrics-amalgams-norm-lower-bound} and since \( A \) is closed in \( G_{\lambda} \) for all
\( \lambda \), we have
\[ \dist(f,e) \ge \min \big\{ d(\alpha(i),A) : i \in \seg{1}{|\alpha|} \big\} > 0. \]
Suppose now \( |\alpha| = 1 \) and therefore \( \alpha = f\), \( f \in G \), and the reduced form of \( f \) is
unique. By Lemma \ref{sec:metrics-amalgams-simple reduced-reduction} the distance \( \dist(f,e) \) is given as the
infimum over all simple reduced \( f \)-pairs, but there is only one such pair: \( (f,e) \), where \( f \) is viewed
as a letter in \( G \). Hence \( \dist(f,e) = d(f,e) \), and so \( \dist(f,e) = 0 \) implies \( f = e \).
\eqref{thm:main-item:extension} Fix \( g_{1}, g_{2} \in G \) and suppose first that \( g_{1} \not \cong g_{2} \). Let
\( (\alpha,\zeta) \) be a simple reduced \( g_{1}g_{2}^{-1} \)-pair. We claim that there is \( a \in A \) such that
\( g_{1}a = \alpha(1) \), and \( a^{-1}g_{2}^{-1} = \alpha(2) \). Indeed,
\begin{multline*}
\alpha(1) \alpha(2) = g_{1} g_{2}^{-1} \implies g_{2}g_{1}^{-1}\alpha(1) \alpha(2) = e \implies
g_{1}^{-1}\alpha(1) \in A \implies \\
\exists a \in A \textrm{ such that } \alpha(1) = g_{1}a, \textrm{ and } \alpha(2) = a^{-1}g_{2}^{-1}.
\end{multline*}
Moreover, since \( g_{1} \not \cong g_{2} \) and since \( (\alpha,\zeta) \) is \multipliable, we get
\( \zeta = e \concat e\) and thus
\begin{displaymath}
\begin{aligned}
\dist(g_{1},g_{2}) =&\ \dist(g_{1}g_{2}^{-1},e) = \inf\{\rho(g_{1}a \concat a^{-1}g_{2}^{-1}, e \concat e) : a \in
A\} = \\
& \inf \{ d(g_{1},a^{-1}) + d(a^{-1},g_{2}) : a \in A \} = d(g_{1},g_{2}).
\end{aligned}
\end{displaymath}
If \( g_{1} \cong g_{2} \), then there is only one simple reduced \( g_{1}g_{2}^{-1} \)-pair, namely
\( (g_{1}g_{2}^{-1}, e) \), and the item follows.
\end{proof}
\section{Properties of Graev metrics}
\label{sec:prop-graev-metr}
Theorem \ref{sec:metrics-amalgams-MAIN-Graev-metric-on-products} allows us to make the following
definition: the metric \( \dist \) constructed in the previous section is called the \emph{Graev
metric} on the free product of groups \( (G_{\lambda}, d_{\lambda}) \) with amalgamation over
\( A \).
Theorem \ref{sec:graev-metric-groups-graev-metric-computation} implies that the Graev metric on a
free group is, in some sense, computable: if one can compute the metric on the base, then to
find the norm of an element \( f \) in the free group one has to calculate the function \( \rho \)
for only \emph{finitely many} trivial words; moreover, those words are constructible from the letters
of \( f \). For the case of free products without amalgamation, i.e., when \( A = \{e\} \), we have
a similar result (see Corollary \ref{sec:prop-graev-metr-computability-graev-for-free-products}
below).
\begin{definition}
\label{sec:metrics-amalgams-symmetric-word}
Let \( (\alpha,\zeta) \) be a slim \( f \)-pair with a slim evaluation tree \( T_{\zeta} \). The
pair \( (\alpha,\zeta) \) is called \emph{symmetric with respect to the tree \( T_{\zeta} \)} if
for each \( t \in T_{\zeta} \) there are a symmetrization admissible list
\( \{i_{t,k}\}_{k=1}^{m_{t}} \) and \( j_{t} \in \{i_{t,k}\}_{k=1}^{m_{t}} \) such that
\[ \zeta = \symmet{\alpha}{\zeta}{j_{t}}{\{i_{t,k}\}_{k=1}^{m_{t}}}. \]
An \( f \)-pair \( (\alpha,\zeta) \) is called \emph{symmetric} if there is a slim evaluation tree
\( T_{\zeta} \) such that \( (\alpha,\zeta) \) is a symmetric \( f \)-pair with respect to
\( T_{\zeta} \).
\end{definition}
\begin{remark}
\label{sec:metrics-amalgams-finitely-many-symmetric}
Note that for any word \( \alpha \) there are only finitely many words \( \zeta \) such that
\( (\alpha,\zeta) \) is symmetric.
\end{remark}
\begin{proposition}
\label{sec:metrics-amalgams-graev-computability}
If \( f \in \amalgam \), then
\[ \norm(f) = \inf\{\rho(\alpha,\xi) : (\alpha,\xi) \textrm{ is a symmetric reduced \( f
\)-pair}\}. \]
\end{proposition}
\begin{proof}
By Lemma \ref{sec:metrics-amalgams-simple reduced-reduction} it is enough to show that for any
simple reduced \( f \)-pair \( (\alpha,\zeta) \) there is a symmetric reduced \( f \)-pair
\( (\alpha,\xi) \) such that
\[ \rho(\alpha,\zeta) \ge \rho(\alpha,\xi). \]
Let \( (\alpha,\zeta) \) be a simple reduced \( f \)-pair, and let \( T_{\zeta} \) be a slim
evaluation tree for \( \zeta \). We construct a new slim evaluation tree \( T^{*}_{\zeta} \) for
\( \zeta \) with the following property: for any \( t \in T^{*}_{\zeta} \) and any
\( i \in R^{*}_{t} \) if \( \zeta(i) = e \), then \( t \) is a leaf and, moreover,
\( R^{*}_{t} = I^{*}_{t} = \{ i \} \).
Let \( \{j_{k}\}_{k=1}^{m} \) be such that \( \zeta(j_{k}) = e \) for all \( k \) and
\( \zeta(j) = e \) implies \( j = j_{k} \) for some \( k \in \seg{1}{m} \). We construct a
sequence of slim evaluation trees \( T^{(k)}_{\zeta} \) for \( \zeta \) and claim that
\( T_{\zeta}^{(m)} \) is as desired. Set \( T^{(0)}_{\zeta} = T_{\zeta} \). Suppose
\( T^{(k)}_{\zeta} \) has been constructed. Let \( t_{0} \in T^{(k)}_{\zeta} \) be such that
\( j_{k+1} \in R^{(k)}_{t_{0}} \). If \( |R_{t_{0}}^{(k)}| = 1 \), that is, if
\( R_{t_{0}}^{(k)} = I^{(k)}_{t_{0}} = \{j_{k+1}\} \), then do nothing: set
\( T^{(k+1)}_{\zeta} = T^{(k)}_{\zeta} \).
Suppose \( |R_{t_{0}}^{(k)}| > 1 \). Let \( s \) be a symbol for a new node. Set
\[ T^{(k+1)}_{\zeta} = T^{(k)}_{\zeta} \cup \{s\}, \quad I^{(k+1)}_{t} = I^{(k)}_{t} \textrm{ for all } t \in
T_{\zeta}^{(k)} \setminus \{t_{0}\}, \quad I^{(k+1)}_{s} = \seg{j_{k+1}}{j_{k+1}} = \{j_{k+1}\}. \]
We need to turn the set \( T_{\zeta}^{(k+1)} \) into a tree. For that let the ordering of the
nodes in \( T^{(k+1)}_{\zeta} \) extend the ordering of the nodes of \( T^{(k)}_{\zeta} \). To
finish the construction it remains to define the place for the node \( s \) inside
\( T_{\zeta}^{(k+1)} \) and an interval \( I^{(k+1)}_{t_{0}} \).
\begin{itemize}
\item If \( j_{k+1} \) is the minimal element of \( R^{(k)}_{t_{0}} \), i.e., if
\( j_{k+1} = m(R^{(k)}_{t_{0}}) \), then set
\( I^{(k+1)}_{t_{0}} = \seg{m(I_{t_{0}}^{(k)})+1}{M(I_{t_{0}}^{(k)})} \). Let
\( t_{1} \in T^{(k)}_{\zeta}\) be such that \( (t_{0},t_{1}) \in E(T^{(k)}_{\zeta}) \). Set
\((s,t_{1}) \in E(T^{(k+1)}_{\zeta}) \), or in other words, \( s \prec t_{1} \) in \( T_{\zeta}^{(k+1)} \).
\item If \( j_{k+1} \) is the maximal element of \( R^{(k)}_{t_{0}} \), i.e., if
\( j_{k+1} = M(R^{(k)}_{t_{0}}) \), then set
\( I^{(k+1)}_{t_{0}} = \seg{m(I_{t_{0}}^{(k)})}{M(I_{t_{0}}^{(k)})-1} \). Let
\( t_{1} \in T^{(k)}_{\zeta}\) be such that \( (t_{0},t_{1}) \in E(T^{(k)}_{\zeta}) \). Set
\((s,t_{1}) \in E(T^{(k+1)}_{\zeta}) \), or in other words, \( s \prec t_{1} \) in \( T_{\zeta}^{(k+1)} \).
\item If \( j_{k+1} \) is neither the maximal nor the minimal element of \( R^{(k)}_{t_{0}} \), then set
  \( I^{(k+1)}_{t_{0}} = I^{(k)}_{t_{0}} \) and \( (s,t_{0}) \in E(T^{(k+1)}_{\zeta}) \).
\end{itemize}
It is straightforward to check that \( T_{\zeta}^{(k+1)} \) is a slim evaluation tree for
\( \zeta \).
Finally, we define \( T^{*}_{\zeta} = T_{\zeta}^{(m)} \). Then \( T^{*}_{\zeta} \) is a slim
evaluation tree for \( \zeta \) and, by construction, if \( j \) is such that \( \zeta(j) = e \),
then \( I_{t_{0}}^{*} = \{j\} \) for some \( t_{0} \in T^{*}_{\zeta}\).
Let \( \{i_{k}\}_{k=1}^{p} \) be the list of external letters of \( \zeta \). Set
\begin{displaymath}
F^{*}_{t} =
\begin{cases}
R^{*}_{t} \cap \{i_{k}\}_{k=1}^{p} & \textrm{if \( R^{*}_{t} \cap \{i_{k}\}_{k=1}^{p} \ne \emptyset \)};\\
I^{*}_{t} & \textrm{otherwise}.
\end{cases}
\end{displaymath}
Note that \( F^{*}_{t} \) is symmetrization admissible for all \( t \). Let
\( \{t_{j}\}_{j=1}^{N} \) be the list of nodes of \( T_{\zeta}^{*} \). For any
\( j \in \seg{1}{N} \) pick some \( l_{j} \in F^{*}_{t_{j}} \). Set
\( \xi_{0} = \zeta \) and construct inductively
\[ \xi_{k+1} = \symmet{\alpha}{\xi_{k}}{l_{k+1}}{F^{*}_{t_{k+1}}}. \]
Finally, set \( \xi = \xi_{N} \). It follows from Lemma
\ref{sec:metrics-amalgams-symmetrization-properties} that \( (\alpha,\xi) \) is a slim \( f
\)-pair and is symmetric with respect to \( T_{\zeta}^{*} \) by construction. Lemma
\ref{sec:metrics-amalgams-symmet-rho-decreases} implies
\[ \rho(\alpha,\zeta) \ge \rho(\alpha,\xi), \]
as desired.
\end{proof}
If \( A = \{e\} \), that is, if we have a free product without amalgamation, then for any
\( f \in \amalgam \) there is exactly one reduced word \( \alpha \in \words \) such that
\( \hat{\alpha} = f \). This observation together with Remark
\ref{sec:metrics-amalgams-finitely-many-symmetric} gives us the following
\begin{corollary}
\label{sec:prop-graev-metr-computability-graev-for-free-products}
If \( A = \{e\} \), then for any \( f \in \amalgam \)
\[ N(f) = \min\{\rho(\alpha,\xi) : (\alpha,\xi) \textrm{ is a symmetric reduced \( f
\)-pair}\}. \]
\end{corollary}
We can now prove an analog of Proposition \ref{sec:graev-metric-groups-properties} for the Graev
metrics on the free products with amalgamation.
\begin{proposition}
\label{sec:graev-metric-amalgam-properties}
The Graev metric \( \dist \) has the following properties:
\begin{enumerate}[(i)]
\item\label{prop:graev-metric-amalgam-properties-item:extending-lipschitz} if \( (T,d_{T}) \) is a
tsi group, \( \phi_{\lambda} : G_{\lambda} \to T \) are \( K \)-\lipschitz homomorphisms
(\( K \) does not depend on \( \lambda \)) such that for all \( a \in A \) and all
\( \lambda_{1}, \lambda_{2} \in \Lambda \)
\[ \phi_{\lambda_{1}}(a) = \phi_{\lambda_{2}}(a), \]
then there exists a unique \( K \)-\lipschitz homomorphism \( \phi : \amalgam \to T \) that
extends \( \phi_{\lambda} \);
\item\label{prop:graev-metric-amalgam-properties-item:induced-metric} let
\( H_{\lambda} < G_{\lambda} \) be subgroups such that \( A < H_{\lambda} \) for all
\( \lambda \) and think of \( \amalgamH \) as being a subgroup of \( \amalgam \). Endow
\( H_{\lambda} \) with the metric induced from \( G_{\lambda} \). The Graev metric on
\( \amalgamH \) is the same as the induced Graev metric from \( \amalgam \). Moreover, if
\( H_{\lambda} \) are closed subgroups, then \( \amalgamH \) is a closed subgroup of \( \amalgam \);
\item\label{prop:graev-metric-amalgam-properties-item:maximality} let \( \delta \) be any other
tsi metric on the amalgam \( \amalgam \). If \( \delta \) extends \( d \), then
\( \delta(f_{1},f_{2}) \le \dist(f_{1},f_{2}) \) for all \( f_{1}, f_{2} \in \amalgam \), i.e.,
\( \dist \) is maximal among all the tsi metrics that extend \( d \);
\item\label{prop:graev-metric-amalgam-properties-item:density-character} if
\( \Lambda' = \{\lambda \in \Lambda : G_{\lambda} \ne A\} \) and \( |\Lambda'| \ge 2 \), then
\[ \chi(\amalgam) = \max \big\{ \aleph_{0}, \sup\{\chi(G_{\lambda}) : \lambda \in \Lambda\},
|\Lambda'| \big\}. \]
In particular, if \( \Lambda \) is at most countable and \( G_{\lambda} \) are all separable,
then the amalgam is also separable.
\end{enumerate}
\end{proposition}
\begin{proof}
\eqref{prop:graev-metric-amalgam-properties-item:extending-lipschitz} By the universal property
for the free products with amalgamation there is a unique extension of the homomorphisms
\( \phi_{\lambda} \) to a homomorphism \( \phi : \amalgam \to T \), it remains to check that
\( \phi \) is \( K \)-\lipschitz. Let \( (\alpha,\zeta) \) be a \multipliable \( f \)-pair of length
\( n \). Then
\begin{displaymath}
\begin{aligned}
K\rho(\alpha,\zeta) = & \sum_{i=1}^{n} K d(\alpha(i),\zeta(i)) \ge \sum_{i=1}^{n}d_{T}\big(
\phi(\alpha(i)),\phi(\zeta(i)) \big)
\ge \\
& d_{T}(\phi(\hat{\alpha}),\phi(\hat{\zeta})) = d_{T}(\phi(f),e).
\end{aligned}
\end{displaymath}
And therefore
\[ K \dist(f,e) = \inf\{ K \rho(\alpha,\zeta) : (\alpha,\zeta) \textrm{ is a \multipliable \( f
\)-pair}\} \ge d_{T}(\phi(f),e). \] Hence \( \phi \) is \( K \)-\lipschitz.
\eqref{prop:graev-metric-amalgam-properties-item:induced-metric} Let \( \dist_{H} \) be the Graev
metric on \( \amalgamH \) and \( \dist \) be the Graev metric on \( \amalgam \). From Proposition
\ref{sec:metrics-amalgams-graev-computability} it follows that
\( \dist_{H} = \dist |_{\amalgamH} \).
For the moreover part suppose that \( H_{\lambda} \) are closed in \( G_{\lambda} \) for all \( \lambda \in \Lambda
\). Set \( H = \bigcup_{\lambda \in \Lambda} H_{\lambda} \). Note that \( H \) is a closed subset of \( G \) by the
definition of the metric on \( G \). Suppose towards a contradiction that there exists \( f \in \amalgam \) such that
\( f \not \in \amalgamH \), but \( f \in \overline{\amalgamH} \). Let \( \alpha \in \words \) be a reduced form of
\( f \), and let \( n = |\alpha| \). Set
\begin{align*}
&\epsilon_{1} = \min \big\{ d(\alpha(i), A) : i \in \seg{1}{n} \big\}, \\
&\epsilon_{2} = \min \big\{ d(\alpha(i), H) : i \in \seg{1}{n},\ \alpha(i) \not \in H \big \}.
\end{align*}
Note that \( \epsilon_{1} > 0 \) and \( \epsilon_{2} > 0 \). Let \( i_{0} \in \seg{1}{n} \) be
the largest such that \( \alpha(i_{0}) \not \in H \). By Lemma
\ref{sec:triv-words-amalg-reduced-forms} the numbers \( \epsilon_{i} \) and \( i_{0} \) are
independent of the choice of the reduced form \( \alpha \). Set
\( \epsilon = \min\{\epsilon_{1},\epsilon_{2}\}. \) Let \( h \in \amalgamH \) be such that
\( \dist(f,h) < \epsilon \). By Lemma \ref{sec:metrics-amalgams-simple reduced-reduction} there
is a simple reduced \( fh^{-1} \)-pair \( (\beta, \xi) \) such that
\( \rho(\beta,\xi) < \epsilon \). Let \( T_{\xi} \) be a slim evaluation tree for \( \xi \), and
let \( t_{0} \in T_{\xi} \) be such that \( i_{0} \in R_{t_{0}} \). It is easy to see that there
is a word \( \alpha' \) such that \( \alpha' \) is a reduced form of \( f \),
\( \alpha'(i) = \beta(i) \) for all \( i \in \seg{1}{i_{0}-1} \), and
\( \alpha'(i_{0}) = \beta(i_{0}) \cdot h_{0} \) for some \( h_{0} \in H \). Without loss of
generality assume that \( \alpha' = \alpha \). Note that \( \beta(i) \in H \) for all \( i > i_{0} \).
We claim that \( i_{0} = m(R_{t_{0}}) \). Suppose not. Let \( j_{0} \in R_{t_{0}} \) be such that
\( j_{0} < i_{0} \) and \( \seg{j_{0}+1}{i_{0}-1} \cap R_{t_{0}} = \emptyset \) (i.e., \( j_{0} \)
is the predecessor of \( i_{0} \) in \( R_{t_{0}} \)). Let \( I = \seg{j_{0}+1}{i_{0}-1} \).
Because \( T_{\xi} \) is slim, \( \hat{\xi}[I] = e \). Since \( \beta \) is reduced and \(
(\beta,\xi) \) is \multipliable, there is
\( i_{1} \in I \) such that \( \xi(i_{1}) \in A \) (in fact, \( \xi(i_{1}) = e \)). But then
\[ \rho(\beta,\xi) \ge d(\beta(i_{1}),\xi(i_{1})) \ge d(\alpha(i_{1}),A) \ge \epsilon, \]
contradicting the assumption \( \rho(\beta,\xi) < \epsilon \). The claim is proved.
Therefore \( i_{0} = m(R_{t_{0}}) \). Let \( \{j_{k}\}_{k=1}^{m} \) be the list of external
letters of \( \xi \), and let \( F_{t_{0}} = R_{t_{0}} \cap \{j_{k}\}_{k=1}^{m} \). We know that
\( \xi(i_{0}) \not \in A \), since otherwise \( \rho(\beta,\xi) \ge \epsilon \). Thus
\( i_{0} \in F_{t_{0}} \). Let \( \xi' = \symmet{\beta}{\xi}{i_{0}}{F_{t_{0}}} \). By Lemma
\ref{sec:metrics-amalgams-symmet-rho-decreases} \( \rho(\beta,\xi) \ge \rho(\beta,\xi') \).
Since \( \beta(i) \in H \) for all \( i > i_{0} \), we get \( \xi'(i) \in H \) for all \( i \in
R_{t_{0}} \setminus \{i_{0}\} \). Let \( \lambda_{0} \) be such
that \( \xi'(i) \in H_{\lambda_{0}} \) for all \( i \in R_{t_{0}} \setminus \{i_{0}\} \).
Since \( \hat{\xi}'[R_{t_{0}}] = e \), it follows
that \( \xi'(i_{0}) \in H_{\lambda_{0}} \) as well. Finally, we get
\[ \rho(\beta,\xi) \ge \rho(\beta,\xi') \ge d(\beta(i_{0}),\xi'(i_{0})) \ge d(\alpha(i_{0}),H_{\lambda_{0}}) \ge
\epsilon, \] contradicting the choice of \( (\beta,\xi) \). Therefore there is no \( f \in
\overline{\amalgamH} \) such that \( f \not \in \amalgamH \).
\eqref{prop:graev-metric-amalgam-properties-item:maximality} Let \( f \in \amalgam \) be given,
let \( (\alpha,\zeta) \) be a \multipliable \( f \)-pair of length \( n \). Since \( \delta \)
extends \( d \), we get
\[ \delta(f,e) \le \sum_{i=1}^{n} \delta(\alpha(i), \zeta(i)) = \sum_{i=1}^{n}
d(\alpha(i),\zeta(i)). \]
By taking the infimum over all such pairs \( (\alpha,\zeta) \) we get
\( \delta(f,e) \le \dist(f,e) \). By the left invariance
\( \delta(f_{1},f_{2}) \le \dist(f_{1},f_{2}) \) for all \( f_{1}, f_{2} \in \amalgam \).
\eqref{prop:graev-metric-amalgam-properties-item:density-character} If \( |\Lambda'| \ge 2 \), then
\( \amalgam \) is an infinite metric space, therefore \( \chi(\amalgam) \ge \aleph_{0} \). Since
\( G_{\lambda} < \amalgam \), it follows that \( \chi(\amalgam) \ge \chi(G_{\lambda}) \). We now
show that \( \chi(\amalgam) \ge |\Lambda'| \). It is enough to consider the case
\( |\Lambda'| \ge \aleph_{0} \). There is an \( \epsilon_{0} > 0 \) such that
\[ \big| \big\{ \lambda \in \Lambda : \sup\{d(g,A) : g \in G_{\lambda}\} > \epsilon_{0} \big\} \big| = |\Lambda'|. \]
For any such \( \lambda \) choose a \( g_{\lambda} \in G_{\lambda} \) such that
\( d(g_{\lambda}, A) > \epsilon_{0}\). The family of all such \( g_{\lambda} \) is
\( 2 \epsilon_{0} \)-separated and hence \( \chi(\amalgam) \ge |\Lambda'| \).
Finally, for the reverse inequality, let \( F_{\lambda} \subseteq G_{\lambda} \) be dense sets
such that \( |F_{\lambda}| = \chi(G_{\lambda}) \) and \( F_{\lambda_{1}} \cap A = F_{\lambda_{2}} \cap A \) for all \(
\lambda_{1}, \lambda_{2} \in \Lambda \). The set
\[ \Big\{ \hat{\alpha} : \alpha \in \word{\bigcup_{\lambda \in
\Lambda}F_{\lambda}} \Big\} \]
is dense in
\( \amalgam \) and
\[ \Big| \word{\bigcup_{\lambda \in \Lambda} F_{\lambda}} \Big| =
\max \Big\{\aleph_{0}, \sup\{\chi(G_{\lambda}) : \lambda \in \Lambda\}, |\Lambda'| \Big\}. \qedhere \]
\end{proof}
\subsection{Factors of Graev metrics.}
\label{sec:graev-metr-amalg-factors}
Note that one can naturally view \( G \) as a pointed metric space \( (G,e,d) \), and the identity
map \( G \to \amalgam \) is \( 1 \)-\lipschitz (in fact, we have shown in Theorem
\ref{sec:metrics-amalgams-MAIN-Graev-metric-on-products} that it is an isometric embedding). We can
construct the Graev metric on the free group \( (F(G),d_{F}) \), and by item
(\ref{prop:graev-metric-group-properties-item:extending-lipschitz}) of Proposition
\ref{sec:graev-metric-groups-properties} there is a \( 1 \)-\lipschitz homomorphism
\[ \phi : F(G) \to \amalgam \]
such that \( \phi(g) = g \) for all \( g \in G \). Since \( G \) generates \( \amalgam \), the map
\( \phi \) is onto. Let \( \nsbg = \mathrm{ker}(\phi) \) be the kernel of this homomorphism. If
\( d_{0} \) is the factor metric on \( F(G) / \nsbg \) (see the remark after Proposition
\ref{sec:tsi-groups-factor-metric}), then \( (F(G) / \nsbg, d_{0}) \) is a tsi group and
\( F(G) / \nsbg \) is isomorphic to \( \amalgam \) as an abstract group.
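The factor metric just invoked can be illustrated in a toy abelian case (ours, not the setting of this section): take \( G = \mathbb{Z} \) with \( d(a,b) = |a-b| \) and the normal subgroup \( \nsbg = 5\mathbb{Z} \), so that \( d_{0} \) on \( \mathbb{Z}/5\mathbb{Z} \) becomes \( d_{0}(a + \nsbg, b + \nsbg) = \inf\{|a - b + 5k| : k \in \mathbb{Z}\} \).

```python
# Toy illustration (not the paper's setting) of the factor metric:
# G = Z with d(a, b) = |a - b| and the normal subgroup N = 5Z; the
# factor metric on Z/5Z is the infimum over coset representatives,
# mirroring d_0(f1 N, f2 N) = inf{ d(f1 h1, f2 h2) : h1, h2 in N }.

def d(a, b):
    return abs(a - b)

def d0(a, b, search=50):
    # restrict the infimum to a finite search window; for the small
    # inputs below the minimum is attained well inside it
    return min(d(a + 5 * k1, b + 5 * k2)
               for k1 in range(-search, search + 1)
               for k2 in range(-search, search + 1))

assert d0(1, 4) == 2          # |4 - 1| = 3, but |3 - 5| = 2
assert d0(0, 5) == 0          # 5 lies in N, same coset
assert d0(1, 4) == d0(6, -1)  # independent of representatives
```

The invariance of the last assertion is exactly why \( d_{0} \) descends to a well-defined metric on the quotient group.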
\begin{proposition}
\label{sec:metrics-amalgams-factor-isometry}
In the above setting \( (F(G)/\nsbg, d_{0}) \) is \emph{isometrically} isomorphic to
\( (\amalgam, \dist) \).
\end{proposition}
\begin{proof}
We recall the definition of the factor metric: for \( f_{1}\nsbg, f_{2} \nsbg \in F(G)/\nsbg \)
\[ d_{0}(f_{1}\nsbg, f_{2}\nsbg) = \inf\{d_{F}(f_{1}h_{1},f_{2}h_{2}) : h_{1}, h_{2}\in
\nsbg\}. \]
Of course, by construction \( F(G)/\nsbg \) is isomorphic to \( \amalgam \) and we check that the
natural isomorphism is an isometry.
Let \( f \in \amalgam \), and let \( w \in \words \) be a reduced form of \( f \). We can
naturally view \( w \) as the reduced form of an element of \( F(G) \), which we call \( f' \).
It is enough to show
that for any such \( f \) and \( f' \) we have
\[ d_{0}(f'\nsbg, \nsbg) = \dist(f,e). \]
Note that if \( h \in \nsbg \), then \( h^{\sharp} \in \nsbg\) (for the definition of \( h^{\sharp} \) see
Subsection \ref{sec:free-groups-over}). Therefore by Proposition \ref{sec:free-groups-over-1}
\[ d_{0}(f'\nsbg, \nsbg) = \inf\{ d_{F}(f'h,e) : h \in \nsbg \} = \inf\{d_{F}(f'h^{\sharp},e) :
h \in \nsbg \}.\]
Let \( h \in \nsbg \), and let \( \gamma \in \word{G} \) be the reduced form of \( h^{\sharp} \in F(G) \). We claim that
\[ d_{F}(f'h^{\sharp},e) = \inf \Big\{ \rho \big( w \concat \gamma, (w \concat \gamma)^{\theta} \big) :
\textrm{\( \theta \) is a match on \( \seg{1}{|w \concat \gamma|} \)} \Big\}. \]
In general, \( w \concat \gamma \) may not be reduced, so let \( w = w_{0} \concat \alpha \),
\( \gamma = \alpha^{-1} \concat \gamma_{0} \) be such that \( w_{0} \concat \gamma_{0} \) is reduced. By Theorem
\ref{sec:graev-metric-groups-graev-metric-computation}
\[ d_{F}(f'h^{\sharp},e) = \inf \Big\{ \rho \big( w_{0} \concat \gamma_{0}, (w_{0} \concat \gamma_{0})^{\theta}
\big) : \textrm{\( \theta \) is a match on \( \seg{1}{|w_{0} \concat \gamma_{0}|} \)} \Big\}. \]
To see the claim it remains to note that for any match \( \theta \) on \( \seg{1}{|w_{0} \concat \gamma_{0}|} \) there
is a canonical match \( \theta' \) on \( \seg{1}{|w \concat \gamma|} \) such that
\[ \rho \big(w_{0} \concat \gamma_{0}, (w_{0} \concat \gamma_{0})^{\theta} \big) = \rho\big(w \concat \gamma, (w \concat
\gamma)^{\theta'}\big). \]
The match \( \theta' \) can formally be defined by
\begin{displaymath}
\theta'(i) =
\begin{cases}
\theta(i) & \textrm{if \( i \le |w_{0}| \) and \( \theta(i) \le |w_{0}| \)},\\
\theta(i)+2|\alpha| & \textrm{if \( i \le |w_{0}| \) and \( \theta(i) > |w_{0}| \)},\\
2|w| - i + 1 & \textrm{if \( |w_{0}| < i \le |w_{0}|+2|\alpha| \)},\\
\theta(i-2|\alpha|) & \textrm{if \( i > |w_{0}| + 2|\alpha| \) and \( \theta(i-2|\alpha|) \le |w_{0}| \)},\\
\theta(i-2|\alpha|)+2|\alpha| & \textrm{if \( i > |w_{0}| + 2|\alpha| \) and \( \theta(i-2|\alpha|) > |w_{0}| \)}.\\
\end{cases}
\end{displaymath}
Therefore
\[ d_{F}(f'h^{\sharp},e) = \inf \Big\{ \rho \big( w \concat \gamma, (w \concat \gamma)^{\theta} \big) :
\textrm{\( \theta \) is a match on \( \seg{1}{|w \concat \gamma|} \)} \Big\}. \]
Since \( w, \gamma \in \word{G} \) and since \( \hat{\gamma} = e \), we get
\( \dist(f,e) \le d_{0}(f'\nsbg,\nsbg). \) Since \( f \) was arbitrary and because of the
left invariance of the metrics \( \dist \) and \( d_{0} \), we get \( \dist \le d_{0} \).
For the reverse inequality note that \( d_{0} \) is a two-sided invariant metric on \( \amalgam \)
and it extends the metric \( d \) on \( G \), therefore by item
(\ref{prop:graev-metric-amalgam-properties-item:maximality}) of Proposition
\ref{sec:graev-metric-amalgam-properties} we have \( d_{0} \le \dist \) and hence
\( d_{0} = \dist \).
\end{proof}
\subsection{Graev metrics for products of Polish groups}
\label{sec:graev-metr-prod}
We would like to note that the construction of metrics on the free products with amalgamation works
well with respect to group completions. Let us be more precise. Suppose we start with tsi groups
\( (G_{\lambda}, d_{\lambda}) \) and a common closed subgroup \( A < G_{\lambda} \), assume
additionally that all the groups \( G_{\lambda} \) are complete as metric spaces. The group
\( (\amalgam, \dist) \), in general, is not complete, so let us take its group completion (for tsi
groups this is the same as the metric completion), which we denote by
\( (\overline{\amalgam}, \dist) \). We have an analog of item
(\ref{prop:graev-metric-amalgam-properties-item:extending-lipschitz}) of Proposition
\ref{sec:graev-metric-amalgam-properties} for complete tsi groups. But first we need a simple
lemma.
\begin{lemma}
\label{sec:further-remarks-extension-to-complete-group}
Let \( (H_{1}, d_{1}) \) and \( (H_{2}, d_{2}) \) be complete tsi groups, let \( D < H_{1}\) be
a dense subgroup, and let \( \phi : D \to H_{2} \) be a \( K \)-\lipschitz homomorphism. Then
\( \phi \) extends uniquely to a \( K \)-\lipschitz homomorphism
\[ \psi : H_{1} \to H_{2}. \]
\end{lemma}
\begin{proof}
Let \( h \in H_{1} \) and let \( \{b_{n}\}_{n=1}^{\infty} \subseteq D \) be such that
\( b_{n} \to h \). Since \( \phi \) is \( K \)-\lipschitz, we have
\[ d_{2}(\phi(b_{n}),\phi(b_{m})) \le K d_{1}(b_{n},b_{m}). \]
Hence \( \{\phi(b_{n})\}_{n=1}^{\infty} \) is a \( d_{2} \)-Cauchy sequence, and thus there is \(
f \in H_{2}\) such that \( \phi(b_{n}) \to f \). Set \( \psi(h) = f \). This extends \( \phi \)
to a map \( \psi : H_{1} \to H_{2} \), and it is easy to see that this extension is a well-defined
\( K \)-\lipschitz homomorphism; uniqueness follows from continuity and the density of \( D \).
\end{proof}
Combining the above result with item
\eqref{prop:graev-metric-amalgam-properties-item:extending-lipschitz} of Proposition
\ref{sec:graev-metric-amalgam-properties} we get
\begin{proposition}
\label{sec:further-remarks-extension-lipschitz-comlete-groups}
Let \( (T,d_{T}) \) be a complete tsi group, let \( \phi_{\lambda} : G_{\lambda} \to T \) be \( K
\)-\lipschitz homomorphisms such that for all \( a \in A \) and all
\( \lambda_{1}, \lambda_{2} \in \Lambda \)
\[ \phi_{\lambda_{1}}(a) = \phi_{\lambda_{2}}(a). \]
There exists a unique \( K \)-\lipschitz homomorphism \( \phi : \overline{\amalgam} \to T \) such that
\( \phi \) extends \( \phi_{\lambda} \) for all \( \lambda \).
\end{proposition}
This proposition together with item \eqref{prop:graev-metric-amalgam-properties-item:density-character} of Proposition
\ref{sec:graev-metric-amalgam-properties} shows that there are countable
coproducts in the category of tsi Polish metric groups and \( 1 \)-\lipschitz homomorphisms.
\subsection{Tsi groups with no Lie sums and Lie brackets}
\label{sec:tsi-groups-with-no-Lie}
In \cite{MR2541347} L. van den Dries and S. Gao gave an example of a group, which they denote by \( F \),
and a two-sided invariant metric \( d \) on \( F \) such that the completion \( (\overline{F}, d) \)
of this group has neither Lie sums nor Lie brackets. More precisely, they constructed two
one-parameter subgroups
\[ A_{i} = \Big (f_{t}^{(i)} \Big)_{t \in \mathbb{R}} < \overline{F} \quad i =1,2, \]
such that neither Lie sum nor Lie bracket of \( A_{1} \) and \( A_{2} \) exist.
Their group can be nicely explained in our setting. It turns out that the group \( F \) that they
have constructed is isometrically isomorphic to the group \( \mathbb{Q} * \mathbb{Q} \) with the
Graev metric (and the metrics on the copies of the rationals are the usual absolute-value metrics).
The group completion of \( \mathbb{Q} * \mathbb{Q} \) is then the same as the group completion of
the group \( \mathbb{R} * \mathbb{R} \) with the Graev metric. And moreover, \( A_{1} \) and
\( A_{2} \) are just the one-parameter subgroups given by the \( \mathbb{R} \) factors.
\section{Metrics on SIN groups}
\label{sec:free-prod-topol}
Recall that a topological group is SIN if for every open neighborhood of the identity there is
a smaller open neighborhood \( V \subseteq G \) such that \( gVg^{-1} = V \) for all \( g \in G \).
SIN stands for Small Invariant Neighborhoods. It is well known that a metrizable topological group
admits a compatible two-sided invariant metric if and only if it is a SIN group.
Suppose \( G_{\lambda} \) are metrizable topological groups that admit compatible two-sided
invariant metrics and \( A < G_{\lambda}\) is a common closed subgroup. It is natural to ask whether one
can find compatible tsi metrics \( d_{\lambda} \) that agree on \( A \).
\begin{question}
\label{sec:further-remarks-metrics-agree-on-subgroup}
Let \( G_{1} \) and \( G_{2} \) be metrizable SIN topological groups, and let \( A < G_{i} \) be a
common closed subgroup. Are there compatible tsi metrics \( d_{i} \) on \( G_{i} \) such that
\[ d_{1}(a_{1},a_{2}) = d_{2}(a_{1},a_{2}) \]
for all \( a_{1}, a_{2} \in A \)?
\end{question}
We do not know the answer to this question. Before discussing some partial results let us recall the
notion of a \birkhoff-Kakutani family of neighborhoods.
\begin{definition}
\label{sec:further-remarks-birkhoff-kakutani-family}
Let \( G \) be a topological group. A family \( \{U_{i}\}_{i=0}^{\infty} \) of open
neighborhoods of the identity \(e \in G \) is called \emph{\birkhoff-Kakutani} if the following
conditions are met:
\begin{enumerate}[(i)]
\item \( U_{0} = G \);
\item \( \bigcap_{i} U_{i} = \{e\}\);
\item \( U_{i}^{-1} = U_{i} \);
\item \( U_{i+1}^{3} \subseteq U_{i} \).
\end{enumerate}
If additionally
\begin{enumerate}[(i)]
\item[(v)] \( gU_{i}g^{-1} = U_{i} \) for all \( g \in G \),
\end{enumerate}
then the sequence is called \emph{conjugacy invariant}.
\end{definition}
It is well known (see, for example, \cite{MR2455198}) that a topological group \( G \) admits a
\birkhoff-Kakutani family if and only if it is metrizable. Moreover, let
\( \{U_{i}\}_{i=0}^{\infty} \) be a \birkhoff-Kakutani family in a group \( G \), for
\( g_{1}, g_{2} \in G \) set
\[ \eta(g_{1},g_{2}) = \inf\{ 2^{-n} : g_{2}^{-1}g_{1} \in U_{n}\}, \]
\[ d(g_{1},g_{2}) = \inf\Big\{ \sum_{i=1}^{n-1} \eta(f_{i},f_{i+1}) : \{f_{i}\}_{i=1}^{n} \subseteq
G,\ f_{1} = g_{1}, f_{n} = g_{2}\Big\}. \]
Then the function \( d \) is a compatible left invariant metric on \( G \) and for all \( g_{1}, g_{2} \in G \)
\begin{displaymath}
\frac{1}{2} \eta(g_{1},g_{2}) \le d(g_{1},g_{2}) \le \eta(g_{1},g_{2}).
\end{displaymath}
We call this metric \( d \) a \emph{\birkhoff-Kakutani metric} associated with the family \( \{U_{i}\} \).
A metrizable topological group admits a compatible tsi metric if and only if there is a conjugacy
invariant \birkhoff-Kakutani family, and moreover, if \( \{U_{i}\} \) is conjugacy invariant, then the metric \( d \)
constructed above is two-sided invariant.
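The construction above can be tested numerically. The following sketch (ours, not part of the text) realizes a conjugacy invariant \birkhoff-Kakutani family on the additive group \( \mathbb{Z} \), namely \( U_{0} = \mathbb{Z} \), \( U_{n} = \{k : |k| \le 3^{5-n}\} \) for \( 1 \le n \le 5 \), and \( U_{n} = \{0\} \) for \( n \ge 6 \), and checks the bound \( \frac{1}{2}\eta \le d \le \eta \). The chain infimum defining \( d \) is approximated by a shortest path over a finite window; the restriction can only overestimate \( d \), so both asserted inequalities remain valid certificates.

```python
# Numerical sketch (ours): a conjugacy invariant Birkhoff-Kakutani
# family on the additive group Z, U_0 = Z, U_n = {k : |k| <= 3^(5-n)}
# for 1 <= n <= 5, U_n = {0} for n >= 6; it satisfies U_{n+1}^3 <= U_n.
import heapq

def eta(g1, g2):
    """eta(g1, g2) = inf{2^{-n} : g2 - g1 in U_n}."""
    k = abs(g2 - g1)
    if k == 0:
        return 0.0
    best = 1.0                        # U_0 = Z always contains k
    for n in range(1, 7):
        radius = 3 ** (5 - n) if n <= 5 else 0
        if k <= radius:
            best = 2.0 ** (-n)
        else:
            break                     # the family is nested, so stop
    return best

def bk_metric(g1, g2, window=100):
    """The chain infimum defining d, with intermediate points restricted
    to a finite window; restricting only overestimates d, so the bound
    (1/2) * eta <= bk_metric <= eta still certifies the theorem."""
    nodes = list(range(min(g1, g2) - window, max(g1, g2) + window + 1))
    dist = {v: float('inf') for v in nodes}
    dist[g1] = 0.0
    heap = [(0.0, g1)]
    while heap:                       # Dijkstra on the complete graph
        du, u = heapq.heappop(heap)
        if du > dist[u]:
            continue
        for v in nodes:
            nd = du + eta(u, v)
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist[g2]

for g in [1, 2, 5, 17, 60, 100]:
    assert 0.5 * eta(0, g) <= bk_metric(0, g) <= eta(0, g), g
```

The upper bound holds because the one-step chain \( g_{1}, g_{2} \) is admissible; the lower bound is exactly the displayed two-sided estimate.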
\begin{proposition}
\label{sec:further-remarks-common-bilipschitz-metric}
Let \( G_{1} \) and \( G_{2} \) be metrizable SIN groups, let \( A < G_{i} \) be a common
subgroup. There are compatible tsi metrics \( d_{i} \) on \( G_{i} \) such that \( d_{1}|_{A} \)
is bi-\lipschitz equivalent to \( d_{2}|_{A} \), i.e., there is \( K > 0 \) such that
\[ \frac{1}{K} d_{1}(a_{1},a_{2}) \le d_{2}(a_{1},a_{2}) \le K d_{1}(a_{1},a_{2}) \]
for all \( a_{1}, a_{2} \in A \).
\end{proposition}
\begin{proof}
Since \( G_{1} \) and \( G_{2} \) are metrizable, we can fix two compatible metrics \( \mu_{1} \)
and \( \mu_{2} \) on \( G_{1} \) and \( G_{2} \) respectively such that \( \mu_{i} \)-\(\diam{G_{i}} < 1 \).
We construct conjugacy invariant \birkhoff-Kakutani families \( \{U_{i}^{(j)}\}_{i=0}^{\infty} \)
for \( G_{j} \), \( j = 1,2 \), such that
\begin{enumerate}[(i)]
\item \( U_{2i+1}^{(1)} \cap A \subseteq U_{2i}^{(2)} \cap A \);
\item \( U_{2i+2}^{(2)} \cap A \subseteq U_{2i+1}^{(1)} \cap A \).
\end{enumerate}
For the base of the construction let \( U_{0}^{(j)} = G_{j} \). Suppose we have constructed
\( \{U_{i}^{(j)}\}_{i=0}^{N} \) and suppose \( N \) is even (if \( N \) is odd, switch the roles of
\( G_{1} \) and \( G_{2} \)). If \( V = U_{N}^{(2)} \cap A \), then \( V \) is an open
neighborhood of the identity in \( A \) and therefore there is an open set \( U \subseteq G_{1} \) such
that \( U \cap A = V \). Let \( U_{N+1}^{(1)} \subseteq G_{1} \) be any open neighborhood of the
identity such that \( (U_{N+1}^{(1)})^{-1} = U_{N+1}^{(1)} \),
\( gU_{N+1}^{(1)}g^{-1} = U_{N+1}^{(1)} \) for all \( g \in G_{1}\), \(
\mu_{1}\)-\(\diam{U^{(1)}_{N+1}} < 1/N \) and
\[ (U_{N+1}^{(1)})^{3} \subseteq U \cap U_{N}^{(1)}. \]
Such a \( U_{N+1}^{(1)} \) exists because \( G_{1} \) is SIN. Set
\( U_{N+1}^{(2)} \) to be any open symmetric neighborhood of \( e \in G_{2} \) such that
\( (U_{N+1}^{(2)})^{3} \subseteq U_{N}^{(2)} \).
It is straightforward to check that such sequences \( \{U_{i}^{(j)}\}_{i=1}^{\infty} \) indeed
satisfy all the requirements. If \( d_{j} \) are the \birkhoff-Kakutani metrics that
correspond to the families \( \{U_{i}^{(j)}\} \), then for all \( a_{1}, a_{2} \in A \)
\[ \frac{1}{4}\eta_{1}(a_{1},a_{2}) \le \eta_{2}(a_{1}, a_{2}) \le 4 \eta_{1}(a_{1},a_{2}), \]
whence
\[ \frac{1}{8}d_{1}(a_{1},a_{2}) \le d_{2}(a_{1}, a_{2}) \le 8 d_{1}(a_{1},a_{2}), \]
and therefore \( d_{1}|_{A} \) and \( d_{2}|_{A} \) are bi-\lipschitz equivalent with a constant
\( K = 8 \).
\end{proof}
\begin{remark}
\label{sec:further-remarks-infinite-bilipschitz}
It is, of course, straightforward to generalize the above construction to the case of finitely
many groups \( G_{j} \), but we do not know if the result is true for infinitely many
groups \( G_{j} \).
\end{remark}
\begin{remark}
\label{sec:further-remarks-constant-multiply}
Note that one can always multiply the metric \( d_{2} \) by a suitable constant (which is \( 8 \) in
the above construction) to assure that \( d_{1}|_{A} \le d_{2}|_{A} \). We use this observation later in
Remark \ref{sec:further-remarks-normal-subgroup}.
\end{remark}
\begin{proposition}
\label{sec:further-remarks-extension-left-invariant}
Let \( G \) be a topological group, \( A < G \) be a closed subgroup of \( G \), \( N_{G} \) be a
tsi norm on \( G \), \( N_{A} \) be a tsi norm on \( A \) and suppose that for all \( a \in A \)
\[ N_{A}(a) \le N_{G}(a). \]
There exists a compatible norm \( N \) on \( G \) such that
\begin{enumerate}[(i)]
\item\label{ext-left-inv:extension} \( N \) extends \( N_{A} \), that is \( N_{A}(a) = N(a) \) for
all \( a \in A \);
\item\label{ext-left-inv:domin} \( N \le N_{G} \).
\end{enumerate}
If, moreover, \( A \) is a normal subgroup of \( G \), then \( N \) is two-sided invariant.
\end{proposition}
\begin{proof}
For \( g \in G \) set
\[ N(g) = \inf\{N_{A}(a) + N_{G}(a^{-1}g) : a \in A\}. \]
We claim that \( N \) is a pseudo-norm on \( G \).
\begin{itemize}
\item \( N(e) = 0 \) is obvious.
\item For any \( g \in G \) and any \( a \in A \) by the two-sided invariance of \( N_{G} \)
\[ N_{A}(a) + N_{G}(a^{-1}g) = N_{A}(a^{-1}) + N_{G}(g^{-1}a) = N_{A}(a^{-1}) + N_{G}(ag^{-1}) \]
and therefore \( N(g) = N(g^{-1}) \).
\item If \( g_{1}, g_{2} \in G \), then
\begin{displaymath}
\begin{aligned}
N(g_{1}g_{2}) =& \inf\{N_{A}(a) + N_{G}(a^{-1}g_{1}g_{2}) : a \in A\} = \\
&\inf\{N_{A}(a_{1}a_{2}) + N_{G}(a_{2}^{-1}a_{1}^{-1}g_{1}g_{2}) : a_{1}, a_{2} \in A\} \le \\
&\inf\{N_{A}(a_{1}) + N_{A}(a_{2}) + N_{G}(a_{1}^{-1}g_{1}) + N_{G}(g_{2}a_{2}^{-1}): a_{1}, a_{2} \in A\} =\\
&\inf\{N_{A}(a_{1}) + N_{G}(a_{1}^{-1}g_{1}) : a_{1} \in A\} + \\
&\inf\{N_{A}(a_{2}) + N_{G}(a_{2}^{-1}g_{2}) : a_{2} \in A\} = \\
&N(g_{1}) + N(g_{2}).
\end{aligned}
\end{displaymath}
\end{itemize}
Next we show that \( N \) is a compatible pseudo-norm. For a sequence
\( \{g_{n}\}_{n=1}^{\infty} \subseteq G \) we have
\begin{displaymath}
\begin{aligned}
N(g_{n}) \to 0 \iff & \exists \{a_{n}\}_{n=1}^{\infty} \subseteq A \quad N_{A}(a_{n}) +
N_{G}(a_{n}^{-1}g_{n}) \to 0
\iff \\
& \exists \{a_{n}\}_{n=1}^{\infty} \subseteq A \quad a_{n} \to e \textrm{ and } a_{n}^{-1}g_{n} \to e \iff\\
& g_{n} \to e.
\end{aligned}
\end{displaymath}
In particular, \( N \) is a norm.
\eqref{ext-left-inv:extension} Now we claim that \( N \) extends \( N_{A} \). Let
\( b \in A \). Using \( N_{G} \ge N_{A} \) we get
\begin{displaymath}
\begin{aligned}
N(b) = &\inf\{N_{A}(a) + N_{G}(a^{-1}b) : a \in A\} \ge\\
&\inf\{N_{A}(a) + N_{A}(a^{-1}b) : a \in A\} \ge N_{A}(b).
\end{aligned}
\end{displaymath}
On the other hand
\[ N(b) \le N_{A}(b) + N_{G}(b^{-1}b) = N_{A}(b), \] and therefore \( N(b) = N_{A}(b) \).
\eqref{ext-left-inv:domin} Finally, for any \( g \in G \) we have
\begin{displaymath}
\begin{aligned}
N(g) = & \inf\{N_{A}(a) + N_{G}(a^{-1}g) : a \in A\} \le\\
& \inf\{N_{G}(a) + N_{G}(a^{-1}g) : a \in A\} \le\\
& N_{G}(e) + N_{G}(g) = N_{G}(g),
\end{aligned}
\end{displaymath}
and therefore \( N \le N_{G} \).
For the moreover part suppose that \( A \) is a normal subgroup. If \( g_{1} \in G \), then
\begin{displaymath}
\begin{aligned}
N(g_{1} g g_{1}^{-1}) = & \inf\{ N_{A}(a) + N_{G}(a^{-1}g_{1} g g_{1} ^{-1}) : a \in A\} = \\
&\inf\{N_{A}(g_{1}^{-1}ag_{1}) + N_{G}(g_{1}^{-1}a^{-1}g_{1} g) : a \in A\} = N(g),
\end{aligned}
\end{displaymath}
and so \( N \) is two-sided invariant.
\end{proof}
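For illustration, consider the following toy instance in additive notation (ours, not from the text): \( G = \mathbb{Z} \) with \( N_{G}(g) = |g| \) and \( A = 2\mathbb{Z} \) with \( N_{A}(2k) = |k| \), so that \( N_{A} \le N_{G} \) on \( A \). The extension of the proposition becomes the infimal convolution \( N(g) = \inf\{N_{A}(a) + N_{G}(g - a) : a \in A\} \), and properties \eqref{ext-left-inv:extension}, \eqref{ext-left-inv:domin} and the triangle inequality can be checked directly.

```python
# Toy instance in additive notation (ours, not from the text):
# G = Z with N_G(g) = |g|, A = 2Z with N_A(2k) = |k|, so N_A <= N_G
# holds on A.  The extension of the proposition is the infimal
# convolution N(g) = inf{ N_A(a) + N_G(g - a) : a in A }.

def N_G(g):
    return abs(g)

def N_A(a):
    assert a % 2 == 0, "N_A is only defined on A = 2Z"
    return abs(a) // 2

def N(g, search=200):
    # a finite search suffices: terms with |k| > N(g) cannot improve
    # the minimum, since N_A(2k) = |k| already exceeds it
    return min(N_A(2 * k) + N_G(g - 2 * k)
               for k in range(-search, search + 1))

for g in range(-50, 51):
    if g % 2 == 0:
        assert N(g) == N_A(g)     # (i): N extends N_A
    assert N(g) <= N_G(g)         # (ii): N <= N_G
for g1 in range(-15, 16):
    for g2 in range(-15, 16):
        assert N(g1 + g2) <= N(g1) + N(g2)   # triangle inequality
```

Since \( A = 2\mathbb{Z} \) is normal (the groups are abelian), the resulting \( N \) is automatically two-sided invariant, in line with the moreover part.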
\begin{remark}
\label{sec:further-remarks-normal-subgroup}
Proposition \ref{sec:further-remarks-common-bilipschitz-metric} (with Remark
\ref{sec:further-remarks-constant-multiply}) and Proposition
\ref{sec:further-remarks-extension-left-invariant} together yield a positive answer to Question
\ref{sec:further-remarks-metrics-agree-on-subgroup} when \( A \) is a normal subgroup of one of
\( G_{j} \).
\end{remark}
It is natural to ask whether it is really necessary to assume in Proposition
\ref{sec:further-remarks-extension-left-invariant} the existence of a norm \( N_{G} \) such that
\( N_{A} \le N_{G} \). The following example shows that this assumption cannot be dropped.
\begin{example-nn}
\label{sec:free-prod-topol-heisenberg}
Let \( G \) be the discrete Heisenberg group
\[ G = \left \{
\begin{pmatrix}
1 & a & b\\
0 & 1 & c\\
0 & 0 & 1
\end{pmatrix} : a, b, c \in \mathbb{Z} \right \},
\]
and let \( A \) be the center of \( G \)
\[ A = \left \{
\begin{pmatrix}
1 & 0 & b\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix} : b \in \mathbb{Z} \right \}.
\]
The subgroup \( A \) is, of course, isomorphic to the group of integers \( \mathbb{Z} \). Let
\( d \) be a metric on \( A \) given by the absolute value: \( d(b_{1}, b_{2}) = |b_{1} - b_{2}| \).
We claim that this tsi metric cannot be extended to a tsi (in fact, even to a left invariant)
metric on \( G \). Indeed, suppose there is such an extension \( \dist \). The group \( G \)
is generated by the three matrices:
\[ x =
\begin{pmatrix}
1 & 1 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}, \ y =
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 1\\
0 & 0 & 1
\end{pmatrix}, \ \textrm{and } z =
\begin{pmatrix}
1 & 0 & -1\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}. \]
It is easy to check that \( z^{-n^{2}} = [x^{n},y^{n}] = x^{n}y^{n}x^{-n}y^{-n} \). Therefore
\[ n^{2} = d(z^{-n^{2}},e) = \dist(z^{-n^{2}},e) = \dist(x^{n}y^{n}x^{-n}y^{-n},e) \le 2n \big(\dist(x,e) + \dist(y,e)
\big), \]
for all \( n \), which is absurd.
\end{example-nn}
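The commutator identity behind the example can be verified mechanically. The following check (ours, not from the text) confirms that, with the matrices above and the convention \( [x,y] = xyx^{-1}y^{-1} \), the commutator \( [x^{n},y^{n}] \) is the central element with upper-right entry \( n^{2} \), that is \( z^{-n^{2}} \) given the \( -1 \) entry in \( z \); either way its distance to the identity in \( (A,d) \) is \( n^{2} \), which is what the estimate above uses.

```python
# Sanity check (ours) of the commutator identity in the discrete
# Heisenberg group: with x, y, z as in the example and the convention
# [x, y] = x y x^{-1} y^{-1}, one has [x^n, y^n] = z^{-n^2}.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(P, n):
    if n < 0:
        # closed-form inverse of a unipotent upper triangular matrix
        a, b, c = P[0][1], P[0][2], P[1][2]
        Pinv = [[1, -a, a * c - b], [0, 1, -c], [0, 0, 1]]
        return mat_pow(Pinv, -n)
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

x = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
y = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
z = [[1, 0, -1], [0, 1, 0], [0, 0, 1]]

for n in range(1, 8):
    comm = mat_mul(mat_mul(mat_pow(x, n), mat_pow(y, n)),
                   mat_mul(mat_pow(x, -n), mat_pow(y, -n)))
    assert comm == mat_pow(z, -n * n)
    assert comm[0][2] == n * n       # d(comm, e) = n^2 on A = Z
```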
\section{Induced metrics}
\label{sec:hnn-extensions}
In this section \( (G,d) \) denotes a tsi group, and \( A < G \) is a closed subgroup. This section
is a preparation for the HNN construction, which is given in the next section. Let
\( \langle t \rangle \) denote a copy of the free group on one element \( t \), i.e., a copy of the
integers, with the usual metric \( d(t^{m}, t^{n}) = |m-n| \). The Graev metric on the free product
\( G * \langle t \rangle \) is denoted again by the letter \( d \). Consider the subgroup of the
free product generated by \( G \) and \( tAt^{-1} \); it is not hard to check that, in fact, as an
abstract group it is isomorphic to the free product \( G * tAt^{-1} \). Thus we have two metrics on
the group \( G * tAt^{-1} \): one is just the metric \( d \), the other one is the Graev metric on
this free product; denote the latter by \( \dist \). When are these two metrics the same? It turns
out that they are the same if and only if the diameter of \( A \) is at most \( 1 \). The proof of
this fact is the core of this section.
We can naturally view \( \word{G \cup tAt^{-1}} \) as a subset of
\( \word{G \cup \langle t \rangle} \) by treating a letter \( tat^{-1} \in tAt^{-1} \) as a word
\( t \concat a \concat t^{-1} \in \word{G \cup \langle t \rangle} \). In what follows we identify
\( \word{G \cup tAt^{-1}} \) with a subset of \( \word{G \cup \langle t \rangle} \).
Let \( f \in G * tAt^{-1} \) be given and let \( \alpha \in \word{G \cup tAt^{-1}}\) be the reduced
form of \( f \). Note that since we have a free product (no amalgamation), the reduced form is unique.
The word \( \alpha \in \word{G \cup \langle t \rangle} \) can be written as
\[ \alpha = g_{1} \concat t \concat a_{1} \concat t^{-1} \concat g_{2} \concat t \concat a_{2} \concat t^{-1} \concat \cdots
\concat t \concat a_{n} \concat t^{-1} \concat g_{n+1}, \]
where \( g_{i} \in G \), \( a_{i} \in A \), and also \( g_{1}\) or \( g_{n+1} \) may be absent.
Lemma \ref{sec:metrics-amalgams-simple reduced-reduction} implies
\[ d(f,e) = \inf\{ \rho(\alpha,\zeta) : (\alpha,\zeta) \textrm{ is a \multipliable \( f \)-pair}\}, \]
and notice that the infimum is taken over all pairs with the same first coordinate \( \alpha \) --- the
reduced form of \( f \). We can also impose some restrictions on \( \zeta \) and change the infimum to
a minimum, but we do not need this for the moment.
\textit{In the rest of the section \( \zeta, \xi, \delta \) denote words in the alphabet \( G \cup \langle t \rangle \)}.
\subsection{Hereditary words}
\begin{definition}
\label{sec:hereditary-pair}
A trivial word \( \zeta \in \word{G \cup \langle t \rangle} \) is called \emph{hereditary} if
\( \zeta(i) \in \langle t \rangle \setminus \{e\} \) implies \( \zeta(i) = t^{\pm 1} \) for all \( i \in \seg{1}{n}
\). A \multipliable \( f \)-pair \( (\alpha,\zeta) \), where \( f \in G* tAt^{-1} \), is called \emph{hereditary} if
\( \alpha \) is the reduced form of \( f \), \( \zeta \) is hereditary, and moreover,
\[ \zeta(i) = t^{\pm 1} \implies \zeta(i) = \alpha(i). \]
\end{definition}
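For example, suppose \( g_{1}, g_{2} \in G \setminus \{e\} \), \( a \in A \setminus \{e\} \), and
\( \alpha = g_{1} \concat t \concat a \concat t^{-1} \concat g_{2} \) is the reduced form of
\( f = g_{1}(tat^{-1})g_{2} \). Then both
\[ \zeta = g_{2}^{-1} \concat t \concat e \concat t^{-1} \concat g_{2} \quad \textrm{and} \quad
\zeta' = g_{2}^{-1} \concat e \concat e \concat e \concat g_{2} \]
are trivial hereditary words, and \( (\alpha,\zeta) \), \( (\alpha,\zeta') \) are hereditary \( f \)-pairs; for
\( \zeta' \) the conditions of the definition hold vacuously, since \( \zeta' \) contains no letters from
\( \langle t \rangle \setminus \{e\} \).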
\begin{lemma}
\label{sec:reduction-to-hereditary-pair}
Let \( f \in G * tAt^{-1} \), and let \( \alpha \in \word{G \cup \langle t \rangle } \) be the reduced form of \( f
\). If \( (\alpha,\zeta) \) is a \multipliable \( f \)-pair, then there exists a trivial word
\( \xi \in \word{G \cup \langle t \rangle} \) such that \( (\alpha,\xi) \) is a hereditary \( f \)-pair and
\( \rho(\alpha,\xi) \le \rho(\alpha,\zeta) \).
\end{lemma}
\begin{proof}
Let \( T_{\zeta} \) be an evaluation tree for \( \zeta \). Fix \( s \in T_{\zeta} \). Suppose there exists
\( j \in R_{s} \) such that \( \alpha(j) = t^{\pm 1} \) and neither \( \zeta(j) = \alpha(j) \) nor \( \zeta(j) = e \).
Since \( \zeta(j) \ne e \) and because the pair \( (\alpha,\zeta) \) is \multipliable, it must be the case that
\( \zeta(j) = t^{M} \) for some \( M \ne 0 \). Let \( \{i_{k}\}_{k=1}^{m} \subseteq R_{s} \) be the complete list of
external letters of \( \zeta \) in \( R_{s} \), note that \( j \in \{i_{k}\}_{k=1}^{m} \). Since \( R_{s} \) is
\( \zeta \)-\multipliable, we have \( \zeta(i_{k}) \cong t \) for all \( k \in \seg{1}{m} \). Note that since we have a
free product, any evaluation tree is, in fact, slim, and any \multipliable \( f \)-pair is, in fact, a simple \( f
\)-pair. So we can perform a symmetrization. Set
\[ \delta = \symmet{\alpha}{\zeta}{i_{1}}{\{i_{k}\}}. \]
By Lemma \ref{sec:metrics-amalgams-symmet-rho-decreases} \( \rho(\alpha,\delta) \le \rho(\alpha,\zeta) \) and also for
all \( i \in R_{s} \) we have
\[ (\alpha(i) = \delta(i)) \textrm{ or } (\delta(i) = e) \textrm{ or } (i = i_{1}). \]
Let \( \epsilon_{k} \in \{-1,+1\} \) be such that \( \alpha(i_{k}) = t^{\epsilon_{k}} \). For all
\( k \in \seg{2}{m} \)
\[ \delta(i_{k}) = \alpha(i_{k}) = t^{\epsilon_{k}}. \]
Let \( N \) be such that \( \delta(i_{1}) = t^{N} \). Note that since \( \hat{\delta}[I_{s}] = e \),
\[ N + \epsilon_{2} + \ldots + \epsilon_{m} = 0.\] We now construct a word \( \bar{\xi} \) as follows.
\setcounter{case}{-1}
\begin{case}
If \( N = 0 \) or \( N = \epsilon_{1} \), then set \( \bar{\xi} = \delta \).
\end{case}
In the cases below we assume \( N \not \in \{0,\epsilon_{1}\} \).
\begin{case}
Suppose \( \sign{N} = \sign{\epsilon_{1}} \). Find different indices \( k_{1}, \ldots, k_{|N|-1} \) such that
\( \sign{N} = -\sign{\epsilon_{k_{p}}} \) for all \( p \in \seg{1}{|N|-1} \). Set
\begin{displaymath}
\bar{\xi}(i) =
\begin{cases}
\delta(i) & \textrm{if \( i \not \in \{i_{k_{p}}\}_{p=1}^{|N|-1} \) and \( i \ne i_{1} \)} ;\\
\alpha(i_{1}) & \textrm{if \( i = i_{1} \)}; \\
e & \textrm{if \( i \in \{i_{k_{p}}\}_{p=1}^{|N|-1} \)}.
\end{cases}
\end{displaymath}
\end{case}
\begin{case}
Suppose \( \sign{N} = -\sign{\epsilon_{1}} \). Find different indices \( k_{1}, \ldots, k_{|N|} \) such that
\( \sign{N} = -\sign{\epsilon_{k_{p}}} \) for all \( p \in \seg{1}{|N|} \). Set
\begin{displaymath}
\bar{\xi}(i) =
\begin{cases}
\delta(i) & \textrm{if \( i \not \in \{i_{k_{p}}\}_{p=1}^{|N|} \) and \( i \ne i_{1} \)};\\
e & \textrm{if \( i \in \{i_{k_{p}}\}_{p=1}^{|N|} \) or \( i = i_{1} \)}.
\end{cases}
\end{displaymath}
\end{case}
It is easy to check that \( \rho(\alpha,\delta) = \rho(\alpha,\bar{\xi}) \) and \( \hat{\bar{\xi}} = e \). Moreover, for
all \( i \in R_{s} \) either \( \bar{\xi}(i) = \alpha(i) \) or
\( \bar{\xi}(i) = e \).
Now apply the same procedure for all \( s \in T_{\zeta} \) and denote the result by \( \xi \). The word \( \xi \) is
as desired.
\end{proof}
To analyze the structure of hereditary words we introduce the following notion of a structure tree.
\begin{definition}
\label{sec:structure-tree}
Let \( \zeta \) be a hereditary word of length \( n \). A tree \( T_{\zeta} \) together with a function that assigns
to a node \( s \in T_{\zeta} \) an interval \( I_{s} \subseteq \seg{1}{n} \) is called a \emph{structure tree for
\( \zeta \)} if for all \( s', s \in T_{\zeta} \) the following conditions are met:
\begin{enumerate}[(i)]
\item \( I_{\emptyset} = \seg{1}{n} \);
\item \( \hat{\zeta}[I_{s}] = e \);
\item if \( s \ne \emptyset \), then \( \zeta(m(I_{s})) = t^{\pm 1} \) and \( \zeta(M(I_{s})) = t^{\mp 1} \) (in
particular \( \zeta(m(I_{s})) = \zeta(M(I_{s}))^{-1} \)).
\end{enumerate}
Set \( R_{s} = I_{s} \setminus \bigcup_{s' \prec s} I_{s'} \); then also
\begin{enumerate}[(i)]
    \setcounter{enumi}{3}
\item for all \( i \in R_{s} \) if \( i \not \in \{m(I_{s}), M(I_{s})\} \), then \( \zeta(i) \in G \) (in particular
\( R_{s} \setminus\{m(I_{s}), M(I_{s})\}\) is \( \zeta \)-\multipliable);
\item \( \zeta(i) \in G \) for all \( i \in R_{\emptyset} \) (in general \( R_{\emptyset} \) may be empty);
\item\label{lem:structure-item:intervals-order-str} if \( H(s) \le H(s') \) and \( I_{s'} \cap I_{s} \ne \emptyset \),
then \( s' \prec s \) or \( s'=s \);
\item\label{lem:structure-item:strict-inclusion-str} if \( s' \prec s \) and \( s \ne \emptyset \), then
\[ m(I_{s}) < m(I_{s'}) < M(I_{s'}) < M(I_{s}). \]
\end{enumerate}
\end{definition}
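For example, let \( h \in G \setminus \{e\} \) and
\( \zeta = h \concat t \concat e \concat t^{-1} \concat h^{-1} \). Then \( \zeta \) is a trivial hereditary word,
and \( T_{\zeta} = \{\emptyset, s\} \) with \( s \prec \emptyset \), \( I_{\emptyset} = \seg{1}{5} \), and
\( I_{s} = \seg{2}{4} \) is a structure tree for \( \zeta \): indeed,
\( \hat{\zeta}[I_{s}] = t \cdot e \cdot t^{-1} = e \), \( \zeta(m(I_{s})) = t \), \( \zeta(M(I_{s})) = t^{-1} \),
and the regions \( R_{\emptyset} = \{1,5\} \), \( R_{s} = \seg{2}{4} \) contain only letters from \( G \) outside of
\( \{m(I_{s}), M(I_{s})\} \).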
\begin{lemma}
\label{sec:hnn-extensions-symmetry-of-heriditary-pair}
If \( \zeta \) is a hereditary word of length \( n \), then
\[ |\{i \in \seg{1}{n} : \zeta(i) = t\}| = |\{i \in \seg{1}{n} : \zeta(i) = t^{-1}\}|. \]
\end{lemma}
\begin{proof}
Let \( \{i_{k}\}_{k=1}^{m} \) be the list of letters such that
\begin{enumerate}[(i)]
\item \( \zeta(i_{k}) = t^{\epsilon_{k}} \) for some \( \epsilon_{k} \in \{-1,1\} \);
\item \( \zeta(i) = t^{\epsilon} \), \( \epsilon \in \{-1,1\} \), implies \( i = i_{k} \) for some \( k \).
\end{enumerate}
Since \( \hat{\zeta} = e \), we get
\[ \epsilon_{1} + \ldots + \epsilon_{m} = 0, \] and therefore
\[ |\{i \in \seg{1}{n} : \zeta(i) = t\}| = |\{i \in \seg{1}{n} : \zeta(i) = t^{-1}\}|. \qedhere \]
\end{proof}
\begin{lemma}
\label{sec:hnn-extensions-induction-step-in-structure-tree}
Let \( \zeta \) be a hereditary word of length \( n \). If there is \( i \in \seg{1}{n} \) such that
\( \zeta(i) = t \), then there is an interval \( I \subseteq \seg{1}{n} \) such that
\begin{enumerate}[(i)]
\item \( \zeta(m(I)) = t^{\pm 1} \) and \( \zeta(M(I)) = t^{\mp 1} \);
\item \( \zeta(i) \in G \) for all \( i \in I \setminus \{m(I), M(I)\} \);
\item \( \hat{\zeta}[I] = e \).
\end{enumerate}
\end{lemma}
\begin{proof}
Let \( I_{1}, \ldots, I_{m} \) be the list of intervals such that
\begin{enumerate}[(i)]
\item \( \zeta(m(I_{k})) = t^{\pm 1} \), \( \zeta(M(I_{k})) = t^{\mp 1} \);
\item \( \zeta(i) \in G \) for all \( i \in I_{k} \setminus\{m(I_{k}), M(I_{k})\} \);
\item \( M(I_{k}) \le m(I_{k+1}) \);
\item if \( I \) is an interval that satisfies (i) and (ii) above, then \( I = I_{k} \) for some
\( k \in \seg{1}{m} \).
\end{enumerate}
It follows from Lemma \ref{sec:hnn-extensions-symmetry-of-heriditary-pair} that the list of such intervals is
nonempty. Let \( J_{0}, \ldots, J_{m} \) be the complementary intervals:
  \[ J_{0} = \seg{1}{m(I_{1})-1}, \quad J_{m} = \seg{M(I_{m})+1}{n}, \]
  \[ J_{k} = \seg{M(I_{k})+1}{m(I_{k+1})-1} \quad \textrm{for \( k \in \seg{1}{m-1} \)}. \]
Some (and even all) of the intervals \( J_{k} \) may be empty. If for some \( j_{1}, j_{2} \in J_{k} \) we have
\( \zeta(j_{1}) = t^{\epsilon_{1}}\), \( \zeta(j_{2}) = t^{\epsilon_{2}} \), then \( \epsilon_{1} = \epsilon_{2} \),
and moreover, \( \zeta(M(I_{k})) = \zeta(j_{1}) = \zeta(m(I_{k+1})) \). It is now easy to see that
\(\hat {\zeta}[I_{k}] \ne e \) for all \( k \in \seg{1}{m} \) implies \( \hat{\zeta} \ne e\), contradicting the
assumption that \( \zeta \) is trivial.
\end{proof}
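To illustrate the lemma, take \( g \in G \setminus \{e\} \) and the hereditary word
\[ \zeta = t \concat g \concat t^{-1} \concat t \concat g^{-1} \concat t^{-1}. \]
The interval \( \seg{1}{3} \) satisfies conditions (i) and (ii) but fails (iii), because
\( \hat{\zeta}[\seg{1}{3}] = tgt^{-1} \ne e \); an interval granted by the lemma is \( I = \seg{3}{4} \), for which
condition (ii) holds vacuously and \( \hat{\zeta}[I] = t^{-1}t = e \).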
\begin{lemma}
\label{sec:hnn-extensions-existence-of-structure-tree}
If \( \zeta \) is a hereditary word of length \( n \), then there is a structure tree \( T_{\zeta} \) for \( \zeta \).
\end{lemma}
\begin{proof}
We prove the lemma by induction on \( |\{i \in \seg{1}{n} : \zeta(i) = t\}| \). For the base of induction suppose
that \( \zeta(i) \ne t \) for all \( i \). By the definition of a hereditary word and by Lemma
\ref{sec:hnn-extensions-symmetry-of-heriditary-pair} we have \( \zeta(i) \in G \) for all \( i \in \seg{1}{n} \). Set
\( T_{\zeta} = \{\emptyset\} \) and \( I_{\emptyset} = \seg{1}{n} \). It is easy to see that this gives a structure
tree.
Suppose now there is \( i \in \seg{1}{n} \) such that \( \zeta(i) = t \). Apply Lemma
\ref{sec:hnn-extensions-induction-step-in-structure-tree} and let \( I \) denote an interval granted by this lemma.
Let \( m \) be the length of \( I \). If \( m = n \), that is if \( I = \seg{1}{n} \), then set
\( T_{\zeta} = \{\emptyset, s\} \) with \( s \prec \emptyset \) and \( I_{s} = I_{\emptyset} = \seg{1}{n} \). One
  checks that this is a structure tree. Assume now that \( m < n \). Define a word \( \delta \) of length \( n-m \)
by
\begin{displaymath}
\delta(i) =
\begin{cases}
\zeta(i) & \textrm{if \( i < m(I) \)} \\
\zeta(i+m) & \textrm{if \( i \ge m(I) \)}.
\end{cases}
\end{displaymath}
The word \( \delta \) is a hereditary word and
\[ |\{ i \in \seg{1}{|\delta|} : \delta(i) = t\}| < |\{i \in \seg{1}{n} : \zeta(i) = t\}|. \]
Therefore, by induction hypothesis, there is a structure tree \( T_{\delta} \) and intervals \( J_{s} \),
\( s \in T_{\delta} \), for the word \( \delta \). Let \( s' \) be a symbol for a new node. Set
\( T_{\zeta} = T_{\delta} \cup \{s'\}\). If \( m(I) = 1 \) or \( M(I) = n \), set
  \( (s', \emptyset) \in E(T_{\zeta}) \). Otherwise let \( s \in T_{\delta} \) be the minimal node such that
\( m(J_{s}) < m(I) \le M(J_{s}) \) (\( s \) may still be the root \( \emptyset \)) and set
  \( (s',s) \in E(T_{\zeta}) \). Finally, define for \( s \in T_{\delta} \)
\begin{displaymath}
I_{s} =
\begin{cases}
J_{s} & \textrm{if \( M(J_{s}) < m(I) \)}; \\
\seg{m(J_{s})}{M(J_{s}) + m} & \textrm{if \( m(J_{s}) < m(I) \le M(J_{s}) \)}; \\
      \seg{m(J_{s})+m}{M(J_{s})+m} & \textrm{if \( m(I) \le m(J_{s}) \)}.
\end{cases}
\end{displaymath}
and set \( I_{s'} = I \).
It is now straightforward to check that \( T_{\zeta} \) is a structure tree for \( \zeta \).
\end{proof}
\subsection{From hereditary to rigid words}
\label{sec:from-hered-rigid}
\textit{From now on \( A \) will denote a closed subgroup of \( G \) of diameter \( \diam{A} \le 1 \),} unless stated
otherwise.
\begin{lemma}
\label{sec:hnn-extensions-a-cancellation-error}
If \( (G,d) \) is a tsi group, then for all \( g_{1}, \ldots, g_{n-1} \in G \), for all
\( a_{1}, \ldots, a_{n} \in A \) such that \( d(a_{i}, e) \le 1 \)
  \[ d(g_{1} \cdots g_{n-1}, a_{1}g_{1}a_{2} \cdots a_{n-1}g_{n-1}a_{n}) \le n. \]
\end{lemma}
\begin{proof}
By induction. For \( n=2 \) we have
\[ d(g_{1},a_{1}g_{1}a_{2}) \le d(g_{1},a_{1}g_{1}) + d(a_{1}g_{1},a_{1}g_{1}a_{2}) = d(e,a_{1}) + d(e,a_{2}) \le
  2. \] For the induction step
\begin{displaymath}
\begin{aligned}
& d(g_{1} \cdots g_{n-1}, a_{1}g_{1}a_{2} \cdots a_{n-1}g_{n-1}a_{n}) \le\\
& d(g_{1} \cdots g_{n-1}, g_{1} \cdots g_{n-1}a_{n}) + d(g_{1} \cdots g_{n-1}a_{n}, a_{1}g_{1}a_{2} \cdots
a_{n-1}g_{n-1}a_{n}) = \\
& d(e,a_{n}) + d(g_{1} \cdots g_{n-2}, a_{1}g_{1}a_{2} \cdots g_{n-2}a_{n-1}) \le 1 + (n-1)= n.
\end{aligned}
\end{displaymath}
And the lemma follows.
\end{proof}
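For instance, with \( n = 3 \) the inductive step reads
\[ d(g_{1}g_{2}, a_{1}g_{1}a_{2}g_{2}a_{3}) \le d(g_{1}g_{2}, g_{1}g_{2}a_{3}) + d(g_{1}g_{2}a_{3},
a_{1}g_{1}a_{2}g_{2}a_{3}) = d(e,a_{3}) + d(g_{1}, a_{1}g_{1}a_{2}) \le 1 + 2 = 3. \]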
Let \( \beta \) be a word of the form
\[ \beta = g_{0} \concat t \concat a_{1} \concat t^{-1} \concat g_{1} \concat t \concat a_{2} \concat t^{-1} \concat
\cdots \concat g_{n-1} \concat t \concat a_{n} \concat t^{-1} \concat g_{n}, \]
where \( g_{i} \in G \) and \( a_{i} \in A \).
Define a word \( \delta \) by setting for \( i \in \seg{1}{|\beta|} \)
\begin{displaymath}
\delta(i) =
\begin{cases}
    e & \textrm{if \( i \equiv 1 \pmod{4} \)}; \\
    t & \textrm{if \( i \equiv 2 \pmod{4} \)}; \\
    e & \textrm{if \( i \equiv 3 \pmod{4} \)}; \\
    t^{-1} & \textrm{if \( i \equiv 0 \pmod{4} \)}.
\end{cases}
\end{displaymath}
Or, equivalently,
\[ \delta = e \concat t \concat e \concat t^{-1} \concat e \concat \cdots \concat e \concat t \concat e \concat t^{-1} \concat
e. \]
If \( T_{\delta} = \{\emptyset, s_{1}, \ldots, s_{n}, s_{1}', \ldots, s_{n}'\} \) with \( s_{k} \prec \emptyset \),
\( s_{k}' \prec s_{k} \), \( I_{s_{k}} = \seg{4k-2}{4k} \), \( I_{{s_{k}'}} = \{4k-1\} \), then \( T_{\delta} \) is a
slim evaluation tree. Set
\[ \xi = \symmet{\beta}{\delta}{1}{\{4k+1\}_{k=0}^{n}} = \symmet{\beta}{\delta}{1}{R_{\emptyset}}. \]
\begin{lemma}
\label{sec:hnn-extensions-even-subword-reduction-type-g}
Let \( \beta, \xi \) be as above. If \( \zeta \) is a trivial word of length \( |\beta| \), \( \zeta \) and
\( \beta \) are \multipliable and \( \zeta(i) \in G \) for all \( i \), in other words if
\[ \zeta = h_{0} \concat e \concat h_{1} \concat e \concat h_{2} \concat e \concat h_{3} \concat e \concat \cdots
\concat h_{2n-2} \concat e \concat h_{2n-1} \concat e \concat h_{2n}, \]
where \( h_{i} \in G \), then \( \rho(\beta,\xi) \le \rho(\beta,\zeta) \).
\end{lemma}
\begin{proof}
By the two-sided invariance
\[ \rho(\beta,\zeta) \ge d(g_{0}a_{1}g_{1}a_{2}\cdots g_{n-1}a_{n}g_{n},e) + 2n. \] On the other hand
\begin{displaymath}
\begin{aligned}
\rho(\beta,\xi) =& \sum_{i=1}^{n} d(a_{i},e) + d(g_{0}g_{1} \cdots g_{n},e) \le\\
& n + d(g_{0}g_{1} \cdots g_{n}, e) \le \\
& n + d(g_{0}g_{1} \cdots g_{n}, g_{0}a_{1}g_{1}a_{2} \cdots g_{n-1}a_{n}g_{n}) + d(g_{0}a_{1}g_{1}a_{2} \cdots
g_{n-1}a_{n}g_{n},e) = \\
& n + d(g_{1} \cdots g_{n-1}, a_{1}g_{1} \cdots g_{n-1}a_{n}) + d(g_{0}a_{1}g_{1}a_{2} \cdots g_{n-1}a_{n}g_{n},e)
\le \\
& \textrm{[by Lemma \ref{sec:hnn-extensions-a-cancellation-error}] } 2n + d(g_{0}a_{1}g_{1}a_{2} \cdots
g_{n-1}a_{n}g_{n},e).
\end{aligned}
\end{displaymath}
Hence \( \rho(\beta,\xi) \le \rho(\beta,\zeta). \)
\end{proof}
Suppose we have words
\begin{displaymath}
  \begin{aligned}
    \nu_{k} &= g_{(k,1)} \concat \cdots \concat g_{(k,q_{k})}, \quad
    \textrm{where \( g_{(k,j)} \in G \) and \(k \in \seg{0}{n}, \)}\\
    \mu_{k} &= a_{(k,1)} \concat \cdots \concat a_{(k,p_{k})}, \quad \textrm{where \( a_{(k,j)} \in A \) and
    \(k \in \seg{1}{n}. \)}
  \end{aligned}
\end{displaymath}
Let \( \bar{\beta} \) be the word
\[ \bar{\beta} = \nu_{0} \concat t \concat \mu_{1} \concat t^{-1} \concat \nu_{1} \concat \cdots \concat \nu_{n-1} \concat t
\concat \mu_{n} \concat t^{-1} \concat \nu_{n}. \]
Let \( \{i_{k}\}_{k=1}^{n} \), \( \{i'_{k}\}_{k=1}^{n} \) be indices such that
\begin{enumerate}[(i)]
\item \( i_{k} < i_{k+1} \), \( i'_{k} < i'_{k+1} \);
\item \( \bar{\beta}(i_{k}) = t \), \( \bar{\beta}(i'_{k}) = t^{-1} \);
\item if \( \bar{\beta}(i) = t \), then \( i = i_{k} \) for some \( k \in \seg{1}{n} \); if \( \bar{\beta}(i) = t^{-1} \), then
  \( i = i'_{k} \) for some \( k \in \seg{1}{n} \).
\end{enumerate}
In other words
\[ i_{k} = \sum_{l=0}^{k-1} q_{l} + \sum_{l=1}^{k-1} p_{l} + 2(k-1) + 1, \quad i'_{k} = i_{k} + p_{k} + 1.\]
Define the word \( \delta \) of length \( |\bar{\beta}| \) by
\begin{displaymath}
\delta(i) =
\begin{cases}
e & \textrm{if \( \bar{\beta}(i) \in G \)};\\
\bar{\beta}(i) & \textrm{if \( \bar{\beta}(i) = t^{\pm 1} \)}.\\
\end{cases}
\end{displaymath}
If \( T_{\delta} = \{\emptyset, s_{1}, \ldots, s_{n}, s_{1}', \ldots, s_{n}'\} \), \( s_{k} \prec \emptyset \),
\( s_{k}' \prec s_{k} \) and \( I_{s_{k}} = \seg{i_{k}}{i_{k}'} \), \( I_{s_{k}'} = \seg{i_{k}+1}{i_{k}'-1} \)
(in other words \( I_{s_{k}} \) and \( I_{s_{k}'} \) are such that \( \bar{\beta}[I_{s_{k}}] = t \concat \mu_{k} \concat t^{-1} \), \( \bar{\beta}[I_{s_{k}'}] = \mu_{k} \)), then
\( T_{\delta} \) is a slim evaluation tree.
Let \( \{j_{k}\}_{k=1}^{m} \) be the enumeration of the set
\[ \seg{1}{|\bar{\beta}|} \setminus \bigcup_{k=1}^{n} \seg{i_{k}}{i'_{k}}. \]
Set inductively
\begin{displaymath}
  \begin{aligned}
    \xi_{0} &= \symmet{\bar{\beta}}{\delta}{j_{1}}{\{j_{k}\}} = \symmet{\bar{\beta}}{\delta}{j_{1}}{R_{\emptyset}}, \\
    \xi_{l}&= \symmet{\bar{\beta}}{\xi_{l-1}}{j^{(l)}_{1}}{\{j^{(l)}_{k}\}} =
    \symmet{\bar{\beta}}{\xi_{l-1}}{j^{(l)}_{1}}{R_{s_{l}'}},
  \end{aligned}
\end{displaymath}
where \( j^{(l)}_{k} = i_{l} + k \), \( l \in \seg{1}{n} \), \( k \in \seg{1}{p_{l}} \). Finally set
\( \bar{\xi} = \xi_{n} \).
\begin{example-nn}
\label{sec:from-hered-rigid-beta}
For example, if
\[ \bar{\beta} = g_{1} \concat g_{2} \concat t \concat a_{1} \concat a_{2} \concat a_{3} \concat t^{-1} \concat
g_{3}, \] then
\begin{displaymath}
\begin{aligned}
\delta &= e \concat e \concat t \concat e \concat e \concat e \concat t^{-1} \concat e,\\
\xi_{0} &= x \concat g_{2}
\concat t \concat e \concat e \concat e \concat t^{-1} \concat g_{3}, \quad x = g_{3}^{-1}g_{2}^{-1}, \\
\xi_{1} &= x \concat g_{2} \concat t \concat y \concat a_{2} \concat a_{3} \concat t^{-1} \concat g_{3}, \quad y =
a_{3}^{-1}a_{2}^{-1}.
\end{aligned}
\end{displaymath}
\end{example-nn}
\begin{lemma}
\label{sec:hnn-extensions-even-subword-reduction-type-g-general}
Let \( \bar{\beta}, \bar{\xi} \) be as above. If \( \zeta \) is a trivial word of length \( |\bar{\beta}| \), \(
\zeta \) and \( \bar{\beta} \) are \multipliable and \( \zeta(i) \in G \) for all \( i \), then
\( \rho(\bar{\beta},\bar{\xi}) \le \rho(\bar{\beta},\zeta) \).
\end{lemma}
\begin{proof}
Set
\begin{displaymath}
\begin{aligned}
\beta &= \hat{\nu}_{0} \concat t \concat \hat{\mu}_{1} \concat t^{-1} \concat \ldots \concat \hat{\mu}_{n} \concat
t^{-1} \concat \hat{\nu}_{n},\\
      \xi' &= \hat{\bar{\xi}}[1,i_{1} - 1] \concat t \concat \hat{\bar{\xi}}[i_{1} + 1, i_{1}' - 1] \concat t ^{-1}
      \concat \ldots \concat \hat{\bar{\xi}}[i_{n} + 1, i_{n}' - 1] \concat t^{-1} \concat \hat{\bar{\xi}}[i_{n}' +1,
      |\bar{\beta}|],\\
      \zeta' &= \hat{\zeta}[1,i_{1} - 1] \concat t \concat \hat{\zeta}[i_{1} + 1, i_{1}' - 1] \concat t ^{-1} \concat
      \ldots \concat \hat{\zeta}[i_{n} + 1, i_{n}' - 1] \concat t^{-1} \concat \hat{\zeta}[i_{n}' +1, |\bar{\beta}|].
\end{aligned}
\end{displaymath}
If \( \xi \) is as in Lemma \ref{sec:hnn-extensions-even-subword-reduction-type-g}, then \( \xi' = \xi \) and
\begin{displaymath}
\rho(\bar{\beta}, \zeta) \ge \textrm{[by tsi]}\ \rho(\beta,\zeta') \ge \textrm{[by Lemma
\ref{sec:hnn-extensions-even-subword-reduction-type-g}]}\ \rho(\beta,\xi) = \rho(\beta,\xi') =
\rho(\bar{\beta},\bar{\xi}). \qedhere
\end{displaymath}
\end{proof}
\begin{lemma}
\label{sec:from-hered-subword-keep-alternating-beta}
Let \( \bar{\beta} \) be a word of the form
  \[ \bar{\beta} = \nu_{0} \concat t \concat \mu_{1} \concat t^{-1} \concat \nu_{1} \concat \cdots \concat \nu_{n-1} \concat t
  \concat \mu_{n} \concat t^{-1} \concat \nu_{n},\]
for some words \( \mu_{i} \in \word{A} \), \( \nu_{i} \in \word{G} \). If
\( j_{0}, j_{1} \in \seg{1}{|\bar{\beta}|} \) are such that \( j_{0} < j_{1} \),
\( \bar{\beta}(j_{0}), \bar{\beta}(j_{1}) \in \{t, t^{-1}\} \) and
\( \bar{\beta}(j_{0}) = \bar{\beta}(j_{1})^{-1} \), then
\( \bar{\beta}\Big[\seg{1}{|\bar{\beta}|} \setminus \seg{j_{0}}{j_{1}}\Big] \) can be written as
\[ \bar{\beta}\Big[\seg{1}{|\bar{\beta}|} \setminus \seg{j_{0}}{j_{1}}\Big] =
  \nu'_{0} \concat t \concat \mu'_{1} \concat t^{-1} \concat \nu'_{1} \concat \cdots \concat \nu'_{m-1} \concat t
  \concat \mu'_{m} \concat t^{-1} \concat \nu'_{m},\]
for \( \mu'_{i} \in \word{A} \), \( \nu'_{i} \in \word{G} \) and \( m \le n \).
\end{lemma}
\begin{proof}
Suppose for definiteness that \( \bar{\beta}(j_{0}) = t \) (the case \( \bar{\beta}(j_{0}) = t^{-1} \) is similar).
For some \( k,l \) we can write
\( \bar{\beta} = \bar{\beta}_{0} \concat \nu_{k} \concat t \concat \bar{\beta}_{1} \concat t^{-1} \concat \nu_{l}
\concat \bar{\beta}_{2} \), where \( |\bar{\beta}_{0}| + |\nu_{k}| = j_{0} -1 \),
\( |\bar{\beta}_{2}| + |\nu_{l}| = |\bar{\beta}|-j_{1} \) and \( \bar{\beta}_{0} \) is either empty or ends with
\( t^{-1} \), \( \bar{\beta}_{2} \) is either empty or starts with \( t \). Then
\[ \bar{\beta}\Big[\seg{1}{|\bar{\beta}|} \setminus \seg{j_{0}}{j_{1}}\Big] = \bar{\beta}_{0} \concat \nu_{k} \concat
\nu_{l} \concat \bar{\beta}_{2}. \qedhere\]
\end{proof}
\medskip
Let \( \gamma \) be a word of the form
\[ \gamma = a_{0} \concat t^{-1} \concat g_{0} \concat t \concat a_{1} \concat t^{-1} \concat g_{1} \concat t \concat
\cdots \concat a_{n-1} \concat t^{-1} \concat g_{n-1} \concat t \concat a_{n}, \]
where \( g_{i} \in G \) and \( a_{i} \in A \). Let \( \zeta \) be a trivial word such that
\( \zeta \) and \( \gamma \) are \multipliable and \( \zeta(i) \in G \) for all \( i \). In other words
\[ \zeta = h_{0} \concat e \concat h_{1} \concat e \concat h_{2} \concat e \concat h_{3} \concat e \concat \cdots
\concat h_{2n-2} \concat e \concat h_{2n-1} \concat e \concat h_{2n}, \]
where \( h_{i} \in G \). Define a word \( \delta \) by
\begin{displaymath}
\delta(i) =
\begin{cases}
a_{0} & \textrm{if \( i = 1 \)};\\
      e & \textrm{if \( i \equiv 1 \pmod{4} \) and \( 1 < i < 4n+1 \)}; \\
      t^{-1} & \textrm{if \( i \equiv 2 \pmod{4} \)}; \\
      e & \textrm{if \( i \equiv 3 \pmod{4} \)}; \\
      t & \textrm{if \( i \equiv 0 \pmod{4} \)};\\
a_{0}^{-1} & \textrm{if \( i = 4n+1 \)}.
\end{cases}
\end{displaymath}
Or, equivalently,
\[ \delta = a_{0} \concat t^{-1}\concat e \concat t \concat e \concat \cdots \concat e \concat t^{-1} \concat e \concat
t \concat a_{0}^{-1}. \]
If \( T_{\delta} = \{\emptyset, u,s, s_{1}, \ldots, s_{n-1}, s_{1}', \ldots, s_{n-1}'\} \) with \( u \prec \emptyset \),
\( s \prec u \), \( s_{k} \prec s \), \( s_{k}' \prec s_{k} \), \( I_{u} = \seg{2}{4n} \), \( I_{s} = \seg{3}{4n-1} \),
\( I_{s_{k}} = \seg{4k}{4k+2} \), \( I_{s_{k}'} = \{4k+1\} \), then \( T_{\delta} \) is a slim evaluation tree. Set
\[ \xi = \symmet{\gamma}{\delta}{3}{\{4k-1\}_{k=1}^{n}} = \symmet{\gamma}{\delta}{3}{R_{s}}. \]
\begin{example-nn}
\label{sec:from-hered-rigid-gamma}
For example, if
\[ \gamma = a_{0} \concat t^{-1} \concat g_{0} \concat t \concat a_{1} \concat t^{-1} \concat g_{1} \concat t \concat
a_{2}, \] then
\begin{displaymath}
\begin{aligned}
&\delta = a_{0} \concat t^{-1} \concat e \concat t \concat e \concat t^{-1} \concat e \concat t \concat
a_{0}^{-1},\\
&\xi = a_{0} \concat t^{-1} \concat g_{1}^{-1} \concat t \concat e \concat t^{-1} \concat g_{1} \concat t \concat
a_{0}^{-1}.
\end{aligned}
\end{displaymath}
\end{example-nn}
\begin{lemma}
\label{sec:hnn-extensions-even-subsword-reduction-type-a}
If \( \gamma, \zeta,\xi \) are as above, then \( \rho(\gamma,\xi) \le \rho(\gamma,\zeta) \).
\end{lemma}
\begin{proof}
By the two-sided invariance
\[ \rho(\gamma,\zeta) \ge d(a_{0}g_{0}a_{1}g_{1}\cdots a_{n-1}g_{n-1}a_{n},e) + 2n. \] On the other hand
\begin{displaymath}
\begin{aligned}
      \rho(\gamma,\xi) =& d(a_{0}a_{n},e) + \sum_{i=1}^{n-1}d(a_{i},e) + d(g_{0}g_{1} \cdots g_{n-1}, e) \le \\
&n + d(g_{0}g_{1}\ldots g_{n-1}, a_{0}^{-1}a_{n}^{-1}) + d(a_{0}^{-1}a_{n}^{-1},e) \le \\
&n+1 + d(g_{0}g_{1} \cdots g_{n-1}, a_{0}^{-1}a_{n}^{-1}) \le \\
&n+1 + d(a_{0}g_{0}g_{1} \cdots g_{n-2}g_{n-1}a_{n}, a_{0}g_{0}a_{1}g_{1} \cdots a_{n-1}g_{n-1}a_{n}) +\\
& \qquad d(a_{0}g_{0}a_{1}g_{1}\cdots a_{n-1}g_{n-1}a_{n},e) = \\
&n+1 + d(g_{1} \cdots g_{n-2}, a_{1}g_{1} \cdots g_{n-2}a_{n-1}) +\\
& \qquad d(a_{0}g_{0}a_{1}g_{1}\cdots a_{n-1}g_{n-1}a_{n},e) \le \textrm{[by Lemma
\ref{sec:hnn-extensions-a-cancellation-error}] }\\
&n+1 + n-1 + d(a_{0}g_{0}a_{1}g_{1}\cdots a_{n-1}g_{n-1}a_{n},e) \le \rho(\gamma,\zeta).
\end{aligned}
\end{displaymath}
And the lemma follows.
\end{proof}
Suppose we have words
\begin{displaymath}
  \begin{aligned}
    \mu_{k} &= a_{(k,1)} \concat \cdots \concat a_{(k,p_{k})}, \quad \textrm{where \( a_{(k,j)} \in A \) and \(k \in
    \seg{0}{n}, \)} \\
    \nu_{k} &= g_{(k,1)} \concat \cdots \concat g_{(k,q_{k})}, \quad \textrm{where \( g_{(k,j)} \in G \) and
    \(k \in \seg{0}{n-1}, \)}
  \end{aligned}
\end{displaymath}
and let \( \bar{\gamma} \) be the word
\[ \bar{\gamma} = \mu_{0} \concat t^{-1} \concat \nu_{0} \concat t \concat \mu_{1} \concat \cdots \concat \mu_{n-1}
\concat t^{-1} \concat \nu_{n-1} \concat t \concat \mu_{n}. \]
Let \( \{i_{k}\}_{k=1}^{n} \), \( \{i'_{k}\}_{k=1}^{n} \) be indices such that
\begin{enumerate}[(i)]
\item \( i_{k} < i_{k+1} \), \( i'_{k} < i'_{k+1} \);
\item \( \bar{\gamma}(i_{k}) = t^{-1} \), \( \bar{\gamma}(i'_{k}) = t \);
\item if \( \bar{\gamma}(i) = t^{-1} \), then \( i = i_{k} \) for some \( k \in \seg{1}{n} \); if \( \bar{\gamma}(i) = t\), then
  \( i = i'_{k} \) for some \( k \in \seg{1}{n} \).
\end{enumerate}
Define the word \( \delta \) of length \( |\bar{\gamma}| \) by
\begin{displaymath}
\delta(i) =
\begin{cases}
e & \textrm{if \( \bar{\gamma}(i) \in G \)};\\
\bar{\gamma}(i) & \textrm{if \( \bar{\gamma}(i) = t^{\pm 1} \)}.\\
\end{cases}
\end{displaymath}
If \( T_{\delta} = \{\emptyset, u, s, s_{1}, \ldots, s_{n-1}, s_{1}', \ldots, s_{n-1}'\} \), \( u \prec \emptyset \),
\( s \prec u \), \( s_{k} \prec s \), \( s_{k}' \prec s_{k} \) and \( I_{u} = \seg{i_{1}}{i_{n}'} \),
\( I_{s} = \seg{i_{1}+1}{i_{n}'-1} \), \( I_{s_{k}} = \seg{i_{k}'}{i_{k+1}} \),
\( I_{s_{k}'} = \seg{i_{k}'+1}{i_{k+1}-1} \) (in other words \( I_{s_{k}} \) and \( I_{s_{k}'} \) are such that
\( \bar{\gamma}[I_{s_{k}}] = t \concat \mu_{k} \concat t^{-1} \), \( \bar{\gamma}[I_{s_{k}'}] = \mu_{k} \)), then
\( T_{\delta} \) is a slim evaluation tree.
Let \( \{j_{k}\}_{k=1}^{m} \) be the enumeration of the set
\[ \bigcup_{k=1}^{n} \seg{i_{k} + 1}{i'_{k} - 1}. \] Set inductively
\begin{displaymath}
\begin{aligned}
\xi_{0} &= \symmet{\bar{\gamma}}{\delta}{j_{1}}{\{j_{k}\}} =\symmet{\bar{\gamma}}{\delta}{j_{1}}{R_{s}},\\
\xi_{l} &= \symmet{\bar{\gamma}}{\xi_{l-1}}{j^{(l)}_{1}}{\{j^{(l)}_{k}\}} =
\symmet{\bar{\gamma}}{\xi_{l-1}}{j^{(l)}_{1}}{R_{s_{l}'}},
\end{aligned}
\end{displaymath}
where \( j^{(l)}_{k} = i'_{l} + k \) and \( l \in \seg{1}{n-1} \), \( k \in \seg{1}{p_{l}} \). Finally set
\[ \bar{\xi} = \symmet{\bar{\gamma}}{\xi_{n-1}}{1}{\seg{1}{i_{1}-1} \cup \seg{i'_{n}+1}{|\bar{\gamma}|}} = \symmet{\bar{\gamma}}{\xi_{n-1}}{1}{R_{\emptyset}}.\]
\begin{example-nn}
\label{sec:from-hered-rigid-gamma-bar}
For example, if
\[ \bar{\gamma} = a_{1} \concat a_{2} \concat t^{-1} \concat g_{1} \concat t \concat a_{3} \concat a_{4} \concat
t^{-1} \concat g_{2} \concat g_{3} \concat t \concat a_{5}, \] then
\begin{displaymath}
\begin{aligned}
\delta &= e \concat e \concat t^{-1} \concat e \concat t \concat e \concat e \concat t^{-1} \concat e \concat e
\concat t \concat e,\\
\xi_{0} &= e \concat e \concat t^{-1}\concat x \concat t \concat e \concat e \concat t^{-1} \concat g_{2} \concat g_{3}
\concat t \concat e, \quad x= g_{3}^{-1}g_{2}^{-1}\\
\xi_{1} &= e \concat e \concat t^{-1} \concat x \concat t \concat a_{4}^{-1} \concat a_{4} \concat t^{-1} \concat
g_{2} \concat g_{3} \concat t \concat e,\\
\bar{\xi} &= y \concat a_{2} \concat t^{-1} \concat x \concat t \concat a_{4}^{-1} \concat a_{4} \concat t^{-1} \concat
g_{2} \concat g_{3} \concat t \concat a_{5}, \quad y = a_{5}^{-1}a_{2}^{-1}.
\end{aligned}
\end{displaymath}
\end{example-nn}
\begin{lemma}
\label{sec:hnn-extensions-even-subword-reduction-type-a-general}
Let \( \bar{\gamma}, \bar{\xi} \) be as above. If \( \zeta \) is a trivial word of length \( |\bar{\gamma}| \), \(
\zeta \) and \( \bar{\gamma} \) are \multipliable and \( \zeta(i) \in G \) for all \( i \), then
\( \rho(\bar{\gamma},\bar{\xi}) \le \rho(\bar{\gamma},\zeta) \).
\end{lemma}
\begin{proof}
  The proof is similar to the proof of Lemma \ref{sec:hnn-extensions-even-subword-reduction-type-g-general}, using Lemma
\ref{sec:hnn-extensions-even-subsword-reduction-type-a} instead of Lemma
\ref{sec:hnn-extensions-even-subword-reduction-type-g}.
\end{proof}
\begin{lemma}
\label{sec:from-hered-subword-keep-alternating}
Let \( \bar{\gamma} \) be a word of the form
\[ \bar{\gamma} = \mu_{0} \concat t^{-1} \concat \nu_{0} \concat t \concat \mu_{1} \concat \cdots \concat \mu_{n-1}
\concat t^{-1} \concat \nu_{n-1} \concat t \concat \mu_{n}, \]
for some words \( \mu_{i} \in \word{A} \), \( \nu_{i} \in \word{G} \). If
\( j_{0}, j_{1} \in \seg{1}{|\bar{\gamma}|} \) are such that \( j_{0} < j_{1} \),
\( \bar{\gamma}(j_{0}), \bar{\gamma}(j_{1}) \in \{t, t^{-1}\} \) and
\( \bar{\gamma}(j_{0}) = \bar{\gamma}(j_{1})^{-1} \), then
\( \bar{\gamma}\Big[\seg{1}{|\bar{\gamma}|} \setminus \seg{j_{0}}{j_{1}}\Big] \) can be written as
\[ \bar{\gamma}\Big[\seg{1}{|\bar{\gamma}|} \setminus \seg{j_{0}}{j_{1}}\Big] =
\mu'_{0} \concat t^{-1} \concat \nu'_{0} \concat t \concat \mu'_{1} \concat \cdots \concat \mu'_{m-1}
\concat t^{-1} \concat \nu'_{m-1} \concat t \concat \mu'_{m}, \]
  for \( \mu'_{i} \in \word{A} \), \( \nu'_{i} \in \word{G} \) and \( m \le n \).
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{sec:from-hered-subword-keep-alternating-beta}.
\end{proof}
\begin{definition}
\label{sec:hnn-extensions-rigid-pair}
Let \( (\alpha,\zeta) \) be a hereditary \( f \)-pair of length \( n \). It is called \emph{rigid} if for all
\( i \in \seg{1}{n} \)
\[ \alpha(i) = t^{\pm 1} \implies \zeta(i) = \alpha(i). \]
\end{definition}
Here is an example of a rigid pair:
\begin{displaymath}
\begin{aligned}
&\alpha = g_{0} \concat t \concat a_{1} \concat t^{-1} \concat g_{1} \concat t \concat a_{2} \concat t^{-1} \concat
g_{2},\\
&\zeta = g_{2}^{-1}g_{1}^{-1} \concat t \concat e \concat t^{-1} \concat g_{1} \concat t \concat e \concat t^{-1}
\concat g_{2}.
\end{aligned}
\end{displaymath}
\begin{lemma}
\label{sec:induced-metrics-existence-of-a-good-pair}
Let \( f \in G*tAt^{-1} \), and let \( \alpha \in \word{G \cup \langle t \rangle} \) be the reduced form of \( f \).
If \( (\alpha,\zeta) \) is a hereditary \( f \)-pair, then there exists a rigid \( f \)-pair \( (\alpha,\xi) \) such
that \( \rho(\alpha,\zeta) \ge \rho(\alpha,\xi) \). Moreover, if for some \( i \) one has \( \alpha(i) = t \), then
\( \xi(i+1) \in A \).
\end{lemma}
\begin{proof}
Let \( (\alpha,\zeta) \) be hereditary and let \( T_{\zeta} \) be a structure tree for \( \zeta \). Let
\( s \in T_{\zeta} \) and set \( Q_{s} = R_{s} \setminus\{m(I_{s}),M(I_{s})\} \). Let \( s_{1}, \ldots, s_{N} \in
T_{\zeta} \) be such that \( R_{s} = I_{s} \setminus \bigcup_{i=1}^{N}I_{s_{i}} \). Then using for each \( s_{i} \) Lemma
\ref{sec:from-hered-subword-keep-alternating-beta} or Lemma \ref{sec:from-hered-subword-keep-alternating}
(depending on whether
\( \zeta(m(I_{s})) = t^{-1} \) or \( \zeta(m(I_{s})) = t \)), we get
\begin{multline*}
\alpha[Q_{s}] = \bar{\beta} = g_{(0,1)}\concat \cdots \concat g_{(0,q_{0})} \concat t \concat a_{(1,1)} \concat
\cdots
\concat a_{(1,p_{1})} \concat t^{-1} \concat \cdots \\
    \cdots \concat t \concat a_{(n,1)} \concat \cdots \concat a_{(n,p_{n})} \concat t^{-1} \concat g_{(n,1)} \concat \cdots
    \concat g_{(n,q_{n})},
\end{multline*}
or
\begin{multline*}
    \alpha[Q_{s}] = \bar{\gamma} = a_{(0,1)}\concat \cdots \concat a_{(0,p_{0})} \concat t^{-1} \concat g_{(0,1)}
    \concat \cdots
    \concat g_{(0,q_{0})} \concat t \concat \cdots \\
    \cdots \concat t^{-1} \concat g_{(n-1,1)} \concat \cdots \concat g_{(n-1,q_{n-1})} \concat t \concat a_{(n,1)} \concat
    \cdots \concat a_{(n,p_{n})},
\end{multline*}
where \( a_{(i,j)} \in A \) and \( g_{(i,j)} \in G \).
Let \( \bar{\xi}_{s} \) be as in Lemma \ref{sec:hnn-extensions-even-subword-reduction-type-g-general} or in Lemma
\ref{sec:hnn-extensions-even-subword-reduction-type-a-general} depending on whether \( \alpha[Q_{s}] = \bar{\beta} \)
or \( \alpha[Q_{s}] = \bar{\gamma} \) and set
\[ \xi[Q_{s}] := \bar{\xi}_{s}, \quad \xi(m(I_{s})) = \alpha(m(I_{s})), \quad \xi(M(I_{s})) = \alpha(M(I_{s})) \quad
\textrm{if \( s \ne \emptyset \)}, \]
\[ \xi[R_{\emptyset}] := \bar{\xi}_{\emptyset}, \quad \textrm{if \( s = \emptyset \)}. \]
Do this for all \( s \in T_{\zeta} \). Then \( (\alpha,\xi) \) is a rigid \( f \)-pair and
\[ \rho(\alpha,\zeta) \ge \rho(\alpha,\xi)\ \textrm{[by Lemma
\ref{sec:hnn-extensions-even-subword-reduction-type-g-general} and Lemma
\ref{sec:hnn-extensions-even-subword-reduction-type-a-general}]}. \]
The moreover part follows immediately from the construction of \( \xi \).
\end{proof}
\begin{theorem}
\label{sec:induced-metrics-equality-of-induced-metrics}
Let \( (G,d) \) be a tsi group, \( A < G \) be a closed subgroup, \underline{\emph{not}} necessarily of diameter at
most one. If \( d \) and \( \dist \) are as before (see the beginning of Section \ref{sec:hnn-extensions}), then
\( d = \dist \) if and only if \( \diam{A} \le 1 \).
\end{theorem}
\begin{proof}
First we show that the condition \( \diam{A} \le 1 \) is necessary. Suppose \( \diam{A} > 1 \) and let \( a \in A \)
be such that \( d(a,e) > 1 \). Then
\begin{displaymath}
\begin{aligned}
&\dist(ata^{-1}t^{-1},e) = d(a,e) + d(ta^{-1}t^{-1},e) = d(a,e) + d(a^{-1},e) = 2d(a,e) > 2,\\
&d(ata^{-1}t^{-1},e) = d(ata^{-1}t^{-1},aea^{-1}e) \le\\
& \qquad d(a,a) + d(t,e) + d(a^{-1},a^{-1}) + d(t^{-1},e) = 2.
\end{aligned}
\end{displaymath}
And so \( \dist \ne d \).
Suppose now \( \diam{A} \le 1 \). Let \( f \in G*tAt^{-1} \) be given and let \( \alpha \) be the reduced form of
\( f \). If \( (\alpha,\zeta) \) is a \multipliable \( f \)-pair, then by Lemma \ref{sec:reduction-to-hereditary-pair}
and Lemma \ref{sec:induced-metrics-existence-of-a-good-pair} there is a rigid \( f \)-pair \( (\alpha,\xi) \) such
that \( \rho(\alpha,\xi) \le \rho(\alpha,\zeta) \) and \( \alpha(i)=t \) implies \( \xi(i+1) \in A \). Hence we can
view \( \xi \) as an element in \( \word{G \cup tAt^{-1}} \). Since \( \zeta \) was arbitrary, it follows that
\( \dist(f,e) \le d(f,e) \). The inverse inequality \( d(f,e) \le \dist(f,e) \) follows from item
\eqref{prop:graev-metric-amalgam-properties-item:maximality} of Proposition \ref{sec:graev-metric-amalgam-properties}.
Thus \( \dist(f,e) = d(f,e) \), and, by the left invariance, \( \dist(f_{1},f_{2}) = d(f_{1},f_{2}) \) for all
\( f_{1},f_{2} \in G*tAt^{-1} \).
\end{proof}
\begin{proposition}
\label{sec:metr-hnn-extens-free-product-closed}
Let \( (G,d) \) be a tsi group, \( A < G \) be a subgroup and \( d \) be the Graev metric on the free product
\( G * \langle t \rangle \). We can naturally view \( G * tAt^{-1} \) as a subgroup of \( G * \langle t \rangle\).
If \( A \) is closed in \( G \), then \( G * tAt^{-1} \) is closed in \( G * \langle t \rangle \).
\end{proposition}
\begin{proof}
The proof is similar in spirit to the proof of item \eqref{prop:graev-metric-amalgam-properties-item:induced-metric}
of Proposition \ref{sec:graev-metric-amalgam-properties}, but requires some additional work. Suppose the statement is
false and there is \( f \in G * \langle t \rangle \) such that \( f \not \in G* tAt^{-1} \), but
\( f \in \overline{G * tAt^{-1}} \). Let \( \alpha \in \word{G \cup \langle t \rangle} \) be the reduced form of
\( f \), \( n = |\alpha| \). We show that this is impossible and \( f \in G * tAt^{-1} \). The proof goes by
induction on \( n \).
\textbf{Base of induction}. For the base of induction we consider cases \( n \in \{1,2\} \). If \( n = 1 \), then
either \( f \in G \) or \( f = t^{k} \) for some \( k \ne 0 \). Since \( G < G * tAt^{-1} \), it must be the case
that \( f = t^{k}\). Let \( h \in G * tAt^{-1} \) be such that \( d(f,h) < 1 \), where \( d \) is the Graev
metric on \( G * \langle t \rangle \). Let \( \phi_{1} : G \to \mathbb{Z} \) be the trivial homomorphism:
\( \phi_{1}(g) = 0 \) for all \( g \in G \); and let \( \phi_{2} : \langle t \rangle \to \mathbb{Z} \) be the natural
isomorphism: \( \phi_{2}(t^{k}) = k \). By item \eqref{prop:graev-metric-amalgam-properties-item:extending-lipschitz}
of Proposition \ref{sec:graev-metric-amalgam-properties} \( \phi_{1} \) and \( \phi_{2} \) extend to a \( 1
\)-\lipschitz homomorphism \( \phi : G* \langle t \rangle \to \mathbb{Z} \). But \( d_{\mathbb{Z}}(\phi(f),\phi(h)) = |k| \ge 1
\). We get a contradiction with the assumption \( d(f,h) < 1 \).
Note that for any \( h \in G * tAt^{-1} \)
\begin{multline*}
f \in \left ( \overline{G* tAt^{-1}} \right) \setminus G*tAt^{-1} \implies fh, hf \in \left( \overline{G* tAt^{-1}}
\right) \setminus G*tAt^{-1}.
\end{multline*}
Using this observation the case \( n = 2 \) follows from the case \( n = 1 \). Indeed, \( n = 2 \) implies
\( \alpha = g \concat t^{k} \) or \( \alpha = t^{k} \concat g \) for some \( g \in G \), \( k \ne 0 \). Multiplying
\( f \) by \( g^{-1} \) from either left or right brings us to the case \( n = 1 \).
\textbf{Step of induction}. Without loss of generality we may assume that \( \alpha(n) = t^{k} \) for some
\( k \ne 0 \). Indeed, if \( \alpha(n) = g \) for some \( g \in G \), then we can substitute \( fg^{-1} \) for
\( f \). Assume that \(\alpha = \alpha_{0} \concat t^{k_{1}} \concat g \concat t^{k_{2}} \), where
\( k_{1}, k_{2} \ne 0 \) and \( g \in G \). We claim that \( k_{1} = 1 \), \( k_{2} = -1 \), and \( g \in A \). Set
\begin{align*}
&\epsilon_{1} = \min\{d(\alpha(i),e) : i \in \seg{1}{n} \},\\
&\epsilon_{2} =
\begin{cases}
1 & \textrm{if \( \forall i\ \alpha(i) \in G \implies \alpha(i) \in A \)},\\
\min\{d(\alpha(i),A) : \alpha(i) \in G \setminus A\} & \textrm{otherwise}.
\end{cases}
\end{align*}
And let \( \epsilon = \min\{1, \epsilon_{1},\epsilon_{2}\} \). Note that \( \epsilon > 0 \).
Since \( f \in \overline{G * tAt^{-1}} \), there is \( h \in G * tAt^{-1} \) such that \( d(f,h) < \epsilon \).
Therefore there is a reduced simple \( fh^{-1} \)-pair \( (\beta,\xi) \) such that \( \rho(\beta,\xi) < \epsilon\).
Let \( \gamma \) be the reduced form of \( h^{-1}\). Suppose first that \( k_{2} \ne -1 \). Assume for simplicity
that \( \beta = \alpha \concat \gamma\) (in general the first letter of \( \gamma \) may get canceled; the proof for
the general case is the same, it is just notationally simpler to assume that \( \beta = \alpha \concat \gamma \)).
Let \( T_{\xi} \) be the slim evaluation tree for \( \xi \), and let \( s_{0} \in T_{\xi} \) be such that
\( n \in R_{s_{0}} \).
We claim that \( n = m(R_{s_{0}}) \). If this is not the case, then there is \( i_{0} \in R_{s_{0}} \) such that
\( i_{0} < n \) and \( \seg{i_{0}+1}{n-1} \cap R_{s_{0}} = \emptyset \). Since \( \alpha \) is reduced,
\( i_{0} < n-1\). If \( I = \seg{i_{0}+1}{n-1} \), then \( \hat{\xi}[I] = e \) and so there is \( j_{0} \in I \) such
that \( \xi(j_{0}) = e \) (since otherwise \( \xi[I] \) would be reduced). Therefore
\[\rho(\beta,\xi) \ge d(\beta(j_{0}),\xi(j_{0})) = d(\alpha(j_{0}), e) \ge \epsilon_{1} \ge \epsilon. \]
Contradicting the choice of the pair \( (\beta,\xi) \).
Thus \( n = m(R_{s_{0}}) \). Let \( j_{1}, \ldots, j_{p} \) be such that
\begin{enumerate}[(i)]
\item \( j_{k} \in R_{s_{0}} \) for all \( k \in \seg{1}{p} \);
\item \( j_{k} < j_{k+1} \);
\item \( \xi(j_{k}) \ne e \);
\item \( \xi(j) \ne e \) and \( j \in R_{s_{0}} \) implies \( j = j_{k} \) for some \( k \).
\end{enumerate}
In fact, we can always modify the tree to assure that \( \xi(j) \ne e \) for all \( j \in R_{s_{0}} \), but this is
not used here. In this notation \( j_{1} = n \). Since \( \rho(\beta,\xi) < 1 \), we get
\( \beta(j_{k}) = \xi(j_{k}) = t^{\pm 1} \) for all \( k \in \seg{2}{p} \). If \( I_{k} = \seg{j_{k}+1}{j_{k+1}-1} \)
for \( k \in \seg{1}{p-1} \), then \( \hat{\xi}[I_{k}] = e \) for all \( k \), whence for any \( k \in \seg{1}{p-1} \)
\[ |\{ i \in I_{k} : \xi(i) = t\}| = |\{ i \in I_{k} : \xi(i) = t^{-1} \} |.\]
We claim that \( \xi(j_{2}) = t \). Suppose not. Then \( \xi(j_{2}) = t^{-1} \) and we can write \( \gamma =
\gamma_{0} \concat t^{-1} \concat \gamma_{1} \),
\[ \beta = \alpha_{0} \concat t^{k_{1}} \concat g \concat t^{k_{2}} \concat \gamma_{0} \concat t^{-1} \concat
\gamma_{1}, \]
with \( |\alpha| + |\gamma_{0}| = j_{2} - 1 \). Since \( \hat{\gamma}_{0} = e \) we must have
\[ |\{ i \in \seg{1}{|\gamma_{0}|} : \gamma_{0}(i) = t\}| = |\{ i \in \seg{1}{|\gamma_{0}|} : \gamma_{0}(i) = t^{-1}
\} |.\]
On the other hand
\[ \gamma_{0} = g_{0}' \concat t \concat a_{1}' \concat t^{-1} \concat \cdots \concat t \concat a'_{m}, \]
(\( g_{0}' \) may be absent) and each \( t \) is paired with \( t^{-1} \) except for the last one. Therefore
\[ |\{ i \in \seg{1}{|\gamma_{0}|} : \gamma_{0}(i) = t\}| = |\{ i \in \seg{1}{|\gamma_{0}|} : \gamma_{0}(i) = t^{-1}
\} | + 1.\]
Contradiction. Therefore \( \xi(j_{2}) = t \). Similarly, it is now easy to see that
\[ \xi(j_{2}) = t,\ \xi(j_{3}) = t^{-1},\ \xi(j_{4}) = t,\ldots,\ \xi(j_{p}) = t^{((-1)^{p})}. \]
Finally, since \( \hat{\xi}[R_{s_{0}}] = e \), we get \( \xi(j_{1}) = t^{-1}\) or \( \xi(j_{1}) = e\), depending on
whether \( p \) is even or odd. But since by assumption \( k_{2} \ne 0 \) we get \( k_{2} = -1 \).
We have proved that \( k_{2} = -1 \). The next step is to show that \( g \in A \). We have two cases.
\setcounter{case}{0}
\begin{case}
\label{sec:metr-hnn-extens-1}
\( \gamma(1) \in G \). In this case we have \( \beta = \alpha \concat \gamma \). Let \( s_{1} \in T_{\xi} \) be
such that \( n-1 \in R_{s_{1}} \). Similarly to the previous step one shows that \( n-1 = m(R_{s_{1}}) \). Let
\( R_{s_{1}} = \{j_{k}\}_{k=1}^{p }\), where \( j_{k} < j_{k+1} \). In particular, \( n-1 = j_{1} \). Set
\( I_{k} = \seg{j_{k}+1}{j_{k+1}-1} \). From \( \hat{\xi}[I_{k}] = e \) it follows
\[ |\{ i \in I_{k} : \xi(i) = t\}| = |\{ i \in I_{k} : \xi(i) = t^{-1} \} |.\]
Therefore \( \xi(j_{k}) \in A \) for all \( k \in \seg{2}{p} \). And so \( \xi(j_{1}) \in A \) as well. Finally,
if \( g \not \in A \), then
\[\rho(\beta,\xi) \ge d(\beta(n-1),\xi(n-1)) \ge d(g,A) \ge \epsilon_{2} \ge \epsilon. \]
And again we have a contradiction with the choice of \( (\beta,\xi) \).
\end{case}
\begin{case}
\label{sec:metr-hnn-extens-2}
\( \gamma(1) = t \). In this case \( \alpha = \alpha_{0} \concat t^{k_{1}} \concat g \concat t^{-1} \) and
\( \gamma = t \concat a \concat t^{-1} \concat \gamma_{0} \), for some \( a \in A \) and a word \( \gamma_{0} \).
If \( g \not \in A \) then \( \beta = \alpha_{0} \concat t^{k_{1}} \concat ga \concat t^{-1} \concat \gamma_{0} \).
And we are essentially in Case 1. Therefore by the proof of Case 1 we get \( ga \in A \), but then \( g \in A \).
\end{case}
Thus \( g \in A \). The proof of \( k_{1} = 1 \) is similar to the proof of \( k_{2} = -1 \) given earlier, and we
omit the details.
We have shown that \( \alpha = \alpha_{0} \concat t \concat a \concat t^{-1} \). If \( f' = f t a^{-1} t^{-1} \),
then \( \alpha_{0} \) is the reduced form of \( f' \) and \( f' \in \overline{G * tAt^{-1}} \setminus G*tAt^{-1} \).
We proceed by induction on the length of \( \alpha \).
\end{proof}
\section{HNN extensions of groups with tsi metrics}
We now turn to the HNN construction itself. There are several ways to build an HNN extension. We will follow the
original construction of G. Higman, B. H. Neumann and H. Neumann from \cite{MR0032641}, because their approach hides
many of the complications in the amalgamation of groups, and we have already constructed Graev metrics on amalgams in
the previous sections.
Let us briefly recall what an HNN extension is. Let \( G \) be an abstract group, \( A, B < G \) be isomorphic
subgroups and \( \phi : A \to B \) be an isomorphism between them. An HNN extension of \( (G,\phi) \) is a pair
\( (H, t) \), where \( t \) is a new symbol and \( H = \langle G, t | tat^{-1} = \phi(a), a \in A \rangle \). The
element \( t \) is called a \emph{stable letter} of the HNN extension.
\subsection{Metrics on HNN extensions}
\label{sec:metr-hnn-extens}
\begin{theorem}
\label{sec:hnn-extens-class-existence-of-hnn-extension}
Let \( (G,d) \) be a tsi group, \( \phi : A \to B \) be a \( d \)-isometric isomorphism between the closed subgroups
\( A, B \). Let \( H \) be the HNN extension of \( (G,\phi) \) in the abstract sense, and let \( t \) be the stable
letter of the HNN extension. If \( \diam{A} \le K \), then there is a tsi metric \( \dist \) on \( H \) such that
\( \dist|_{G} = d \) and \( \dist(t,e) = K \).
\end{theorem}
\begin{proof}
First assume that \( K = 1 \). Let \( \langle u \rangle \) and \( \langle v \rangle \) be two copies of the group
\( \mathbb{Z} \) of the integers with the usual metric. Form the free products \( (G * \langle u \rangle, d_{u}) \)
and \( (G * \langle v \rangle,d_{v}) \), where \( d_{u}, d_{v} \) are the Graev metrics. Since
\( \diam{A} = \diam{B} \le 1 \), by Theorem \ref{sec:induced-metrics-equality-of-induced-metrics} the Graev metric on
\( G * uAu^{-1} \) is the restriction of \( d_{u} \) onto \( G* uAu^{-1} \), and, similarly, the Graev metric on
\( G * vBv^{-1} \) is just the restriction of \( d_{v} \). Let \( \psi : G * uAu^{-1} \to G * vBv^{-1} \) be an
isomorphism that is uniquely defined by
\[ \psi(g) = g, \quad \psi(uau^{-1}) = v\phi(a)v^{-1}, \quad a \in A,\ g \in G. \]
By Theorem \ref{sec:induced-metrics-equality-of-induced-metrics} \( \psi \) is an isometry. Also, by Proposition
\ref{sec:metr-hnn-extens-free-product-closed} \( G*uAu^{-1} \) and \( G*vBv^{-1} \) are closed subgroups of
\(G * \langle u \rangle \) and \( G * \langle v \rangle \) respectively. Hence by the results of Section
\ref{sec:metrics-amalgams} we can amalgamate \( G * \langle u \rangle \) and \( G * \langle v \rangle \) over
\( G * uAu^{-1} = G*vBv^{-1} \). Denote the result of this amalgamation by \( (\widetilde{H}, \dist) \). Then
\[ uau^{-1} = v\phi(a)v^{-1} \quad \textrm{for all \( a \in A \)}, \]
and therefore \( v^{-1}uau^{-1}v = \phi(a) \). If \( H = \langle G, v^{-1}u \rangle \), then \( (H,v^{-1}u) \) is an
HNN extension of \( (G,\phi) \) and \( \dist|_{H} \) is a two-sided invariant metric on \( H \), which extends
\( d \).
This was done under the assumption that \( K=1 \). The general case can be reduced to this one. If \( d' = (1/K)d
\), then \( d' \) is a tsi metric on \( G \), \( \phi \) is a \( d' \)-isometric isomorphism and \(
d'\)-\(\diam{A}\le 1 \). By the above construction there is a tsi metric \( \dist' \) on \( H \) such that
\( \dist'|_{G} = d' \). Now set \( \dist = K \dist' \).
\end{proof}
It is, of course, natural to ask if the condition of having a bounded diameter is crucial. The answer to this question
is not known, but here is a necessary condition.
\begin{proposition}
\label{sec:hnn-extens-class-necessary-hnn-condition}
Let \( (G,d) \) be a tsi group, \( \phi : A \to B \) be a \( d \)-isometric isomorphism, and \( H \) be the HNN
extension of \( (G,\phi) \) with the stable letter \( t \). If \( d \) is extended to a tsi metric \( d' \) on
\( H \), then
\[ \sup\{d'(a,\phi(a)) : a \in A\} < \infty. \]
\end{proposition}
\begin{proof}
If \( K = d'(t,e) \), then for any \( a \in A \)
\begin{displaymath}
\begin{aligned}
d'(a,\phi(a)) =& d'(a,tat^{-1}) = d'(a^{-1}tat^{-1},e) =\\
&d'(a^{-1}tat^{-1}, a^{-1}eae) \le d'(t,e)+ d'(t^{-1},e)=2K.
\end{aligned}
\end{displaymath}
Therefore \( \sup\{d'(a,\phi(a)) : a \in A\} \le 2K \).
\end{proof}
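The only properties of \( d' \) used in this computation are the two-sided invariance and the resulting splitting inequality \( d'(xy,x'y') \le d'(x,x') + d'(y,y') \). As an illustration (ours, not part of the paper), the bound \( d'(a,tat^{-1}) \le 2d'(t,e) \) can be checked in a concrete finite tsi group: the symmetric group \( S_{6} \) with the Hamming metric, which is two-sided invariant. The group, metric, and sample sizes below are illustrative choices only.

```python
import random

# Hamming metric on permutations of {0,...,5}: d(s, t) = #{i : s[i] != t[i]}.
# It is two-sided invariant, so (S_6, d) is a (discrete) tsi group.
def dist(s, t):
    return sum(1 for a, b in zip(s, t) if a != b)

def compose(s, t):            # (s o t)(i) = s[t[i]]
    return tuple(s[i] for i in t)

def inverse(s):
    out = [0] * len(s)
    for i, si in enumerate(s):
        out[si] = i
    return tuple(out)

e = tuple(range(6))
random.seed(0)
perms = [tuple(random.sample(range(6), 6)) for _ in range(40)]

for a in perms:
    for t in perms:
        conj = compose(compose(t, a), inverse(t))   # t a t^{-1}
        # the 2K bound from the proposition, with K = d(t, e):
        assert dist(a, conj) <= 2 * dist(t, e)
```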
\begin{question}
\label{sec:hnn-extens-class-sufficiency-of-the-necessary-condition-q}
Is this condition also sufficient? To be precise, suppose \( (G,d) \) is a tsi group, \( \phi : A \to B \) is a
\( d \)-isometric isomorphism between closed subgroups \( A, B \), and suppose that
\[ \sup \big\{d(a,\phi(a)) : a \in A \big\} < \infty. \]
Does there exist a tsi metric \( \dist \) on the HNN extension \( H \) of \( (G,\phi) \) such that
\( \dist|_{G} = d \)?
\end{question}
\subsection{Induced conjugation and HNN extension}
\label{sec:induc-conj-hnn-1}
Recall that a topological group \( G \) is called SIN if for every open \( U \subseteq G \) such that \( e \in U \)
there is an open subset \( V \subseteq U \) with \( e \in V \) such that \( gVg^{-1} = V \) for all \( g \in G \). A
metrizable group admits a compatible two-sided invariant metric if and only if it is SIN.
\begin{theorem}
\label{sec:induc-conj-hnn-general-theorem}
Let \( G \) be a SIN metrizable group. Let \( \phi : A \to B \) be a topological isomorphism between two closed
subgroups. There exist a SIN metrizable group \( H \) and an element \( t \in H \) such that \( G < H \) is a
topological subgroup and \( tat^{-1} = \phi(a) \) for all \( a \in A \) if and only if there is a compatible tsi
metric \( d \) on \( G \) such that \( \phi \) becomes a \( d \)-isometric isomorphism.
\end{theorem}
\begin{proof}
Necessity of the condition is obvious: if \( d \) is a compatible tsi metric on \( H \), then \( \phi \) is
\( d|_{G} \)-isometric. We show sufficiency. Let \( d \) be a compatible tsi metric on \( G \) such that \( \phi \)
is a \( d \)-isometric isomorphism. If \( d'(g,e) = \min\{d(g,e), 1\} \), then \( d' \) is also a compatible tsi
metric on \( G \), \( \phi \) is a \( d' \)-isometric isomorphism, and \( d' \)-\( \diam{A} \le 1\) (because \( d'
\)-\( \diam{G} \le 1\)). Apply Theorem \ref{sec:hnn-extens-class-existence-of-hnn-extension} to get an extension of
\( d' \) to a tsi metric on \( H \), where \( (H,t) \) is the HNN extension of \( (G,\phi) \). Then \( (H,t) \)
satisfies the conclusions of the theorem.
\end{proof}
\begin{corollary}
\label{sec:induc-conj-hnn-extension-for-discrete-subgroups}
Let \( G \) be a SIN metrizable group. Let \( \phi : A \to B \) be a topological group isomorphism. If \( A \) and
\( B \) are discrete, then there is a topology on the HNN extension \( H \) of \( (G,\phi) \) such that \( G \) is a
closed subgroup of \( H \) and \( H \) is SIN and metrizable.
\end{corollary}
\begin{proof}
Let \( d \) be a compatible tsi metric on \( G \). Since \( A \) and \( B \) are discrete, there exists a constant
\( c > 0 \) such that
\[ \inf\{d(a_{1},a_{2}) : a_{1},a_{2} \in A, a_{1} \ne a_{2}\} \ge c, \quad
\inf\{d(b_{1},b_{2}) : b_{1},b_{2} \in B, b_{1} \ne b_{2}\} \ge c.\]
If \( d'(g_{1},g_{2}) = \min\{d(g_{1},g_{2}), c\} \), then \( d' \) is a
compatible tsi metric on \( G \) and \( \phi \) is a \( d' \)-isometric
isomorphism. Theorem \ref{sec:induc-conj-hnn-general-theorem} finishes the
proof.
\end{proof}
\begin{corollary}
\label{sec:induc-conj-hnn-inverse-abelian-group}
Let \( (G,+) \) be an abelian metrizable group. If \( \phi : G \to G \) is given by \( \phi(x) = -x \), then
there is a SIN metrizable topology on the HNN extension \( H \) of \( (G,\phi) \) that extends the topology of \( G
\).
\end{corollary}
\begin{proof}
If \( d \) is a compatible tsi metric on \( G \) such that \(d\)-\( \diam{G} \le 1\), then \( \phi \) is a \( d
\)-isometric isomorphism and we apply Theorem \ref{sec:induc-conj-hnn-general-theorem}.
\end{proof}
\begin{definition}
\label{sec:induc-conj-hnn}
Let \( G \) be a topological group. Elements \( g_{1}, g_{2} \in G \) are said to be \emph{induced conjugated} if
there exist a topological group \( H \) and an element \( t \in H \) such that \( G < H\) is a topological subgroup
and \( tg_{1}t^{-1} = g_{2} \).
\end{definition}
\begin{example}
\label{sec:induc-conj-hnn-circle-hnn}
Let \( (\mathbb{T},+) \) be a circle viewed as a compact abelian group, and let \( g_{1}, g_{2} \in \mathbb{T} \).
The elements \( g_{1} \) and \( g_{2} \) are induced conjugated if and only if one of the two conditions is satisfied:
\begin{enumerate}[(i)]
\item \( g_{1} \) and \( g_{2} \) are periodic elements of the same period;
\item \( g_{1} = \pm g_{2} \).
\end{enumerate}
\end{example}
\begin{proof}
The sufficiency of any of these conditions follows from Corollary
\ref{sec:induc-conj-hnn-extension-for-discrete-subgroups} and Corollary
\ref{sec:induc-conj-hnn-inverse-abelian-group}. We need to show the necessity. If \( g_{1} \) and \( g_{2} \) are
induced conjugated, then they have the same order. If the order of \( g_{i} \) is finite, we are done. Suppose the
order is infinite. The groups \( \langle g_{1} \rangle \) and \( \langle g_{2} \rangle\) are naturally isomorphic (as
topological groups) via the map \( \phi(kg_{1}) = kg_{2}\). This map extends to a continuous isomorphism
\( \phi : \mathbb{T} \to \mathbb{T} \), because \( \mathbb{T} \) is compact and \( \langle g_{i} \rangle \) is dense
in \( \mathbb{T} \). But there are only two continuous automorphisms of the circle: \( \phi = \id \) and
\( \phi = -\id \). Thus \( g_{1} = \pm g_{2} \).
\end{proof}
\begin{example}
\label{sec:induc-conj-hnn-infinite-torus-shift}
Let \( G = \mathbb{T}^{\mathbb{Z}} \) be a product of circles, and let
\( S : \mathbb{T}^{\mathbb{Z}} \to \mathbb{T}^{\mathbb{Z}} \) be the shift map \( S(x)(n) = x(n+1) \) for all
\( x \in \mathbb{T}^{\mathbb{Z}}\) and all \( n \in \mathbb{Z} \). The group \( \mathbb{T}^{\mathbb{Z}} \) is
monothetic and abelian. If \( x = \{a_{n}\}_{n \in \mathbb{Z}} \), where the \( a_{n} \)'s and \( 1 \) are linearly
independent over \( \mathbb{Q} \), then \( \langle x \rangle \) is dense in \( \mathbb{T}^{\mathbb{Z}} \) (by
Kronecker's theorem, see, for example, \cite[Theorem 443]{MR2445243}). Since
\( S \) is an automorphism, \( x \) and \( S(x) \) are topologically similar. We claim that \( x \) and \( S(x) \)
are not induced conjugated in any SIN metrizable group \( H \).
\end{example}
\begin{proof}
Suppose \( H \) is a SIN metrizable group, \( G \) is a topological subgroup of \( H \) and \( t \in H \) is such that
\( txt^{-1} = S(x) \). If \( \phi_{t} : H \to H \) is given by \( \phi_{t}(y) = tyt^{-1} \), then
\( \phi_{t}(mx) = S(mx) \) for all \( m \in \mathbb{Z} \) and hence, by continuity and density of
\( \langle x \rangle \), \( \phi_{t}(y) = S(y) \) for all \( y \in \mathbb{T}^{\mathbb{Z}} \). If \( d \) is a
compatible tsi metric on \( H \), then \( \phi_{t} \) is a \( d \)-isometric isomorphism. Therefore for
\( x_{0} \in \mathbb{T}^{\mathbb{Z}} \),
\begin{displaymath}
x_{0}(n) =
\begin{cases}
1/2 & \textrm{if \( n = 0 \)}; \\
0 & \textrm{otherwise},
\end{cases}
\end{displaymath}
we get
\[ d(\phi_{t}^{m}(x_{0}), e) = d(\phi_{t}^{m}(x_{0}), \phi_{t}^{m}(e)) = d(x_{0}, e) = \mathrm{const} > 0, \]
but \( S^{m}(x_{0}) \to 0 \), when \( m \to \infty \). This contradicts \( \phi_{t}(y) = S(y) \) for all
\( y \in \mathbb{T}^{\mathbb{Z}} \).
\end{proof}
\bibliographystyle{amsplain}
\bibliography{/home/kslutsky/gitrep/papers/references}{}
\end{document}
TITLE: Calculate the area of $ D=\{(x,y)|(x-y)^2+4x^2\leq R^2\}$
QUESTION [0 upvotes]: Calculate the area of $ D=\{(x,y)|(x-y)^2+4x^2\leq R^2\}$
Hint: Prove that for any continuous function $f$
$$\int_0^a\ dx \int_0^x f(y)\ dy = \int_0^a (a-x)f(x) \ dx$$
I'm honestly getting stuck a lot, even at just sketching the regions. Does anyone have an idea of how to do this? And any tips on how to sketch a region in general and derive the limits of integration?
Thanks.
REPLY [1 votes]: Write $$2x=u,\quad y-x=v,\qquad {\rm resp.,}\qquad x={u\over2},\quad y={u\over2}+v\ .$$
Then $D$ is the linear image of $\hat D=\bigl\{(u,v)\bigm| u^2+v^2\leq R^2\bigr\}$. Since the Jacobian $x_uy_v-x_vy_u\equiv{1\over2}$ we have
$${\rm area}(D)={1\over2}{\rm area}(\hat D)={\pi\over2}R^2\ .$$
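Not part of the answer above, but for anyone who wants a numerical sanity check of the $\pi R^2/2$ result, a quick grid count does the job (my own sketch; the grid size is an arbitrary choice):

```python
import math
import numpy as np

R = 1.0
# (x-y)^2 + 4x^2 <= R^2 forces |x| <= R/2 and |y| <= 3R/2, so this box covers D.
x = np.linspace(-R / 2, R / 2, 1201)
y = np.linspace(-1.5 * R, 1.5 * R, 1201)
X, Y = np.meshgrid(x, y)
inside = (X - Y) ** 2 + 4 * X ** 2 <= R ** 2
area = inside.sum() * (x[1] - x[0]) * (y[1] - y[0])
assert abs(area - math.pi * R ** 2 / 2) < 0.05   # pi R^2 / 2 ~ 1.5708 for R = 1
```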
TITLE: Proving properties of Hermitian conjugate
QUESTION [2 upvotes]: I have three properties:
If $\hat{A}$ and $\hat{B}$ are Hermitian operators. Then $\hat{A}\hat{B}$ is Hermitian provided $\hat{A}$ and $\hat{B}$ also commute $[\hat{A},\hat{B}]=0$
If $\hat{A}$ and $\hat{B}$ are Hermitian operators and $\hat{A}$ and $\hat{B}$ also commute, then $\hat{A}+\hat{B}$ is Hermitian
If $\hat{A}$ and $\hat{B}$ are Hermitian operators, and $\hat{A}$ and $\hat{B}$ do not commute, then $\hat{A}\hat{B}+\hat{B}\hat{A}$ is Hermitian
I am trying to prove all these properties.
1st one:
For the second one I'm struggling, as I do not know how to expand $(\hat{A}+\hat{B})^\dagger$
REPLY [2 votes]: As Jakob commented, to prove identities of that kind it is often good to go back to the definition of the adjoint operator as arising from an inner product. Given an inner product $(\cdot,\cdot)$ and an operator $\hat{A}$, one defines the adjoint operator $\hat{A}^\dagger$ to be the operator that satisfies
$$(v,\hat{A}w) = (\hat{A}^\dagger v,w)$$
for all vectors $v,w$ (on a more technical note, one might have to restrict the condition from "all vectors" to "those vectors where the quantities are defined", but that is typically omitted in introductory QM lectures). With that, you can prove $(\hat{A}\hat{B})^\dagger = \hat{B}^\dagger \hat{A}^\dagger$ by
$$((\hat{A}\hat{B})^\dagger v,w) \stackrel{\textrm{def}}{=} (v,\hat{A}\hat{B}w) = (v,\hat{A}(\hat{B}w)) \stackrel{\textrm{def}}{=} (\hat{A}^\dagger v,\hat{B}w)
\stackrel{\textrm{def}}{=} (\hat{B}^\dagger(\hat{A}^\dagger v),w)
= (\hat{B}^\dagger\hat{A}^\dagger v,w).$$
Analogous to that, and just using the linearity of the inner product, i.e.
$$(v, w + \lambda u) = (v, w) + \lambda(v,u)$$
with vectors $v,w,u$ and a scalar $\lambda \in \mathbb{C}$, you can figure it out. Try it and comment if that works, otherwise I'll add another edit.
If the inner product notation is unfamiliar, replace braces with bras and kets and write greek letters,
$$\langle \psi | \hat{A} \phi \rangle\ \sim (v,\hat{A}w)$$
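To make the identities concrete, here is a small NumPy sanity check (my own addition to the answer; the $4\times 4$ size and the seed are arbitrary). It also confirms that the sum of Hermitian operators is Hermitian with no commutativity assumption needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2          # Hermitian by construction

def dag(M):
    return M.conj().T                    # the dagger (conjugate transpose)

A, B = rand_herm(4), rand_herm(4)

# (AB)^dagger = B^dagger A^dagger, always
assert np.allclose(dag(A @ B), dag(B) @ dag(A))

# A + B is Hermitian whenever A and B are (no commutativity needed)
assert np.allclose(dag(A + B), A + B)

# AB + BA is Hermitian even when A and B do not commute
assert np.allclose(dag(A @ B + B @ A), A @ B + B @ A)

# (AB)^dagger - AB = BA - AB, so AB is Hermitian exactly when [A,B] = 0
assert np.allclose(dag(A @ B) - A @ B, B @ A - A @ B)
```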
\begin{document}
\author{Martin Klazar\thanks{Department of Applied Mathematics (KAM) and Institute for Theoretical
Computer Science (ITI), Charles University, Malostransk\'e n\'am\v est\'\i\ 25, 118 00 Praha,
Czech Republic. ITI is supported by the project LN00A056 of the
Ministry of Education of the Czech Republic. E-mail: {\tt klazar@kam.mff.cuni.cz}}}
\title{Extremal problems for ordered (hyper)graphs: applications of Davenport--Schinzel sequences}
\date{}
\maketitle
\begin{abstract}
We introduce a containment relation of hypergraphs which respects linear orderings of vertices and investigate
associated extremal functions. We extend, by means of a more generally applicable theorem, the $n\log n$
upper bound on the ordered graph extremal function of $F=(\{1,3\},\{1,5\},\{2,3\},\{2,4\})$ due to F\"uredi to
the $n(\log n)^2(\log\log n)^3$ upper bound in the hypergraph case. We use Davenport--Schinzel sequences to
derive almost linear upper bounds in terms of the inverse Ackermann function $\alpha(n)$. We obtain such upper
bounds for the extremal functions of forests consisting of stars whose all centers precede all leaves.
\end{abstract}
\section{Introduction and motivation}
In this article we shall investigate extremal problems on graphs and hypergraphs of
the following type. Let $G=([n],E)$ be a simple graph that has the vertex set
$[n]=\{1,2,\dots,n\}$ and contains no six vertices $1\le v_1<v_2<\cdots<v_6\le n$ such that
$\{v_1,v_3\},\{v_1,v_5\},\{v_2,v_4\},$ and $\{v_2,v_6\}$ are edges of $G$, that is, $G$ has no {\em ordered\/}
subgraph of the form
\begin{equation}\label{G0}
G_0=\ \ \obr{\put(0,1){\bod}\put(6,1){\bod}\put(12,1){\bod}
\put(18,1){\bod}\put(24,1){\bod}\put(30,1){\bod}
\put(6,1){\oval(12,9)[t]}
\put(12,1){\oval(24,13)[t]}
\put(12,1){\oval(12,6)[t]}
\put(18,1){\oval(24,19)[t]}
}
\ \ .
\end{equation}
Determine the maximum possible number $g(n)=|E|$ of edges in $G$.
What makes this task hard is the linear ordering of $V=[n]$ and the fact that $G_0$ is forbidden in $G$
only as an ordered subgraph. If we ignore
the ordering for a while, then the problem asks to determine the maximum number of edges in a simple graph
$G$ with $n$ vertices and no $2K_{1,2}$ subgraph, and is easily solved. Clearly, if
$G$ has two vertices of degrees $\ge 3$ and $\ge 5$, respectively, or if $G$ has $\ge 6$ vertices of
degrees $4$ each, then a $2K_{1,2}$ subgraph must appear. Thus if $G$ has a vertex of degree $\ge 5$ and no
$2K_{1,2}$ subgraph, it has at most $(2(n-1)+n-1)/2=3n/2-1.5$ edges. If all degrees are $\le 4$, the number of
edges is at most $(3(n-5)+4\cdot 5)/2=3n/2+2.5$. On the other hand, the graph on $[n]$ in which $\deg(n)=n-1$
and $[n-1]$ induces a matching with $\lfloor (n-1)/2\rfloor$ edges has $n+\lfloor (n-1)/2\rfloor-1$ edges and no
$2K_{1,2}$ subgraph. We conclude that in the unordered version of the problem the maximum number of edges
equals $3n/2+O(1)$.
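The lower-bound construction above (a star centered at vertex $n$ together with a matching on $[n-1]$) is easy to verify mechanically. The following sketch, which is ours and not part of the paper, searches for two vertex-disjoint copies of $K_{1,2}$ (``cherries''):

```python
from itertools import combinations

def has_2K12(edges, n):
    """Does the graph on [n] contain two vertex-disjoint copies of K_{1,2}?"""
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # a "cherry" = a center together with two of its neighbours (one K_{1,2})
    cherries = [frozenset((c, x, y))
                for c in adj for x, y in combinations(sorted(adj[c]), 2)]
    return any(c1.isdisjoint(c2) for c1, c2 in combinations(cherries, 2))

n = 9
edges = [(v, n) for v in range(1, n)]                        # star centered at n
edges += [(2*i - 1, 2*i) for i in range(1, (n - 1)//2 + 1)]  # matching on [n-1]

assert len(edges) == n + (n - 1)//2 - 1   # the edge count claimed in the text
assert not has_2K12(edges, n)             # and the graph is 2K_{1,2}-free
```

Every cherry in this graph contains the vertex $n$, which is why no two of them can be vertex-disjoint.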
The ordered version is considerably more difficult. Later in this section we prove that the maximum
number of edges $g(n)$ satisfies
\begin{equation}\label{gn}
n\cdot\alpha(n)\ll g(n)\ll n\cdot 2^{(1+o(1))\alpha(n)^2}
\end{equation}
where $\alpha(n)$ is the inverse Ackermann function. Recall that $\alpha(n)=\min\{m:\ A(m)\ge n\}$ where
$A(n)=F_n(n)$, the Ackermann function, is defined as follows. We start with $F_1(n)=2n$ and for $i\ge 1$
we define $F_{i+1}(n)=F_i(F_i(\ldots F_i(1)\ldots ))$ with $n$ iterations of $F_i$. The function $\alpha(n)$
grows to infinity but its growth is extremely slow.
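For concreteness, $F_i$ and $\alpha(n)$ can be evaluated directly for tiny inputs. The sketch below is ours, not from the paper; the cap argument is only a device to keep the doubly recursive evaluation finite, and does not change the value of $\alpha(n)$:

```python
def F(i, n, cap=10**9):
    """F_i(n) from the text, truncated at `cap` so the evaluation terminates."""
    if i == 1:
        return min(2 * n, cap)
    x = 1
    for _ in range(n):                 # n-fold iteration of F_{i-1}, starting at 1
        x = F(i - 1, x, cap)
        if x >= cap:
            return cap
    return x

assert F(2, 5) == 32                   # F_2(n) = 2^n
assert F(3, 3) == 16                   # F_3(n) = tower of 2s of height n
assert [F(m, m) for m in (1, 2, 3)] == [2, 4, 16]   # A(1), A(2), A(3)

def alpha(n):
    """alpha(n) = min{m : A(m) >= n} with A(m) = F_m(m)."""
    m = 1
    while F(m, m, cap=n) < n:
        m += 1
    return m

assert alpha(16) == 3 and alpha(17) == 4
assert alpha(10**6) == 4               # alpha(n) is tiny for any practical n
```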
We obtain (\ref{gn}) and some generalizations by
reductions to {\em Davenport--Schinzel sequences\/} (DS sequences). We continue now with a brief review
of results on DS sequences that will be needed in the following. We refer the reader for more information
and references on DS sequences and their
applications in computational and combinatorial geometry to Agarwal and Sharir
\cite{agar_shar}, Klazar \cite{klaz02}, Sharir and Agarwal \cite{shar_agar}, and Valtr \cite{valt99}.
If $u=a_1a_2\dots a_r$ and $v=b_1b_2\dots b_s$ are two finite sequences (words) over a fixed infinite alphabet
$A$, where $A$ contains $\N=\{1,2,\dots\}$ and also some symbols $a,b,c,d,\dots$, we say that
$v$ {\em contains\/} $u$ and
write $v\succ u$ if $v$ has a subsequence $b_{i_1}b_{i_2}\dots b_{i_r}$ such that for every $p$ and $q$
we have $a_p=a_q$ if and only if $b_{i_p}=b_{i_q}$. In other words, $v$ has a subsequence that differs from
$u$ only by an injective renaming of the symbols. For example, $v=ccaaccbaa\succ 22244=u$ because $v$ has
the subsequence $cccaa$. On the other hand, $ccaaccbaa\not\succ 12121$. A sequence $u=a_1a_2\dots a_r$ is
called $k$-{\em sparse\/}, where $k\in\N$, if $a_i=a_j,i<j,$ always implies $j-i\ge k$; this means that
every interval in $u$ of length at most $k$ consists of distinct terms. The length $r$ of $u$ is denoted by
$|u|$. For two integers $a\le b$ we write $[a,b]$ for the interval $\{a,a+1,\dots,b\}$. For two functions
$f,g:\N\to\R$ the notation $f\ll g$ is synonymous to the $f=O(g)$ notation; it means that $|f(n)|<c|g(n)|$ for
all $n>n_0$ with a constant $c>0$.
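The containment relation $\succ$ and $k$-sparseness are easy to implement by brute force, which lets one mechanically confirm the examples above. The following sketch is ours, not part of the paper:

```python
def contains(v, u):
    """Does v contain u: does v have a subsequence equal to u up to an
    injective renaming of symbols?  Plain backtracking; fine for tiny inputs."""
    def rec(i, j, mp, used):
        if j == len(u):
            return True
        if len(v) - i < len(u) - j:          # not enough terms of v left
            return False
        a, b = u[j], v[i]
        if (mp.get(a) == b) or (a not in mp and b not in used):
            if rec(i + 1, j + 1, {**mp, a: b}, used | {b}):
                return True
        return rec(i + 1, j, mp, used)       # or skip v[i]
    return rec(0, 0, {}, set())

def is_k_sparse(v, k):
    """Equal terms of v are at least k apart."""
    last = {}
    for i, x in enumerate(v):
        if x in last and i - last[x] < k:
            return False
        last[x] = i
    return True

assert contains("ccaaccbaa", "22244")        # witnessed by the subsequence cccaa
assert not contains("ccaaccbaa", "12121")    # no 5-term alternation of two symbols
assert is_k_sparse("12121212", 2) and not is_k_sparse("12121212", 3)
```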
The classical theory of DS sequences
investigates, for a fixed $s\in\N$, the function $\lambda_s(n)$ that is defined as the maximum length of
a 2-sparse sequence $v$ over $n$ symbols which does not contain the $s+2$-term alternating sequence
$ababa\dots$ ($a\ne b$). The notation $\lambda_s(n)$ and the shift $+2$ are due to historical reasons. The
term {\em DS sequences\/} refers to the sequences $v$ not containing a fixed alternating sequence.
The theory of {\em generalized DS sequences\/} investigates, for a fixed sequence $u$ that uses exactly $k$
symbols, the function $\ex(u,n)$ that is defined as the maximum length of a $k$-sparse sequence $v$ such that
$v$ is over $n$ symbols and $v\not\succ u$. Note that $\ex(u,n)$ extends $\lambda_s(n)$ since
$\lambda_s(n)=\ex(ababa\dots,n)$ where $ababa\dots$ has length $s+2$. In the definition of
$\ex(u,n)$ one has to require that $v$ is $k$-sparse because no condition or even
only $k-1$-sparseness would allow an infinite $v$ with $v\not\succ u$; for example,
$v=12121212\dots\not\succ abca=u$ and $v$ is 2-sparse (but not 3-sparse). An easy pigeon hole argument shows
that always $\ex(u,n)<\infty$.
DS sequences were introduced by Davenport and Schinzel \cite{dave_schi} and strongest bounds on
$\lambda_s(n)$ for general $s$ were obtained by Agarwal, Sharir and Shor \cite{agar_shar_shor}.
We need their bound
\begin{equation}\label{lambda6}
\lambda_6(n)\ll n\cdot 2^{(1+o(1))\alpha(n)^2}
\end{equation}
(recall that $\lambda_6(n)=\ex(abababab,n)$). Hart and Sharir \cite{hart_shar} proved that
\begin{equation}\label{lambda3}
n\alpha(n)\ll\lambda_3(n)\ll n\alpha(n).
\end{equation}
In Klazar \cite{klaz92} we proved that if $u$ is a sequence using $k\ge 2$ symbols and $|u|=l\ge 5$, then
for every $n\in\N$
\begin{equation}\label{mujobec}
\ex(u,n)\le n\cdot k2^{l-3}\cdot (10k)^{2\alpha(n)^{l-4}+8\alpha(n)^{l-5}}.
\end{equation}
It is easy to show that for $k=1$ or $l\le 4$ we have $\ex(u,n)\ll n$. In particular, for the sequence
\begin{equation}\label{ukl}
u(k,l)=12\ldots k12\ldots k\ldots 12\ldots k
\end{equation}
with $l$ segments $12\ldots k$ we have, for every fixed $k\ge 2$ and $l\ge 3$,
\begin{equation}\label{muj}
\ex(u(k,l),n)\le n\cdot k2^{kl-3}\cdot (10k)^{2\alpha(n)^{kl-4}+8\alpha(n)^{kl-5}}.
\end{equation}
We denote the factor at $n$ in (\ref{muj}) as $\beta(k,l,n)$. Thus
\begin{equation}\label{beta}
\beta(k,l,n)=k2^{kl-3}(10k)^{2\alpha(n)^{kl-4}+8\alpha(n)^{kl-5}}.
\end{equation}
Let us see now how (\ref{lambda6}) and the lower bound in (\ref{lambda3}) imply (\ref{gn}). Let $G=([n],E)$
be any simple graph not containing $G_0$ (given in (\ref{G0})) as an ordered subgraph. Consider the sequence
$$
v=I_1I_2\dots I_n
$$
over $[n]$ where $I_i$ is the
decreasing ordering of the list $\{j:\ \{j,i\}\in E\;\&\;j<i\}$. Note that $I_1=\emptyset$ and $|v|=|E|$.
\begin{lema}
If $v\succ abababab$ then $G_0$ is an ordered subgraph of $G$.
\end{lema}
\duk
We assume that $v$ has an 8-term alternating subsequence
$$
\dots a_1\dots b_1\dots a_2\dots b_2\dots a_3\dots b_3\dots a_4\dots b_4\dots
$$
where the appearances of two numbers $a\neq b$ are indexed for further discussion. We distinguish two cases.
If $a<b$ then $a_2$, $b_2$, $a_4$, and $b_4$ lie, respectively, in four distinct intervals $I_p$, $I_q$,
$I_r$, and $I_s$, $p<q<r<s$, (since every $I_i$ is decreasing) and $b<p$ (since $b_1$ precedes $a_2$). Hence
$G_0$ is an ordered subgraph of $G$. If $b<a$ then $b_1$, $a_2$, $b_3$, and $a_4$ lie, respectively, in four
distinct intervals $I_p$, $I_q$, $I_r$, and $I_s$, $p<q<r<s$, and $a<p$. Again, $G_0$ is an ordered subgraph
of $G$.
\kduk
\noindent
Thus $v$ has no 8-term alternating subsequence. Immediate repetitions in $v$ may appear only at the
transitions $I_iI_{i+1}$. Deleting at most $n-1$ (in fact $n-2$, because
$I_1=\emptyset$) terms from $v$ we obtain a 2-sparse subsequence $w$ to which we can apply
(\ref{lambda6}). We have
$$
|E|=|v|\le |w|+n-1\le\lambda_6(n)+n-1\ll n\cdot 2^{(1+o(1))\alpha(n)^2}.
$$
On the other hand, let $n\in\N$ and $v$ be the longest 2-sparse sequence over $[n]$ such that
$v\not\succ ababa$. It uses all $n$ symbols and, by the lower bound in (\ref{lambda3}),
$|v|>cn\alpha(n)$ for an absolute constant $c>0$. Notice that every $i\in[n]$ appears in $v$ at least twice.
We rename the symbols in $v$ so that for every $1\le i<j\le n$
the first appearance of $j$ in $v$ precedes that of $i$; this affects neither the property $v\not\succ ababa$
nor the 2-sparseness. By an {\em extremal term of\/} $v$ we mean the first or the last
appearance of a symbol in $v$. The sequence $v$ has exactly $2n$ extremal terms. We decompose $v$ uniquely
into intervals $v=I_1I_2\dots I_{2n}$ so that every $I_i$ ends with an extremal term and contains
no other extremal
term. Every $I_i$ consists of distinct terms because a repetition
$\dots b\dots b\dots$ in
$I_i$ would force a $5$-term alternating subsequence $\dots a\dots b\dots a\dots b\dots a\dots $ in $v$.
We define a simple (bipartite) graph $G^*=([3n],E)$ by
$$
\{i,j\}\in E\Longleftrightarrow i\in[n]\;\&\;j\in[n+1,3n]\;\&\;
\mbox{$i$ appears in $I_{j-n}$}.
$$
$G^*$ has $3n$ vertices and $|E|=|v|>cn\alpha(n)$ edges. Suppose that $G^*$ contains the forbidden
ordered subgraph $G_0$ on the vertices $1\le a_1<a_2<\ldots<a_6\le 3n$. By the definition of $G^*$,
$z=a_1a_2a_1a_2$ is a subsequence of $v$ and its terms appear in $I_{a_3-n},\dots, I_{a_6-n}$, respectively.
Since $a_2>a_1$, the number $a_2$ must appear in $v$ before $z$ starts and therefore $v$ contains a $5$-term
alternating subsequence but this is forbidden. So $G^*$ does not contain $G_0$ and shows that
$$
g(n)\gg n\alpha(n).
$$
This concludes the proof of (\ref{gn}).
\begin{prob}
Narrow the gap $\lambda_3(n)\ll g(n)\ll\lambda_6(n)$ in (\ref{gn}). What is the precise asymptotics of
$g(n)$?
\end{prob}
Our illustrative example with $g(n)$ shows that the ordered version of a simple graph extremal
problem may differ dramatically from the unordered one.
Classical extremal theory of graphs and hypergraphs deals with unordered vertex sets and it produced many
results of great variety --- see, for example,
Bollob\'as \cite{boll78, boll95}, Frankl \cite{fran},
F\"{u}redi \cite{fure91}, and Tuza \cite{tuza94, tuza96}. However, only little attention has been paid to
ordered extremal problems. The only systematic studies devoted to this topic known to us
are F\"uredi and Hajnal \cite{fure_hajn} (ordered bipartite graphs) and Brass, K\'arolyi and Valtr \cite{bras_valt} (cyclically ordered graphs). We think that for several
reasons ordered extremal problems should be studied and investigated more intensively. First, for their
intrinsic combinatorial beauty. Second, since they present to us new functions not to be met
in the classical theory: nearly linear extremal functions, like $n\alpha(n)$ or
$n\log n$, are characteristic of ordered extremal problems and it seems that they cannot appear without
ordering of some sort. Third, estimates coming from ordered extremal theory were successfully applied in
combinatorial geometry, where often the right key to a problem turns out to be some linear or partial
ordering, and to obtain more such applications we have to understand more thoroughly combinatorial cores
of these arguments.
Before summarizing our results, we return to DS sequences and show that the sequential containment $\prec$
can be naturally interpreted in terms of particular hypergraphs, (set) partitions.
A sequence $u=a_1a_2\dots a_r$
over the alphabet $A$ may be viewed as a partition $P$ of $[r]$ where $i$ and $j$ are in the same block of
$P$ if and only if $a_i=a_j$. Thus blocks of $P$ correspond to the positions of symbols in $u$.
For example, $u=abaccba$ is the partition $\{\{1,3,7\},\{2,6\},\{4,5\}\}$.
If
$u=([r],\sim_u)$ and $v=([s],\sim_v)$ are two sequences given as partitions by equivalence relations, then
$u\prec v$ if and only if there is an {\em increasing\/} injection $f:[r]\to[s]$ such that
$x\sim_u y\Leftrightarrow f(x)\sim_v f(y)$ holds for every $x,y\in[r]$.
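The translation from a sequence to its partition is mechanical; the following short sketch (an
illustration of ours, with names of our own choosing) groups the $1$-based positions of equal symbols
into blocks.

```python
def seq_to_partition(seq):
    """Group the 1-based positions of equal symbols into blocks,
    ordered by their smallest element."""
    blocks = {}
    for pos, sym in enumerate(seq, start=1):
        blocks.setdefault(sym, set()).add(pos)
    return sorted(blocks.values(), key=min)

# The example from the text: u = abaccba.
assert seq_to_partition("abaccba") == [{1, 3, 7}, {2, 6}, {4, 5}]
```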
In this article we introduce and investigate a hypergraph containment that generalizes both the ordered
subgraph relation and the sequential containment. The containment and its associated extremal
functions $\ex_e(F,n)$ and $\ex_i(F,n)$ are given in Definitions~\ref{defofcont} and \ref{defoffunc}.
The function $\ex_e(F,n)$ counts edges in extremal simple hypergraphs $H$ not containing a fixed hypergraph
$F$ and the function $\ex_i(F,n)$ counts sums of edge cardinalities. In
Theorem~\ref{exiaexe} we show that for many $F$ one has $\ex_i(F,n)\ll\ex_e(F,n)$. Theorem~\ref{blowups}
shows that if $F$ is a simple graph, then in some cases good bounds on $\ex_e(F,n)$ can be obtained from bounds
on the ordered graph extremal function $\mathrm{gex}(F,n)$. We apply Theorem~\ref{blowups} to prove in
Theorem~\ref{obecfure} that for $G_1=(\{1,3\},\{1,5\},\{2,3\},\{2,4\})$ one has
$\ex_e(G_1,n)\ll n\cdot(\log n)^2\cdot(\log\log n)^3$ and the same bound for $\ex_i(G_1,n)$;
this generalizes the bound $\gex(G_1,n)\ll n\cdot\log n$
of F\"uredi. In another application, Theorem~\ref{klasstro}, we prove that the {\em unordered\/} hypergraph
extremal function $\ex_e^u(F,n)$ of every forest $F$ is $\ll n$. In Theorem~\ref{ipomocie} we
generalize the bound (\ref{mujobec}) to hypergraphs. In Theorem~\ref{starfore} we prove
that if $F$ is a star forest, then $\ex_e(F,n)$ has an almost linear upper bounds in terms of $\alpha(n)$;
this generalizes the upper bound in (\ref{gn}). In the concluding section we introduce the notion
of orderly bipartite forests and pose some problems.
This article is a revised version of about one half of the material in the technical report \cite{klaz01}.
We present the other half in \cite{klaz_dalsi}.
\section{Definitions and bounding weight by size}
A {\em hypergraph\/} $H=(E_i:\ i\in I)$ is a finite list of finite nonempty subsets $E_i$ of
$\N=\{1,2,\ldots\}$, called {\em edges\/}. $H$ is {\em simple\/} if $E_i\neq E_j$ for every $i,j\in I$,
$i\neq j$. $H$ is a {\em graph\/} if $|E_i|=2$ for every $i\in I$. $H$ is a {\em partition\/} if
$E_i\cap E_j=\emptyset$ for every $i,j\in I$, $i\neq j$. The elements
of $\bigcup H=\bigcup_{i\in I} E_i\subset\N$ are called {\em vertices\/}. Note that our hypergraphs have
no isolated vertices. The {\em simplification\/} of $H$ is the simple hypergraph obtained from $H$ by
keeping from each family of mutually equal edges just one edge.
\begin{defi}\label{defofcont}
Let $H=(E_i:\ i\in I)$ and $H'=(E_i':\ i\in I')$ be two hypergraphs. $H$ {\em contains\/}
$H'$, in symbols $H\succ H'$, if there exist an {\em increasing\/} injection $F:\bigcup H'\to\bigcup H$ and an
injection $f: I'\rightarrow I$ such that the implication
$$
v\in E_i'\Rightarrow F(v)\in E_{f(i)}
$$
holds for every vertex $v\in\bigcup H'$ and every index $i\in I'$. Else we say that $H$ is
$H'$-{\em free\/} and write $H\not\succ H'$.
\end{defi}
The hypergraph containment $\prec$ clearly extends the sequential containment and the ordered subgraph
relation. We give an alternative definition of $\prec$. $H=(E_i:\ i\in I)$ and $H'=(E_i':\ i\in I')$ are
{\em isomorphic\/} if there are an {\em increasing\/} bijection $F:\bigcup H'\to\bigcup H$ and a bijection
$f: I'\rightarrow I$ such that $F(E_i')=E_{f(i)}$ for every $i\in I'$. $H'$ is a {\em reduction\/}
of $H$ if $I'\subset I$ and $E_i'\subset E_i$ for every $i\in I'$. Then $H'\prec H$ if and only if $H'$
is isomorphic to a reduction of $H$. We call that reduction of $H$ an $H'$-{\em copy\/} in $H$.
For example, if $H'=(\{1\}_1,\{1\}_2)$ ($H'$ is a singleton edge repeated twice) then $H\not\succ H'$
iff $H$ is a partition. Another example: If $H'=(\{1,3\},\{2,4\})$ then $H$ is $H'$-free iff $H$ has no
four vertices $a<b<c<d$ such that $a$ and $c$ lie in one edge of $H$ and $b$ and $d$ lie in
another edge.
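For very small hypergraphs the containment of Definition~\ref{defofcont} can be checked by exhaustive
search over increasing vertex injections and injective edge assignments. The sketch below is our own
illustration (function names ours); it verifies both examples above.

```python
from itertools import combinations, permutations

def vertices(H):
    """Sorted vertex set of a hypergraph given as a list of sets."""
    return sorted(set().union(*H))

def contains(H, Hp):
    """Brute force containment test: an increasing vertex injection F and an
    injective edge map f with  v in E'_i  =>  F(v) in E_{f(i)}."""
    VH, VHp = vertices(H), vertices(Hp)
    if len(VHp) > len(VH) or len(Hp) > len(H):
        return False
    for img in combinations(VH, len(VHp)):               # increasing F
        F = dict(zip(VHp, img))
        for f in permutations(range(len(H)), len(Hp)):   # injective f
            if all(F[v] in H[f[i]] for i, E in enumerate(Hp) for v in E):
                return True
    return False

# H contains ({1},{1}) iff H is not a partition.
assert contains([{1, 2}, {2, 3}], [{1}, {1}])
assert not contains([{1, 2}, {3, 4}], [{1}, {1}])
# The "crossing edges" example: H' = ({1,3},{2,4}).
assert contains([{1, 3}, {2, 4}], [{1, 3}, {2, 4}])
assert not contains([{1, 2}, {3, 4}], [{1, 3}, {2, 4}])
```

The search is exponential and is meant only to make the definition unambiguous on toy instances.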
The {\em order\/} $v(H)$ of $H=(E_i:\ i\in I)$ is the number of vertices $v(H)=|\bigcup H|$, the
{\em size\/} $e(H)$ is the number of edges $e(H)=|H|=|I|$, and the {\em weight\/} $i(H)$ is the number of
incidences between the vertices and the edges $i(H)=\sum_{i\in I}|E_i|$. Trivially, $v(H)\le i(H)$ and
$e(H)\le i(H)$ for every $H$.
\begin{defi}\label{defoffunc}
Let $F$ be any hypergraph. We associate with $F$ the extremal functions $\ex_e(F),\ex_i(F): \N\to\N$, defined by
\begin{eqnarray*}
\ex_e(F,n)&=&\max\{e(H):\ H\not\succ F\;\&\;\mbox{$H$ is simple}\;\&\;v(H)\le n\}\\
\ex_i(F,n)&=&\max\{i(H):\ H\not\succ F\;\&\;\mbox{$H$ is simple}\;\&\;v(H)\le n\}.
\end{eqnarray*}
\end{defi}
We considered $\ex_e(F,n)$ and $\ex_i(F,n)$ implicitly already in Klazar \cite{klaz00}. Apart from that
article, to our knowledge, this extremal setting is new and has not been investigated before.
Obviously, for every $n\in\N$ and
$F$, $\ex_e(F,n)\le 2^n-1$ and $\ex_i(F,n)\le n2^{n-1}$ but much better bounds can be usually given.
The {\em reversal\/} of a hypergraph $H=(E_i:\ i\in I)$ with $N=\max(\bigcup H)$ is the hypergraph
$\overline{H}=(\overline{E_i}:\ i\in I)$ where $\overline{E_i}=\{N-x+1:\ x\in E_i\}$. Thus reversals are
obtained by reverting the linear ordering of vertices. It is clear that
$\ex_e(F,n)=\ex_e(\overline{F},n)$ and $\ex_i(F,n)=\ex_i(\overline{F},n)$ for every $F$ and $n$.
We give a few comments on Definitions~\ref{defofcont} and \ref{defoffunc}. Replacing the implication
in Definition~\ref{defofcont} with an equivalence, we obtain an induced hypergraph containment that still
extends the sequential containment. Let $\mathrm{iex}_e(F,n)$ be the corresponding extremal function. For
$F_k=(\{1\},\{2\},\dots,\{k\})$ and $k\ge 2$ we have $\mathrm{iex}_e(F_k,n)\ge{n\choose k-2}$
because the hypergraph $(E:\ E\subset[n]\;\&\;|E|=n-k+2)$ does not contain $F_k$ in the induced sense.
But in the hypergraph ``DS theory'' such uncomplicated hypergraphs like $F_k$ should have linear
(or almost linear) extremal functions. Thus the induced containment is not the right generalization, as far as
one is interested in ``DS theories''.
For graphs, if $H_2\succ H_1$ and $H_2$ is simple then $H_1$ is simple as well. But simple
hypergraphs may contain hypergraphs that are not simple.
In Definition~\ref{defoffunc} $H$ must be simple because allowing all $H$ would produce usually the value
$+\infty$ (the simplicity of $H$ may be dropped only for $F=(\{1\}_1,\{1\}_2,\ldots,\{1\}_k)$). On the other
hand, we allow for the forbidden $F$ any hypergraph: $F$ need not be simple and may have singleton edges. Such a
freedom for $F$ seemed to one referee of \cite{klaz01} ``very artificial''. This opinion is
understandable but the author does not share it. Putting aside psychological inertia, there is no reason
to restrict $F$ from the outset to be simple or in any other way. On the contrary, by doing so we might miss
some connections and phenomena. Thus we start
in the definitions with completely arbitrary $F$ and restrict it later only if circumstances require so.
Another perhaps unusual feature of our extremal theory is that in $H$ and $F$ edges of all cardinalities are
allowed; in extremal theories with forbidden substructures it is more common to have edges of just one
cardinality. This led naturally to the function $\ex_i(F,n)$ which accounts for edges of all sizes. Trivially,
$\ex_i(F,n)\ge\ex_e(F,n)$ for every hypergraph $F$ and $n\in\N$. On the other hand, Theorem~\ref{exiaexe} shows
that for many $F$ one has $\ex_i(F,n)\ll\ex_e(F,n)$. In Definition~\ref{defoffunc} we take all $H$ with
$v(H)\le n$ so that the extremal functions are automatically nondecreasing. Replacing $v(H)\le n$ with $v(H)=n$
would give more information on the functions but also would bring the complication that then extremal
functions are not always nondecreasing. It happens for $F=(\{1\},\{2\},\dots,\{k\})$ and we analyze this
phenomenon in \cite{klaz01,klaz_dalsi}.
\begin{veta}\label{exiaexe}
Suppose that the hypergraph $F$ has no two separated edges, which means that $E_1<E_2$ holds for no two
edges of $F$. Let
$p=v(F)$ and $q=e(F)>1$. Then for every $n\in\N$,
$$
\ex_i(F,n)\le (2p-1)(q-1)\cdot\ex_e(F,n).
$$
\end{veta}
\duk
Suppose that $H$ attains the value $\ex_i(F,n)$. We transform $H$ into a new hypergraph $H'$ by keeping all
edges with less than $p$ vertices and replacing every edge $E=\{v_1,v_2,\ldots,v_s\}$ of $H$
with $s\ge p$, where $v_1<v_2<\cdots<v_s$, by $t=\lfloor |E|/p\rfloor$ new $p$-element edges
$\{v_1,\ldots,v_p\}$, $\{v_{p+1},\ldots,v_{2p}\},\ldots,$ $\{v_{(t-1)p+1},\ldots,v_{tp}\}$. $H'$ may not be
simple and we set $H''$ to be the simplification of $H'$. Two observations: (i) no edge of $H'$ repeats $q$
or more times and (ii) $H''$ is $F$-free. If (i) were false, there would be $q$ distinct edges
$E_1,\ldots,E_q$ in $H$ such that $|\bigcap_{i=1}^q E_i|\ge p$. But this implies the contradiction
$F\prec H$. As for (ii), any $F$-copy in $H''$ may use from every $E\in H$ at most one new edge
$E''\subset E$ (the new edges born from $E$ are mutually separated) and so it is an $F$-copy in $H$ as well.
The observations and the definitions of $H'$ and $H''$ imply
\begin{eqnarray*}
\ex_i(F,n)=i(H)&\le& {(2p-1)\cdot i(H')\over p}\le {(2p-1)(q-1)\cdot i(H'')\over p}\\
&\le& (2p-1)(q-1)\cdot e(H'')\\
&\le& (2p-1)(q-1)\cdot\ex_e(F,n).
\end{eqnarray*}
The last, innocent-looking inequality follows from the fact that $\ex_e(F,n)$ is nondecreasing by definition.
\kduk
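The chopping step in the proof is easy to visualize: a long edge is split into
$\lfloor|E|/p\rfloor$ consecutive $p$-element blocks, discarding at most $p-1$ of the largest vertices,
which loses at most a $(p-1)/(2p-1)$ fraction of the incidences. A small sketch of ours:

```python
def chop(E, p):
    """Split a sorted edge into floor(|E|/p) consecutive p-element blocks."""
    v = sorted(E)
    t = len(v) // p
    return [set(v[j * p:(j + 1) * p]) for j in range(t)]

E = {1, 2, 3, 4, 5, 6, 7}
assert chop(E, 3) == [{1, 2, 3}, {4, 5, 6}]
# At most p-1 vertices of E are lost, so the blocks keep at least
# a p/(2p-1) fraction of the incidences of E:
p = 3
assert p * len(chop(E, p)) >= (p / (2 * p - 1)) * len(E)
```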
However, $\ex_i(F,n)\ll\ex_e(F,n)$ does not hold for $F_k=(\{1\},\{2\},\dots,\{k\})$ and $k\ge 2$:
$\ex_e(F_k,n)=2^{k-1}-1$ for $n\ge k-1$ and $\ex_i(F_k,n)=(k-1)n-(k-2)$ for $n>\max(k,2^{k-2})$
(\cite{klaz01,klaz_dalsi}). Note that for $F=(\{1\})$ both extremal functions are undefined. $F_k$ is highly
symmetric and the ordering of vertices is irrelevant for the containment $H\succ F_k$.
\begin{prob}
Prove that if $F$ is not isomorphic to $(\{1\},\{2\},\dots,\{k\})$, $k\ge 1$, then $\ex_i(F,n)\ll\ex_e(F,n)$.
\end{prob}
\section{Bounding hypergraphs by means of graphs}
For a family of simple graphs $R$ and $n\in\N$ we define
\begin{eqnarray*}
\gex(R,n)&=&\max\{e(G):\ G\not\succ G' \mbox{ for all } G'\in R\;\&\;\mbox{$G$ is a simple
graph}\\
&&\;\&\;v(G)\le n\}
\end{eqnarray*}
and for one simple graph $G$ we write $\gex(G,n)$ instead of $\gex(\{G\},n)$. F\"uredi proved in \cite{fure90},
see also \cite{fure_hajn}, that for
\begin{equation}\label{GF}
G_1=(\{1,3\},\{1,5\},\{2,3\},\{2,4\})=\
\obr{\put(1,1){\bod}\put(8,1){\bod}\put(15,1){\bod}
\put(22,1){\bod}\put(29,1){\bod}
\put(8,1){\line(1,0){7}}
\put(8,1){\oval(14,10)[t]}\put(15,1){\oval(14,7)[t]}
\put(15,1){\oval(28,16)[t]}
}
\end{equation}
one has
\begin{equation}\label{furebound}
n\log n\ll\gex(G_1,n)\ll n\log n.
\end{equation}
(In \cite{fure90} and \cite{fure_hajn} the investigated objects are 0-1 matrices, which are ordered bipartite
graphs, but in the case of $G_1$ the bounds are easily extended to all ordered graphs.) Attempts to generalize
the upper bound in (\ref{furebound}) to hypergraphs motivated the next theorem.
For $k\in\N$ we say that a simple graph $G'$ is a $k$-{\em blow-up\/} of a simple graph $G$ if for
every edge coloring
$\chi: G'\to\N$ that uses every color at most $k$ times there is a $G$-copy in $G'$ with totally different
colors, that is, $\chi$ is injective on the $G$-copy. For $k\in\N$ and a simple graph $G$ we write $B(k,G)$ to
denote the set of all $k$-blow-ups of $G$.
\begin{veta}\label{blowups}
Let $F$ be a simple graph with $p=v(F)$ and $q=e(F)>1$ and let $B\subset B({p\choose 2},F)$.
If $f: \N\to\N$ is a nondecreasing function such that
$$
\mathrm{gex}(B,n)<n\cdot f(n)
$$
holds for every $n\in\N$, then
\begin{equation}\label{rekner}
\ex_e(F,n)< q\cdot \mathrm{gex}(F,n)\cdot \ex_e(F,2f(n)+1)
\end{equation}
holds for every $n\in\N$, $n\ge 3$.
\end{veta}
\duk
Suppose that the simple hypergraph $H$ attains the value $\ex_e(F,n)$ and $\bigcup H=[m]$, $m\le n$. We put in
$H'$ every edge of $H$ with more than $1$ and less than $p$ vertices and for every $E\in H$ with $|E|\ge p$
we put in $H'$ an arbitrary subset $E'\subset E$, $|E'|=p$. So $2\le |E|\le p$ for every $E\in H'$ and no edge
of $H'$ repeats more than $q-1$ times, for else we would have $H\succ F$. Let $H''$ be the simplification
of $H'$. We have
$$
e(H)\le n+(q-1)e(H'').
$$
Let $G$ be the simple graph consisting
of all edges $E^*$ such that $E^*\subset E$ for some $E\in H''$. Observe that if $F'\in B$ and $F'\prec G$,
then $F\prec H''$ and thus $F\prec H$. (For the edges $E^*\in G$ forming an $F'$-copy consider the coloring
$\chi(E^*)=E$ where $E\in H''$ is such that
$E^*\subset E$. Every color is used at most ${p\choose 2}$ times and therefore, since $F'$ is a
${p\choose 2}$-blow-up of $F$, we have an $F$-copy in $G$ for which the correspondence $E^*\mapsto E$ is
injective.)
Hence $F'\prec G$ for no $F'\in B$. Let $v(G)=n'$; $n'\le n$. We have
$$
e(G)\le \mathrm{gex}(B,n')<n'\cdot f(n').
$$
There exists a vertex $v_0\in\bigcup G$ such that
$$
d=\deg_{G}(v_0)<2f(n')\le 2f(n).
$$
Fix an arbitrary edge $E_0^*$, $v_0\in E_0^*\in G$. Let $X\subset[n]$ be the union of all edges $E\in H''$
satisfying $E_0^*\subset E$ and $m$ be the number of such edges in $H''$. We have the inequalities
$$
m\le\ex_e(F,|X|)\ \mbox{ and }\ |X|\le d+1.
$$
Thus
$$
m\le \ex_e(F,|X|)\le \ex_e(F,d+1)\le\ex_e(F,2f(n)+1).
$$
We see that the two-element set $E_0^*$ is contained in at least 1 but at most $\ex_e(F,2f(n)+1)$ edges of
$H''$. Deleting those edges we obtain a subhypergraph $H_1''$ of $H''$ on which the same argument can be
applied. That is, a two-element set $E_1^*$ exists such that $E_1^*\subset E$ for at least 1 but at most
$\ex_e(F,2f(n)+1)$ edges $E\in H_1''$ and clearly $E_1^*\neq E_0^*$. Continuing this way until the whole
$H''$ is exhausted, we define a mapping
$$
M: H''\to\{E^*:\ E^*\subset [n], |E^*|=2\}
$$
such that
$$
M(E)\subset E\ \mbox{ and }\
|M^{-1}(E^*)|\le\ex_e(F,2f(n)+1)
$$
holds for every $E\in H''$ and $E^*\subset [n], |E^*|=2$. Let $G'$ be the simple graph $G'=M(H'')$
and $v(G')=n'$; $n'\le n$.
The containment $F\prec G'$ implies, by the definition of $G'$, that
$F\prec H''$ and hence $F\prec H$, which is not allowed. Thus
$$
e(G')\le \mathrm{gex}(F,n')\le \mathrm{gex}(F,n).
$$
Putting it all together, we obtain (since $\mathrm{gex}(F,n)\ge n-1$ if $q>1$)
\begin{eqnarray*}
\ex_e(F,n)=e(H)&\le & n+(q-1)\cdot e(H'')\\
&\le& n+(q-1)\cdot\ex_e(F,2f(n)+1)\cdot e(G')\\
&<& q\cdot \ex_e(F,2f(n)+1)\cdot \mathrm{gex}(F,n)
\end{eqnarray*}
for every $n\ge 3$.
\kduk
\noindent
We give three applications of this theorem. The first one is the promised generalization of the upper bound
in (\ref{furebound}).
We say that a simple graph $G$ is a $k$-{\em multiple\/} of $G_1$, where $k\in\N$ and $G_1$ is defined
in (\ref{GF}), if $G$ has this structure: $\bigcup G=A\cup\{v\}\cup B\cup C$ with $A<v<B<C$, $|A|=k$, the
vertex $v$ has degree $k$ and is connected to every vertex in $A$, every vertex in $A$ has degree $2k+1$ and
is besides $v$ connected to $k$ vertices in $B$ and to $k$ vertices in $C$, and $G$ has no other edges. The
edges incident with $v$ are called {\em backward edges\/} and the edges incident with vertices in $B\cup C$
are called {\em forward edges\/}. We denote the set of all $k$-multiples of $G_1$ by $M(k)$.
\begin{lema}\label{blowupfure}
The sets of graphs $M(k)$, $k\in\N$, have the following properties.
\begin{enumerate}
\item For every $k$, $M(3k+1)\subset B(k,G_1)$. In particular, $M(31)\subset B({5\choose 2},G_1)$.
\item For every $k$, $\gex(M(k),n)\ll n\log n$.
\end{enumerate}
\end{lema}
\duk
1. Let $G$ be a $(3k+1)$-multiple of $G_1$ and $\chi:G\to\N$ be an edge coloring using each color at most
$k$ times. We select in $G$ two backward edges $E_1=\{i,v\}$ and $E_2=\{j,v\}$, $i<j<v$, with different
colors. It follows that we can select in $G$ two forward edges $E_3=\{i,l\}$ and $E_4=\{j,l'\}$
such that $v<l'<l$ and the colors $\chi(E_1),\dots,\chi(E_4)$ are distinct. Edges $E_1,\dots,E_4$ form a
$G_1$-copy on which $\chi$ is injective. Thus $G\in B(k,G_1)$.
2. Let $n\ge 2$ and $G$ be any simple graph such that $\bigcup G=[n]$ and $G\not\succ F$ for every $F\in M(k)$.
Let $J_i=\{E\in G:\ \min E=i\}$ for $i\in[n]$. In every $J_i$ it is possible to mark
$\lfloor|J_i|/k\rfloor>|J_i|/k-1$ edges so that every marked edge is in $J_i$ immediately
followed by $k-1$ unmarked ones (we order the edges in $J_i$ by their endpoints). The graph $G'$ formed by the
marked edges satisfies
$$
e(G')> e(G)/k-n.
$$
Also, for every edge $\{i,j\}\in G'$, $i<j$, there are
at least $k-1$ edges $\{i,l\}\in G$ with $l>j$, and for every two edges $\{i,j\},\{i,j'\}\in G'$, $i<j<j'$,
there are at least $k-1$ edges $\{i,l\}\in G$ with $j<l<j'$.
Now we proceed as in F\"uredi \cite{fure90}. We say that $\{i,j\}\in G'$, $i<j$, has {\em type\/} $(j,m)$,
where $m\ge 0$ is an integer, if
there are two edges $\{i,l\}$ and $\{i,l'\}$ in $G'$ such that $j<l<l'$ and $l-j\le 2^m<l'-j$.
Consider the partition
$$
G'=G^*\cup G^{**}
$$
where $G^*$ is formed by edges with at least one type and $G^{**}$ by
edges without type. It follows from the definition of type and of $G'$ that if $k$ edges of $G^*$ have
the same type, then $F\prec G$ for some $F\in M(k)$ which is forbidden. Thus any type is shared by at
most $k-1$ edges. Since the number of types is less than $n(1+\log_2 n)$, we have
$$
e(G^*)<(k-1)n+(k-1)n\log_2n.
$$
To bound $e(G^{**})$, we fix a vertex $i\in[n]$ and consider the endpoints $i<j_0<j_1<\cdots<j_{t-1}\le n$
of all edges $E\in G'$ which have no type and $\min E=i$. Let $d_r=j_r-j_{r-1}$ for $1\le r\le t-1$ and
$D=d_1+\cdots+d_{t-1}=j_{t-1}-j_0$. If $d_1\le D/2$, then $d_1\le 2^m<D$ for
some integer $m\ge 0$ and the edge $\{i,j_0\}$ would have type $(j_0,m)$ because of the edges $\{i,j_1\}$ and
$\{i,j_{t-1}\}$. Thus $d_1>D/2$ and $D-d_1<D/2$. By the same argument applied to $\{i,j_1\}$,
$d_2>(D-d_1)/2$ and thus $D-d_1-d_2<D/4$.
In general, $1\le D-d_1-\cdots-d_r<D/2^r$ for $1\le r\le t-2$. Thus
$t\le \log_2 D+2<2+\log_2n$. Summing these inequalities for all $i\in[n]$, we have
$$
e(G^{**})<2n+n\log_2n.
$$
Altogether we have
$$
e(G)<kn+k(e(G^*)+e(G^{**}))<(k^2+k)n+k^2n\log_2 n.
$$
We conclude that $\gex(M(k),n)\ll n\log n$ and the constant in $\ll$ depends quadratically on $k$.
\kduk
\begin{veta}\label{obecfure}
Let $G_1$ be the simple graph given in (\ref{GF}). We have the following bounds.
\begin{enumerate}
\item $n\cdot\log n\ll \ex_e(G_1,n)\ll n\cdot(\log n)^2\cdot(\log\log n)^3$.
\item $n\cdot\log n\ll \ex_i(G_1,n)\ll n\cdot(\log n)^2\cdot(\log\log n)^3$.
\end{enumerate}
\end{veta}
\duk
1. The lower bound follows from the lower bound in (\ref{furebound}). To prove the upper bound, we use
Theorem~\ref{blowups}. By 2 of Lemma~\ref{blowupfure}, we have
$\gex(M(31),n)\ll n\log n$. Also, $\gex(G_1,n)\ll n\log n$ (by the upper bound in (\ref{furebound}) or
by $\gex(G_1,n)\le\gex(B,n)$). By 1 of Lemma~\ref{blowupfure}, we can apply Theorem~\ref{blowups} with
$B=M(31)$. Starting with the trivial bound $\ex_e(G_1,n)<2^n$, (\ref{rekner}) with
$f(n)\ll\log n$ gives
$$
\ex_e(G_1,n)\ll n^c
$$
where $c>0$ is a constant. Feeding this bound back to (\ref{rekner}), we get
$$
\ex_e(G_1,n)\ll n\cdot(\log n)^{c+1}.
$$
Two more iterations of (\ref{rekner}) give
$$
\ex_e(G_1,n)\ll n\cdot(\log n)^2\cdot(\log\log n)^{c+1}
$$
and
$$
\ex_e(G_1,n)\ll n\cdot(\log n)^2\cdot(\log\log n)^2\cdot(\log\log\log n)^{c+1}
$$
which is slightly better than the stated bound.
2. The lower bound follows from $\ex_i(G_1,n)\ge\ex_e(G_1,n)$. The upper bound follows from the upper
bound in 1 by Theorem~\ref{exiaexe}.
\kduk
\begin{prob}
What is the exact asymptotics of $\ex_e(G_1,n)$?
\end{prob}
The second application of Theorem~\ref{blowups} concerns unordered extremal functions $\ex^u_e(F,n)$ and
$\gex^u(G,n)$. They are
defined as $\ex_e(F,n)$ and $\gex(G,n)$ except that in the containment the injection need not be
increasing. So $\gex^u(G,n)$ is the classical graph extremal function. It is well known, see for example
Bollob\'as \cite[Exercise 24 in IV.7]{boll98}, that $\gex^u(F,n)\le (e(F)-1)\cdot n$ for every forest $F$.
We extend the linear bound to unordered hypergraphs. Theorem~\ref{blowups} holds also in the
unordered case because the proof is independent of ordering. Ordering is crucial only for obtaining
linear or almost linear bounds on $\gex(F,n)$ and $\gex(B,n)$ because the inequality (\ref{rekner}) is useless
if $f(n)$ is not almost constant. The proof of Theorem~\ref{blowups} shows also that if $F$ is a forest
and all members of $B$ are forests (which is not the case for $B=M(k)$) then ${p\choose 2}$ can be
replaced by $p-1$ (because for $|E|=p$ any $p$
two-element edges $E^*\subset E$ contain a cycle but no $F'\in B$ has a cycle).
\begin{veta}\label{klasstro}
Let $F$ be a forest. Its unordered hypergraph extremal function satisfies
$$
\ex_e^u(F,n)\ll n.
$$
\end{veta}
\duk
Let $v(F)=p$ and $e(F)=q>1$ (the case $q=1$ is trivial). It is not hard to find a forest
$F'$ with large $e(F')=Q$ --- $Q\le (pq^2)^{q+1}$ suffices ---
that is a $(p-1)$-blow-up of $F$. We set $B=\{F'\}$
and use (\ref{rekner}) with the bounds $\gex^u(F,n)\le (q-1)n$, $f(n)=Q-1$ (since
$\gex^u(B,n)=\gex^u(F',n)\le (Q-1)n$), and $\ex_e^u(F,n)<2^n$ (trivial):
$$
\ex_e^u(F,n)<q\cdot (q-1)n\cdot 2^{2Q-1}=
{\textstyle {q\choose 2}}4^Q\cdot n.
$$
\kduk
\noindent
One can prove the bound $\ex^u_e(F,n)\ll n$ also directly, without Theorem~\ref{blowups}, by adapting the proof
of $\gex^u(F,n)\ll n$ to hypergraphs.
The third application of Theorem~\ref{blowups} follows in the next section.
\section{Partitions and star forests}
The bound (\ref{mujobec}) tells us that if $F$ is any fixed partition
with $k$ blocks and $H$ is a $k$-sparse partition with $H\not\succ F$, then $v(H)=i(H)$ has an almost linear
upper bound in terms of $e(H)$. The following theorem bounds $i(H)$ almost linearly in terms of
$e(H)$ in the wider class of (not necessarily simple) hypergraphs $H$.
The proof is based on (\ref{muj}).
\begin{veta}\label{ipomocie}
Let $F$ be a partition with $p=v(F)$ and $q=e(F)>1$ and let $H$ be an $F$-free hypergraph, not
necessarily simple. Then
\begin{equation}\label{iae}
i(H)<(q-1)\cdot v(H)+e(H)\cdot\beta(q,2p,e(H))
\end{equation}
where $\beta(k,l,n)$ is the almost constant function defined in (\ref{beta}).
\end{veta}
\duk
Let $\bigcup H=[n]$ and the edges of $H$ be $E_1,E_2,\ldots,E_e$ where $e=e(H)$.
We set, for $1\le i\le n$, $S_i=\{j\in[e]:\ i\in E_j\}$ and consider the sequence
$$
v=I_1I_2\ldots I_n
$$
where $I_i$ is an arbitrary ordering of $S_i$. Clearly,
$v$ is over $[e]$ and $|v|=i(H)$. The sequence $v$ may not be $q$-sparse, because of the transitions
$I_iI_{i+1}$, but it is easy to delete at most $q-1$ terms from the beginning of
every $I_i$, $i>1$, so that the resulting subsequence $w$ is $q$-sparse. Thus $|w|\ge |v|-(q-1)(n-1)$. It
follows that
if $w$ contains $u(q,2p)$, where $u(k,l)$ is defined in (\ref{ukl}), then $H$ contains $F$ but
this is forbidden. (Note that the
subsequence $aab$ in $v$ forces the first $a$ and the $b$ to appear in two distinct segments
$I_i$ and thus it gives incidences of $E_a$ and $E_b$ with two distinct vertices.) Hence $w\not\succ u(q,2p)$
and we can apply (\ref{muj}):
$$
i(H)=|v|<(q-1)n+|w|\le (q-1)n+e\cdot\beta(q,2p,e).
$$
\kduk
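The sparsification step in the proof can be sketched directly: scan $v$ and drop any term that repeats
among the previous $q-1$ kept terms. Since every $I_i$ consists of distinct terms, repetitions occur
only near the transitions $I_iI_{i+1}$ and at most $q-1$ terms per interval are lost. The following
greedy sketch is our own illustration of this step (names ours):

```python
def sparsify(v, q):
    """Greedily delete terms so that any q consecutive kept terms are distinct."""
    w = []
    for x in v:
        if x not in w[-(q - 1):]:
            w.append(x)
    return w

def is_k_sparse(v, k):
    """Every k consecutive terms of v are pairwise distinct."""
    return all(len(set(v[i:i + k])) == k for i in range(len(v) - k + 1))

# Intervals I_i listing the edges incident with each vertex, concatenated:
v = [1, 2] + [2, 3] + [3, 1] + [1, 2, 3]       # n = 4 intervals, q = 2
w = sparsify(v, 2)
assert is_k_sparse(w, 2)
assert len(w) >= len(v) - (2 - 1) * (4 - 1)    # |w| >= |v| - (q-1)(n-1)
```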
\noindent
We show that for the partition
$$
F=H_2=(\{1,3,5\},\{2,4\})=\
\obr{\put(3,1){\bod}\put(9,1){\bod}\put(15,1){\bod}
\put(21,1){\bod}\put(27,1){\bod}
\put(15,1){\oval(30,14)[t]}
\put(3,1){\oval(6,10)[b]}
\put(27,1){\oval(6,10)[b]}
\put(9,1){\oval(6,8)[t]}
\put(15,1){\oval(6,10)[b]}
\put(21,1){\oval(6,8)[t]}
\put(15,1){\oval(12,5)[b]}
}
$$
\bigskip\noindent
the factor at $e(H)$ in (\ref{iae}) must be $\gg\alpha(e(H))$. We proceed as in the proof of
$g(n)=\gex(G_0,n)\gg n\alpha(n)$ in (\ref{gn}) and take a 2-sparse sequence $v$ over $[n]$ such that
$v\not\succ 12121$, $|v|\gg n\alpha(n)$, and $v=I_1I_2\ldots I_{2n}$ where every interval $I_i$
consists of distinct terms. We define the hypergraph
$$
H=(E_i:\ i\in[n])\ \mbox{ with }\ E_i=\{j\in[2n]:\ i \mbox{ appears in }\ I_j\}.
$$
We have $i(H)=|v|\gg n\alpha(n)$, $\bigcup H=[2n]$, $v(H)=2n$, and $e(H)=n$. It is clear that
$H\not\succ H_2$ because $v\not\succ 12121$.
Taking in Theorem~\ref{ipomocie} $H$ to be simple and with the maximum weight,
we obtain as a corollary that if $F$ is a partition, $p=v(F)$, and $q=e(F)>1$, then
$$
\ex_i(F,n)< (q-1)n+\ex_e(F,n)\cdot\beta(q,2p,\ex_e(F,n)).
$$
But Theorem~\ref{exiaexe}, when it applies, gives better bounds.
Our last theorem generalizes in two ways the upper bound in (\ref{gn}). We consider a class of forbidden
forests that contains $G_0$ as a member and we extend by means of Theorems~\ref{blowups} and \ref{exiaexe}
the almost linear upper bound to hypergraphs. The class consists of {\em star forests\/} which are forests $F$
with this structure:
$\bigcup F=A\cup B$ for some sets $A<B$ such that every vertex in $B$ has degree $1$ and every edge of $F$
connects
$A$ and $B$. Thus $F$ is a star forest iff every component of $F$ is a star and every central vertex of a star
is smaller than every leaf.
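The star-forest condition is easy to test mechanically for an ordered graph given as a list of edges;
a brute-force sketch of ours (names and encoding our own):

```python
def is_star_forest(F):
    """Ordered star forest test: every edge joins a 'center' (its smaller
    endpoint) to a 'leaf' (its larger endpoint), all centers precede all
    leaves, and every leaf has degree 1."""
    centers = {min(E) for E in F}
    leaves = [max(E) for E in F]
    return (max(centers) < min(leaves)            # A < B
            and len(leaves) == len(set(leaves)))  # every leaf has degree 1

# Two stars with centers 1 and 2 and leaves 3, 4, 5:
assert is_star_forest([{1, 3}, {1, 4}, {2, 5}])
# Not a star forest: vertex 3 would be a leaf of degree 2.
assert not is_star_forest([{1, 3}, {2, 3}])
```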
\begin{veta}\label{starfore}
Let $F$ be a star forest with $r>1$ components, $p$ vertices, and $q=p-r$ edges. Let $t=(p-1)(q-1)+1$ and
$\beta(k,l,n)$ be the almost constant function defined in (\ref{beta}). We have
the following bounds.
\begin{enumerate}
\item $\gex(F,n)<(r-1)n+n\cdot\beta(r,2q,n)$.
\item $\ex_e(F,n)\ll n\cdot\beta(r,2tq,n)^3$.
\item $\ex_i(F,n)\ll n\cdot\beta(r,2tq,n)^3$.
\end{enumerate}
\end{veta}
\duk
1. We mark the centers of the stars in $F$ with $1,2,\dots,r$ according
to their order and give the leaves of any star the mark of its center. The marks on all leaves then
form a
sequence $u$ over $[r]$ of length $p-r$. Now let $G$ be any simple graph with $\bigcup G=[n]$ and
$G\not\succ F$. We consider the sequence
$$
v=I_1I_2\ldots I_n
$$
where $I_j$ is any ordering of the set $\{i\in[n]:\ \{i,j\}\in G, i<j\}$. As in the previous proof, we select
an $r$-sparse subsequence $w$ of $v$ with length
$|w|\ge |v|-(r-1)(n-1)$. Suppose that $w\succ u(r,2(p-r))$ where $u(k,l)$ is defined in (\ref{ukl}). Then
$w$ has a (not necessarily consecutive) subsequence $z$ of the form
$$
a_1a_2\dots a_ra_1a_2\dots a_r\dots a_1a_2\dots a_r
$$
with $2(p-r)$ segments $a_1a_2\dots a_r$. We have $a_{i_1}<a_{i_2}<\dots<a_{i_r}$ for a permutation
$i_1,i_2,\dots,i_r$ of $[r]$. We label every term $a_{i_j}$ in $z$ with $j$. Clearly, if we select
one term from the 2-nd, 4-th, $\dots$, $2(p-r)$-th segment of $z$ so that the labels on the selected terms
coincide with the sequence $u$, the selected terms lie in $p-r$ distinct intervals
$I_{j_1},\dots,I_{j_{p-r}}$, $j_1<\dots<j_{p-r}$. Since the selected terms are preceded by one segment
$a_1a_2\dots a_r$, we have $a_{i_r}<j_1$. The edges between $a_1,\dots,a_r$ and $j_1,\dots,j_{p-r}$ which
correspond to the selected terms form an $F$-copy in $G$, but $G\succ F$ is forbidden. Therefore
$w\not\succ u(r,2(p-r))$ and we can apply (\ref{muj}):
$$
e(G)=|v|\le (r-1)n+|w|<(r-1)n+n\cdot\beta(r,2(p-r),n).
$$
2. Suppose that $F$ has the vertex set $[p]$ (so that $[r]$ are the centers of the stars and $[r+1,p]$ are the
leaves). For $k\in\N$ we denote $F(k)$ the star forest with the vertex set $[r+(p-r)k]$ in which $[r]$
are again the centers
of stars and for $i=1,2,\dots,p-r$ the vertices in $[r+(i-1)k+1,r+ik]$ are joined to the same vertex
in $[r]$ as $r+i$ is joined in $F$.
It is easy to see that $F(t)=F((p-1)(q-1)+1)$ is a $(p-1)$-blow-up of $F$. Also, $e(F(k))=kq$.
We set $B=\{F(t)\}$
and use (\ref{rekner}) with the bounds $\gex(F,n)\ll n\cdot\beta(r,2q,n)=n\cdot\beta'$ (bound 1 for
$F$), $f(n)=c\beta(r,2tq,n)=c\beta$ for a constant $c>0$ (bound 1 for $F(t)$), and $\ex_e(F,n)<2^n$
(trivial):
$$
\ex_e(F,n)\ll n\cdot\beta'\cdot 2^{2c\beta+1}<n\cdot 2^{2(c+1)\beta}.
$$
The second application of (\ref{rekner}) gives
$$
\ex_e(F,n)\ll n\cdot\beta'\cdot\beta\cdot 2^{2(c+1)\cdot\beta(r,2tq,2c\beta+1)}\ll n\cdot\beta^3
$$
because $\beta'\le\beta$ and
$$
\beta(r,2tq,x)\ll\log\log x
$$
(this is true with any number of logarithms).
3. This follows from 2 by Theorem~\ref{exiaexe}.
\kduk
\noindent
The lower bound in (\ref{gn}) shows that in general the factor at $n$ in 1, 2, and 3 of the previous theorem
cannot be replaced with a constant and may be as big as $\gg\alpha(n)$. The proved bounds hold also for the
reversals of star forests.
\section{Concluding remarks}
It is reasonable to call a function $f:\N\to\R$ {\em nearly linear\/} if
$n^{1-\varepsilon}\ll f(n)\ll n^{1+\varepsilon}$ holds for every $\varepsilon>0$. We identify a candidate for
the class of all hypergraphs $F$ with nearly linear $\ex_e(F,n)$. If $F$ is isomorphic to the hypergraph
$(\{1\},\{2\},\dots,\{k\})$, then $\ex_e(F,n)$ is eventually constant (\cite{klaz_dalsi}) and thus is not nearly
linear. For other hypergraphs we have $\ex_e(F,n)\ge n$ because $F\not\prec(\{1\},\{2\},\dots,\{n\})$. An
{\em orderly bipartite forest\/} is a simple graph $F$ such that $F$ has no cycle and
$\min E<\max E'$ holds for every two edges of $F$. In other words, $F$ is a forest and there is a partition
$\bigcup F=A\cup B$ such that $A<B$ and every edge of $F$ connects $A$ and $B$. We denote the class of all
orderly bipartite forests by OBF. We say that $F$ is an {\em orderly bipartite forest with
singletons\/} if $F=F_1\cup F_2$ where $F_1\in\mathrm{OBF}$ and $F_2$ is a hypergraph consisting of possibly
repeating singleton edges. For example, $F$ may be
$$
F=(\{8\},\{6\}_1,\{6\}_2,\{2\},\{1,6\},\{3,6\},\{4,5\},\{4,7\}).
$$
The class OBF subsumes star forests and their reversals. $G_1$ defined in (\ref{GF}) belongs to OBF but is
neither a star forest nor a reversed star forest.
\begin{lema}
If the hypergraph $F$ is not an orderly bipartite forest with singletons, then there is a constant
$\gamma>1$ such that
$$
\ex_e(F,n)\gg n^{\gamma}
$$
and hence $\ex_e(F,n)$ is not nearly linear.
\end{lema}
\duk
If $F$ is not an orderly bipartite forest with singletons, then $F$ has (i) an edge with more than two elements
or (ii) two separated two-element edges or (iii) a two-path isomorphic to $(\{1,2\},\{2,3\})$ or (iv) a repeated
two-element edge or (v) an even cycle of two-element edges (odd cycles
are subsumed in (iii)). In the cases (i)--(iv) we have $\ex_e(F,n)\gg n^2$ because the complete bipartite graph
with parts $[\lfloor n/2\rfloor]$ and $[\lfloor n/2\rfloor+1,n]$ does not contain $F$. As for the case (v),
an application of the probabilistic method (Erd\H os \cite{erdo59}) provides
an unordered graph that has $n$ vertices, $\gg n^{1+1/k}$ edges, and no even cycle of length $k$. Thus,
in the case (v), $\ex_e(F,n)\gg n^{1+1/k}$ for some $k\in\N$.
\kduk
\noindent
We conjecture that $\ex_e(F,n)$ is nearly linear if and only if $F$ is an orderly bipartite forest with
singletons that is not isomorphic to $(\{1\},\{2\},\dots,\{k\})$. Since every orderly bipartite forest with
singletons is contained in some orderly bipartite forest, it suffices to consider only orderly bipartite
forests.
\begin{prob}
Prove (or disprove) that for every orderly bipartite forest $F$ we have
$$
\ex_e(F,n)\ll n(\log n)^c
$$
for some constant $c>0$.
\end{prob}
\noindent
It is not difficult to find for every $F\in\mathrm{OBF}$ and $k\in\N$ an $F'\in\mathrm{OBF}$ that is a
$k$-blow-up of $F$. Thus the
previous bound would follow by Theorem~\ref{blowups} from the graph bound $\gex(F,n)\ll n(\log n)^c$.
It is natural to consider two subclasses $\mathrm{OBF}^l\subset\mathrm{OBF}^{\alpha}\subset\mathrm{OBF}$
where $\mathrm{OBF}^l$ consists of all $F\in\mathrm{OBF}$ with $\ex_e(F,n)\ll n$ and $\mathrm{OBF}^{\alpha}$
consists of all $F\in\mathrm{OBF}$ with $\ex_e(F,n)\ll n\cdot f(\alpha(n))$ for a primitive recursive
function $f(n)$. Both inclusions are strict as witnessed by $G_0$ and $G_1$ (defined in (\ref{G0}) and
(\ref{GF})). In this article we ignored the class $\mathrm{OBF}^l$ completely and showed that
$\mathrm{OBF}^{\alpha}$ contains all star forests (and their reversals). It would be very interesting to
learn more about $\mathrm{OBF}^l$ and $\mathrm{OBF}^{\alpha}$. Does the latter class consist only of
star forests and their reversals?
TITLE: Transfer matrix for 1D chains
QUESTION [2 upvotes]: Until recently I believed that the transfer matrix method such as used in solving the 1D Ising model could be used to solve the thermodynamics of any system that is:
1D
Translationally invariant
Has only nearest-neighbor interactions (or any fixed finite range), and
Has finite local dimension.
Besides being used for Ising spin-1/2, Heisenberg, and Ising spin-1 models, papers like this one use it for chains with local dimension 4. (Since it has next-nearest-neighbor interactions, it actually becomes local dimension 16.) In particular, the ground state energy is the lowest eigenvalue of the transfer matrix.
But then, there is Gottesman and Irani (2009), which seemed to create a very hard problem on a system that has all the above properties. Bausch et al. extended the work, reducing the local dimension to about 40. Given that finding the ground state energy of these Hamiltonians is QMAEXP-complete, they certainly aren't solvable with a simple transfer matrix -- but why not?
My two guesses are that there's some additional condition (bosonic vs. fermionic operators, perhaps?) that I'm missing, or that somehow the finite system size of those 1D chains ends up contributing finite-size effects that end up being more relevant than expected.
REPLY [2 votes]: Answering my own question, feeling silly now. The key additional requirement is that local interactions can be broken up into commuting terms. For some general nearest-neighbor interaction $J_{ij}$ acting on sites $i$ and $j$, the partition function reads
$$Z = \operatorname{Tr}\exp(-\beta H) = \operatorname{Tr}\exp\Big(\sum_i -\beta J_{i,i+1}\Big)\quad \neq\quad \operatorname{Tr}\big[\exp(-\beta J_{1,2})\exp(-\beta J_{2,3}) \dots\big] = \operatorname{Tr}\, T_{12}T_{23} \dots$$
On the left we have the partition function, and on the right are the transfer matrices $\exp(-\beta J)$ that we want. But we do not in general have equality in the middle, unless all $J$'s commute. (Although you could do an expansion in terms of e.g. the BCH formula.) Of the models I gave as examples,
The Ising model can only be solved with transfer matrices if the interactions are aligned with the field, i.e. $\sum J S^z_{i} S^z_{i+1} + h S^z_i$, the classical Ising model. The transverse field Ising model with $h S^x_i$ cannot be solved with transfer matrices, and is solved through other methods. The same is true for spin-1 Ising models.
The Heisenberg model cannot (as far as I can tell, upon more careful reading) be solved with transfer matrices. It also requires Jordan-Wigner or Bethe ansatz solutions.
The other paper I linked, with the Hubbard models and local dimension 16, makes a narrow-bandwidth approximation. This allows them to drop the hopping terms and have only number terms, which all commute.
Lots of other 1D models cannot be written in this commuting form of course, which prevents them from being solved this way. Having a commuting form can be understood as the model being "inherently classical", in which case the partition function becomes a counting problem on a path graph.
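To make the commuting case concrete, here is a minimal sketch (my own check, not from the sources above, with sign conventions for $J$ and $h$ chosen for illustration): for a small periodic classical Ising chain, the trace of the $N$-th power of the $2\times2$ transfer matrix reproduces the brute-force partition function.

```python
import itertools
import math

def transfer_matrix(beta, J, h):
    # T[s][s'] = exp(beta*(J*s*s' + h*(s + s')/2)), spins ordered (+1, -1)
    spins = (1, -1)
    return [[math.exp(beta * (J * s + 0) * sp + beta * h * (s + sp) / 2)
             for sp in spins] for s in spins]

def Z_transfer(beta, J, h, N):
    # Partition function of the periodic chain: Z = Tr(T^N)
    T = transfer_matrix(beta, J, h)
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(N):
        M = [[sum(M[i][k] * T[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M[0][0] + M[1][1]

def Z_bruteforce(beta, J, h, N):
    # Sum exp(-beta*E) over all 2^N configurations, with periodic boundary:
    # E = -J * sum_i s_i s_{i+1} - h * sum_i s_i
    total = 0.0
    for cfg in itertools.product((1, -1), repeat=N):
        E = -J * sum(cfg[i] * cfg[(i + 1) % N] for i in range(N))
        E -= h * sum(cfg)
        total += math.exp(-beta * E)
    return total
```

The agreement is exact (up to floating point) precisely because all the terms commute; with a transverse field $h S^x_i$ the analogous factorization fails.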
Personally, I think of this in relation to fixed-parameter tractable algorithms for counting solutions to constraint problems on graphs of fixed path-width, e.g. Courcelle's theorem.
TITLE: The Relation between Holder continuous, absolutely continuous, $W^{1,1}$, and $BV$ functions
QUESTION [13 upvotes]: I am trying to find out the relation between those spaces. Take $I\subset R$ on the real line. $I$ can be unbounded. Then I have:
We first assume $I$ is bounded.
If $u\in C^{0,\alpha}(I)$, for $0<\alpha<1$, then $u$ is uniformly continuous for sure. However, I can not prove $u\in C^{0,\alpha}(I)$ then $u\in AC(I)$, nor $u\in AC(I)$ then $u\in C^{0,\alpha}(I)$. I tried a lot and now I start to think that there are no relations between those two spaces, even if $I$ is bounded.
Update: I think I found an example showing that $u\in AC(0,1)\setminus C^{0,\alpha}(0,1)$ by taking $u= x^\beta$ for some $0<\beta<\alpha$. But I still cannot prove the converse, nor find a counterexample.
Moreover, I am wondering whether Hölder continuity implies bounded variation. Certainly when $\alpha=1$ we are good. But what about $\alpha<1$, say for $I$ bounded?
Finally, it is clear that $BV$ cannot imply Hölder continuity; just take any discontinuous function, for example.
REPLY [16 votes]: The Weierstrass function is Hölder continuous, but is not absolutely continuous, and is not of bounded variation either. (Every function of bounded variation is differentiable at almost every point, but the Weierstrass function is nowhere differentiable.)
Conversely, $f(x) = 1/\log x$ is absolutely continuous on $[0,1/2]$ but is not Hölder continuous for any $\alpha\in (0,1)$.
The Lipschitz case, $\alpha=1$, is dramatically different: a Lipschitz function is absolutely continuous, and belongs to $W^{1,p}$ for every $p$.
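To make the second example concrete, here is a quick numerical check (my own addition, not part of the answer) that $f(x)=1/\log x$ fails the Hölder condition at $0$ for, say, $\alpha=1/2$: the quotient $|f(x)-f(0)|/|x-0|^{\alpha}=1/(x^{\alpha}|\log x|)$ blows up as $x\to0^+$.

```python
import math

def holder_quotient(x, alpha=0.5):
    # |f(x) - f(0)| / |x - 0|^alpha for f(x) = 1/log(x), with f(0) = 0
    return 1.0 / (x**alpha * abs(math.log(x)))

# sample the quotient along x = 1e-4, 1e-8, 1e-16: it grows without bound
ratios = [holder_quotient(10.0**(-k)) for k in (4, 8, 16)]
```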
TITLE: How to show that the spectrum of the Laplacian on a compact manifold is discrete
QUESTION [1 upvotes]: $(M,g)$ is a compact Riemannian manifold, $\Delta=\frac{1}{\sqrt g}\partial_i(\sqrt g g^{ij}\partial _j)$ is Laplace operator.
How can one show that the spectrum of the Laplace operator on a compact manifold is discrete?
REPLY [3 votes]: We can mimic the proof of this fact for open sets in $\mathbb{R}^n$. Here's a sketch of one possible way to do it.
Step 1
Instead of considering the operator $-\Delta$ we perturb it a bit by considering $L_\mu = -\Delta + \mu I$, where $I$ denotes the identity and $\mu >0$. The weak formulation of the problem $L_\mu u = f$ is then given by integrating by parts: for any $v \in H^1(M)$ we have
$$
\int_M f v = \int_M L_\mu u v = \int_M \nabla_g u \cdot \nabla_g v + \mu uv,
$$
where $\nabla_g$ is the covariant derivative determined by the metric $g$. The reason why we introduce $\mu$ is evident here: the right hand side of this now determines an inner-product on $H^1(M)$. If we had set $\mu=0$ then we would have to worry about removing the functions that are constant on each connected component of $M$.
Step 2
The above allows us to use the Lax-Milgram lemma to establish the solvability of the weak problem $L_\mu u =f$ for any $f \in (H^1(M))^\ast$. Actually, L-M is overkill and we could just use the Riesz representation. In other words, for any given $f \in (H^1)^\ast$ we can find a unique $u \in H^1$ such that
$$
\int_M \nabla_g u \cdot \nabla_g v + \mu uv = \langle f,v \rangle \text{ for all } v \in H^1.
$$
Moreover,
$$
\Vert u \Vert_{H^1} \le C \Vert f \Vert_{(H^1)^\ast}.
$$
Step 3
The above establishes that the map $L_\mu : H^1 \to (H^1)^\ast$ is an isomorphism. Now we consider the map $L_\mu^{-1}$, but restricted to $L^2(M) \hookrightarrow (H^1(M))^\ast$, i.e. we consider only $f \in L^2(M)$, which is more restrictive than $f \in (H^1)^\ast$. Doing so, we find that $L_\mu^{-1} : L^2 \to H^1$ is a bounded linear map. But we have the compact embedding $H^1 \subset \subset L^2$ due to Rellich's theorem, and consequently the map $L_\mu^{-1} : L^2 \to L^2$ is compact.
Step 4
Next we prove that $L_\mu^{-1}$ is self-adjoint on $L^2$. This follows directly, and I'll leave it as an exercise. A similar argument shows that $L_\mu^{-1}$ is a positive operator.
Step 5
We now know that $L_\mu^{-1}: L^2 \to L^2$ is a compact self-adjoint positive operator. The spectral theory of such operators now tells us that the spectrum of $L_\mu^{-1}$ consists of $0$ and a countable sequence $\rho_n$ of positive real eigenvalues such that $\rho_n \to 0$ as $n \to \infty$. Moreover, the associated eigenfunctions form an orthonormal basis of $L^2$.
Step 6
I leave it as an exercise to verify that $L_\mu^{-1} u = \rho_n u$ if and only if $L_\mu u = \rho_n^{-1} u$. This means that the spectrum of $L_\mu$ consists of a sequence $0 < \rho_n^{-1} \to \infty$. Finally, we now note that
$$
L_\mu u = \rho_n^{-1} u \Leftrightarrow -\Delta u = (\rho_n^{-1} - \mu) u.
$$
Thus the spectrum of $-\Delta$ is the sequence of real eigenvalues $\lambda_n = (\rho_n^{-1} - \mu)$, which satisfy $\lambda_n \to \infty$ as $n \to \infty$. In particular, the eigenvalues are discrete.
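None of this is needed for the proof, but the discreteness is easy to see numerically (my own addition) in the simplest compact example: the circle of circumference $2\pi$, where the spectrum of $-\Delta$ is $\{k^2 : k\in\mathbb{Z}\}$. The periodic second-difference matrix is circulant, so its eigenvalues are known in closed form, and the low ones approximate $k^2$.

```python
import math

# Discretize -d^2/dx^2 on the circle of circumference 2*pi with N grid points.
# The periodic second-difference matrix is circulant; its eigenvalues are
# exactly lam_k = (2 - 2*cos(2*pi*k/N)) / h^2 for k = 0, ..., N-1.
N = 400
h = 2 * math.pi / N
eigs = sorted((2 - 2 * math.cos(2 * math.pi * k / N)) / h**2 for k in range(N))
# The lowest eigenvalues approximate 0, 1, 1, 4, 4, ... — the discrete
# spectrum {k^2 : k in Z} of -Delta on S^1.
```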
TITLE: Still stuck on recurrence
QUESTION [0 upvotes]: I am still stuck on this problem and it is very frustrating. I need to solve this using exponential generating series and again with telescoping. Problem is I am not even sure what telescoping is and my googling has not been very helpful. Thanks in advance.
Solve the recurrence
$y_{n+1} = 2y_n + n$
for non-negative integer $n$ and initial condition $y_0=1$, using
1. Exponential generating series
2. Telescoping.
REPLY [1 votes]: For exponential generating functions, define:
$$
\widehat{Y}(z) = \sum_{n \ge 0} y_n \frac{z^n}{n!}
$$
Take your recurrence, multiply by $\frac{z^n}{n!}$, sum over $n \ge 0$, and see how to express the result in terms of $\widehat{Y}$ and its derivatives. The initial condition to the recurrence translates into $\widehat{Y}(0) = 1$.
Also try the ordinary generating function $Y(z) = \sum_{n \ge 0} y_n z^n$:
multiply by $z^n$ and sum, express the result in terms of $Y(z)$ and $z$.
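Both routes lead to the closed form $y_n = 2^{n+1}-n-1$: the homogeneous part contributes $C\cdot 2^n$, a particular solution is $-n-1$, and $y_0=1$ forces $C=2$. A quick sanity check of that closed form against the recurrence:

```python
def y_recurrence(n):
    # iterate y_{k+1} = 2*y_k + k starting from y_0 = 1
    y = 1
    for k in range(n):
        y = 2 * y + k
    return y

def y_closed(n):
    # closed form obtained from the generating function / telescoping
    return 2**(n + 1) - n - 1
```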
TITLE: Threshold for cliques in random graphs
QUESTION [0 upvotes]: I am trying to understand the proof for this theorem about the threshold for cliques of 4 or more vertices:
I do not understand what $p = o(n^{-2/3})$ means; is this a bound on the number of edges? If so, then does this mean that if we have 1000 vertices, then $p = o(0.1)$? I also do not understand how we got to $p^6$ in the last equation. I realize that we can find the expected number of cliques with the number of vertices and edges, but I am having difficulties understanding it here. I may discover this after further reading, but what is the meaning of $n^{-2/3}$, and how did we come to this number/threshold?
REPLY [1 votes]: Given two functions $f,g$ in $n$, we write $f(n)=o(g(n))$ if $\frac{f(n)}{g(n)}$ tends to zero as $n$ tends to infinity.
About the last equation, you have $E(X)=\sum E(X_i)$ and
$$E(X_i)=1*P(X_i=1)+0*P(X_i=0)=P(X_i=1)$$
but $P(X_i=1)=p^6$ for each $i$. (Why? This is the probability that $C_i$, which is a set of four vertices, forms a clique: all ${4\choose 2}=6$ of its edges must be present.)
$n^{-2/3}$ comes up just by doing the reverse reasoning: we have $E(X)={n\choose 4}p^6$ and we want $E(x)$ to go to zero as $n$ goes to infinity. But
$${n\choose 4}p^6\leq\frac{n^4}{4!}p^6\leq n^4p^6=\left(n^{2/3}p\right)^6$$
So it's good enough if $n^{2/3}p$ goes to zero as $n$ goes to infinity; note that this is equivalent to writing $p=o(n^{-2/3})$.
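As a numerical illustration (my own addition, constants chosen arbitrarily): exactly at the threshold $p=n^{-2/3}$ the expectation $E(X)=\binom{n}{4}p^6$ levels off near $1/24$, while for $p=o(n^{-2/3})$, e.g. $p=n^{-2/3}/\log n$, it tends to $0$.

```python
from math import comb, log

def expected_4cliques(n, p):
    # E[X] = C(n,4) * p^6: each 4-subset is a clique with probability p^6
    return comb(n, 4) * p**6

# at the threshold p = n^(-2/3), E[X] approaches the constant 1/24
at_threshold = [expected_4cliques(n, n**(-2/3)) for n in (10**3, 10**5)]
# below the threshold, p = n^(-2/3)/log(n), E[X] -> 0
below = expected_4cliques(10**5, (10**5)**(-2/3) / log(10**5))
```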
TITLE: Tension of a String
QUESTION [2 upvotes]: does anybody know a general relation between the tension of a string and it's energy density? I am at the moment learning about topological cosmic strings and calculated the energy density, now I do not know how to get to the string tension.
REPLY [2 votes]: For a cosmic string, its energy per unit length and its tension are the same quantity. This is analogous to surface tension, which is the same thing as a liquid's surface energy per unit area.
TITLE: Find all value of a that series is convergent
QUESTION [0 upvotes]: Find all $\alpha$ such that $\sum_{n=1}^{\infty} \Big( \ln \Big( \sinh {\frac{1}{n} }\Big) - \ln{\frac{1}{n}} \Big)^{\alpha}$ is convergent
$$\sum_{n=1}^{\infty} \Big( \ln \Big( \sinh{\frac{1}{n} }\Big) - \ln{\frac{1}{n}} \Big)^{\alpha} =
\sum_{n=1}^{\infty} \Big( \ln \Big( n\sinh{\frac{1}{n} }\Big)\Big)^{\alpha} =
\sum_{n=1}^{\infty} \Big( \ln \Big( n \big( \frac{1}{n} + o(\frac{1}{n}) \big) \Big)\Big)^{\alpha} =
\sum_{n=1}^{\infty} \Big( \ln \Big( 1 + o(1) \Big) \Big)^{\alpha} =
\sum_{n=1}^{\infty} \Big( o(1) + o(o(1)) \Big)^{\alpha} $$
If I did the right things, how to go on? Any ideas?
REPLY [0 votes]: Your first step is an excellent start. Then I would be a little more precise in the next step, by expanding $\ln\left( n \sinh \frac1n \right)$ as a series in $\frac1n$:
$$\ln\left( n \sinh \frac1n \right) = \frac1{6n^2}-\frac1{180n^4} +O(\frac1{n^6})
$$
So each term is positive and smaller than $\frac1{6n^2}$
Now you are raising each term to the power $\alpha$, so the series is comparable, term by term, to $$\sum\frac1{6^{\alpha}n^{2\alpha}}$$ which converges exactly when $2\alpha>1$, i.e. $\alpha>\frac12$. If $\alpha = \frac12$, each term behaves like $\frac{1}{\sqrt6\,n}$, so the series diverges together with the harmonic series.
So the answer is that your sum converges for $\alpha > \frac12$.
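A quick numerical check (my own addition) of the asymptotics $\ln\big(n\sinh\frac1n\big)\sim\frac1{6n^2}$ underlying the comparison:

```python
import math

def term(n):
    # the summand, before raising to the power alpha:
    # ln(sinh(1/n)) - ln(1/n) = ln(n * sinh(1/n))
    return math.log(n * math.sinh(1.0 / n))

# the ratio term(n) / (1/(6 n^2)) should approach 1 as n grows
ratios = [term(n) / (1.0 / (6 * n**2)) for n in (10, 100, 1000)]
```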
\begin{document}
\title{On the Positivity of Trace Class Operators}
\author{Elena Cordero}
\address{Dipartimento di Matematica, Universit\`a di Torino, Dipartimento di
Matematica, via Carlo Alberto 10, 10123 Torino, Italy}
\email{elena.cordero@unito.it}
\thanks{}
\author{Maurice de Gosson}
\address{University of Vienna, Faculty of Mathematics,
Oskar-Morgenstern-Platz 1 A-1090 Wien, Austria}
\email{maurice.de.gosson@univie.ac.at}
\thanks{}
\author{Fabio Nicola}
\address{Dipartimento di Scienze Matematiche, Politecnico di Torino, corso
Duca degli Abruzzi 24, 10129 Torino, Italy}
\email{fabio.nicola@polito.it}
\subjclass[2010]{46E35, 35S05, 81S30, 42C15}
\keywords{Wigner transform, trace class operator, positive operator, Weyl
symbol, Gabor frames}
\maketitle
\begin{abstract}
The characterization of positivity properties of Weyl operators is a
notoriously difficult problem, and not much progress has been made since the
pioneering work of Kastler, Loupias, and Miracle-Sole (KLM). In this paper we
begin by reviewing and giving simpler proofs of some known results for
trace-class Weyl operators; the latter play an essential role in quantum
mechanics. We then apply time-frequency analysis techniques to prove a phase
space version of the KLM condition; the main tools are Gabor frames and the
Wigner formalism. Finally, discrete approximations of the KLM condition, which
are tractable numerically, are provided.
\end{abstract}
\section{Introduction}
The characterization of positivity properties for trace class operators on
$L^{2}(\mathbb{R}^{n})$ is an important topic, not only because it is an
interesting mathematical problem which still is largely open, but also because
of its potential applications to quantum mechanics and even cosmology. It is a
notoriously difficult part of functional analysis which has been tackled by
many authors but there have been few decisive advances since the pioneering
work of Kastler \cite{Kastler} and Loupias and Miracle-Sole
\cite{LouMiracle1,LouMiracle2}; see however Dias and Prata \cite{dipra}. While
some partial results have been obtained in connection with the study of
quantum density operators
\cite{Narcow2,Narcow3,Narcow,Narconnell,Narconnell88} when the operators under
consideration are expressed using the Weyl correspondence, very little is
known about them when they are given in terms of more general correspondences
(in \cite{sriwolf} Srinivas and Wolf give such a condition, but the necessity
statement is false as already noted by Mourgues \textit{et al}. \cite{mofean}
). It seems in fact that the field, which was quite active in the late 1980s,
has not evolved much since; the open questions remain open.
We shall tackle the problem using techniques which come from both quantum
mechanics and time-frequency analysis. The phase space representation mainly
employed is the $\eta$-cross-Wigner transform; for $\eta\in\mathbb{R}
\setminus\{0\}$, this is defined by
\begin{equation}
W_{\eta}(\psi,\phi)(z)=\left( \tfrac{1}{2\pi\eta}\right) ^{n}\int
_{\mathbb{R}^{n}}e^{-\frac{i}{\eta}p\cdot y}\psi(x+\tfrac{1}{2}y)\overline
{\phi(x-\tfrac{1}{2}y)}dy,\label{ww}
\end{equation}
for $\psi,\phi\in L^{2}(\mathbb{R}^{n})$. When $\eta=\hbar>0$ ($\hbar$ being the Planck constant $h$ divided by $2\pi$), we
recapture the standard cross-Wigner function $W_{\hbar}(\psi,\phi)$, simply
denoted by $W(\psi,\phi)$. Setting $W_{\eta}(\psi,\psi)=W_{\eta}\psi$ and
$\lambda=\eta/\hbar$, we have
\begin{equation}
W_{\eta}\psi(x,p)=|\lambda|^{-n}W\psi(x,\lambda^{-1}p).\label{scale1}
\end{equation}
In particular, a change of $\eta$ into $-\eta$ yields
\begin{equation}
W_{\eta}\psi=(-1)^{n}W_{-\eta}\overline{\psi}.\label{scale2}
\end{equation}
Given a symbol $a\in\mathcal{S}^{\prime}(\mathbb{R}^{2n})$ (the space of
tempered distribution), the Weyl pseudodifferential operator $\widehat{A}
_{\eta}^{\mathrm{W}}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)$ is weakly
defined by
\begin{equation}
\langle\widehat{A}_{\eta}^{\mathrm{W}}\psi,\overline{\phi}\rangle=\langle
a,W_{\eta}(\psi,\phi)\rangle, \label{w1}
\end{equation}
for all $\psi,\phi$ in the Schwartz class $\mathcal{S}(\mathbb{R}^{n})$
(Observe that $W_{\eta}(\psi,\phi)\in\mathcal{S}(\mathbb{R}^{2n})$). The
function $a$ is called the $\eta$-Weyl symbol of $\widehat{A}_{\eta
}^{\mathrm{W}}$.
Consider now a trace-class operator $\widehat{A}$ on $L^{2}(\mathbb{R}^{n})$
(see the definition in the subsequent Section \ref{sec2}). Then there exists
an orthonormal basis $(\psi_{j})$ for $L^{2}(\mathbb{R}^{n})$ and a sequence
$(\alpha_{j})\in\ell^{1}$ such that $\widehat{A}$ can be written as
\[
\widehat{A}=\sum_{j}\alpha_{j}\widehat{\Pi}_{j}
\]
with absolute convergence in $B(L^{2}(\mathbb{R}^{n}))$; here $\widehat{\Pi
}_{j}$ is the rank-one orthogonal projector of $L^{2}(\mathbb{R}^{n})$ onto
the one-dimensional subspace $\mathbb{C}\psi_{j}$ generated by $\psi_{j}$ (cf.
Lemma \ref{Lemma1}). It turns out that, under the additional assumption
$\widehat{A}$ to be self-adjoint, that $\widehat{A}$ can be represented as a
$\eta$-Weyl operator with corresponding symbol
\[
a=(2\pi\eta)^{n}\sum_{j}\alpha_{j}W_{\eta}\psi_{j}\in L^{2}(\mathbb{R}
^{2n})\cap L^{\infty}(\mathbb{R}^{2n})
\]
(see Proposition \ref{Prop1}).
When $\widehat{A}$ is positive semidefinite and has trace equal\ to one, it is
called a \textit{density operator} (or density matrix, or stochastic, operator
in quantum mechanics); it is usually denoted by $\widehat{\rho}$. If the Weyl
symbol of $\widehat{\rho}$ is $a$, the function $\rho=(2\pi\eta)^{-n}a$ is
called the \textit{Wigner distribution} of $\widehat{\rho}$ in the quantum
mechanical literature. Given a trace class operator $\widehat{A}$ (positive or
not), the function
\begin{equation}
\rho=\sum_{j}\alpha_{j}W_{\eta}\psi_{j} \label{density}
\end{equation}
is called the $\eta$-\emph{Wigner distribution} of $\widehat{A}$. (Observe
that $\rho\in L^{2}(\mathbb{R}^{2n})$).
We will henceforth assume that all the concerned operators are self-adjoint
and of trace class and denote them by $\widehat{\rho}$; such operators can
always be written as
\begin{equation}
\widehat{\rho}=\sum_{j}\alpha_{j}\widehat{\Pi}_{j}=(2\pi\eta)^{n}
\operatorname*{Op}\nolimits_{\eta}^{\mathrm{W}}(\rho) \label{notation1}
\end{equation}
the real function $\rho$ being given by formula (\ref{density}). We are going
to determine explicit necessary and sufficient conditions on $\rho$ ensuring
the positivity of $\widehat{\rho}$. To this goal, we will use the reduced
symplectic Fourier transform $F_{\Diamond}$, defined for $a\in\mathcal{S}
(\mathbb{R}^{2n})$ by
\begin{equation}
a_{\Diamond}(z)=F_{\Diamond}a(z)=\int_{\mathbb{R}^{2n}}e^{i\sigma(z,z^{\prime
})}a(z^{\prime})dz^{\prime} \label{adiam}
\end{equation}
with $\sigma$ being the standard symplectic form. For $\eta\in\mathbb{R}
\setminus\{0\}$, recall the symplectic $\eta$-Fourier transform
\begin{equation}
a_{\sigma,\eta}(z)=F_{\sigma,\eta}a(z)=\left( \tfrac{1}{2\pi\eta}\right)
^{n}\int_{\mathbb{R}^{2n}}e^{-\frac{i}{\eta}\sigma(z,z^{\prime})}a(z^{\prime
})dz^{\prime}. \label{w4}
\end{equation}
Obviously $F_{\Diamond}$ is related to the symplectic $\eta$-Fourier transform
(\ref{w4}) by the formula
\begin{equation}
a_{\Diamond}(z)=(2\pi\eta)^{n}a_{\sigma,\eta}(-\eta z). \label{diasig12}
\end{equation}
With the notation (\ref{adiam}) Bochner's theorem \cite{Bochner,Katz} on
Fourier transforms of probability measures can be restated in the following way:
\begin{proposition}
[Bochner]\label{propbochner}A real function $\rho\in L^{1}(\mathbb{R}^{2n})$
is a probability density if and only if $\rho_{\Diamond}$ is continuous,
$\rho_{\Diamond}(0)=1$, and for all $z_{1},...,z_{N}\in\mathbb{R}^{2n}$ the
$N\times N$ matrix $\Lambda$ whose entries are the complex numbers
$\rho_{\Diamond}(z_{j}-z_{k})$ is positive semidefinite:
\begin{equation}
\Lambda=(\rho_{\Diamond}(z_{j}-z_{k}))_{1\leq j,k\leq N}\geq0. \label{bochner}
\end{equation}
\end{proposition}
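\noindent
For instance, the standard Gaussian density $\rho(z)=(2\pi)^{-n}e^{-|z|^{2}/2}$
satisfies $\rho_{\Diamond}(z)=e^{-|z|^{2}/2}$; this function is continuous, takes
the value $1$ at $z=0$, and is of positive type, since the Gaussian kernel
$(e^{-|z_{j}-z_{k}|^{2}/2})_{1\leq j,k\leq N}$ is positive semidefinite.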
When condition (\ref{bochner}) is satisfied one says that $\rho_{\Diamond}$ is
of positive type. The notion of $\eta$-positivity, due to Kastler
\cite{Kastler}, generalizes this notion:
\begin{definition}
\label{defhpos}Let $a$ $\in L^{1}(\mathbb{R}^{2n})$ and $\eta\in
\mathbb{R}\setminus\{0\}$; we say that $a_{\Diamond}$\textit{\ }is of $\eta
$\textit{-positive type if for every integer }$N$ \textit{the }$N\times N$
matrix $\Lambda_{(N)}$ with entries
\[
\Lambda_{jk}=e^{-\frac{i\eta}{2}\sigma(z_{j},z_{k})}a_{\Diamond}(z_{j}-z_{k})
\]
is positive semidefinite for all choices of $(z_{1},z_{2},...,z_{N}
)\in(\mathbb{R}^{2n})^{N}$:
\begin{equation}
\Lambda_{(N)}=(\Lambda_{jk})_{1\leq j,k\leq N}\geq0. \label{fzjfzk}
\end{equation}
\end{definition}
The condition (\ref{fzjfzk}) is equivalent to the polynomial inequalities
\begin{equation}
\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}e^{-\frac{i\eta}{2}
\sigma(z_{j},z_{k})}a_{\Diamond}(z_{j}-z_{k})\geq0 \label{polynomial1}
\end{equation}
for all $N\in\mathbb{N}$, $\zeta_{j},\zeta_{k}\in\mathbb{C}$, and $z_{j}
,z_{k}\in\mathbb{R}^{2n}$.
It is easy to see that this implies $a_{\Diamond}(-z)=\overline{a_{\Diamond
}(z)}$ and therefore $a$ is real-valued.
\begin{remark}
\label{remeta}If $a$ is of $\eta$\textit{-positive type then it is also of }
$(-\eta)$\textit{-positive type.} This follows from the fact that the matrix
$(\overline{\Lambda_{jk}})_{1\leq j,k\leq N}$ is still positive semidefinite
and taking into account the equality $\overline{a_{\Diamond}(z)}=a_{\Diamond
}(-z)$.
\end{remark}
We first present a result originally due to Kastler \cite{Kastler}, and
Loupias and Miracle-Sole \cite{LouMiracle1,LouMiracle2} (the \textquotedblleft
KLM conditions\textquotedblright), who use the theory of $C^{\ast}$-algebras;
also see Parthasarathy \cite{partha1,partha2} and Parthasarathy and Schmidt
\cite{parthaschmidt}. The proof we give is simpler and is partially based on
the discussions in \cite{Narcow3,Narconnell,Werner}.
\begin{theorem}
[The KLM conditions] \label{Prop2} Let $\eta\in \mathbb{R}\setminus\{0\}$ and let
$\widehat{A} =\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)$ be a self-adjoint
trace-class operator on $L^{2}(\mathbb{R}^{n})$ with symbol $a\in
L^{1}(\mathbb{R}^{2n})$. We have $\widehat{A}\geq0$ if and only if the
conditions below hold:
(i) $a_{\Diamond}$ is continuous;
(ii) $a_{\Diamond}$ is of $\eta$\textit{-positive type}.
\end{theorem}
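\noindent
For example, let $\eta>0$ and let $\widehat{A}$ be the orthogonal projector onto the
Gaussian $\phi_{0}(x)=(\pi\eta)^{-n/4}e^{-|x|^{2}/2\eta}$. Its $\eta$-Weyl symbol is
$a(z)=2^{n}e^{-|z|^{2}/\eta}$, whence $a_{\Diamond}(z)=(2\pi\eta)^{n}e^{-\eta|z|^{2}/4}$,
and the matrix with entries
$e^{-\frac{i\eta}{2}\sigma(z_{j},z_{k})}e^{-\eta|z_{j}-z_{k}|^{2}/4}$ is, up to the
constant factor $(2\pi\eta)^{n}$, the Gram matrix of Gaussian coherent states centered
at the points $z_{j}$; it is therefore positive semidefinite, in accordance with
$\widehat{A}\geq0$.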
The KLM conditions are difficult to use in practice since they involve the
simultaneous verification of an uncountable set of conditions. We are going to
prove that they can be replaced with a countable set of conditions in phase
space. The key idea from time-frequency analysis is to use Gabor frames.
\begin{definition}
\label{ee0} Given a lattice $\Lambda$ in $\mathbb{R}^{2n}$ and a non-zero
function $g\in L^{2}(\mathbb{R}^{n})$, the system
\[
\mathcal{G}(g,\Lambda)=\{T(\lambda)g(x)=e^{i(\lambda_{2}x-\frac{1}{2}
\lambda_{1}\lambda_{2})}g(x-\lambda_{1}),\,\,\lambda=(\lambda_{1},\lambda
_{2})\in\Lambda\}
\]
is called a Gabor frame or Weyl-Heisenberg frame if it is a frame for
$L^{2}(\mathbb{R}^{n})$, that is there exist constants $0<A\leq B$ such that
\begin{equation}
A\Vert f\Vert_{2}^{2}\leq\sum_{z\in\Lambda}|\langle f,T(\lambda)g\rangle
|^{2}\leq B\Vert f\Vert_{2}^{2},\quad\forall f\in L^{2}(\mathbb{R} ^{n}).
\label{framedef}
\end{equation}
\end{definition}
Hence, the $L^{2}$-norm of the function $f$ is equivalent to the $\ell^{2}$
norm of the sequence of its coefficients $\{\langle f,T(\lambda)
g\rangle\}_{\lambda\in\Lambda}$ (cf. Section \ref{sec1} for more details).
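\noindent
A basic example in dimension $n=1$ is the Gaussian window $g(x)=e^{-x^{2}/2}$ with
the lattice $\Lambda=\alpha\mathbb{Z}\times\beta\mathbb{Z}$: by classical results of
Lyubarskii and of Seip and Wallst\'en, $\mathcal{G}(g,\Lambda)$ is a Gabor frame for
$L^{2}(\mathbb{R})$ if and only if $\alpha\beta<2\pi$ (with the normalization of the
time-frequency shifts used above).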
Consider a Gabor frame ${\mathcal{G}}(\phi,\Lambda)$ for $L^{2}(\mathbb{R}
^{n})$, with window $\phi\in L^{2}(\mathbb{R}^{n})$ and lattice $\Lambda
\subset\mathbb{R}^{2n}$. Let $a\in\mathcal{S}^{\prime}(\mathbb{R}^{2n})$ be a
symbol and denote by $a_{\lambda,\mu}$ its ``twisted'' Gabor coefficient with
respect to the Gabor system $\mathcal{G}(W_{\eta}\phi,\Lambda\times\Lambda)$,
defined for $\lambda,\mu\in\Lambda\times\Lambda$ by
\begin{equation}
a_{\lambda,\mu}=\int_{\mathbb{R}^{2n}}e^{-\frac{i}{\eta}\sigma(z,\lambda-\mu
)}a(z)W_{\eta}\phi(z-\tfrac{1}{2}(\lambda+\mu))dz, \label{amunu}
\end{equation}
where $W_{\eta}\psi=W_{\eta}(\psi,\psi)$ is the $\eta$-Wigner transform of
$\psi$.
Our main result characterizes the positivity of Hilbert--Schmidt operators
(and hence of trace class operators). It reads as follows:
\begin{theorem}
\label{ThmFabio}Let $a\in L^{2}(\mathbb{R}^{2n})$ be real-valued and
$\widehat{A}_{\eta}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)$.
(i) We have $\widehat{A}_{\eta}\geq0$ if and only if for every integer
$N\geq0$ the matrix $M_{(N)}$ with entries
\begin{equation}
M_{\lambda,\mu}=e^{-\frac{i}{2\eta}\sigma(\lambda,\mu)}a_{\lambda,\mu}\text{
\ , \ }|\lambda|,|\mu|\leq N \label{lm1}
\end{equation}
is positive semidefinite.
(ii) One obtains an equivalent statement replacing the matrix $M_{(N)}$ with
the matrix $M^{\prime}_{(N)}$ where
\begin{equation}
M^{\prime}_{\lambda,\mu}=W_{\eta}(a,(W_{\eta}\phi)^{\vee})(\tfrac{1}
{4}(\lambda+\mu),\tfrac{1}{2}J(\mu-\lambda)) \label{lm2}
\end{equation}
with $(W_{\eta}\phi)^{\vee}(z)=W_{\eta}\phi(-z)$.
\end{theorem}
The conditions in Theorem \ref{ThmFabio} only involve a countable set of
matrices, as opposed to the KLM ones. In addition, they are
\emph{well-organized} because the matrix of size $N$ is a submatrix of that
of size $N+1$.
The KLM conditions can be recaptured by an averaging procedure from the ones
in Theorem \ref{ThmFabio}. To show this claim, we make use of another
well-known time-frequency representation: the short-time Fourier transform
(STFT). Precisely, for a given function $g\in\mathcal{S}(\mathbb{R}
^{n})\setminus\{0\}$ (called window), the STFT $V_{g} f$ of a distribution
$f\in\mathcal{S^{\prime}}(\mathbb{R}^{n})$ is defined by
\begin{equation}
\label{STFT}V_{g} f(x,p)=\int_{\mathbb{R}^{n}}e^{- i p\cdot y} f(y)
\overline{g(y-x)}\,dy,\quad(x,p)\in\mathbb{R}^{2n}.
\end{equation}
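The definition \eqref{STFT} is straightforward to discretize. The following Python sketch (a numerical illustration only; the Gaussian window and the grid are our own choices, with $n=1$) evaluates $V_{g}f(x,p)$ by a Riemann sum and checks the identity $V_{g}g(0,0)=\|g\|_{L^{2}}^{2}$, which is immediate from the definition:

```python
import numpy as np

# Grid for the real line (truncated; the Gaussian decays fast enough).
y = np.linspace(-10.0, 10.0, 2001)
dy = y[1] - y[0]

def gauss(t):
    # Normalized Gaussian window, ||g||_{L^2} = 1.
    return np.pi**-0.25 * np.exp(-t**2 / 2)

def stft(f_vals, x, p):
    # V_g f(x,p) = ∫ e^{-i p y} f(y) conj(g(y - x)) dy, as a Riemann sum.
    return np.sum(np.exp(-1j * p * y) * f_vals * np.conj(gauss(y - x))) * dy

val = stft(gauss(y), 0.0, 0.0)
norm_sq = np.sum(np.abs(gauss(y))**2) * dy
print(abs(val - norm_sq) < 1e-8)  # True: V_g g(0,0) = ||g||^2
```

The same routine can of course be evaluated at any phase-space point $(x,p)$.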
Let $\phi_{0}(x)=(\pi\eta)^{-n/4}e^{-|x|^{2}/2\eta}$ be the standard Gaussian
and $\phi_{\nu}=T(\nu)\phi_{0}$, $\nu\in\mathbb{R}^{2n}$. We shall consider
the STFT $V_{W\phi_{\nu}}a$, with window given by the Wigner function
$W\phi_{\nu}$ and symbol $a$. Then we establish the following connection:
\begin{theorem}
\label{Theorem2}Let $a\in L^{1}(\mathbb{R}^{2n})$ and $\lambda,\mu
\in\mathbb{R}^{2n}$. We set
\begin{align}
M^{(KLM)}_{\lambda,\mu} & =e^{-\frac{i}{2\eta}\sigma(\lambda,\mu)}
a_{\sigma,\eta}(\lambda-\mu)\nonumber\\
M_{\lambda,\mu}^{\phi_{\nu}} & =e^{-\frac{i}{2\eta}\sigma(\lambda,\mu
)}V_{W\phi_{\nu}}a(\tfrac{1}{2}(\lambda+\mu),J(\mu-\lambda)). \label{formula2}
\end{align}
We have
\begin{equation}
\label{formula3}M^{(KLM)}_{\lambda,\mu}=(2\pi\eta)^{-n} \int_{\mathbb{R}^{2n}
}M_{\lambda,\mu}^{\phi_{\nu}}\, d\nu.
\end{equation}
\end{theorem}
If the symbol $a\in L^{1}(\mathbb{R}^{2n})\cap L^{2}(\mathbb{R}^{2n})$ and the
lattice $\Lambda$ is chosen so that $\mathcal{G}(\phi_{0},\Lambda)$ is a Gabor
frame for $L^{2}(\mathbb{R}^{n})$, we obtain the following consequence:
\emph{If the matrix $(M_{\lambda,\mu}^{\phi_{0}})_{\lambda,\mu\in
\Lambda,|\lambda|,|\mu|\leq N}$ is positive semidefinite for every $N$, then
so is the matrix $(M^{(KLM)}_{\lambda,\mu})_{\lambda,\mu\in\Lambda
,|\lambda|,|\mu|\leq N}$} (cf. Corollary \ref{coro1}).
Finally, if the symbol $a$ is as before and $\widehat{A}_{\eta
}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)\geq0$, then for every finite
subset $S\subset\mathbb{R}^{2n}$ the matrix $(M^{(KLM)}_{\lambda,\mu
})_{\lambda,\mu\in S}$ is positive semidefinite; that is, the KLM conditions
hold (see Corollary \ref{coro2}).
The paper is organized as follows:
\begin{itemize}
\item In Section \ref{sec1} we briefly recall the main definitions and
properties of the Wigner--Weyl--Moyal formalism.
\item In Section \ref{sec2} we discuss the notion of positivity for trace
class operators; we also prove a continuous version of the positivity theorem
using the machinery of Hilbert--Schmidt operators.
\item In Section \ref{secKLM} we characterize positivity using the
Kastler--Loupias--Miracle-Sole (KLM) conditions of which we give a simple
proof. We give a complete description of trace class operators with Gaussian
Weyl symbols using methods which simplify and put on a rigorous footing older
results found in the physical literature.
\item In Section \ref{Secfabio} we show that the KLM conditions, which form an
uncountable set of conditions, can be replaced with a countable set of
conditions involving the Wigner function. We thereafter study the notion of
\textquotedblleft almost positivity\textquotedblright, a useful approximation
of the notion of positivity which can easily be implemented numerically.
\end{itemize}
\begin{notation}
We denote by $z=(x,p)$ the generic element of $\mathbb{R}^{2n}\equiv
\mathbb{R}^{n}\times\mathbb{R}^{n}$. Equipping $\mathbb{R}^{2n}$ with the
symplectic form $\sigma=\sum_{j}dp_{j}\wedge dx_{j}$ we denote by
$\operatorname*{Sp}(n)$ the symplectic group of $(\mathbb{R}^{2n},\sigma)$ and
by $\operatorname*{Mp}(n)$ the corresponding metaplectic group. $J=
\begin{pmatrix}
0_{n\times n} & I_{n\times n}\\
-I_{n\times n} & 0_{n\times n}
\end{pmatrix}
$ is the standard symplectic matrix, and we have $\sigma(z,z^{\prime})=Jz\cdot
z^{\prime}$. The $L^{2}$-scalar product is given by
\[
(\psi|\phi)_{L^{2}}=\int_{\mathbb{R}^{n}}\psi(x)\overline{\phi(x)}dx.
\]
The distributional pairing between $\psi\in\mathcal{S}^{\prime}(\mathbb{R}
^{m})$ and $\phi\in\mathcal{S}(\mathbb{R}^{m})$ is denoted by $\langle
\psi,\phi\rangle$ regardless of the dimension $m$.
For $A,B\in \mathrm{GL}(m)$, we use the notation $A\backsim B$ to denote the
equality of two square matrices $A,B$ of the same size $m\times m$ up to
conjugation: $A\backsim B$ if and only if there exists $C\in \mathrm{GL}(m)$
such that $A=C^{-1}BC$.
\end{notation}
\section{Weyl Operators and Gabor frames}
\label{sec1}
\subsection{The Weyl--Wigner formalism}
In what follows $\eta$ denotes a real parameter different from zero.
Given a symbol $a\in\mathcal{S}^{\prime}(\mathbb{R}^{2n})$ the Weyl
pseudodifferential operator $\widehat{A}_{\eta}^{\mathrm{W}}
=\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)$ is defined in \eqref{w1}, whereas
the $\eta$-cross-Wigner transform $W_{\eta}(\psi,\phi)$ is recalled in \eqref{ww}.
The operator $T_{\eta}(z)$ is Heisenberg's $\eta$-displacement operator
\begin{equation}
T_{\eta}(z_{0})\psi(x)=e^{\frac{i}{\eta}(p_{0}x-\frac{1}{2}p_{0}x_{0})}
\psi(x-x_{0}) \label{w3}
\end{equation}
(see \cite{Birk,Birkbis}). The $\eta$-cross-ambiguity transform is defined by
\begin{equation}
\operatorname*{Amb}\nolimits_{\eta}(\psi,\phi)(z)=\left( \tfrac{1}{2\pi\eta
}\right) ^{n}(\psi|T_{\eta}(z)\phi)_{L^{2}}; \label{defamb}
\end{equation}
we have \cite{Folland,Birkbis} the relation
\begin{equation}
\operatorname*{Amb}\nolimits_{\eta}(\psi,\phi)=F_{\sigma,\eta}W_{\eta}
(\psi,\phi), \label{w5}
\end{equation}
where $F_{\sigma,\eta}$ is the symplectic $\eta$-Fourier transform already
recalled in \eqref{w4}. The functions $W_{\eta}\psi=W_{\eta}(\psi,\psi)$ and
$\operatorname*{Amb}\nolimits_{\eta}\psi=\operatorname*{Amb}\nolimits_{\eta
}(\psi,\psi)$ are called, respectively, the $\eta$-Wigner and $\eta$-ambiguity
transforms. The explicit expression of the $\eta$-Wigner transform is already
given in \eqref{ww}, whereas the $\eta$-ambiguity transform is defined by
\begin{equation}
\operatorname*{Amb}\nolimits_{\eta}(\psi,\phi)(z) =\left( \tfrac{1}{2\pi\eta
}\right) ^{n}\int_{\mathbb{R}^{n}}e^{-\tfrac{i}{\eta}p\cdot y}\psi
(y+\tfrac{1}{2}x)\overline{\phi(y-\tfrac{1}{2}x)}dy. \label{wa}
\end{equation}
Let $\widehat{A}_{\eta}^{\mathrm{W}}=\operatorname*{Op}_{\eta}^{\mathrm{W}
}(a)$ and $\widehat{B}_{\eta}^{\mathrm{W}}=\operatorname*{Op}_{\eta
}^{\mathrm{W}}(b)$ and assume that $\widehat{A}_{\eta}^{\mathrm{W}}
\widehat{B}_{\eta}^{\mathrm{W}}$ is defined on some subspace of $L^{2}
(\mathbb{R}^{n})$; then the twisted symbol $c_{\sigma,\eta}$ of $\widehat{C}
_{\eta}^{\mathrm{W}}=\widehat{A}_{\eta}^{\mathrm{W}}\widehat{B}_{\eta
}^{\mathrm{W}}$ is given by the \textquotedblleft twisted
convolution\textquotedblright\ \cite{Folland,Birkbis} $c_{\sigma,\eta
}=a_{\sigma,\eta}\star_{\eta}b_{\sigma,\eta}$ defined by
\begin{equation}
(a_{\sigma,\eta}\star_{\eta}b_{\sigma,\eta})(z)=\left( \tfrac{1}{2\pi\eta
}\right) ^{n}\int_{\mathbb{R}^{2n}}e^{\frac{i}{2\eta}\sigma(z,z^{\prime}
)}a_{\sigma,\eta}(z-z^{\prime})b_{\sigma,\eta}(z^{\prime})dz^{\prime}.
\label{twist1}
\end{equation}
Alternatively, the symbol $c$ is given by the \textquotedblleft twisted
product\textquotedblright\ $c=a\times_{\eta}b$ where
\begin{equation}
(a\times_{\eta}b)(z)=\left( \tfrac{1}{4\pi\eta}\right) ^{2n}\int
_{\mathbb{R}^{2n}}e^{\frac{i}{2\eta}\sigma(z^{\prime},z^{\prime\prime}
)}a(z+\tfrac{1}{2}z^{\prime})b(z-\tfrac{1}{2}z^{\prime\prime})dz^{\prime
}dz^{\prime\prime}. \label{twist2}
\end{equation}
An important property of the $\eta$-Wigner transform is that it satisfies the
\textquotedblleft marginal properties\textquotedblright
\begin{equation}
\int_{\mathbb{R}^{n}}W_{\eta}\psi(z)dx=|F_{\eta}\psi(p)|^{2}\text{ \ , \ }
\int_{\mathbb{R}^{n}}W_{\eta}\psi(z)dp=|\psi(x)|^{2},\label{marginal}
\end{equation}
the first for every function $\psi\in L^{1}(\mathbb{R}^{n})\cap L^{2}
(\mathbb{R}^{n})$, the second for every function $\psi\in L^{2}(\mathbb{R}
^{n})$ such that $\hat{\psi}\in L^{1}(\mathbb{R}^{n})$;
here
\begin{equation}
F_{\eta}\psi(p)=\left( \tfrac{1}{2\pi|\eta|}\right) ^{n/2}\int
_{\mathbb{R}^{n}}e^{-\frac{i}{\eta}px}\psi(x)dx\label{Feta}
\end{equation}
is the $\eta$-Fourier transform (see \cite{Folland,Springer}). Notice that
$F_{\eta}\psi$ and $F_{-\eta}\psi$ are related by the trivial formula
\begin{equation}
F_{-\eta}\psi=(-1)^{n}\overline{F_{\eta}\overline{\psi}}.\label{fmineta}
\end{equation}
It follows that $F_{\eta}$ extends to a unitary automorphism of
$L^{2}(\mathbb{R}^{n})$ for every $\eta\neq0$.
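For the standard Gaussian $\psi(x)=\pi^{-1/4}e^{-x^{2}/2}$ with $\eta=1$ and $n=1$ the Wigner transform has the well-known closed form $W\psi(x,p)=\pi^{-1}e^{-(x^{2}+p^{2})}$; the Python sketch below (a numerical illustration under these assumptions only) verifies both marginal properties \eqref{marginal}:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x, indexing="ij")
W = np.exp(-(X**2 + P**2)) / np.pi        # closed-form Wigner transform

psi_sq = np.pi**-0.5 * np.exp(-x**2)      # |psi(x)|^2
ftpsi_sq = np.pi**-0.5 * np.exp(-x**2)    # |F psi(p)|^2 (Gaussian is FT-invariant)

# Marginals, computed by Riemann sums over the second / first variable:
print(np.max(np.abs(W.sum(axis=1) * dx - psi_sq)) < 1e-8)    # True: ∫ W dp = |psi|^2
print(np.max(np.abs(W.sum(axis=0) * dx - ftpsi_sq)) < 1e-8)  # True: ∫ W dx = |F psi|^2
```

The truncation and discretization errors are negligible here because of the Gaussian decay of the integrands.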
An important equality satisfied by the $\eta$-Wigner function is Moyal's
identity\footnote{It is sometimes also called the \textquotedblleft
orthogonality relation\textquotedblright\ for the Wigner function.}:
\begin{lemma}
Let $(\psi,\phi)\in L^{2}(\mathbb{R}^{n})\times L^{2}(\mathbb{R}^{n})$ and
$\eta\in\mathbb{R}\setminus\{0\}$. The function $W_{\eta}\psi$ is real and we
have
\begin{equation}
\int_{\mathbb{R}^{2n}}W_{\eta}\psi(z)W_{\eta}\phi(z)dz=\left( \tfrac{1}
{2\pi|\eta|}\right) ^{n}|(\psi|\phi)|^{2}. \label{Moyaleta}
\end{equation}
In particular
\begin{equation}
\int_{\mathbb{R}^{2n}}W_{\eta}\psi(z)^{2}dz=\left( \tfrac{1}{2\pi|\eta
|}\right) ^{n}||\psi||^{4}. \label{Moyal2eta}
\end{equation}
\end{lemma}
\begin{proof}
It is a standard result \cite{Folland,Gro} that (\ref{Moyaleta}) holds for all
$\eta>0$. The case $\eta<0$ follows using formula (\ref{scale2}).
\end{proof}
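As a numerical illustration of \eqref{Moyal2eta} (again assuming $\eta=1$, $n=1$, and the closed-form Wigner transform $W\psi(x,p)=\pi^{-1}e^{-(x^{2}+p^{2})}$ of the standard Gaussian), one may check in Python:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x, indexing="ij")
W = np.exp(-(X**2 + P**2)) / np.pi   # W psi for the standard Gaussian, eta = 1

lhs = np.sum(W**2) * dx**2           # ∫ W^2 dz, Riemann sum over phase space
rhs = 1.0 / (2.0 * np.pi)            # (1/2π)^n ||psi||^4 with ||psi|| = 1, n = 1
print(abs(lhs - rhs) < 1e-8)         # True
```

Replacing $\psi$ by any other state with a known Wigner transform gives the same agreement, as guaranteed by \eqref{Moyal2eta}.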
In Section \ref{Secfabio} we will use some concepts from time-frequency
analysis. We recall here the most important issues.
A Gabor frame $\mathcal{G}(\phi,\Lambda)$ is defined in Definition
\eqref{ee0}. This implies that any function $f\in L^{2}(\mathbb{R}^{n})$ can
be represented as
\[
f=\sum_{\lambda\in\Lambda} c_{\lambda}T(\lambda)\phi,
\]
with unconditional convergence in $L^{2}(\mathbb{R}^{n})$ and with suitable
coefficients $(c_{\lambda})_{\lambda}\in\ell^{2}(\Lambda)$.
A time-frequency representation closely related to the Wigner function is the
short-time Fourier transform (STFT), whose definition is in formula
\eqref{STFT}. Using this representation, we can define the Sj\"ostrand class
or modulation space $M^{\infty,1}_{v_{s}}$ \cite{F1,wiener30} in terms of the
decay of the STFT as follows. For $s\geq0$, consider the weight function
$v_{s}(z)=\langle z\rangle^{s}=(1+|z|^{2})^{s/2}$, $z\in\mathbb{R}^{2n}$;
then
\[
M^{\infty,1}_{v_{s}}(\mathbb{R}^{n})=\{f\in\mathcal{S^{\prime}}(\mathbb{R}
^{n}): \|f\|_{M^{\infty,1}_{v_{s}}}:=\int_{\mathbb{R}^{n}}\sup_{x\in
\mathbb{R}^{n}}|V_{g} f(x,p)|v_{s}(x,p)\,dp<\infty\}.
\]
It can be shown that $\|f\|_{M^{\infty,1}_{v_{s}}}$ is a norm on $M^{\infty
,1}_{v_{s}}(\mathbb{R}^{n})$, independent of the window function
$g\in\mathcal{S}(\mathbb{R}^{n})$ (different windows yield equivalent norms).
Moreover $M^{\infty,1}_{v_{s}}(\mathbb{R}^{n})$ is a Banach space. For $s=0$
we simply write $M^{\infty,1}(\mathbb{R}^{n})$ in place of $M^{\infty
,1}_{v_{s}}(\mathbb{R}^{n})$.
Generally, by measuring the decay of the STFT by means of the mixed-normed
spaces $L_{v_{s}}^{p,q}(\mathbb{R}^{2n})$, one can define a scale of Banach
spaces known as modulation spaces. Here we will make use only of the so-called
Feichtinger's algebra (unweighted case $s=0$)
\[
M^{1}(\mathbb{R}^{n})=\{f\in\mathcal{S^{\prime}}(\mathbb{R}^{n}):\Vert
f\Vert_{M^{1}}=\Vert V_{g}f\Vert_{L^{1}(\mathbb{R}^{2n})}<\infty\}.
\]
Notice that in Section \ref{Secfabio} we will work with spaces of symbols,
hence the dimension $n$ of the space is replaced by $2n$.
\section{The Positivity of Trace-Class Weyl Operators\label{sec2}}
Trace class operators play an essential role in quantum mechanics. A positive
semidefinite self-adjoint operator with unit trace is called a \emph{density
operator }(or \emph{density matrix} in the physical literature). Density
operators represent (and are usually identified with) the mixed quantum states
corresponding to statistical mixtures of quantum pure states.
\subsection{Trace class operators}
A bounded linear operator $\widehat{A}$ on $L^{2}(\mathbb{R}^{n})$ is of trace
class if for one (and hence every) orthonormal basis $(\psi_{j})_{j}$ of
$L^{2}(\mathbb{R}^{n})$ its modulus $|\widehat{A}|=(\widehat{A}^{\ast
}\widehat{A})^{1/2}$ satisfies
\begin{equation}
\sum_{j}(|\widehat{A}|\psi_{j}|\psi_{j})_{L^{2}}<\infty; \label{tr1}
\end{equation}
the trace of $\widehat{A}$ is then, by definition, given by the absolutely
convergent series $\operatorname{Tr}(\widehat{A})=\sum_{j}(\widehat{A}\psi
_{j}|\psi_{j})_{L^{2}}$, whose value is independent of the choice of the
orthonormal basis $(\psi_{j})_{j}$.
Trace class operators form a two-sided ideal $\mathcal{L}_{1}(L^{2}
(\mathbb{R}^{n}))$ in the algebra $\mathcal{L}(L^{2}(\mathbb{R}^{n}))$ of all
bounded linear operators on $L^{2}(\mathbb{R}^{n})$.
An operator $\widehat{A}\in\mathcal{L}(L^{2}(\mathbb{R}^{n}))$ is a
Hilbert--Schmidt operator if and only if there exists an orthonormal basis
$(\psi_{j})$ such that
\[
\sum_{j}(\widehat{A}\psi_{j}|\widehat{A}\psi_{j})_{L^{2}}<\infty.
\]
Hilbert--Schmidt operators form a two-sided ideal $\mathcal{L}_{2}
(L^{2}(\mathbb{R}^{n}))$ in $\mathcal{L}(L^{2}(\mathbb{R}^{n}))$ and we have
$\mathcal{L}_{1}(L^{2}(\mathbb{R}^{n}))\subset\mathcal{L}_{2}(L^{2}
(\mathbb{R}^{n}))$. A trace class operator can always be written (non
uniquely) as the product of two Hilbert--Schmidt operators (and is hence
compact). In particular, a positive (and hence self-adjoint) trace-class
operator can always be written in the form $\widehat{A}=\widehat{B}^{\ast
}\widehat{B}$ where $\widehat{B}$ is a Hilbert--Schmidt operator. One proves
(\cite{blabru}, \S 22.4, also \cite{Birk,Birkbis}) using the spectral theorem
for compact operators that for every trace-class operator on $L^{2}
(\mathbb{R}^{n})$ there exists a sequence $(\alpha_{j})_{j}\in\ell
^{1}(\mathbb{N})$ and orthonormal bases $(\psi_{j})_{j}$ and $(\phi_{j})_{j}$
of $L^{2}(\mathbb{R}^{n})$ (indexed by the same set) such that for $\psi\in
L^{2}(\mathbb{R}^{n})$
\begin{equation}
\widehat{A}\psi=\sum_{j}\alpha_{j}(\psi|\psi_{j})_{L^{2}}\phi_{j}; \label{tr3}
\end{equation}
conversely the formula above defines a trace-class operator on $L^{2}
(\mathbb{R}^{n})$.
Observe that the series in \eqref{tr3} is absolutely convergent in
$L^{2}(\mathbb{R}^{n})$ (hence unconditionally convergent), since
\[
\sum_{j}\| \alpha_{j}(\psi|\psi_{j})_{L^{2}}\phi_{j}\|=\sum_{j}|\alpha_{j}|
|(\psi|\psi_{j})_{L^{2}}| \|\phi_{j}\|\leq\sum_{j}|\alpha_{j}|\|\psi\|
\|\psi_{j}\|=\sum_{j}|\alpha_{j}| \|\psi\|.
\]
One verifies that the adjoint $\widehat{A}^{\ast}$ (which is also of trace
class) is given by
\begin{equation}
\widehat{A}^{\ast}\psi=\sum_{j}\overline{\alpha_{j}}(\psi|\phi_{j})_{L^{2}
}\psi_{j}, \label{tr4}
\end{equation}
where the series is absolutely convergent in $L^{2}(\mathbb{R}^{n})$.
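A finite-dimensional analogue of \eqref{tr3}--\eqref{tr4} can be checked directly. In the Python sketch below (an illustration only, with $\mathbb{C}^{d}$ in place of $L^{2}(\mathbb{R}^{n})$ and all names our own), the columns of two random unitary matrices play the role of the orthonormal bases $(\psi_{j})_{j}$ and $(\phi_{j})_{j}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Columns of Psi, Phi are orthonormal bases of C^d (Q-factors are unitary).
Psi = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))[0]
Phi = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))[0]
alpha = rng.standard_normal(d)

# A psi = sum_j alpha_j (psi|psi_j) phi_j, i.e. A = sum_j alpha_j |phi_j><psi_j|.
A = sum(alpha[j] * np.outer(Phi[:, j], Psi[:, j].conj()) for j in range(d))
# The claimed adjoint: A* psi = sum_j conj(alpha_j) (psi|phi_j) psi_j.
Astar = sum(np.conj(alpha[j]) * np.outer(Psi[:, j], Phi[:, j].conj()) for j in range(d))

print(np.allclose(A.conj().T, Astar))  # True: adjoint formula (tr4)
# Tr(A) = sum_j alpha_j (phi_j|psi_j):
print(np.isclose(np.trace(A),
                 sum(alpha[j] * (Psi[:, j].conj() @ Phi[:, j]) for j in range(d))))
```

Taking $\Phi=\Psi$ and $\alpha_{j}\geq0$ reproduces the positive case of Lemma \ref{Lemma1} in this toy setting.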
The following result is an easy consequence of the spectral theorem for
compact self-adjoint operators.
\begin{lemma}
\label{Lemma1}Let $\widehat{A}$ be a trace-class operator on $L^{2}
(\mathbb{R}^{n})$.
(i) If $\widehat{A}$ is self-adjoint there exists a real sequence $(\alpha
_{j})_{j}\in\ell^{1}(\mathbb{N})$ and an orthonormal basis $(\psi_{j})_{j}$ of
$L^{2}(\mathbb{R}^{n})$ such that
\begin{equation}
\widehat{A}\psi=\sum_{j}\alpha_{j}(\psi|\psi_{j})_{L^{2}}\psi_{j} \label{tr5}
\end{equation}
for every $\psi\in L^{2}(\mathbb{R}^{n})$,
with absolute convergence in $L^{2}(\mathbb{R}^{n})$;
(ii) if $\widehat{A}\geq0$ then (\ref{tr5}) holds with $\alpha_{j}\geq0$ for
all $j$.
\end{lemma}
Formula (\ref{tr5}) can be rewritten for short as
\begin{equation}
\widehat{A}=\sum_{j}\alpha_{j}\widehat{\Pi}_{j} \label{tr6}
\end{equation}
where $\widehat{\Pi}_{j}$ is a rank-one projector, namely the orthogonal
projection operator of $L^{2}(\mathbb{R}^{n})$ onto the one-dimensional
subspace $\mathbb{C}\psi_{j}$ generated by $\psi_{j}$.
Notice that the series \eqref{tr6} is absolutely convergent in $B(L^{2}
(\mathbb{R}^{n}))$. Indeed, $\|\widehat{\Pi}_{j}\|_{B(L^{2})}=1$ for every $j$
and
\[
\sum_{j} \|\alpha_{j}\widehat{\Pi}_{j}\|_{B(L^{2})}\leq\sum_{j} |\alpha
_{j}|<\infty.
\]
\begin{proposition}
\label{Prop1} Let $\widehat{A}$ be a self-adjoint trace-class operator on
$L^{2}(\mathbb{R}^{n})$ as in \eqref{tr5} and $\eta>0$.
(i) The $\eta$-Weyl symbol $a$ of $\widehat{A}$ is given by
\begin{equation}
a=(2\pi\eta)^{n}\sum_{j}\alpha_{j}W_{\eta}\psi_{j} \label{tr7}
\end{equation}
where the series converges absolutely in $L^{2}(\mathbb{R}^{2n})$;
(ii) The twisted symbol $a_{\sigma,\eta}$ is given by
\begin{equation}
a_{\sigma,\eta}=(2\pi\eta)^{n}\sum_{j}\alpha_{j}\operatorname*{Amb}
\nolimits_{\eta}\psi_{j}, \label{tr8}
\end{equation}
with absolute convergence in $L^{2}(\mathbb{R}^{2n})$.
(iii) In particular, the symbols $a$ and $a_{\sigma,\eta}$ are in
$L^{2}(\mathbb{R}^{2n})\cap L^{\infty}(\mathbb{R}^{2n})$.
\end{proposition}
\begin{proof}
The distributional kernel of the orthogonal projection $\widehat{\Pi}_{j}$ is
$K_{j}=\psi_{j}\otimes\overline{\psi_{j}}$ hence the Weyl symbol $a_{j}$ of
$\widehat{\Pi}_{j}$ is given by the usual formula
\[
a_{j}(z)=\int_{\mathbb{R}^{n}}e^{-\frac{i}{\eta}py}K_{j}(x+\tfrac{1}
{2}y,x-\tfrac{1}{2}y)dy=(2\pi\eta)^{n}W_{\eta}\psi_{j}(z).
\]
Since the series \eqref{tr6} is absolutely convergent in $B(L^{2}(\mathbb{R}
^{n}))$, the symbol of $\widehat{A}$ is obtained by summing termwise, which
yields (\ref{tr7}). Moyal's identity
\[
\Vert\alpha_{j}W_{\eta}\psi_{j}\Vert_{2}=|\alpha_{j}|\left( \tfrac{1}
{2\pi\eta}\right) ^{\frac{n}{2}}\Vert\psi_{j}\Vert_{2}^{2}=\left( \tfrac
{1}{2\pi\eta}\right) ^{\frac{n}{2}}|\alpha_{j}|
\]
and the assumption $(\alpha_{j})_{j}\in\ell^{1}(\mathbb{N})$ guarantee that
the series in (\ref{tr7}) is absolutely convergent in $L^{2}(\mathbb{R}^{2n})$
and we infer that the symbol $a$ is in $L^{2}(\mathbb{R}^{2n})$. Similarly, by
H\"{o}lder's inequality,
\[
|W_{\eta}\psi_{j}(z)|\leq\tfrac{2^{2n}}{(2\pi\eta)^{n}}\Vert\psi_{j}\Vert
_{2}^{2}
\]
for all $z\in\mathbb{R}^{2n}$ so that
\[
\Vert\alpha_{j}W_{\eta}\psi_{j}\Vert_{\infty}\leq\left( \tfrac{2}{\pi\eta
}\right) ^{n}|\alpha_{j}|
\]
and the series in (\ref{tr7}) is absolutely convergent in $L^{\infty
}(\mathbb{R}^{2n})$, too. This proves our claim (iii) for the symbol $a$.
Formula (\ref{tr8}) follows since $W_{\eta}\psi_{j}$ and $\operatorname*{Amb}
\nolimits_{\eta}\psi_{j}$ are symplectic $\eta$-Fourier transforms of each
other. Claim (iii) for $a_{\sigma,\eta}$ is obtained in a similar way.
\end{proof}
\begin{remark}
From Proposition \ref{Prop1}, the functions $a$ and $a_{\sigma,\eta}$ are in
$L^{2}(\mathbb{R}^{2n})\cap L^{\infty}(\mathbb{R}^{2n})$. Notice that in
general the symbols $a$ and $a_{\sigma,\eta}$ are not in $L^{1}(\mathbb{R}
^{2n})$. For example, choose $\widehat{A}=\widehat{{\Pi}}_{0}$, with
$\widehat{{\Pi}}_{0}$ the orthogonal projection onto a vector $\psi_{0}\in
L^{2}(\mathbb{R}^{n})\setminus(L^{1}(\mathbb{R}^{n})\cup\mathcal{F}
L^{1}(\mathbb{R}^{n}))$.
\end{remark}
Recall that if $\widehat{A}$ is positive semidefinite and has trace equal to
one, it is called a \textit{density operator} and denoted by $\widehat{\rho}$.
We will from now on assume that all the concerned operators are self-adjoint
and of trace class, recalling from \eqref{notation1} that they can be written
as
\[
\widehat{\rho}=\sum_{j}\alpha_{j}\widehat{\Pi}_{j}=(2\pi\eta)^{n}
\operatorname*{Op}\nolimits_{\eta}^{\mathrm{W}}(\rho)
\]
the real function $\rho$ being given by formula (\ref{density}). We are going
to determine explicit necessary and sufficient conditions on $\rho$ ensuring
the positivity of $\widehat{\rho}$. Let us first note the following result,
which shows the sensitivity of density operators to changes in the value of
$\hbar$. It addresses the question: for given $\psi\in L^{2}(\mathbb{R}^{n})$,
can we find $\phi$ such that $W_{\eta}\phi=W\psi$ for $\eta\neq\hbar$? The
answer is negative:
\begin{proposition}
\label{Thm3}Let $\psi\in L^{1}(\mathbb{R}^{n})\cap L^{2}(\mathbb{R}
^{n})\setminus\{0\}$ and $\eta\in\mathbb{R}\setminus\{0\}$, $\hbar>0$.
(i) There does not exist any $\phi\in L^{1}(\mathbb{R}^{n})\cap L^{2}
(\mathbb{R}^{n})$ such that $W_{\eta}\phi=W\psi$ if $|\eta|\neq\hbar$.
(ii) Assume that there exist orthonormal systems $(\psi_{j})_{j\in\mathbb{N}}
$, $(\phi_{j})_{j\in\mathbb{N}}$ of $L^{2}(\mathbb{R}^{n})$
and nonnegative sequences $\alpha=(\alpha_{j})_{j\in\mathbb{N}},\ \beta
=(\beta_{j})_{j\in\mathbb{N}}\in\ell^{1}(\mathbb{N})$ such that
\begin{equation}
\sum_{j}\alpha_{j}W_{\eta}\psi_{j}=\sum_{j}\beta_{j}W\phi_{j}. \label{abcp}
\end{equation}
Then we must have
\[
\hbar^{n}\Vert\alpha\Vert_{\ell^{2}}^{2}=|\eta|^{n}\Vert\beta\Vert_{\ell^{2}
}^{2}.
\]
\end{proposition}
\begin{proof}
Observe that the series in (\ref{abcp}) are absolutely convergent in
$L^{2}(\mathbb{R}^{2n})$. (i) Assume that $W_{\eta}\phi=W\psi$; then, using
the first marginal property (\ref{marginal}),
\[
|F_{\eta}\phi(p)|^{2}=\int_{\mathbb{R}^{n}}W_{\eta}\phi(x,p)dx=\int
_{\mathbb{R}^{n}}W\psi(x,p)dx=|F_{\hbar}\psi(p)|^{2}
\]
hence $\phi$ and $\psi$ must have the same $L^{2}$-norm: $||\phi||=||\psi||$
in view of Parseval's equality. On the other hand, using the Moyal identity
(\ref{Moyaleta}) for, respectively, $W\psi$ and $W_{\eta}\phi$,\ the equality
$W\psi=W_{\eta}\phi$ implies that
\begin{align*}
\int_{\mathbb{R}^{2n}}W\psi(z)^{2}dz & =\left( \tfrac{1}{2\pi\hbar}\right)
^{n}||\psi||^{4}\\
\int_{\mathbb{R}^{2n}}W_{\eta}\phi(z)^{2}dz & =\left( \tfrac{1}{2\pi|\eta
|}\right) ^{n}||\phi||^{4}
\end{align*}
hence we must have $|\eta|=\hbar$.
(ii) Squaring both sides of \eqref{abcp} and integrating over $\mathbb{R}
^{2n}$ we get, using again Moyal's identity and the orthonormality of the
vectors $\psi_{j}$ and $\phi_{j}$,
\[
\frac{1}{(2\pi|\eta|)^{n}}\sum_{j}\alpha_{j}^{2}=\frac{1}{(2\pi\hbar)^{n}}
\sum_{j}\beta_{j}^{2},
\]
hence our claim.
\end{proof}
\begin{remark}
Assume in particular that $\beta_{1}=1$ and $\beta_{j}=0$ for $j\geq2$. Then
(ii) tells us that if $\sum_{j}\alpha_{j}W_{\eta}\psi_{j}=W\phi$ then we must
have $\hbar^{n}\Vert\alpha\Vert_{\ell^{2}}^{2}=|\eta|^{n}$. Assume that
$\sum_{j}\alpha_{j}=1$; then $\Vert\alpha\Vert_{\ell^{2}}^{2}\leq1$ hence we
must have $|\eta|\leq\hbar$.
\end{remark}
\section{Positive Trace Class Operators\label{secKLM}}
\subsection{A general positivity result for trace class operators}
We are now going to give an integral description of the positivity of a trace
class operator on $L^{2}(\mathbb{R}^{n})$, of which Theorem \ref{ThmFabio} can
be viewed as a discretized version.
Let us begin by stating a general result:
\begin{lemma}
\label{Lemmatrab}Let $\widehat{A}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)$
be a trace-class operator on $L^{2}(\mathbb{R}^{n})$, with $\eta>0$. We have
$\widehat{A}\geq0$ if and only if $\operatorname*{Tr}(\widehat{A}\widehat{B}
)\geq0$ for every positive trace class operator $\widehat{B}\in\mathcal{L}
_{1}(L^{2}(\mathbb{R}^{n}))$.
\end{lemma}
\begin{proof}
Since $\mathcal{L}_{1}(L^{2}(\mathbb{R}^{n}))$ is itself an algebra the
product $\widehat{A}\widehat{B}$ is indeed of trace class so the condition
$\operatorname*{Tr}(\widehat{A}\widehat{B})\geq0$ makes sense; setting
$\widehat{B}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(b)$ we have
\[
b(z)=(2\pi\eta)^{n}\sum_{j}\beta_{j}W_{\eta}\psi_{j}
\]
where $(\beta_{j})\in\ell^{1}(\mathbb{N})$ with $\beta_{j}\geq0$ and
$(\psi_{j})_{j}$ is an orthonormal basis for $L^{2}(\mathbb{R}^{n})$.
Observing that trace class operators are also Hilbert--Schmidt operators, and
since $a,b\in L^{2}(\mathbb{R}^{2n})$, we have \cite[Prop. 284]{Birkbis}
\begin{equation}
\operatorname*{Tr}(\widehat{A}\widehat{B})=\int_{\mathbb{R}^{2n}}a(z)b(z)dz
\label{trab}
\end{equation}
and hence
\[
\operatorname*{Tr}(\widehat{A}\widehat{B})=(2\pi\eta)^{n}\sum_{j}\beta_{j}
\int_{\mathbb{R}^{2n}}a(z)W_{\eta}\psi_{j}(z)dz
\]
(the interchange of integral and series is justified by Fubini's Theorem).
Assume that $\operatorname*{Tr}(\widehat{A}\widehat{B})\geq0$. It is enough to
check the positivity of $\widehat{A}$ on unit vectors $\psi$ in $L^{2}
(\mathbb{R}^{n})$. Choosing all the $\beta_{j}=0$ except $\beta_{1}$ and
setting $\psi_{1}=\psi$ we have
\begin{equation}
\int_{\mathbb{R}^{2n}}a(z)W_{\eta}\psi(z)dz\geq0; \label{ouap}
\end{equation}
since we can choose $\psi\in L^{2}(\mathbb{R}^{n})$ arbitrarily, this means
that we have $\widehat{A}\geq0$. If, conversely, we have $\widehat{A}\geq0$
then (\ref{ouap}) holds for all $\psi_{j}$ hence $\operatorname*{Tr}
(\widehat{A}\widehat{B})\geq0$.
\end{proof}
Let us now prove:
\begin{theorem}
\label{prop2}Let $\eta>0$ and $\widehat{A}=\operatorname*{Op}_{\eta
}^{\mathrm{W}}(a)$ be a trace-class operator. We have $\widehat{A}\geq0$ if
and only if
\begin{equation}
\int_{\mathbb{R}^{2n}}F_{\sigma,\eta}a(z)\left( \int_{\mathbb{R}^{2n}
}e^{-\frac{i}{2\eta}\sigma(z,z^{\prime})}c(z^{\prime}-z)\overline{c(z^{\prime
})}dz^{\prime}\right) dz\geq0,
\end{equation}
for all $c\in L^{2}(\mathbb{R}^{2n})$.
\end{theorem}
\begin{proof}
In view of Lemma \ref{Lemmatrab} above we have $\widehat{A}\geq0$ if and only
if $\operatorname*{Tr}(\widehat{A}\widehat{B})\geq0$ for all positive
$\widehat{B}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(b)\in\mathcal{L}_{1}
(L^{2}(\mathbb{R}^{n}))$ that is (formula (\ref{trab}))
\begin{equation}
\int_{\mathbb{R}^{2n}}a(z)b(z)dz=(a|b)_{L^{2}(\mathbb{R}^{2n})}\geq0.
\label{trab2}
\end{equation}
(Recall that Weyl symbols of self-adjoint operators are real). Using
Plancherel's Theorem
\begin{equation}
(a|b)_{L^{2}(\mathbb{R}^{2n})}=\left( \tfrac{1}{2\pi\eta}\right)
^{n}(F_{\sigma,\eta}a|F_{\sigma,\eta}b)_{L^{2}(\mathbb{R}^{2n})}.
\label{trab2bis}
\end{equation}
Since $\widehat{B}\geq0$ there exists $\widehat{C}\in\mathcal{L}_{2}
(L^{2}(\mathbb{R}^{n}))$ such that $\widehat{B}=\widehat{C}^{\ast}\widehat{C}$
and hence, setting $\widehat{C}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(c)$
(recall that $\widehat{C}^{\ast}=\operatorname*{Op}_{\eta}^{\mathrm{W}}
(\bar{c})$), by the composition formula for Weyl operators \eqref{twist1},
\begin{align*}
F_{\sigma,\eta}b(z) & =\left( \tfrac{1}{2\pi\eta}\right) ^{n}
\int_{\mathbb{R}^{2n}}e^{\frac{i}{2\eta}\sigma(z,z^{\prime})}F_{\sigma,\eta
}\bar{c}(z-z^{\prime})F_{\sigma,\eta}c(z^{\prime})dz^{\prime}\\
& =\left( \tfrac{1}{2\pi\eta}\right) ^{n}\int_{\mathbb{R}^{2n}}e^{\frac
{i}{2\eta}\sigma(z,z^{\prime})}\overline{F_{\sigma,\eta}{c}}(z^{\prime
}-z)F_{\sigma,\eta}c(z^{\prime})dz^{\prime}.
\end{align*}
Conversely, for any function $c\in L^{2}(\mathbb{R}^{2n})$ the operator
$\widehat{C}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(c)$ is a Hilbert--Schmidt
operator and $\widehat{B}=\widehat{C}^{\ast}\widehat{C}$ is a positive
operator. Hence, using the fact that the operator $F_{\sigma,\eta}$ is a
topological automorphism of $L^{2}(\mathbb{R}^{2n})$, condition (\ref{trab2})
and \eqref{trab2bis} are equivalent to
\begin{equation}
\int_{\mathbb{R}^{2n}}F_{\sigma,\eta}a(z)\left( \int_{\mathbb{R}^{2n}
}e^{-\frac{i}{2\eta}\sigma(z,z^{\prime})}c(z^{\prime}-z)\overline{c(z^{\prime
})}dz^{\prime}\right) dz\geq0,
\end{equation}
for every $c\in L^{2}(\mathbb{R}^{2n})$, as claimed.
\end{proof}
\subsection{Proof of the KLM condition}
We are now going to prove the KLM conditions, that is Theorem \ref{Prop2}. We
will need the following classical result from linear algebra. It says that the
entrywise product of two positive semidefinite matrices is also positive semidefinite.
\begin{lemma}
[Schur]\label{lemmaschur}Let $M_{(N)}=(M_{jk})_{1\leq j,k\leq N}$ be the
Hadamard product $M_{(N)}^{\prime}\circ M_{(N)}^{\prime\prime}$ of the
matrices $M_{(N)}^{\prime}=(M_{jk}^{\prime})_{1\leq j,k\leq N}$ and
$M_{(N)}^{\prime\prime}=(M_{jk}^{\prime\prime})_{1\leq j,k\leq N}$:
$M_{(N)}=(M_{jk}^{\prime}M_{jk}^{\prime\prime})_{1\leq j,k\leq N}$. If
$M_{(N)}^{\prime}$ and $M_{(N)}^{\prime\prime}$ are positive semidefinite then
so is $M_{(N)}$.
\end{lemma}
\begin{proof}
See for instance Bapat \cite{Bapat}.
\end{proof}
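Schur's product theorem is also easy to probe numerically. The following Python sketch (an illustration with randomly generated matrices) forms two positive semidefinite matrices and checks that their Hadamard product has nonnegative spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
C = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M1 = B.conj().T @ B          # positive semidefinite by construction
M2 = C.conj().T @ C
M = M1 * M2                  # Hadamard (entrywise) product; still Hermitian

# Symmetrize against roundoff before taking the (real) spectrum.
eigs = np.linalg.eigvalsh((M + M.conj().T) / 2)
print(eigs.min() > -1e-10)   # True: all eigenvalues nonnegative up to roundoff
```

Repeating the experiment with other seeds gives the same outcome, in accordance with the lemma.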
We have now all the instruments to prove the KLM conditions.
\begin{proof}
[Proof of Theorem \ref{Prop2}]Let us first show that the conditions (i)--(ii)
are necessary. Assume that $\widehat{A}\geq0$. Since $a\in L^{1}
(\mathbb{R}^{2n})$, the Riemann--Lebesgue Lemma gives that $a_{\sigma,\eta}$
is continuous. In view of Lemma \ref{Lemma1} and formula (\ref{tr7}) in
Proposition \ref{Prop1}\ we have
\begin{equation}
a=(2\pi\eta)^{n}\sum_{j}\alpha_{j}W_{\eta}\psi_{j}
\end{equation}
for an orthonormal basis $\{\psi_{j}\}$ in $L^{2}(\mathbb{R}^{n})$, the
coefficients $\alpha_{j}$ being $\geq0$ and in $\ell^{1}(\mathbb{N})$. It is
thus sufficient to show that the Wigner transform $W_{\eta}\psi$ of an
arbitrary $\psi\in L^{2}(\mathbb{R}^{n})$ is of $\eta$-positive type. This
amounts to showing that for all $(z_{1},...,z_{N})\in(\mathbb{R}^{2n})^{N}$
and all $(\zeta_{1},...,\zeta_{N})\in\mathbb{C}^{N}$ we have
\begin{equation}
I_{N}(\psi)=\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}e^{-\frac
{i}{2\eta}\sigma(z_{j},z_{k})}F_{\sigma,\eta}W_{\eta}\psi(z_{j}-z_{k})\geq0.
\label{ineq121}
\end{equation}
Since the $\eta
$-Wigner distribution $W_{\eta}\psi$ and the $\eta$-ambiguity function are
obtained from each other by the symplectic $\eta$-Fourier transform
$F_{\sigma,\eta}$ we have
\[
I_{N}(\psi)=\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}e^{-\frac
{i}{2\eta}\sigma(z_{j},z_{k})}\operatorname*{Amb}\nolimits_{\eta}\psi
(z_{j}-z_{k}).
\]
Let us prove that
\begin{equation}
I_{N}(\psi)=\left( \tfrac{1}{2\pi\eta}\right) ^{n}||\sum\nolimits_{1\leq
j\leq N}\zeta_{j}T_{\eta}(-z_{j})\psi||_{L^{2}}^{2}; \label{equin12}
\end{equation}
the inequality (\ref{ineq121}) will follow. Taking into account the fact that
$T_{\eta}(-z_{k})^{\ast}=T_{\eta}(z_{k})$ and using the familiar relation
\cite{Folland,Birk,Birkbis}
\begin{equation}
T_{\eta}(z_{k}-z_{j})=e^{-\frac{i}{2\eta}\sigma(z_{j},z_{k})}T_{\eta}
(z_{k})T_{\eta}(-z_{j}) \label{tzotzo12}
\end{equation}
we have, expanding the square in the right-hand side of (\ref{equin12}),
\begin{align*}
||\sum_{1\leq j\leq N}\zeta_{j}T_{\eta}(-z_{j})\psi||_{L^{2}}^{2} &
=\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}(T_{\eta}(-z_{j}
)\psi|T_{\eta}(-z_{k})\psi)_{L^{2}}\\
& =\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}(T_{\eta}(z_{k}
)T_{\eta}(-z_{j})\psi|\psi)_{L^{2}}\\
& =\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}e^{-\tfrac{i}{2\eta
}\sigma(z_{j},z_{k})}(T_{\eta}(z_{k}-z_{j})\psi|\psi)_{L^{2}}\\
& =\left( 2\pi\eta\right) ^{n}\sum_{1\leq j,k\leq N}\zeta_{j}
\overline{\zeta_{k}}e^{-\tfrac{i}{2\eta}\sigma(z_{j},z_{k})}
\operatorname*{Amb}\nolimits_{\eta}\psi(z_{j}-z_{k})
\end{align*}
proving the equality (\ref{equin12}). Let us now show that, conversely, the
conditions (i) and (ii) are sufficient, \textit{i.e.} that they imply that
$(\widehat{A}\psi|\psi)_{L^{2}}\geq0$ for all $\psi\in L^{2}(\mathbb{R}^{n})$;
equivalently (see formula (\ref{w1}))
\begin{equation}
\int_{\mathbb{R}^{2n}}a(z)W_{\eta}\psi(z)dz\geq0 \label{integraltoprove}
\end{equation}
for $\psi\in L^{2}(\mathbb{R}^{n})$. Let us set, as above,
\begin{equation}
\Lambda_{jk}^{\prime}=e^{\frac{i}{2\eta}\sigma(z_{j},z_{k})}a_{\sigma,\eta
}(z_{j}-z_{k})
\end{equation}
where $z_{j}$ and $z_{k}$ are arbitrary elements of $\mathbb{R}^{2n}$. To say
that $a_{\sigma,\eta}$ is of $\eta$\textit{-}positive type means that the
matrix $\Lambda^{\prime}=(\Lambda_{jk}^{\prime})_{1\leq j,k\leq N}$ is
positive semidefinite; choosing $z_{k}=0$ and setting $z_{j}=z$ this means
that every matrix $(a_{\sigma,\eta}(z))_{1\leq j,k\leq N}$ is positive
semidefinite. Setting
\begin{align*}
\Gamma_{jk} & =e^{\frac{i}{2\eta}\sigma(z_{j},z_{k})}F_{\sigma,\eta}W_{\eta
}\psi(z_{j}-z_{k})\\
& =e^{\frac{i}{2\eta}\sigma(z_{j},z_{k})}\operatorname*{Amb}\nolimits_{\eta
}\psi(z_{j}-z_{k})
\end{align*}
the matrix $\Gamma_{(N)}=(\Gamma_{jk})_{1\leq j,k\leq N}$ is positive
semidefinite. Let us now write
\[
M_{jk}=\operatorname*{Amb}\nolimits_{\eta}\psi(z_{j}-z_{k})a_{\sigma,\eta
}(z_{j}-z_{k});
\]
we claim that the matrix $M_{(N)}=(M_{jk})_{1\leq j,k\leq N}$ is positive
semidefinite. In fact, $M_{(N)}$ is the Hadamard product of the positive
semidefinite matrices $M_{(N)}^{\prime}=(M_{jk}^{\prime})_{1\leq j,k\leq N}$
and $M_{(N)}^{\prime\prime}=(M_{jk}^{\prime\prime})_{1\leq j,k\leq N}$ where
\begin{align*}
M_{jk}^{\prime} & =e^{\frac{i}{2\eta}\sigma(z_{j},z_{k})}\operatorname*{Amb}
\nolimits_{\eta}\psi(z_{j}-z_{k})\\
M_{jk}^{\prime\prime} & =e^{-\frac{i}{2\eta}\sigma(z_{j},z_{k})}
a_{\sigma,\eta}(z_{j}-z_{k})\text{\ }
\end{align*}
and Lemma \ref{lemmaschur} implies that $M_{(N)}$ is also positive
semidefinite. It follows from Bochner's theorem that the function $b$ defined
by
\[
b_{\sigma,\eta}(z)=\operatorname*{Amb}\nolimits_{\eta}\psi(z)a_{\sigma,\eta
}(-z)
\]
is a probability density, hence $b(z)\geq0$ for all $z\in\mathbb{R}^{2n}$.
Evaluating the Fourier inversion formula at $z=0$ we get, using the Plancherel
formula for the symplectic $\eta$-Fourier transform,
\begin{align*}
(2\pi\eta)^{n}b(0) & =\int_{\mathbb{R}^{2n}}\operatorname*{Amb}
\nolimits_{\eta}\psi(z)a_{\sigma,\eta}(-z)dz\\
& =\int_{\mathbb{R}^{2n}}W_{\eta}\psi(z)a(z)dz
\end{align*}
hence the inequality (\ref{integraltoprove}) since $b(0)\geq0$.
\end{proof}
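The key step in the proof above is the Schur product lemma: the Hadamard (entrywise) product of two positive semidefinite matrices is again positive semidefinite. As a quick numerical sanity check (illustrative only; the matrix size and random seed are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # B B^* is always Hermitian positive semidefinite
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return B @ B.conj().T

def is_psd(M, tol=1e-10):
    # Hermitian PSD check via the smallest eigenvalue
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -tol

M1, M2 = random_psd(6), random_psd(6)
H = M1 * M2  # Hadamard (entrywise) product, as in the Schur product lemma
```

The lemma asserts precisely that `H` inherits positive semidefiniteness from `M1` and `M2`.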
\subsection{The\ Gaussian case}
Let $\Sigma$ be a positive symmetric (real) $2n\times2n$ matrix and consider
the Gaussian
\begin{equation}
\rho(z)=(2\pi)^{-n}\sqrt{\det\Sigma^{-1}}e^{-\frac{1}{2}\Sigma^{-1}z^{2}}.
\label{Gauss}
\end{equation}
Let us find for which values of $\eta$ the function $\rho$ is the $\eta
$-Wigner distribution of a density operator. Narcowich \cite{Narcow2} was the
first to address this question using techniques from harmonic analysis in the
spirit of Kastler's paper \cite{Kastler}; we give here a new and simpler
proof using the multidimensional Hardy's uncertainty principle, which we state
in the following form:
\begin{lemma}
\label{LemmaHardy}Let $A$ and $B$ be two real positive definite matrices and
$\psi\in L^{2}(\mathbb{R}^{n})$, $\psi\neq0$. Assume that
\begin{equation}
|\psi(x)|\leq Ce^{-\tfrac{1}{2}Ax^{2}}\text{ \ and \ }|F_{\eta}\psi(p)|\leq
Ce^{-\tfrac{1}{2}Bp^{2}} \label{AB}
\end{equation}
for a constant $C>0$. Then:
(i) The eigenvalues $\lambda_{j}$, $j=1,...,n$, of the matrix $AB$ are all
$\leq1/\eta^{2}$;
(ii) If $\lambda_{j}=1/\eta^{2}$ for all $j$, then $\psi(x)=ke^{-\frac{1}
{2}Ax^{2}}$ for some constant $k$.
\end{lemma}
\begin{proof}
See de Gosson and Luef \cite{goluhardy}, de Gosson \cite{Birkbis}. The $\eta
$-Fourier transform $F_{\eta}\psi$ in the second inequality (\ref{AB}) is
given by \eqref{Feta}.
\end{proof}
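To illustrate the saturation case (ii) of the lemma in dimension $n=1$ with $\eta=1$: for the Gaussian $\psi(x)=e^{-x^{2}/2}$ one has $F_{\eta}\psi(p)=e^{-p^{2}/2}$, so $A=B=1$ and the eigenvalue of $AB$ equals $1=1/\eta^{2}$. A sketch checking the Fourier transform by direct quadrature (the grid parameters are arbitrary):

```python
import numpy as np

eta = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                      # Gaussian with A = 1
p = np.linspace(-3.0, 3.0, 61)

# eta-Fourier transform: F_eta psi(p) = (2*pi*eta)^(-1/2) * Int e^{-i p x / eta} psi(x) dx
kernel = np.exp(-1j * np.outer(p, x) / eta)
F = (kernel * psi).sum(axis=1) * dx / np.sqrt(2 * np.pi * eta)
target = np.exp(-p**2 / (2 * eta))           # Gaussian with B = 1, so AB = 1/eta^2
err = np.abs(F - target).max()
```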
We will also need the two following lemmas; the first is a positivity result.
\begin{lemma}
\label{Lemmaposex} If $R$ is a symmetric positive semidefinite $2n\times2n$
matrix, then
\begin{equation}
P_{(N)}=\left( Rz_{j}\cdot z_{k}\right) _{1\leq j,k\leq N} \label{P}
\end{equation}
is a symmetric positive semidefinite $N\times N$ matrix for all $z_{1}
,...,z_{N}\in\mathbb{R}^{2n}$.
\end{lemma}
\begin{proof}
There exists a matrix $L$ such that $R=L^{\ast}L$ (Cholesky decomposition).
Denoting by $\langle z|z^{\prime}\rangle=z\cdot\overline{z^{\prime}}$ the
inner product on $\mathbb{C}^{2n}$ we have, since the $z_{j}$ are real
vectors,
\[
L^{\ast}z_{j}\cdot z_{k}=\langle L^{\ast}z_{j}|z_{k}\rangle=\langle
z_{j}|Lz_{k}\rangle=z_{j}\cdot\overline{Lz_{k}}
\]
hence $Rz_{j}\cdot z_{k}=Lz_{j}\cdot\overline{Lz_{k}}$. It follows that for
all complex $\zeta_{j}$ we have
\[
\sum_{1\leq j,k\leq N}\zeta_{j}\overline{\zeta_{k}}Rz_{j}\cdot z_{k}
=\sum_{1\leq j\leq N}\zeta_{j}Lz_{j}\overline{\left( \sum_{1\leq j\leq
N}\zeta_{j}Lz_{j}\right) }\geq0
\]
hence our claim.
\end{proof}
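The Cholesky argument in the proof is easy to probe numerically: with $R=L^{T}L$ the matrix $(Rz_{j}\cdot z_{k})_{j,k}$ is a Gram matrix of the vectors $Lz_{j}$. A small illustrative check (dimensions and seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4                                   # phase-space dimension 2n
L = rng.normal(size=(dim, dim))
R = L.T @ L                               # symmetric PSD: R = L^T L
Z = rng.normal(size=(8, dim))             # rows are the points z_1, ..., z_N
P = Z @ R @ Z.T                           # P_jk = R z_j . z_k, as in (P)
min_eig = np.linalg.eigvalsh((P + P.T) / 2).min()
```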
The second lemma is a well-known diagonalization result (Williamson's
symplectic diagonalization theorem \cite{Folland,Birk}):
\begin{lemma}
\label{Williamson}Let $\Sigma$ be a symmetric positive definite real
$2n\times2n$ matrix. There exists $S\in\operatorname*{Sp}(n)$ such that
$\Sigma=S^{T}DS$ where
\[
D=
\begin{pmatrix}
\Lambda & 0\\
0 & \Lambda
\end{pmatrix}
\]
with $\Lambda=\operatorname*{diag}(\lambda_{1},...,\lambda_{n})$, the positive
numbers $\lambda_{j}$ being the symplectic eigenvalues of $\Sigma$ (that is,
$\pm i\lambda_{1},...,\pm i\lambda_{n}$ are the eigenvalues of $J\Sigma
\sim\Sigma^{1/2}J\Sigma^{1/2}$).
\end{lemma}
\begin{proof}
See for instance\ \cite{Folland,Birk,Birkbis}.
\end{proof}
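Williamson's theorem can be illustrated numerically: if $\Sigma=S^{T}DS$ with $S$ symplectic, the eigenvalues of $J\Sigma$ are $\pm i\lambda_{j}$. The sketch below builds a symplectic $S$ of the particular block form $\operatorname{diag}(X,(X^{T})^{-1})$ with $X$ invertible (a simplifying choice for illustration, not the general case):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
Zn = np.zeros((n, n))
J = np.block([[Zn, np.eye(n)], [-np.eye(n), Zn]])

# block-diagonal symplectic matrix: diag(X, (X^T)^{-1}) with X invertible
X = rng.normal(size=(n, n)) + n * np.eye(n)
S = np.block([[X, Zn], [Zn, np.linalg.inv(X).T]])

lam = np.array([0.5, 1.5])                   # prescribed symplectic eigenvalues
D = np.diag(np.concatenate([lam, lam]))
Sigma = S.T @ D @ S

ev = np.linalg.eigvals(J @ Sigma)            # should be +-i*lambda_j
moduli = np.sort(np.abs(ev))
symplectic_defect = np.abs(S.T @ J @ S - J).max()
```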
We now have the tools needed to give a complete characterization of Gaussian
$\eta$-Wigner distributions:
\begin{proposition}
Let $\eta\in\mathbb{R}\setminus\{0\}$. The Gaussian function (\ref{Gauss}) is the
$\eta$-Wigner transform of a positive trace class operator if and only if
\begin{equation}
|\eta|\leq2\lambda_{\min} \label{etalambda}
\end{equation}
where $\lambda_{\min}$ is the smallest symplectic eigenvalue of $\Sigma$;
equivalently, the self-adjoint matrix $\Sigma+\frac{i\eta}{2}J$ is positive
semidefinite:
\begin{equation}
\Sigma+\frac{i\eta}{2}J\geq0. \label{sigmaj}
\end{equation}
\end{proposition}
\begin{proof}
Let us first show that the conditions (\ref{etalambda}) and (\ref{sigmaj}) are
equivalent. Let $\Sigma=S^{T}DS$ be a symplectic diagonalization of $\Sigma$
(Lemma \ref{Williamson}). Since $S^{T}JS=J$ condition (\ref{sigmaj}) is
equivalent to $D+\frac{i\eta}{2}J\geq0$. Since the matrix $D+\frac{i\eta}{2}J$
is self-adjoint, its eigenvalues $\lambda$ are real, and the condition holds
if and only if they are all $\geq0$. The characteristic polynomial of
$D+\frac{i\eta}{2}J$ is
\[
P(\lambda)=\det\left[ (\Lambda-\lambda I)^{2}-\tfrac{\eta^{2}}{4}I\right]
=P_{1}(\lambda)\cdot\cdot\cdot P_{n}(\lambda)
\]
where
\[
P_{j}(\lambda)=(\lambda_{j}-\lambda)^{2}-\tfrac{\eta^{2}}{4}
\]
hence the eigenvalues $\lambda$ of $D+\frac{i\eta}{2}J$ are the numbers
$\lambda=\lambda_{j}\pm\frac{1}{2}\eta$; the condition
$D+\frac{i\eta}{2}J\geq0$ is thus equivalent to $\lambda_{j}\geq\frac
{1}{2}|\eta|$ for all $j$, which is the condition
(\ref{etalambda}). Let us now show that the condition (\ref{etalambda}) is
necessary for the function
\begin{equation}
\rho(z)=(2\pi)^{-n}\sqrt{\det\Sigma^{-1}}e^{-\frac{1}{2}\Sigma^{-1}z^{2}}
\end{equation}
to be $\eta$-Wigner transform of a positive trace class operator. Let
$\widehat{\rho}=(2\pi\eta)^{n}\operatorname*{Op}_{\eta}^{\mathrm{W}}(\rho)$
and set $a(z)=(2\pi\eta)^{n}\rho(z)$. Let $\widehat{S}\in\operatorname*{Mp}
(n)$; the operator $\widehat{\rho}$ is of trace class if and only if
$\widehat{S}\widehat{\rho}\widehat{S}^{-1}$ is, in which case
$\operatorname{Tr}(\widehat{\rho})=\operatorname{Tr}(\widehat{S}\widehat{\rho
}\widehat{S}^{-1})$. Choose $\widehat{S}$ with projection $S\in
\operatorname*{Sp}(n)$ such that $\Sigma=S^{T}DS$ is a symplectic
diagonalization of $\Sigma$. This choice reduces the proof to the case
$\Sigma=D$, that is to
\begin{equation}
\rho(z)=(2\pi)^{-n}(\det\Lambda^{-1})e^{-\frac{1}{2}(\Lambda^{-1}x^{2}
+\Lambda^{-1}p^{2})}. \label{Gaussdiag}
\end{equation}
Suppose now that $\widehat{\rho}$ is of trace class; then there exists an
orthonormal basis of functions $\psi_{j}\in L^{2}(\mathbb{R}^{n})$ ($j\geq1$)
such that
\[
\rho(z)=\sum_{j}\alpha_{j}W_{\eta}\psi_{j}(z)
\]
where the $\alpha_{j}\geq0$ sum up to one. Integrating with respect to the $p$
and $x$ variables, respectively, the marginal conditions satisfied by the
$\eta$-Wigner transform and formula (\ref{Gaussdiag}) imply that we have
\begin{align*}
\sum_{j}\alpha_{j}|\psi_{j}(x)|^{2} & =(2\pi)^{-n/2}(\det\Lambda
)^{1/2}e^{-\frac{1}{2}\Lambda^{-1}x^{2}}\\
\sum_{j}\alpha_{j}|F_{\eta}\psi_{j}(p)|^{2} & =(2\pi)^{-n/2}(\det
\Lambda)^{1/2}e^{-\frac{1}{2}\Lambda^{-1}p^{2}}.
\end{align*}
In particular, since $\alpha_{j}\geq0$ for every $j$,
\[
|\psi_{j}(x)|\leq C_{j}e^{-\frac{1}{4}\Lambda^{-1}x^{2}}\text{ \ , \ }
|F_{\eta}\psi_{j}(p)|\leq C_{j}e^{-\frac{1}{4}\Lambda^{-1}p^{2}}
\]
Here, if $\alpha_{j}\not =0$, $C_{j}=(2\pi)^{-n/4}(\det\Lambda)^{1/4}
/\alpha_{j}^{1/2}$. Applying Lemma \ref{LemmaHardy} with $A=B=\frac{1}
{2}\Lambda^{-1}$ we must have $|\eta|\leq2\lambda_{j}$ for all $j=1,\dots,n$,
which is condition (\ref{etalambda}); this establishes the necessity
statement. Let us finally show that, conversely, the condition
(\ref{sigmaj}) is sufficient. It is again no restriction to assume that
$\Sigma$ is the diagonal matrix $D=
\begin{pmatrix}
\Lambda & 0\\
0 & \Lambda
\end{pmatrix}
$; the symplectic Fourier transform of $\rho$ is easily calculated and one
finds that $\rho_{\Diamond}(z)=e^{-\frac{1}{4}Dz^{2}}$. Let $\Lambda
_{(N)}=(\Lambda_{jk})_{1\leq j,k\leq N}$ with
\[
\Lambda_{jk}=e^{-\frac{i\eta}{2}\sigma(z_{j},z_{k})}\rho_{\Diamond}
(z_{j}-z_{k});
\]
a simple algebraic calculation shows that we have
\[
\Lambda_{jk}=e^{-\frac{1}{4}Dz_{j}^{2}}e^{\frac{1}{2}(D+i\eta J)z_{j}\cdot
z_{k}}e^{-\frac{1}{4}Dz_{k}^{2}}
\]
and hence
\[
\Lambda_{(N)}=\Delta_{(N)}\Gamma_{(N)}\Delta_{(N)}^{\ast}
\]
where $\Delta_{(N)}=\operatorname*{diag}(e^{-\frac{1}{4}Dz_{1}^{2}
},...,e^{-\frac{1}{4}Dz_{N}^{2}})$ and $\Gamma_{(N)}=(\Gamma_{jk})_{1\leq
j,k\leq N}$ with $\Gamma_{jk}=e^{\frac{1}{2}(D+i\eta J)z_{j}\cdot z_{k}}$. The
matrix $\Lambda_{(N)}$ is thus positive semidefinite if and only if
$\Gamma_{(N)}$ is, but this is the case in view of Lemma \ref{Lemmaposex}.
\end{proof}
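The equivalence between $|\eta|\leq2\lambda_{\min}$ and $\Sigma+\frac{i\eta}{2}J\geq0$ is easy to check numerically in the diagonal case $n=1$: for $\Sigma=\operatorname{diag}(\lambda,\lambda)$ the eigenvalues of $\Sigma+\frac{i\eta}{2}J$ are $\lambda\pm\frac{1}{2}\eta$. A minimal sketch (the value $\lambda=0.7$ and the test values of $\eta$ are arbitrary):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam_min = 0.7
Sigma = np.diag([lam_min, lam_min])          # symplectic eigenvalue 0.7

results = {}
for eta in (1.0, 1.4, 1.5):
    M = Sigma + 0.5j * eta * J               # Hermitian since J^T = -J
    results[eta] = bool(np.linalg.eigvalsh(M).min() >= -1e-12)
```

Positivity should hold exactly for $|\eta|\leq2\lambda_{\min}=1.4$.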
\begin{remark}
Setting $2\lambda_{\min}=\hslash$ and writing $\Sigma$ in the block-matrix
form $
\begin{pmatrix}
\Sigma_{xx} & \Sigma_{xp}\\
\Sigma_{px} & \Sigma_{pp}
\end{pmatrix}
$ where $\Sigma_{xx}=(\sigma_{x_{j},x_{k}})_{1\leq j,k\leq n}$, $\Sigma
_{xp}=(\sigma_{x_{j},p_{k}})_{1\leq j,k\leq n}$ and so on, one shows
\cite{goluPR} that (\ref{sigmaj}) is equivalent to the generalized uncertainty
relations (\textquotedblleft Robertson--Schr\"{o}dinger
inequalities\textquotedblright, see \cite{goluPR})
\begin{equation}
\sigma_{x_{j}}^{2}\sigma_{p_{j}}^{2}\geq\sigma_{x_{j},p_{j}}^{2}+\tfrac{1}
{4}\hbar^{2} \label{RS}
\end{equation}
where, for $1\leq j\leq n$, the $\sigma_{x_{j}}^{2}=\sigma_{x_{j},x_{j}}$,
$\sigma_{p_{j}}^{2}=\sigma_{p_{j},p_{j}}$ are viewed as variances and the
$\sigma_{x_{j},p_{j}}^{2}$ as covariances.
\end{remark}
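For $n=1$ the equivalence stated in the remark can be verified directly: the determinant of $\Sigma+\frac{i\eta}{2}J$ equals $\sigma_{x}^{2}\sigma_{p}^{2}-\sigma_{x,p}^{2}-\frac{\eta^{2}}{4}$, so positivity of the matrix matches the Robertson--Schrödinger inequality. A sketch with $\eta$ playing the role of $\hbar$ and hand-picked covariance values (all illustrative):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
eta = 1.0

def psd(sxx, spp, sxp):
    Sigma = np.array([[sxx, sxp], [sxp, spp]])
    return bool(np.linalg.eigvalsh(Sigma + 0.5j * eta * J).min() >= -1e-12)

def robertson_schroedinger(sxx, spp, sxp):
    # sigma_x^2 sigma_p^2 >= sigma_xp^2 + eta^2 / 4
    return sxx * spp >= sxp**2 + eta**2 / 4 - 1e-12

cases = [(1.0, 1.0, 0.3), (0.6, 0.5, 0.2), (0.5, 0.5, 0.0), (0.4, 0.4, 0.0)]
agree = all(psd(*c) == robertson_schroedinger(*c) for c in cases)
```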
\section{The KLM Conditions in phase space\label{Secfabio}}
\subsection{The main result}
We now consider a Gabor (or Weyl--Heisenberg) frame $\mathcal{G}(\phi
,\Lambda)$ for $L^{2}(\mathbb{R}^{n})$, with window $\phi\in L^{2}
(\mathbb{R}^{n})$ and lattice $\Lambda\subset\mathbb{R}^{2n}$ (cf.
\eqref{framedef}). Time-frequency analysis and the Wigner formalism are the
main ingredients for proving our main result.
\begin{proof}
[Proof of Theorem \ref{ThmFabio}](i) Since $\mathcal{G}(\phi,\Lambda)$ is a
Gabor frame, we can write
\begin{equation}
\label{gaborespansione}\psi=\sum_{\lambda\in\Lambda}c_{\lambda}T(\lambda)\phi
\end{equation}
for some $(c_{\lambda})\in\ell^{2}(\Lambda)$. Let us prove that
\begin{equation}
\label{aggiunta}\int_{\mathbb{R}^{2n}}a(z)W_{\eta}\psi(z)dz=\sum
\nolimits_{\lambda,\mu\in\Lambda}c_{\lambda}\overline{c_{\mu}}e^{-\frac
{i}{2\eta}\sigma(\lambda,\mu)}a_{\lambda,\mu}.
\end{equation}
In view of the sesquilinearity of the cross-Wigner transform and its
continuity as a map $L^{2}(\mathbb{R}^{n})\times L^{2}(\mathbb{R}^{n})\to
L^{2}(\mathbb{R}^{2n})$ we have
\[
W_{\eta}\left(
{\textstyle\sum\nolimits_{\lambda\in\Lambda}}
c_{\lambda}T(\lambda)\phi\right) =
{\textstyle\sum\nolimits_{\lambda,\mu\in\Lambda}}
c_{\lambda}\overline{c_{\mu}}W_{\eta}(T(\lambda)\phi,T(\mu)\phi).
\]
Using the relation (formula (9.23) in \cite{Birkbis})
\[
W_{\eta}(T(\lambda)\phi,T(\mu)\phi)=e^{-\frac{i}{2\eta}\sigma(\lambda,\mu
)}e^{-\frac{i}{\eta}\sigma(z,\lambda-\mu)}W_{\eta}\phi(z-\tfrac{1}{2}
(\lambda+\mu))
\]
we obtain
\begin{align*}
\int_{\mathbb{R}^{2n}} & a(z)W_{\eta}\psi(z)dz\\
& =\sum\nolimits_{\lambda,\mu\in\Lambda}c_{\lambda}\overline{c_{\mu}}e^{-\frac{i}{2\eta}\sigma(\lambda,\mu)}
\int_{\mathbb{R}^{2n}}a(z)e^{-\frac{i}{\eta}\sigma(z,\lambda-\mu)}W_{\eta}
\phi(z-\tfrac{1}{2}(\lambda+\mu))dz\\
& =\sum\nolimits_{\lambda,\mu\in\Lambda}c_{\lambda}\overline{c_{\mu}
}e^{-\frac{i}{2\eta}\sigma(\lambda,\mu)}a_{\lambda,\mu}.
\end{align*}
Suppose now $\widehat{A}_{\eta}\geq0$, that is
\begin{equation}
\int_{\mathbb{R}^{2n}}a(z)W_{\eta}\psi(z)dz\geq0 \label{apos1}
\end{equation}
for every $\psi\in L^{2}(\mathbb{R}^{n})$. For any given sequence
$(c_{\lambda})_{\lambda\in\Lambda}$ with $c_{\lambda}=0$ for $|\lambda|>N$ we
take $\psi$ as in \eqref{gaborespansione} and apply \eqref{aggiunta}; we
obtain that the finite matrix in \eqref{lm1} is positive semidefinite.
Assume conversely that the matrix in \eqref{lm1} is positive semidefinite for
every $N$; then the right-hand side of \eqref{aggiunta} is nonnegative,
whenever the series converges (unconditionally). Now, every $\psi\in
L^{2}(\mathbb{R}^{n})$ has a Gabor expansion as in \eqref{gaborespansione},
with $c_{\lambda}=(\psi|T(\lambda)\gamma)_{L^{2}}$ in $\ell^{2}(\Lambda)$, for
some dual window $\gamma\in L^{2}(\mathbb{R}^{n})$. Hence from
\eqref{aggiunta} and \eqref{apos1} we deduce $\widehat{A}\geq0$.
(ii) The desired result follows from the following calculation: we have
\[
M^{\prime}_{\lambda,\mu}=W_{\eta}(a,(W_{\eta}\phi)^{\vee})(\tfrac{1}
{4}(\lambda+\mu),\tfrac{1}{2}J(\mu-\lambda))
\]
that is, by definition of the Wigner transform,
\begin{multline*}
M^{\prime}_{\lambda,\mu}=\left( \tfrac{1}{2\pi\eta}\right) ^{2n}
\int_{\mathbb{R}^{2n}}e^{-\frac{i}{2\eta}\sigma(\mu-\lambda,u)}a(\tfrac{1}
{4}(\lambda+\mu)+\tfrac{1}{2}u)\\
\times W_{\eta}\phi(-\tfrac{1}{4}(\lambda+\mu)+\tfrac{1}{2}u)du;
\end{multline*}
setting $z=\tfrac{1}{4}(\lambda+\mu)+\tfrac{1}{2}u$ we have $u=2z-\tfrac{1}
{2}(\lambda+\mu)$ and hence
\[
M^{\prime}_{\lambda,\mu}=\left( \tfrac{1}{2\pi\eta}\right) ^{2n}2^{n}
\int_{\mathbb{R}^{2n}}e^{-\frac{i}{2\eta}\sigma(\mu-\lambda,2z-\tfrac{1}
{2}(\lambda+\mu))}a(z)W_{\eta}\phi(z-\tfrac{1}{2}(\lambda+\mu))dz.
\]
Using the bilinearity and antisymmetry of the symplectic form $\sigma$ we have
\[
\sigma(\mu-\lambda,2z-\tfrac{1}{2}(\lambda+\mu))=2\sigma(z,\lambda-\mu
)+\sigma(\lambda,\mu)
\]
so that
\[
M^{\prime}_{\lambda,\mu}=\left( \tfrac{1}{2\pi\eta}\right) ^{2n}
2^{n}e^{-\frac{i}{2\eta}\sigma(\lambda,\mu)}\int_{\mathbb{R}^{2n}}e^{-\frac
{i}{\eta}\sigma(z,\lambda-\mu)}a(z)W_{\eta}\phi(z-\tfrac{1}{2}(\lambda
+\mu))dz
\]
that is $M^{\prime}_{\lambda,\mu}=2^{n}(2\pi\eta)^{-2n}M_{\lambda,\mu}$, hence
our claim.
\end{proof}
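The symplectic-form identity used in part (ii), $\sigma(\mu-\lambda,2z-\tfrac{1}{2}(\lambda+\mu))=2\sigma(z,\lambda-\mu)+\sigma(\lambda,\mu)$, follows from bilinearity and antisymmetry alone; a quick numerical confirmation with random vectors (convention $\sigma(z,w)=Jz\cdot w$, seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
Zn = np.zeros((n, n))
J = np.block([[Zn, np.eye(n)], [-np.eye(n), Zn]])

def sigma(z, w):
    # standard symplectic form sigma(z, w) = J z . w
    return float(J @ z @ w)

z = rng.normal(size=2 * n)
lam = rng.normal(size=2 * n)
mu = rng.normal(size=2 * n)

lhs = sigma(mu - lam, 2 * z - 0.5 * (lam + mu))
rhs = 2 * sigma(z, lam - mu) + sigma(lam, mu)
```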
\begin{remark}
Let us observe that Theorem \ref{ThmFabio} extends to other classes of
symbols, essentially with the same proof. For example the results hold for
$a\in M^{\infty,1}(\mathbb{R}^{2n})$ (the Sj\"{o}strand class) if the window
$\phi$ belongs to $M^{1}(\mathbb{R}^{n})$. Other choices are certainly possible.
\end{remark}
\subsection{The connection with the KLM conditions}
In what follows we prove Theorem \ref{Theorem2}, which shows that the KLM
conditions can be recaptured by an averaging procedure from the conditions in
Theorem \ref{ThmFabio}.
\begin{proof}
[Proof of Theorem \ref{Theorem2}]Let us first observe that we can write
\[
M_{\lambda,\mu}^{(KLM)}=(2\pi\eta)^{-n}e^{-\frac{i}{2\eta}\sigma(\lambda,\mu
)}V_{\Phi}a(\tfrac{1}{2}(\lambda+\mu),J(\mu-\lambda))
\]
where $\Phi(z)=1$ for all $z\in\mathbb{R}^{2n}$. Now we have
\[
W\phi_{\nu}(z)=\left( \tfrac{1}{\pi\eta}\right) ^{n}e^{-\frac{1}{\eta}
|z-\nu|^{2}}
\]
and therefore
\[
\int_{\mathbb{R}^{2n}}W\phi_{\nu}(z)\,d\nu=\left( \tfrac{1}{\pi\eta}\right)
^{n}\int_{\mathbb{R}^{2n}}e^{-\frac{1}{\eta}|z-\nu|^{2}}d\nu=1=\Phi
(z)\quad\forall z\in\mathbb{R}^{2n}.
\]
Hence \eqref{formula3} follows by exchanging the integral with respect to
$\nu$ in \eqref{formula3} with the integral in the definition of the STFT in
\eqref{formula2}. Fubini's theorem can be applied because the function
\[
a(z)W\phi_{\nu}(z-\zeta)
\]
belongs to $L^{1}(\mathbb{R}^{2n}\times\mathbb{R}^{2n})$ with respect to
$z,\nu$, for every fixed $\zeta\in\mathbb{R}^{2n}$.
\end{proof}
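The normalization $\int_{\mathbb{R}^{2n}}W\phi_{\nu}(z)\,d\nu=1$ used above reduces to Gaussian integrals; for $n=1$ the $\nu$-integral factorizes over the two phase-space coordinates. A sketch checking this by quadrature (grid and $\eta$ value arbitrary):

```python
import numpy as np

eta = 0.8
t = np.linspace(-12.0, 12.0, 2401)
dt = t[1] - t[0]
one_dim = np.exp(-t**2 / eta).sum() * dt      # ~ sqrt(pi * eta)
# for n = 1 the nu-integral over R^2 is the square of the 1D factor
total = one_dim**2 / (np.pi * eta)            # (pi*eta)^{-n} * Int e^{-|z-nu|^2/eta} d(nu)
```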
\begin{corollary}
\label{coro1} Suppose $a\in L^{1}(\mathbb{R}^{2n}) \cap L^{2}(\mathbb{R}^{2n})$.
With the notation in Theorem \ref{Theorem2}, suppose that $\mathcal{G}
(\phi_{0},\Lambda)$ is a Gabor frame for $L^{2}(\mathbb{R}^{n})$. If the
matrix $(M_{\lambda,\mu}^{\phi_{0}})_{\lambda,\mu\in\Lambda,|\lambda
|,|\mu|\leq N}$ is positive semidefinite for every $N$, then so is the matrix
$(M^{(KLM)}_{\lambda,\mu})_{\lambda,\mu\in\Lambda,|\lambda|,|\mu|\leq N}$.
\end{corollary}
\begin{proof}
Observing that $\mathcal{G}(\phi_{\nu},\Lambda)$ is also a\ Gabor frame for
every $\nu\in\mathbb{R}^{2n}$, it follows from the assumptions and Theorem
\ref{ThmFabio} that the matrices $(M_{\lambda,\mu}^{\phi_{\nu}})_{\lambda
,\mu\in\Lambda,|\lambda|,|\mu|\leq N}$ are\ positive semidefinite for all
$\nu\in\mathbb{R}^{2n}$. The result therefore follows from \eqref{formula3}.
\end{proof}
\begin{corollary}
\label{coro2} Suppose $a\in L^{1}(\mathbb{R}^{2n}) \cap L^{2}(\mathbb{R}^{2n})$
and $\widehat{A}_{\eta}=\operatorname*{Op}_{\eta}^{\mathrm{W}}(a)\geq0$. Then,
with the notation in Theorem \ref{Theorem2}, for every finite subset
$S\subset\mathbb{R}^{2n}$ the matrix $(M^{(KLM)}_{\lambda,\mu})_{\lambda
,\mu\in S}$ is positive semidefinite (that is, the KLM conditions hold).
\end{corollary}
\begin{proof}
Since $\mathcal{G}(\phi_{0},\Lambda)$ is a frame for $L^{2}(\mathbb{R}^{n})$
for every sufficiently dense lattice $\Lambda$, as a consequence of Corollary
\ref{coro1} the matrix $(M^{(KLM)}_{\lambda,\mu})_{\lambda,\mu\in
\Lambda,|\lambda|,|\mu|\leq N}$ is positive semidefinite for all such lattices
$\Lambda$ and every integer $N$. By restricting the matrix to subspaces we see
that the submatrices $(M^{(KLM)}_{\lambda,\mu})_{\lambda,\mu\in S}$ are
positive semidefinite for every finite subset $S\subset\Lambda$. Since $a\in
L^{1}(\mathbb{R}^{2n})$ the symplectic Fourier transform $a_{\sigma,\eta}$ is
continuous and therefore the same holds for every finite subset $S\subset
\mathbb{R}^{2n}$.
\end{proof}
\subsection{Almost positivity}
We now address the following question: suppose that $\mathcal{G}(\phi
,\Lambda)$ is a Gabor frame for $L^{2}(\mathbb{R}^{n})$ and assume that the
matrix $(M_{\lambda,\mu})_{\lambda,\mu\in\Lambda,|\lambda|,|\mu|\leq N}$ in
\eqref{lm1},\eqref{amunu} is positive semidefinite \textit{for a fixed $N$}.
What can we say about the positivity of the operator $\widehat{A}_{\eta}$?
Under suitable decay conditions on the symbol $a$ it turns out that
$\widehat{A}_{\eta}$ is \textquotedblleft almost positive\textquotedblright
\ in the following sense.
Let $\mathcal{G}(\phi,\Lambda)$ be a Gabor frame in $L^{2}(\mathbb{R}^{n})$,
with $\phi\in\mathcal{S}(\mathbb{R}^{n})$.
\begin{theorem}
Let $a\in M_{v_{s}}^{\infty,1}(\mathbb{R}^{2n})$ be real valued and $s\geq0$;
we use the notation $v_{s}(\zeta)=\langle\zeta\rangle^{s}$ for $\zeta
\in\mathbb{R}^{4n}$. Suppose that the matrix
\[
(M_{\lambda,\mu})_{\lambda,\mu\in\Lambda,|\lambda|,|\mu|\leq N}
\]
in \eqref{lm1} is positive semidefinite for some integer $N$. Then there
exists a constant $C>0$ independent of $N$ such that
\[
(\widehat{A}_{\eta}\psi|\psi)_{L^{2}}\geq-CN^{-s}||\psi||_{L^{2}}^{2}
\]
for all $\psi\in L^{2}(\mathbb{R}^{n})$.
\end{theorem}
\begin{proof}
Let $\psi\in L^{2}(\mathbb{R}^{n})$, and write its Gabor frame expansion as
\[
\psi=\sum\nolimits_{\lambda\in\Lambda,|\lambda|\leq N}c_{\lambda}
T(\lambda)\phi+\sum\nolimits_{\lambda\in\Lambda,|\lambda|>N}c_{\lambda
}T(\lambda)\phi;
\]
denoting the sums in the right-hand side by, respectively, $\psi^{\prime}$ and
$\psi^{\prime\prime}$ we get
\[
(\widehat{A}_{\eta}\psi|\psi)_{L^{2}}=(\widehat{A}_{\eta}\psi^{\prime}
|\psi^{\prime})_{L^{2}}+(\widehat{A}_{\eta}\psi^{\prime}|\psi^{\prime\prime
})_{L^{2}}+(\widehat{A}_{\eta}\psi^{\prime\prime}|\psi^{\prime})_{L^{2}
}+(\widehat{A}_{\eta}\psi^{\prime\prime}|\psi^{\prime\prime})_{L^{2}}.
\]
We have, by \eqref{aggiunta} and the positivity assumption
\[
(\widehat{A}_{\eta}\psi^{\prime}|\psi^{\prime})_{L^{2}}=\sum\nolimits_{\lambda
,\mu\in\Lambda,|\lambda|,|\mu|\leq N}c_{\lambda}\overline{c_{\mu}}
M_{\lambda,\mu}\geq0
\]
hence it is sufficient to show that
\begin{equation}
|(\widehat{A}_{\eta}\psi^{\prime}|\psi^{\prime\prime})_{L^{2}}|\leq
CN^{-s}||\psi||_{L^{2}}^{2}\label{5}
\end{equation}
and similar inequalities for the other terms. Now
\begin{equation}
|(\widehat{A}_{\eta}\psi^{\prime}|\psi^{\prime\prime})_{L^{2}}|=|(\psi
^{\prime}|\widehat{A}_{\eta}\psi^{\prime\prime})_{L^{2}}|\leq||\psi^{\prime
}||_{L^{2}}||\widehat{A}_{\eta}\psi^{\prime\prime}||_{L^{2}}.\label{6}
\end{equation}
Observe that the function $\psi^{\prime\prime}$ has a Gabor expansion with
coefficients $c_{\lambda}=0$ for $|\lambda|\leq N$. By the frame property and
the same computation as in the proof of \eqref{aggiunta} we have
\begin{align*}
||\widehat{A}_{\eta}\psi^{\prime\prime}||_{L^{2}} & \asymp\Vert
(\widehat{A}_{\eta}\psi^{\prime\prime}|T(\mu)\phi)_{L^{2}}\Vert_{\ell^{2}}\\
& =\Vert{\textstyle\sum_{\lambda\in\Lambda,|\lambda|>N}}M_{\lambda,\mu
}c_{\lambda}\Vert_{\ell^{2}}.
\end{align*}
Observe that \eqref{amunu} can be rewritten in terms of the short-time
Fourier transform (STFT) on $\mathbb{R}^{2n}$ as
\begin{equation}
a_{\lambda,\mu}=V_{W_{\eta}\phi}a(\tfrac{1}{2}(\lambda+\mu),J(\mu-\lambda)).
\label{astft}
\end{equation}
Now, from \eqref{lm1} and \eqref{astft} we have
\[
|M_{\lambda,\mu}|=|V_{W_{\eta}\phi}a(\tfrac{1}{2}(\lambda+\mu),J(\mu
-\lambda))|.
\]
In view of the assumption $a\in M_{v_{s}}^{\infty,1}$ we have, by
\cite[Theorem 12.2.1]{Gro}, that
\[
\sum_{\nu\in\Lambda^{\prime}}\sup_{z\in\mathbb{R}^{2n}}(1+|z|+|\nu
|)^{s}|V_{W_{\eta}\phi}a(z,\nu)|<\infty
\]
for every lattice $\Lambda^{\prime}\subset\mathbb{R}^{2n}$. Now we apply this
formula with $\Lambda^{\prime}=J(\Lambda)$; using
\[
1+|\lambda|+|\mu|\asymp1+\frac{1}{2}|\lambda+\mu|+|J(\mu-\lambda)|
\]
we obtain, for $|\lambda|>N$,
\begin{align*}
N^{s}|M_{\lambda,\mu}| & \leq C\big(1+\frac{1}{2}|\lambda+\mu|+|J(\mu
-\lambda)|\big)^{s}|V_{W_{\eta}\phi}a(\tfrac{1}{2}(\lambda+\mu),J(\mu
-\lambda))|\\
& \leq H(\mu-\lambda)
\end{align*}
for some $H\in\ell^{1}(\Lambda)$. By Schur's test we can continue the above
estimate as
\[
\Vert{\textstyle\sum_{\lambda\in\Lambda,|\lambda|>N}}M_{\lambda,\mu}
c_{\lambda}\Vert_{\ell^{2}}\leq CN^{-s}\Vert c_{\lambda}\Vert_{\ell^{2}},
\]
which combined with (\ref{6}) gives (\ref{5}), because
\[
\Vert c_{\lambda}\Vert_{\ell^{2}}\asymp||\psi^{\prime\prime}||_{L^{2}}\leq
C^{\prime}||\psi||_{L^{2}}
\]
and $||\psi^{\prime}||_{L^{2}}\leq C^{\prime\prime}||\psi||_{L^{2}}$.
\end{proof}
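The final step rests on Schur's test: if $|M_{\lambda,\mu}|\leq H(\mu-\lambda)$ with $H\in\ell^{1}$, the operator norm of $(M_{\lambda,\mu})$ on $\ell^{2}$ is at most $\Vert H\Vert_{\ell^{1}}$. A numerical sketch with a convolution-dominated matrix on a one-dimensional index set (the envelope $H(k)=(1+|k|)^{-3}$ and the sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 41
k = np.arange(N)
diff = k[:, None] - k[None, :]

H = lambda m: 1.0 / (1.0 + np.abs(m))**3      # summable envelope H
env = H(diff)
phases = np.exp(2j * np.pi * rng.random((N, N)))
M = env * phases                               # |M_jk| = H(j - k)

op_norm = np.linalg.norm(M, 2)                 # operator norm on l^2
ell1 = sum(H(m) for m in range(-200, 201))     # ~ ||H||_{l^1}
```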
\section*{Acknowledgements}
M. de Gosson has been funded by the Grant P27773 of the Austrian Research
Foundation FWF. E. Cordero and F. Nicola were partially supported by the
Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro
Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
TITLE: Are zonotopes determined by their edge-graph?
QUESTION [2 upvotes]: General polytopes are not determined by their edge-graph (up to combinatorial equivalence). But I came across the statement that zonotopes are determined in this way.
Question: Is this true? And where is this proven?
I suppose that this is somehow proven in the language of oriented matroids and their realizations, but I am not familiar with their literature.
REPLY [5 votes]: Yes, the face lattice of a zonotope is determined by its graph. This is Theorem 6.14 of Bjorner, A., Edelman, P. H., and Ziegler, G. M. (1990). Hyperplane arrangements with a lattice of regions. Discrete Comput. Geom., 5(3):263–288.
The result uses the relation between hyperplane arrangements and zonotopes.
The other result on the reconstruction of zonotopes that I am aware of is that cubical zonotopes are reconstructed from their dual graph. This is in the paper E. Babson, L. Finschi, and K. Fukuda, Cocircuit graphs and efficient orientation reconstruction in oriented matroids, Eur. J. Comb. 22, no. 5 (2001), pp. 587–600.
Hope this helps.
Regards,
Guillermo
TITLE: Let $|G|=pqr$ s.t $p<q<r$ and $q\nmid r-1$, $p,q,r$ primes then $G$ has normal subgroups of order $q,r$ or $p$
QUESTION [2 upvotes]: Let $|G|=pqr$ s.t $p<q<r$ and $q\nmid r-1$, $p,q,r$ primes then $G$ has normal subgroups of order $q,r$ or $p$
We know that a group of such order must have a normal Sylow subgroup for some prime. Call it $H$ and assume $|H|\neq p$, and let $K$ be a Sylow subgroup for the remaining prime other than $p$. Then $HK$ is a subgroup of order $qr$, which is cyclic because $q\nmid r-1$, and it is normal in $G$ since its index is $p$, the smallest prime dividing $|G|$. As $H$ and $K$ are characteristic in the cyclic group $HK$ (being its unique subgroups of their orders), both are normal in $G$. Is this correct? Can this result be strengthened somehow, by showing additional normal subgroups or loosening the divisibility condition?
REPLY [1 votes]: The groups $S_3\times C_5$ and $C_3\times D_{10}$ show that you have the maximum number of normal subgroups. The group $C_2\times F_{21}$ (the group $C_7\rtimes C_3$, the normalizer of a Sylow $7$-subgroup of $A_7$) shows you cannot relax the non-divisibility condition. The group $F_{42}=C_7\rtimes C_6$ (the normalizer in $S_7$) shows that relaxing the divisibility condition does not allow you to still have two normal subgroups of prime order.
The minimum number of (proper, non-trivial) normal subgroups is two, because $G$ is soluble.
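As a small computational sanity check of the standard counting step behind "a group of order $pqr$ has a normal Sylow subgroup" (a hypothetical helper, not part of the original answer), take $|G|=30=2\cdot3\cdot5$: if no Sylow $3$- or $5$-subgroup were normal, the elements of order $3$ and $5$ alone would exceed $|G|$.

```python
# counting argument for |G| = 30 = 2*3*5
def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

p, q, r = 2, 3, 5
order = p * q * r

n5 = [d for d in divisors(order) if d % r == 1]   # candidate Sylow 5 counts
n3 = [d for d in divisors(order) if d % q == 1]   # candidate Sylow 3 counts

# assuming neither count is 1: 6 Sylow 5-subgroups give 6*(5-1) elements of
# order 5, and 10 Sylow 3-subgroups give 10*(3-1) elements of order 3
overcount = 6 * (r - 1) + 10 * (q - 1)
```

Since `overcount` exceeds $30$, one of the counts must be $1$, i.e. some Sylow subgroup is normal.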
\typeout{TCILATEX Macros for Scientific Word and Scientific WorkPlace 5.5 <06 Oct 2005>.}
\typeout{NOTICE: This macro file is NOT proprietary and may be
freely copied and distributed.}
\makeatletter
\ifx\pdfoutput\relax\let\pdfoutput=\undefined\fi
\newcount\msipdfoutput
\ifx\pdfoutput\undefined
\else
\ifcase\pdfoutput
\else
\msipdfoutput=1
\ifx\paperwidth\undefined
\else
\ifdim\paperheight=0pt\relax
\else
\pdfpageheight\paperheight
\fi
\ifdim\paperwidth=0pt\relax
\else
\pdfpagewidth\paperwidth
\fi
\fi
\fi
\fi
\def\FMTeXButton#1{#1}
\newcount\@hour\newcount\@minute\chardef\@x10\chardef\@xv60
\def\tcitime{
\def\@time{
\@minute\time\@hour\@minute\divide\@hour\@xv
\ifnum\@hour<\@x 0\fi\the\@hour:
\multiply\@hour\@xv\advance\@minute-\@hour
\ifnum\@minute<\@x 0\fi\the\@minute
}}
\def\x@hyperref#1#2#3{
\catcode`\~ = 12
\catcode`\$ = 12
\catcode`\_ = 12
\catcode`\# = 12
\catcode`\& = 12
\catcode`\% = 12
\y@hyperref{#1}{#2}{#3}
}
\def\y@hyperref#1#2#3#4{
#2\ref{#4}#3
\catcode`\~ = 13
\catcode`\$ = 3
\catcode`\_ = 8
\catcode`\# = 6
\catcode`\& = 4
\catcode`\% = 14
}
\@ifundefined{hyperref}{\let\hyperref\x@hyperref}{}
\@ifundefined{msihyperref}{\let\msihyperref\x@hyperref}{}
\@ifundefined{qExtProgCall}{\def\qExtProgCall#1#2#3#4#5#6{\relax}}{}
\def\FILENAME#1{#1}
\def\QCTOpt[#1]#2{
\def\QCTOptB{#1}
\def\QCTOptA{#2}
}
\def\QCTNOpt#1{
\def\QCTOptA{#1}
\let\QCTOptB\empty
}
\def\Qct{
\@ifnextchar[{
\QCTOpt}{\QCTNOpt}
}
\def\QCBOpt[#1]#2{
\def\QCBOptB{#1}
\def\QCBOptA{#2}
}
\def\QCBNOpt#1{
\def\QCBOptA{#1}
\let\QCBOptB\empty
}
\def\Qcb{
\@ifnextchar[{
\QCBOpt}{\QCBNOpt}
}
\def\PrepCapArgs{
\ifx\QCBOptA\empty
\ifx\QCTOptA\empty
{}
\else
\ifx\QCTOptB\empty
{\QCTOptA}
\else
[\QCTOptB]{\QCTOptA}
\fi
\fi
\else
\ifx\QCBOptA\empty
{}
\else
\ifx\QCBOptB\empty
{\QCBOptA}
\else
[\QCBOptB]{\QCBOptA}
\fi
\fi
\fi
}
\newcount\GRAPHICSTYPE
\GRAPHICSTYPE=\z@
\def\GRAPHICSPS#1{
\ifcase\GRAPHICSTYPE
\special{ps: #1}
\or
\special{language "PS", include "#1"}
\fi
}
\def\GRAPHICSHP#1{\special{include #1}}
\def\graffile#1#2#3#4{
\bgroup
\@inlabelfalse
\leavevmode
\@ifundefined{bbl@deactivate}{\def~{\string~}}{\activesoff}
\raise -#4 \BOXTHEFRAME{
\hbox to #2{\raise #3\hbox to #2{\null #1\hfil}}}
\egroup
}
\def\draftbox#1#2#3#4{
\leavevmode\raise -#4 \hbox{
\frame{\rlap{\protect\tiny #1}\hbox to #2
{\vrule height#3 width\z@ depth\z@\hfil}
}
}
}
\newcount\@msidraft
\@msidraft=\z@
\let\nographics=\@msidraft
\newif\ifwasdraft
\wasdraftfalse
\def\GRAPHIC#1#2#3#4#5{
\ifnum\@msidraft=\@ne\draftbox{#2}{#3}{#4}{#5}
\else\graffile{#1}{#3}{#4}{#5}
\fi
}
\def\addtoLaTeXparams#1{
\edef\LaTeXparams{\LaTeXparams #1}}
\newif\ifBoxFrame \BoxFramefalse
\newif\ifOverFrame \OverFramefalse
\newif\ifUnderFrame \UnderFramefalse
\def\BOXTHEFRAME#1{
\hbox{
\ifBoxFrame
\frame{#1}
\else
{#1}
\fi
}
}
\def\doFRAMEparams#1{\BoxFramefalse\OverFramefalse\UnderFramefalse\readFRAMEparams#1\end}
\def\readFRAMEparams#1{
\ifx#1\end
\let\next=\relax
\else
\ifx#1i\dispkind=\z@\fi
\ifx#1d\dispkind=\@ne\fi
\ifx#1f\dispkind=\tw@\fi
\ifx#1t\addtoLaTeXparams{t}\fi
\ifx#1b\addtoLaTeXparams{b}\fi
\ifx#1p\addtoLaTeXparams{p}\fi
\ifx#1h\addtoLaTeXparams{h}\fi
\ifx#1X\BoxFrametrue\fi
\ifx#1O\OverFrametrue\fi
\ifx#1U\UnderFrametrue\fi
\ifx#1w
\ifnum\@msidraft=1\wasdrafttrue\else\wasdraftfalse\fi
\@msidraft=\@ne
\fi
\let\next=\readFRAMEparams
\fi
\next
}
\def\IFRAME#1#2#3#4#5#6{
\bgroup
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
#6
\parindent=0pt
\leftskip=0pt
\rightskip=0pt
\setbox0=\hbox{\QCBOptA}
\@tempdima=#1\relax
\ifOverFrame
\typeout{This is not implemented yet}
\show\HELP
\else
\ifdim\wd0>\@tempdima
\advance\@tempdima by \@tempdima
\ifdim\wd0 >\@tempdima
\setbox1 =\vbox{
\unskip\hbox to \@tempdima{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}
\unskip\hbox to \@tempdima{\parbox[b]{\@tempdima}{\QCBOptA}}
}
\wd1=\@tempdima
\else
\textwidth=\wd0
\setbox1 =\vbox{
\noindent\hbox to \wd0{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\%
\noindent\hbox{\QCBOptA}
}
\wd1=\wd0
\fi
\else
\ifdim\wd0>0pt
\hsize=\@tempdima
\setbox1=\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
\break
\unskip\hbox to \@tempdima{\hfill \QCBOptA\hfill}
}
\wd1=\@tempdima
\else
\hsize=\@tempdima
\setbox1=\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
}
\wd1=\@tempdima
\fi
\fi
\@tempdimb=\ht1
\advance\@tempdimb by -#2
\advance\@tempdimb by #3
\leavevmode
\raise -\@tempdimb \hbox{\box1}
\fi
\egroup
}
\def\DFRAME#1#2#3#4#5{
\vspace\topsep
\hfil\break
\bgroup
\leftskip\@flushglue
\rightskip\@flushglue
\parindent\z@
\parfillskip\z@skip
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
\vbox\bgroup
\ifOverFrame
#5\QCTOptA\par
\fi
\GRAPHIC{#4}{#3}{#1}{#2}{\z@}
\ifUnderFrame
\break#5\QCBOptA
\fi
\egroup
\egroup
\vspace\topsep
\break
}
\def\FFRAME#1#2#3#4#5#6#7{
\@ifundefined{floatstyle}
{
\begin{figure}[#1]
}
{
\ifx#1h
\begin{figure}[H]
\else
\begin{figure}[#1]
\fi
}
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
\ifOverFrame
#4
\ifx\QCTOptA\empty
\else
\ifx\QCTOptB\empty
\caption{\QCTOptA}
\else
\caption[\QCTOptB]{\QCTOptA}
\fi
\fi
\ifUnderFrame\else
\label{#5}
\fi
\else
\UnderFrametrue
\fi
\begin{center}\GRAPHIC{#7}{#6}{#2}{#3}{\z@}\end{center}
\ifUnderFrame
#4
\ifx\QCBOptA\empty
\caption{}
\else
\ifx\QCBOptB\empty
\caption{\QCBOptA}
\else
\caption[\QCBOptB]{\QCBOptA}
\fi
\fi
\label{#5}
\fi
\end{figure}
}
\newcount\dispkind
\def\makeactives{
\catcode`\"=\active
\catcode`\;=\active
\catcode`\:=\active
\catcode`\'=\active
\catcode`\~=\active
}
\bgroup
\makeactives
\gdef\activesoff{
\def"{\string"}
\def;{\string;}
\def:{\string:}
\def'{\string'}
\def~{\string~}
}
\egroup
\def\FRAME#1#2#3#4#5#6#7#8{
\bgroup
\ifnum\@msidraft=\@ne
\wasdrafttrue
\else
\wasdraftfalse
\fi
\def\LaTeXparams{}
\dispkind=\z@
\def\LaTeXparams{}
\doFRAMEparams{#1}
\ifnum\dispkind=\z@\IFRAME{#2}{#3}{#4}{#7}{#8}{#5}\else
\ifnum\dispkind=\@ne\DFRAME{#2}{#3}{#7}{#8}{#5}\else
\ifnum\dispkind=\tw@
\edef\@tempa{\noexpand\FFRAME{\LaTeXparams}}
\@tempa{#2}{#3}{#5}{#6}{#7}{#8}
\fi
\fi
\fi
\ifwasdraft\@msidraft=1\else\@msidraft=0\fi{}
\egroup
}
\def\TEXUX#1{"texux"}
\def\BF#1{{\bf {#1}}}
\def\NEG#1{\leavevmode\hbox{\rlap{\thinspace/}{$#1$}}}
\def\limfunc#1{\mathop{\rm #1}}
\def\func#1{\mathop{\rm #1}\nolimits}
\def\unit#1{\mathord{\thinspace\rm #1}}
\long\def\QQQ#1#2{
\long\expandafter\def\csname#1\endcsname{#2}}
\@ifundefined{QTP}{\def\QTP#1{}}{}
\@ifundefined{QEXCLUDE}{\def\QEXCLUDE#1{}}{}
\@ifundefined{Qlb}{\def\Qlb#1{#1}}{}
\@ifundefined{Qlt}{\def\Qlt#1{#1}}{}
\def\QWE{}
\long\def\QQA#1#2{}
\def\QTR#1#2{{\csname#1\endcsname {#2}}}
\let\QQQuline\uline
\let\QQQsout\sout
\let\QQQuuline\uuline
\let\QQQuwave\uwave
\let\QQQxout\xout
\long\def\TeXButton#1#2{#2}
\long\def\QSubDoc#1#2{#2}
\def\EXPAND#1[#2]#3{}
\def\NOEXPAND#1[#2]#3{}
\def\PROTECTED{}
\def\LaTeXparent#1{}
\def\ChildStyles#1{}
\def\ChildDefaults#1{}
\def\QTagDef#1#2#3{}
\@ifundefined{correctchoice}{\def\correctchoice{\relax}}{}
\@ifundefined{HTML}{\def\HTML#1{\relax}}{}
\@ifundefined{TCIIcon}{\def\TCIIcon#1#2#3#4{\relax}}{}
\if@compatibility
\typeout{Not defining UNICODE U or CustomNote commands for LaTeX 2.09.}
\else
\providecommand{\UNICODE}[2][]{\protect\rule{.1in}{.1in}}
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
\providecommand{\CustomNote}[3][]{\marginpar{#3}}
\fi
\@ifundefined{lambdabar}{
\def\lambdabar{\errmessage{You have used the lambdabar symbol.
This is available for typesetting only in RevTeX styles.}}
}{}
\@ifundefined{StyleEditBeginDoc}{\def\StyleEditBeginDoc{\relax}}{}
\def\QQfnmark#1{\footnotemark}
\def\QQfntext#1#2{\addtocounter{footnote}{#1}\footnotetext{#2}}
\@ifundefined{TCIMAKEINDEX}{}{\makeindex}
\@ifundefined{abstract}{
\def\abstract{
\if@twocolumn
\section*{Abstract (Not appropriate in this style!)}
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}
\quotation
\fi
}
}{
}
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}
\@ifundefined{maketitle}{\def\maketitle#1{}}{}
\@ifundefined{affiliation}{\def\affiliation#1{}}{}
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}
\@ifundefined{newfield}{\def\newfield#1#2{}}{}
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }
\newcount\c@chapter}{}
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}
\@ifundefined{subsection}{\def\subsection#1
{\par(Subsection head:)#1\par }}{}
\@ifundefined{subsubsection}{\def\subsubsection#1
{\par(Subsubsection head:)#1\par }}{}
\@ifundefined{paragraph}{\def\paragraph#1
{\par(Subsubsubsection head:)#1\par }}{}
\@ifundefined{subparagraph}{\def\subparagraph#1
{\par(Subsubsubsubsection head:)#1\par }}{}
\@ifundefined{therefore}{\def\therefore{}}{}
\@ifundefined{backepsilon}{\def\backepsilon{}}{}
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}
\@ifundefined{registered}{
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\text{R}$}\hfil\crcr
\mathhexbox20D}}}}{}
\@ifundefined{Eth}{\def\Eth{}}{}
\@ifundefined{eth}{\def\eth{}}{}
\@ifundefined{Thorn}{\def\Thorn{}}{}
\@ifundefined{thorn}{\def\thorn{}}{}
\def\TEXTsymbol#1{\mbox{$#1$}}
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}
\newdimen\theight
\@ifundefined{Column}{\def\Column{
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{
\rightline{\rlap{\box\z@}}
\vss
}
}
}}{}
\@ifundefined{qed}{\def\qed{
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}
}}{}
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap c/}}}{}
\@ifundefined{tciLaplace}{\def\tciLaplace{\ensuremath{\mathcal{L}}}}{}
\@ifundefined{tciFourier}{\def\tciFourier{\ensuremath{\mathcal{F}}}}{}
\@ifundefined{textcurrency}{\def\textcurrency{\hbox{\rm\rlap xo}}}{}
\@ifundefined{texteuro}{\def\texteuro{\hbox{\rm\rlap C=}}}{}
\@ifundefined{euro}{\def\euro{\hbox{\rm\rlap C=}}}{}
\@ifundefined{textfranc}{\def\textfranc{\hbox{\rm\rlap-F}}}{}
\@ifundefined{textlira}{\def\textlira{\hbox{\rm\rlap L=}}}{}
\@ifundefined{textpeseta}{\def\textpeseta{\hbox{\rm P\negthinspace s}}}{}
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}
\@ifundefined{vvert}{\def\vvert{\Vert}}{}
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{}
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{}
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{}
\@ifundefined{note}{\def\note{$^{\dag}}}{}
\def\newfmtname{LaTeX2e}
\ifx\fmtname\newfmtname
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}
\def\beta{{\Greekmath 010C}}
\def\gamma{{\Greekmath 010D}}
\def\delta{{\Greekmath 010E}}
\def\epsilon{{\Greekmath 010F}}
\def\zeta{{\Greekmath 0110}}
\def\eta{{\Greekmath 0111}}
\def\theta{{\Greekmath 0112}}
\def\iota{{\Greekmath 0113}}
\def\kappa{{\Greekmath 0114}}
\def\lambda{{\Greekmath 0115}}
\def\mu{{\Greekmath 0116}}
\def\nu{{\Greekmath 0117}}
\def\xi{{\Greekmath 0118}}
\def\pi{{\Greekmath 0119}}
\def\rho{{\Greekmath 011A}}
\def\sigma{{\Greekmath 011B}}
\def\tau{{\Greekmath 011C}}
\def\upsilon{{\Greekmath 011D}}
\def\phi{{\Greekmath 011E}}
\def\chi{{\Greekmath 011F}}
\def\psi{{\Greekmath 0120}}
\def\omega{{\Greekmath 0121}}
\def\varepsilon{{\Greekmath 0122}}
\def\vartheta{{\Greekmath 0123}}
\def\varpi{{\Greekmath 0124}}
\def\varrho{{\Greekmath 0125}}
\def\varsigma{{\Greekmath 0126}}
\def\varphi{{\Greekmath 0127}}
\def\nabla{{\Greekmath 0272}}
\def\FindBoldGroup{
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}
}
\def\Greekmath#1#2#3#4{
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}
\else
\mathchar"#1#2#3#4
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}
\else
\mathchar"#1#2#3#4
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{
\newcounter{equationnumber}
\def\mathletters{
\addtocounter{equation}{1}
\edef\@currentlabel{\theequation}
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}
\edef\theequation{\@currentlabel\noexpand\alph{equation}}
}
\def\endmathletters{
\setcounter{equation}{\value{equationnumber}}
}
}{}
\@ifundefined{BibTeX}{
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}
\@ifundefined{AmS}
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\TCItag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{
\global\tag@true
\global\def\@taggnum{(#1)}
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}
\global\def\@currentlabel{#1}}
\def\QATOP#1#2{{#1 \atop #2}}
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}
\def\QABOVE#1#2#3{{#2 \above#1 #3}}
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\tint{\msi@int\textstyle\int}
\def\tiint{\msi@int\textstyle\iint}
\def\tiiint{\msi@int\textstyle\iiint}
\def\tiiiint{\msi@int\textstyle\iiiint}
\def\tidotsint{\msi@int\textstyle\idotsint}
\def\toint{\msi@int\textstyle\oint}
\def\tsum{\mathop{\textstyle \sum }}
\def\tprod{\mathop{\textstyle \prod }}
\def\tbigcap{\mathop{\textstyle \bigcap }}
\def\tbigwedge{\mathop{\textstyle \bigwedge }}
\def\tbigoplus{\mathop{\textstyle \bigoplus }}
\def\tbigodot{\mathop{\textstyle \bigodot }}
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}
\def\tcoprod{\mathop{\textstyle \coprod }}
\def\tbigcup{\mathop{\textstyle \bigcup }}
\def\tbigvee{\mathop{\textstyle \bigvee }}
\def\tbigotimes{\mathop{\textstyle \bigotimes }}
\def\tbiguplus{\mathop{\textstyle \biguplus }}
\newtoks\temptoksa
\newtoks\temptoksb
\newtoks\temptoksc
\def\msi@int#1#2{
\def\@temp{{#1#2\the\temptoksc_{\the\temptoksa}^{\the\temptoksb}}}
\futurelet\@nextcs
\@int
}
\def\@int{
\ifx\@nextcs\limits
\typeout{Found limits}
\temptoksc={\limits}
\let\@next\@intgobble
\else\ifx\@nextcs\nolimits
\typeout{Found nolimits}
\temptoksc={\nolimits}
\let\@next\@intgobble
\else
\typeout{Did not find limits or no limits}
\temptoksc={}
\let\@next\msi@limits
\fi\fi
\@next
}
\def\@intgobble#1{
\typeout{arg is #1}
\msi@limits
}
\def\msi@limits{
\temptoksa={}
\temptoksb={}
\@ifnextchar_{\@limitsa}{\@limitsb}
}
\def\@limitsa_#1{
\temptoksa={#1}
\@ifnextchar^{\@limitsc}{\@temp}
}
\def\@limitsb{
\@ifnextchar^{\@limitsc}{\@temp}
}
\def\@limitsc^#1{
\temptoksb={#1}
\@ifnextchar_{\@limitsd}{\@temp}
}
\def\@limitsd_#1{
\temptoksa={#1}
\@temp
}
\def\dint{\msi@int\displaystyle\int}
\def\diint{\msi@int\displaystyle\iint}
\def\diiint{\msi@int\displaystyle\iiint}
\def\diiiint{\msi@int\displaystyle\iiiint}
\def\didotsint{\msi@int\displaystyle\idotsint}
\def\doint{\msi@int\displaystyle\oint}
\def\dsum{\mathop{\displaystyle \sum }}
\def\dprod{\mathop{\displaystyle \prod }}
\def\dbigcap{\mathop{\displaystyle \bigcap }}
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}
\def\dbigodot{\mathop{\displaystyle \bigodot }}
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}
\def\dcoprod{\mathop{\displaystyle \coprod }}
\def\dbigcup{\mathop{\displaystyle \bigcup }}
\def\dbigvee{\mathop{\displaystyle \bigvee }}
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}
\def\dbiguplus{\mathop{\displaystyle \biguplus }}
\if@compatibility\else
\RequirePackage{amsmath}
\fi
\def\ExitTCILatex{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\ExitTCILatex
\else
\@ifpackageloaded{amsmath}
{\if@compatibility\message{amsmath already loaded}\fi\aftergroup\ExitTCILatex}
{}
\@ifpackageloaded{amstex}
{\if@compatibility\message{amstex already loaded}\fi\aftergroup\ExitTCILatex}
{}
\@ifpackageloaded{amsgen}
{\if@compatibility\message{amsgen already loaded}\fi\aftergroup\ExitTCILatex}
{}
\fi
\egroup
\typeout{TCILATEX defining AMS-like constructs in LaTeX 2.09 COMPATIBILITY MODE}
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}
\def\FN@{\futurelet\next}
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}
\def\ints@{\findlimits@\ints@@}
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}
\def\intic@{
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@}
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}
\def\intdots@{\mathchoice{\plaincdots@}
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}
\glb@settings}
\def\textdef@#1#2#3{\hbox{{
\everymath{#1}
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}
\def\Sb{_\multilimits@}
\def\endSb{\crcr\egroup\egroup\egroup}
\def\Sp{^\multilimits@}
\let\endSp\endSb
\newdimen\ex@
\ex@.2326ex
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\overrightarrow{\mathpalette\overrightarrow@}
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\underrightarrow{\mathpalette\underrightarrow@}
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\TCItag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{
\global\tag@true
\global\def\@taggnum{(#1)}
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}
\global\def\@currentlabel{#1}}
\@ifundefined{tag}{
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}}
}{}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}
\def\binom#1#2{{#1 \choose #2}}
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}
\makeatother
\endinput
TITLE: $f(x) = \Box$ vs $x \mapsto \Box$
QUESTION [2 upvotes]: In the book I am reading (Abstract Algebra, Dummit & Foote), the author uses 2 ways to define functions:
$$f(x) = \Box$$
$$x \mapsto \Box$$
It's not that I don't know what they mean - it's that they use both, which leaves me feeling like I am missing something when a particular choice is used.
For example, just a few lines apart, they write a group action as
$\sigma_{g}: A \rightarrow A$ defined by $\sigma_{g}: a\mapsto g \cdot a$
and a group homomorphism as
$\varphi:G \rightarrow S_{n}$ defined by $\varphi (g) = \sigma_g$
Is there a reason one form would be used over the other?
REPLY [0 votes]: Long comment (hope it helps...)
See: Saunders Mac Lane & Garrett Birkhoff, Algebra, AMS (3rd ed., 1991), page 4:
A function $f$ on a set $S$ to a set $T$ assigns to each element $s$ of $S$ an
element $f(s) \in T$, as indicated by the notation
$s \mapsto f(s), \ \ \ \ s \in S$.
The element $f(s)$ may also be written as $fs$ or $f_s$, without parentheses; it is
the value of $f$ at the argument $s$. The set $S$ is called the domain of $f$, while $T$ is the codomain. The arrow notation
$f : S \to T \ \ \text {or } \ \ S \stackrel{f}\longrightarrow T$
indicates that $f$ is a function with domain $S$ and codomain $T$. [...] We systematically use the barred arrow to go from argument to value of a function and the straight arrow $S \to T$ to go from domain to codomain.
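To make the contrast concrete, here is the same squaring function written both ways (my own example, not from Mac Lane & Birkhoff):

```latex
\[
f \colon \mathbb{R} \to \mathbb{R},\quad x \mapsto x^{2}
\qquad\text{versus}\qquad
f \colon \mathbb{R} \to \mathbb{R},\quad f(x) = x^{2}.
\]
```

Both define the same function; the barred arrow names the rule without introducing a value expression, which is convenient when the function itself has no standalone name, as in $a \mapsto g \cdot a$.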
\begin{document}
\title[On the existence of dichromatic single element lenses ]{On the existence of dichromatic \\single element lenses}
\author[C. E. Guti\'errez and A. Sabra]{Cristian E. Guti\'errez and Ahmad Sabra}
\thanks{\today}
\address{Department of Mathematics\\Temple University\\Philadelphia, PA 19122}
\email{gutierre@temple.edu}
\address{Faculty of Mathematics, Informatics, and Mechanics,
University of Warsaw, Poland}
\email{sabra@mimuw.edu.pl}
\begin{abstract}
Due to dispersion, light with different wavelengths, or colors, is refracted at different angles. Our purpose is to determine when is it possible to design a lens made of a single homogeneous material so that it refracts light superposition of two colors into a desired fixed final direction. Two problems are considered: one is when light emanates in a parallel beam and the other is when light emanates from a point source. For the first problem, and when the direction of the parallel beam is different from the final desired direction, we show that such a lens does not exist; otherwise we prove the solution is trivial, i.e., the lens is confined between two parallel planes. For the second problem we prove that is impossible to design such a lens when the desired final direction is not among the set of incident directions. Otherwise, solving an appropriate system of functional equations we show that a local solution exists.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\setcounter{equation}{0}
We showed in \cite{gutierrez-sabra:asphericallensdesignandimaging} that given a function $u$ in $\Omega\subset \R^2$ and a unit direction $w\in S^2$ there exists a surface parametrized by a function $f$ such that the lens sandwiched by $u$ and $f$, made of a homogeneous material and denoted by $(u,f)$, refracts monochromatic light emanating vertically from $\Omega$ into the direction $w$.
The purpose of this paper is to study whether it is possible to design simple lenses doing a similar refracting job for non-monochromatic light.
By a simple (or single element) lens we mean a domain in $\R^3$ bounded by two smooth surfaces that is filled with a homogeneous material.
To do this we need to deal with dispersion: since the index of refraction of a material depends on the wavelength of the radiation, a non-monochromatic light ray, after passing through a lens, splits into several rays having different refraction angles and wavelengths.
Therefore, when white light is refracted by a single lens each color comes to a focus at a different distance from the objective.
This is called chromatic aberration and plays a vital role in lens design,
see \cite[Chapter 5]{kingslake:lensdesignfundamentals}.
Materials have various degrees of dispersion, and
low dispersion ones are used in the manufacturing of photographic lenses, see \cite{Cannon}.
A way to correct chromatic aberration is to build lenses composed of various simple lenses made of different materials.
Also chromatic aberration has recently being handled numerically using demosaicing algorithms, see \cite{demosaicingalgorithms}.
The way in which the refractive index depends on the wavelength is given by a formula for the dispersion of light due to A. Cauchy: the refractive index $n$ in terms of the wavelength $\lambda$ is given by
$n(\lambda)=A_1+\dfrac{A_2}{\lambda^2}+\dfrac{A_4}{\lambda^4}+\cdots $, where the $A_i$ are constants depending on the material \cite{cauchy-memoire-sur-la-dispersio-de-la-luminere}. This formula is valid in the visible wavelength range; see \cite[pp. 99-102]{book:born-wolf} for its accuracy in various materials.
A more accurate formula was derived by Sellmeier, see \cite[Section 23.5]{jenkins-white:fundamentalsofoptics}.
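To fix ideas, keep only the first two terms of Cauchy's formula with the illustrative constants $A_1=1.5046$ and $A_2=0.00420\,\mu\mathrm{m}^2$ (constants of this order are commonly quoted for borosilicate crown glass; they serve here only as an example and are not used elsewhere in the paper). For blue and red light one then gets:

```latex
% Two-term Cauchy formula n(\lambda) = A_1 + A_2/\lambda^2 with the
% illustrative constants A_1 = 1.5046, A_2 = 0.00420 \mu m^2:
\[
n(0.486\,\mu\mathrm{m}) \approx 1.5046 + \frac{0.00420}{0.486^{2}} \approx 1.5224,
\qquad
n(0.656\,\mu\mathrm{m}) \approx 1.5046 + \frac{0.00420}{0.656^{2}} \approx 1.5144.
\]
```

The two colors thus see refractive indices differing by roughly $0.008$; a dispersion of this order is what Problems A and B below must contend with.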
A first result related to our question is
that there is no single lens bounded by two spherical surfaces that refracts non-monochromatic radiation from a point into a fixed direction; this was originally stated by K. Schwarzschild \cite{schwarzschild:1905-telescope}.
The question of designing a single, non-spherical lens that focuses one point into a fixed direction for light containing only two colors, i.e., for two refractive indices $n\neq \bar n$, is considered in \cite{schultz:1983achromaticsinglelens} in the plane; but no mathematically rigorous proof is given. In fact, by tracing rays of both colors back and forth, the author describes how a finite number of points should be located on the faces of the desired lens, and he claims, without proof, that the desired surfaces can be completed by interpolating adjacent points with third degree polynomials. Such an interpolation will give an undesired refracting behavior outside the fixed number of points considered.
For the existence of rotationally symmetric lenses capable of focusing one point into two points for two different refractive indices see \cite{vanbrunt-ockendon:lenstwowavelengths}, \cite{vaanbrunt:refinements}.
The results of all these authors require size conditions on $n,\bar n$.
The monochromatic case is due to Friedman and MacLeod \cite{friedmanmacleod:optimaldesignopticalens} and Rogers \cite{rogers1988:picardtypethemlensdesign}.
The solutions obtained are analytic functions.
These results are all two-dimensional and therefore concern only rotationally symmetric lenses.
In view of all this, we now state precisely the problems that are considered and solved in this paper.
Problem A: is there a single lens sandwiched by a surface $L$ given by the graph of a function $u$ in $\Omega$, the lower surface of the lens, and a surface $S$, the top part of the lens, such that each ray of dichromatic light (superposition of two colors) emanating in the vertical direction $e$ from $(x,0)$ for $x\in \Omega$ is refracted by the lens into the direction $w$?
We denote such a lens by the pair $(L,S)$.
Notice that when a dichromatic ray enters the lens, due to chromatic dispersion, it splits into two rays having different directions and colors, say red and blue, which both travel inside the lens until they exit at points on the surface $S$ and then continue traveling in the direction $w$; see Figure \ref{fig:Problems A and B}(a).
\begin{figure}[htp]
\begin{center}
\subfigure[$ $]{\label{fig:Problem A}\includegraphics[height=2.9in]{ProblemA-pic}}\,
\subfigure[$ $]{\label{fig:Problem B}\includegraphics[width=2.8in]{ProblemB-pic}}
\end{center}
\caption{Problems A and B}
\label{fig:Problems A and B}
\end{figure}
Problem B: a similar question is when the rays emanate from a point source $O$ and we ask if a single lens $(L,S)$ exists such that all rays are refracted into a fixed given direction $w$. Now $L$ is given parametrically by $\rho(x)x$ for $x\in \Omega\subset S^2$;
see Figure \ref{fig:Problems A and B}(b).
\\
We will show in Section \ref{The collimated case}, using the Brouwer fixed point theorem, that Problem A has no solution if $w\neq e$. In case $w=e$, the unique solution to Problem A is the trivial one: $L$ and $S$ are contained in two horizontal planes.
This is the content of Theorem \ref{thm:monotonicity of phi}.
On the other hand, since Problem A is solvable for monochromatic light and for each given lower surface $L$, we obtain two single lenses, one for each color.
We then show in Section \ref{subsec:difference of the upper surface} that the difference between the upper surfaces of these two lenses can be estimated by the difference between the refractive indices for each color.
Concerning Problem B, we prove in Theorem \ref{thm:nonexistence point source}, also using the Brouwer fixed point theorem, that if $w\notin \Omega$ then Problem B has no solution.
The case when $w\in \Omega$ requires a more elaborate and long approach.
In fact we show in Sections \ref{subsec:Problem B implies System} and \ref{subsec:converse functional implies optic}, in dimension two, that the solvability of Problem B is equivalent to solving a system of first order functional differential equations.
For this we need an existence theorem for these type of equations that was introduced by Rogers in \cite{rogers1988:picardtypethemlensdesign}.
We provide in Section \ref{sec:FOD} a simpler proof of this existence and uniqueness result of local solutions using the Banach fixed point theorem, Theorems \ref{thm: Existence} and \ref{thm:Uniqueness}. Section \ref{sec:FOD} is self contained and has independent interest.
The existence of local solutions to Problem B in the plane is then proved in Section \ref{sec:existence of local solution} by application of Theorem \ref{thm: Existence}. For this it is necessary to assume conditions on the ratio between the thickness of the lens and its distance to the point source, Theorem \ref{thm:Existence last}.
We also derive a necessary condition for the solvability of Problem B, see Corollary \ref{cor: Sufficient Condition}.
To sum up our result: for $w=e\in \Omega$ and fixing two points $P$ and $Q$ on the positive $y$-axis, with $|Q|>|P|$ and letting $k=|P|/|Q-P|$, we show that if $k$ is small then there exists a unique lens $(L,S)$ local solution to Problem $B$ such that $L$ passes through the point $P$ and $S$ through $Q$; otherwise, for $k$ large no solution exists. For intermediate values of $k$ see Remark \ref{rmk:final remark}.
The analogue of Problem B for more than two colors has no solution, i.e., if the rays emitted from the origin are a superposition of three or more colors, there is no simple lens refracting these rays into a unique direction $w$, see Remark \ref{rmk:three colors}.
We close this introduction mentioning that a number of results have been recently obtained for refraction of monochromatic light, these include the papers \cite{gutierrez-huang:farfieldrefractor}, \cite{gutierrez-huang:near-field-refractor}, \cite{gutierrez:cimelectures}, \cite{2016-karakhanyan:parallel-beam-refractor}, \cite{deleo-gutierrez-mawi:numerical-refractor}, and \cite{gutierrez-sabra:freeformgeneralfields}.
\section{Preliminaries}
\setcounter{equation}{0}
In this section we mention some consequences from the Snell law that will be used later.
In a homogeneous and isotropic medium, the index of refraction depends on the wavelength of light.
Suppose $\Gamma$ is a surface in $\R^3$ separating two media
I and II that are homogeneous and isotropic.
If a
ray of monochromatic light
having unit direction $x$ and traveling
through the medium I hits $\Gamma$ at the point $P$, then this ray
is refracted in the unit direction $m$ through medium II
according to the Snell law in vector form, \cite{luneburg:maththeoryofoptics},
\begin{equation}\label{snellwithcrossproduct}
n_{1}(x\times \nu)=n_{2}(m\times \nu),
\end{equation}
where $\nu$ is the unit normal to the surface $\Gamma$ at $P$ pointing towards the medium II,
and $n_1,n_2$ are the refractive indices for the corresponding monochromatic light.
This has several consequences:
\begin{enumerate}
\item[(a)] the vectors $x,m,\nu$ are all on the same plane, called plane of incidence;
\item[(b)] the well known Snell law in scalar form
$$n_1\sin \theta_1= n_2\sin
\theta_2,$$
where $\theta_1$ is the angle between $x$ and $\nu$
(the angle of incidence),
$\theta_2$ the angle between $m$ and $\nu$ (the angle of refraction).
\end{enumerate}
From \eqref{snellwithcrossproduct},
with $\kappa=n_2/n_1$,
\begin{equation}\label{eq:snellvectorform}
x-(n_2/n_1) \,m =\lambda \nu,
\end{equation}
with
\begin{equation}\label{formulaforlambda}
\lambda=x\cdot \nu -\sqrt{\kappa^2-1+(x\cdot \nu)^{2}}=\Phi_\kappa(x\cdot \nu).
\end{equation}
Notice that $\lambda>0$ when $\kappa<1$, and $\lambda<0$ if $\kappa>1$.
When $\kappa<1$ total reflection occurs, unless $x\cdot \nu\geq \sqrt{1-\kappa^2}$, see \cite{book:born-wolf} and \cite[Sec. 2]{gutierrez:cimelectures}.
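As a quick numerical sanity check of \eqref{eq:snellvectorform} and \eqref{formulaforlambda} (with hypothetical sample values, not taken from the text), one can verify that the refracted direction $m=(x-\lambda\nu)/\kappa$ is a unit vector satisfying the scalar Snell law, and that $\lambda<0$ when $\kappa>1$:

```python
import math

def refract(x, nu, kappa):
    """Refracted direction m solving x - kappa*m = lambda*nu,
    with lambda = Phi_kappa(x . nu) and kappa = n2/n1."""
    c = sum(a*b for a, b in zip(x, nu))              # x . nu
    lam = c - math.sqrt(kappa**2 - 1 + c**2)         # Phi_kappa(x . nu)
    m = tuple((xi - lam*ni)/kappa for xi, ni in zip(x, nu))
    return m, lam

theta1 = 0.3                                         # sample incidence angle (radians)
x = (math.sin(theta1), 0.0, math.cos(theta1))        # incident unit direction
nu = (0.0, 0.0, 1.0)                                 # unit normal toward medium II
kappa = 1.5                                          # sample ratio n2/n1 > 1
m, lam = refract(x, nu, kappa)

assert abs(sum(mi*mi for mi in m) - 1.0) < 1e-12     # m is a unit vector
assert abs(math.sin(theta1) - kappa*m[0]) < 1e-12    # n1 sin(theta1) = n2 sin(theta2)
assert lam < 0                                       # lambda < 0 since kappa > 1
```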
The following lemmas will be used in the remaining sections of the paper.
\begin{lemma}\label{lm:lemma equality of three normals}
Assume we have monochromatic light.
Let $\Gamma_1$ and $\Gamma_2$ be two surfaces enclosing a lens with refractive index $n_2$,
and the outside of the lens is a medium with refractive index $n_1$ with $n_1\neq n_2$.
Suppose an incident ray with unit direction $x$ strikes $\Gamma_1$ at $P$, the ray propagates inside the lens and is refracted at $Q\in \Gamma_2$ into the unit direction $w$.
Then $w=x$ if and only if the unit normals $\nu_1(P)=\nu_2(Q)$.
\end{lemma}
\begin{proof}
From the Snell law applied at $P$ and $Q$
\[
x-(n_2/n_1)\,m=\lambda_1\,\nu_1(P), \qquad m-(n_1/n_2)\,w=\lambda_2\,\nu_2(Q),
\]
then
\begin{equation}\label{eq:x minus w equals normals}
x-w=\lambda_1\,\nu_1(P)+(n_2/n_1)\,\lambda_2\,\nu_2(Q).
\end{equation}
If $x=w$, since $\lambda_1$ and $-(n_2/n_1)\,\lambda_2$ have the same sign and the normals are unit vectors, we conclude
\[
\nu_1(P)=\nu_2(Q).
\]
Conversely, if $\nu_1(P)=\nu_2(Q):=\nu$, then from \eqref{eq:x minus w equals normals} $x-w=\(\lambda_1+(n_2/n_1)\,\lambda_2\)\,\nu$. Notice that $m\cdot \nu=(n_1/n_2)\(x\cdot \nu-\lambda_1\)=(n_1/n_2)\sqrt{(n_2/n_1)^2-1+(x\cdot \nu)^2}$. Hence from \eqref{formulaforlambda}
\begin{align*}
\lambda_1+(n_2/n_1)\,\lambda_2
&=x\cdot \nu-(n_2/n_1)\,m\cdot \nu +(n_2/n_1)\(m\cdot \nu-\sqrt{(n_1/n_2)^2-1+(m\cdot \nu)^2}\)=0.
\end{align*}
\end{proof}
Let us now consider the case of dichromatic light, i.e., a mix of two colors b and r. That is, if a ray with direction $x$ in vacuum strikes a surface $\Gamma$ at $P$ separating two media, then the ray splits into two rays: one with color b and direction $m_b$, and another with color r and direction $m_r$. Here $m_r$ satisfies \eqref{snellwithcrossproduct} with $n_1=1$ and $n_2=n_r$ (the refractive index for the color r), and $m_b$ satisfies \eqref{snellwithcrossproduct} with $n_1=1$ and $n_2=n_b$ (the refractive index for the color b).
Notice $m_b,m_r$ are both in the plane of incidence containing $P$, the vector $x$, and $\nu(P)$ the normal to $\Gamma$ at $P$.
Assuming $n_b>n_r$, i.e., rays with color r travel faster than rays with color b, then for a given incidence angle $\theta$ the Snell law yields refraction angles satisfying $\theta_b\leq \theta_r$. In fact
$$\sin \theta=n_b\sin \theta_b=n_r\sin \theta_r.$$
We obtain the following lemma.
\begin{lemma}\label{item:m vectors are equal}
Suppose a dichromatic ray with unit direction $x$ strikes a surface $\Gamma$ at a point $P$ having normal $\nu$.
Then $m_b=m_r$ if and only if $x=\nu$.
\end{lemma}
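The angle inequality behind the lemma can be spot-checked numerically; the indices $n_b=1.53$, $n_r=1.50$ below are sample values, not taken from the text:

```python
import math

n_b, n_r = 1.53, 1.50            # sample refractive indices with n_b > n_r
for theta in (0.1, 0.4, 0.8):    # incidence angles from vacuum (radians)
    theta_b = math.asin(math.sin(theta)/n_b)
    theta_r = math.asin(math.sin(theta)/n_r)
    assert theta_b < theta_r     # blue bends more than red when theta > 0

# at normal incidence (x = nu, i.e. theta = 0) the two refracted rays coincide
assert math.asin(0.0/n_b) == math.asin(0.0/n_r) == 0.0
```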
\section{The collimated case: Problem A}\label{The collimated case}
\setcounter{equation}{0}
In this section we consider the following set up. We are given $\Omega\subseteq\R^2$ a compact and convex set with nonempty interior, and $w$ a unit vector in $\R^3$. Dichromatic rays with colors b and r are emitted from $(t,0)$, with $t\in \Omega$, into the vertical direction $e=(0,0,1)$.
By application of the results from \cite{gutierrez-sabra:asphericallensdesignandimaging} with $n_1=n_3=1$ and $n_2=n_r$, we have the following.
Given $u\in C^2$, there exist surfaces parametrized by
$f_r(t)=(t,u(t))+d_r(t)m_r(t)$, with $m_r(t)=\dfrac{1}{n_r}\(e-\lambda_r \nu_u(t)\)$ where $\lambda_r
=\Phi_{n_r}\(e\cdot \nu_u(t)\)$ from \eqref{formulaforlambda}, $\nu_u(t)=\dfrac{\(-\nabla u(t),1\)}{\sqrt{1+|\nabla u(t)|^2}}$ the unit normal at $(t,u(t))$, and
\begin{equation}\label{eq:formula for dbt}
d_r(t)=\dfrac{C_r-(e-w)\cdot (t,u(t))}{n_r-w\cdot m_r(t)}
\end{equation}
from \cite[Formula (3.14)]{gutierrez-sabra:asphericallensdesignandimaging}, such that the lens bounded between $u$ and $f_r$ refracts the rays with color r into $w$. Here the constant $C_r$ is chosen so that $d_r(t)>0$ and $f_r$ has a normal vector at every point. This choice is possible by \cite[Theorem 3.2 and Corollary 3.3]{gutierrez-sabra:freeformgeneralfields}.
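The formula \eqref{eq:formula for dbt} encodes the constancy of the reduced optical path $e\cdot(t,u(t))+n_r\,d_r(t)-w\cdot f_r(t)=C_r$. A toy numerical check, with an assumed tilted lower surface and sample constants (hypothetical data, chosen only for illustration), is:

```python
import math

n_r, C_r = 1.5, 5.0                       # sample index and constant (C_r > 0)
e = (0.0, 0.0, 1.0)
w = (0.0, 0.0, 1.0)                       # take w = e for simplicity

def dot(a, b): return sum(p*q for p, q in zip(a, b))

def lens_data(t):                         # toy lower surface u(t) = 1 + 0.1 t1
    X = (t[0], t[1], 1.0 + 0.1*t[0])
    g = (0.1, 0.0)                        # grad u
    s = math.sqrt(1 + g[0]**2 + g[1]**2)
    nu = (-g[0]/s, -g[1]/s, 1.0/s)        # unit normal (-grad u, 1)/sqrt(1+|grad u|^2)
    c = dot(e, nu)
    lam = c - math.sqrt(n_r**2 - 1 + c**2)
    m = tuple((e[i] - lam*nu[i])/n_r for i in range(3))
    d = (C_r - dot(tuple(e[i] - w[i] for i in range(3)), X))/(n_r - dot(w, m))
    return X, m, d

for t in ((0.0, 0.0), (0.7, -0.3)):
    X, m, d = lens_data(t)
    f = tuple(X[i] + d*m[i] for i in range(3))
    opl = dot(e, X) + n_r*d - dot(w, f)   # reduced optical path
    assert abs(opl - C_r) < 1e-12         # constant, equal to C_r, at every t
    assert d > 0                          # C_r was chosen so that d_r > 0
```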
Likewise there exist surfaces parametrized by
$f_b(t)=(t,u(t))+d_b(t)m_b(t)$, with similar quantities as before with r replaced by b,
such that the lens bounded between $u$ and $f_b$ refracts the rays with color b into $w$.
We assume that $n_b>n_r>1$, where $n_b, n_r$ are the refractive indices of the material of the lens corresponding to monochromatic light having colors b or r, and the medium surrounding the lens is vacuum.
To avoid total reflection, compatibility conditions between $u$ and $w$ are needed; see \cite[condition (3.4)]{gutierrez-sabra:asphericallensdesignandimaging}, which in our case reads
\[
\lambda_r\,\nu_u(t)\cdot w\leq e\cdot w-1,\text{ and } \lambda_b\,\nu_u(t)\cdot w\leq e\cdot w-1.
\]
If $w=e$, these two conditions are automatically satisfied because $\lambda_r, \lambda_b$ are both negative and $\nu_u(t)\cdot e=\dfrac{1}{\sqrt{1+|\nabla u(t)|^2}}>0$.
The problem we consider in this section is to determine if there exist $u$ and corresponding surfaces $f_r$ and $f_b$ for each color such that $f_r$ can be obtained by a reparametrization of $f_b$. That is, if there exist a positive function $u\in C^2(\Omega)$, real numbers $C_r$ and $C_b$, and a continuous map $\p:\Omega \to \Omega$ such that the surfaces $f_r$ and $f_b$, corresponding to $u$, $C_r,C_b$, have normals at each point and
\begin{equation}\label{eq:equality by re parametrization}
f_r(t)=f_b(\p(t))\qquad \forall t\in \Omega,
\end{equation}
we refer to this as {\it Problem A}. Notice that if a solution exists
then $f_r(\Omega) \subseteq f_b(\Omega).$ From an optical point of view, this means that the lens sandwiched between $u$ and $f_b$ refracts both colors into $w$.
Notice that there could be points in $f_b(\Omega)$ that are not reached by red rays.
The answer to Problem A is given
in the following theorem.
\begin{theorem}\label{thm:monotonicity of phi}
If $w\neq e$, then Problem A has no solution, and if $w=e$ the only solutions to Problem A are lenses enclosed by two horizontal planes.
\end{theorem}
To prove this theorem we need the following lemma.
\begin{lemma}\label{lem:fixed point}
Given a surface described by $u\in C^2(\Omega)$ and the unit direction $w$, let $f_r$ and $f_b$ be the surfaces parametrized as above. If $f_r(t)=f_b(t)$ for some $t\in \Omega$, then $\nu_u(t)=e$, the unit normal vector to $u$ at $(t,u(t))$.
\end{lemma}
\begin{proof}
Since
\[
f_b(t)=(t,u(t))+d_b(t)\,m_b(t)=f_r(t)=(t,u(t))+d_r(t)\,m_r(t)
\]
we get $d_b(t)\,m_b(t)=d_r(t)\,m_r(t)$, and since $m_r,m_b$ are unit, $d_b(t)=d_r(t)$. Therefore
$m_b(t)=m_r(t)$ which from Lemma \ref{item:m vectors are equal} implies that $\nu_u(t)=e$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:monotonicity of phi}]
To show the first part of the theorem, suppose by contradiction that Problem A has a solution with $w\neq e$.
Since $\Omega$ is compact and convex, by the Brouwer fixed point theorem \cite[Sect. 2, Chap. XVI]{dugundji:topology} there is $t_0\in \Omega$ such that $\p(t_0)=t_0$, and so from \eqref{eq:equality by re parametrization} $f_r(t_0)=f_b(t_0)$. Hence from Lemma \ref{lem:fixed point} $\nu_u(t_0)=e$. By Snell's law at $(t_0,u(t_0))$, $m_b(t_0)=m_r(t_0)=e$. Since $n_r\neq n_b$, and both colors with direction $e$ are refracted at $f_b(t_0)=f_r(t_0)$ into the direction $w$, it follows again from Snell's law that $w=e$, a contradiction.
To show the second part of the theorem, assume there exist $u$ and $\varphi:\Omega\to \Omega$ such that Problem A has a solution. Let $t\in \Omega$, and $Q=f_r(t)=f_b(\varphi(t))$. Since the ray emitted from $(t,0)$ with direction $e$ and color r is refracted by $(u,f_r)$ into $e$ at $Q$, then by Lemma \ref{lm:lemma equality of three normals} $\nu_u(t)=\nu(Q)$, where $\nu(Q)$ denotes the normal to the upper face of the lens at $Q$. Similarly, applying Lemma \ref{lm:lemma equality of three normals} to the color b we have $\nu_u(\varphi(t))=\nu(Q)$. We conclude that for every $t\in \Omega$
\begin{equation}\label{eq:equality of normals}
\nu_u(t)=\nu_u(\varphi(t)).
\end{equation}
We will show that $\nabla u(t)=0$ for all $t\in \Omega$; since $\Omega$ is connected, $u$ is then constant. Suppose by contradiction that there exists $t_0\in \Omega$ with $\nabla u(t_0)\neq 0$. If $t_1=\p(t_0)$, then $t_1\neq t_0$: otherwise, from Lemma \ref{lem:fixed point}, $\nu_u(t_0)=\dfrac{\(-\nabla u(t_0),1\)}{\sqrt{1+\left|\nabla u(t_0)\right|^2}}= e$, so $\nabla u(t_0)=0$, a contradiction. Also from \eqref{eq:equality of normals}, $\nu_u(t_0)=\nu_u(t_1)$.
Let $L_r(t_0)$ be the red ray from $(t_0,u(t_0))$ to $f_r(t_0)$, and let $L_b(t_1)$
be the blue ray from $(t_1,u(t_1))$ to $f_b(t_1)$. We have that $L_r(t_0)$ and $L_b(t_1)$ intersect at $Q_0:=f_r(t_0)=f_b(t_1)$.
If $\Pi_r$ denotes the plane of incidence passing through $(t_0,u(t_0))$ containing the directions $e$ and $\nu_u(t_0)$,
and $\Pi_b$ denotes the plane of incidence through $(t_1,u(t_1))$ containing the directions $e$ and $\nu_u(t_1)$, then $\Pi_r$ and $\Pi_b$ are parallel since $\nu_u(t_0)=\nu_u(t_1)$.
Also by Snell's law $L_r(t_0)\subset \Pi_r$ and $L_b(t_1)\subset \Pi_b$, so $Q_0\in \Pi_r \cap \Pi_b$. We then obtain $\Pi_r= \Pi_b:=\Pi$.
Let $\ell$ denote the segment $\Omega\cap \Pi$. We deduce from the above that $t_0,t_1\in \ell$. Next, let $t_2=\p(t_1)$. As before, since $\nabla u(t_1)=\nabla u(t_0)\neq 0$, by Lemma \ref{lem:fixed point} $t_2\neq t_1$; and by \eqref{eq:equality of normals} $\nu_u(t_1)=\nu_u(t_2)$. Let $\Pi_2$ be the plane through $(t_2,u(t_2))$ and containing the vectors $e$ and $\nu_u(t_2)$.
We have $L_b(t_2)\subset \Pi_2$, $f_r(t_1)=f_b(t_2)$, and $f_r(t_1)\in \Pi$.
Therefore $\Pi_2=\Pi$, in particular, $t_2\in \ell$.
Let $\ell_1$ denote the half line starting from $t_1$ and containing $t_0$. We claim that $t_2\notin \ell_1$.
In fact, we first have that $L_r(t_0)$ and $L_b(t_1)$ intersect at $Q_0$. Since $\nu_u(t_0)=\nu_u(t_1):=\nu$, $L_r(t_0)$ is parallel to $L_r(t_1)$, and $L_b(t_0)$ is parallel to $L_b(t_1)$. Since $n_b>n_r$, it follows from the Snell law that the angle of refraction $\theta_b$ for the blue ray $L_b(t_0)$ and the angle of refraction $\theta_r$ for the red ray $L_r(t_1)$ satisfy $\theta_b<\theta_r$. Hence $L_b(t_0)$ and $L_r(t_1)$ diverge. Moreover, all rays are on the plane $\Pi$, and $L_b(t_2)$ is parallel to $L_b(t_1)$. Then, if $t_2\in \ell_1$, the rays $L_b(t_2)$ and $L_r(t_1)$ diverge and cannot intersect, a contradiction since $f_b(t_2)=f_r(t_1)$; this proves the claim;
see Figure \ref{fig:divergent rays} illustrating that $t_2$ cannot be on $\ell_1$.
\begin{figure}[htp]
\includegraphics[width=3in]{Lemma32-onepic}
\caption{$t_2\notin \ell_1$}
\label{fig:divergent rays}
\end{figure}
Continuing in this way we construct the sequence $t_k=\p(t_{k-1})$.
By \eqref{eq:equality of normals} $\nu_u(t_k)=\nu_u(t_{k-1})=\cdots=\nu_u(t_0)$, so $\nabla u(t_k)=\nabla u(t_0)\neq 0$,
and again by Lemma \ref{lem:fixed point} $t_k\neq t_{k-1}$.
By Snell's law $\{L_b(t_k)\}$ are all parallel, $\{L_r(t_k)\}$ are all parallel and arguing as before they are all contained in $\Pi$, and then $t_k\in \ell$. In addition, the angles between $L_r(t_k)$ and
$L_b(t_k)$ are the same for all $k$. Also, for $k\geq 1$, if $\ell_k$ is the half line with origin $t_{k}$ and passing through $t_{k-1}$, then as above $t_{k+1}\notin \ell_k$. Hence the
sequence $\{t_k\}$ is decreasing or increasing on the line $\ell$. Since $\Omega$ is compact, $t_k$ converges to some $\hat t\in \ell$, so by continuity $\p(\hat t)=\hat t$. Hence by Lemma \ref{lem:fixed point}
$\nabla u(\hat t)=0$; but since $\nabla u(t_k)=\nabla u(t_0)\neq 0$ for all $k$ and $u$ is $C^2$, we obtain a contradiction. Thus $u$ is constant in $\Omega$. Since the lower face is then contained in a horizontal plane, $m_r(t)=m_b(t)=e=(0,0,1)$. Hence from the form of the parametrizations of $f_b$ and $f_r$, and since from \eqref{eq:formula for dbt} $d_r$ and $d_b$ are constants, the upper face of the lens is also contained in a horizontal plane.
\end{proof}
\subsection{Estimates of the upper surfaces for two colors}\label{subsec:difference of the upper surface}
The purpose of this section is to measure how far apart the surfaces $f_r$ and $f_b$ can be when $w=e$. We shall prove the following.
\begin{proposition}\label{prop:estimate for fr and fb passing through a fixed point}
Suppose $f_r(t_0)=f_b(t_0)$ at some point $t_0$, and $w=e$. Then
\[
|f_r(t)-f_b(t)|\leq \bar C\,|n_b-n_r|
\]
for all $t$, with a constant $\bar C$ depending only on $t_0$ and $n_r,n_b$.
\end{proposition}
\begin{proof}
We begin showing an upper estimate of the difference between $m_r(t)$ and $m_b(t)$.
To simplify the notation write $\nu=\nu_u$.
We have
\begin{align*}
m_b(t)&=\dfrac{1}{n_b}\left( e-\lambda_b\,\nu(t)\right);\qquad \lambda_b=e\cdot \nu-\sqrt{n_b^2-1+(e\cdot \nu)^2};\\
m_r(t)&=\dfrac{1}{n_r}\left( e-\lambda_r\,\nu(t)\right);\qquad \lambda_r=e\cdot \nu-\sqrt{n_r^2-1+(e\cdot \nu)^2}.
\end{align*}
So
\[
m_b(t)-m_r(t)
=
\left(\dfrac{1}{n_b}- \dfrac{1}{n_r}\right)\,e
+
\left(\dfrac{\lambda_r}{n_r}- \dfrac{\lambda_b}{n_b}\right)\,\nu(t):=A+B.
\]
Notice $|A|= \dfrac{|n_b-n_r|}{n_b\,n_r}$.
Next write
\begin{align*}
\left(\dfrac{\lambda_r}{n_r}- \dfrac{\lambda_b}{n_b}\right)
&=
\dfrac{1}{n_b\,n_r}\,\left(n_b\,\lambda_r-n_r\,\lambda_b \right)\\
&=
\dfrac{1}{n_b\,n_r}\,
\left\{\left(n_b-n_r \right)\,(e\cdot \nu(t))
+
n_r\,\sqrt{n_b^2-1+(e\cdot \nu)^2}-n_b\,\sqrt{n_r^2-1+(e\cdot \nu)^2}
\right\}.
\end{align*}
Now
\[
n_r\,\sqrt{n_b^2-1+(e\cdot \nu)^2}-n_b\,\sqrt{n_r^2-1+(e\cdot \nu)^2}
=
\dfrac{(n_b^2-n_r^2)\,(1-(e\cdot \nu)^2)}{n_r\,\sqrt{n_b^2-1+(e\cdot \nu)^2}+n_b\,\sqrt{n_r^2-1+(e\cdot \nu)^2}}.
\]
Hence
\begin{align*}
|B|&\leq \dfrac{|n_b-n_r|}{n_b\,n_r}
+
\dfrac{1}{n_b\,n_r}\,\left( \dfrac{|n_b^2-n_r^2|}{n_r\,\sqrt{n_b^2-1}+n_b\,\sqrt{n_r^2-1}}\right)
\end{align*}
and therefore
\begin{equation}\label{eq:estimate of mr-mb}
\left|m_b(t)-m_r(t) \right|
\leq \dfrac{\left|n_b-n_r \right|}{n_b\,n_r}\,\left(2+\dfrac{n_b+n_r}{n_r\,\sqrt{n_b^2-1}+n_b\,\sqrt{n_r^2-1}} \right).
\end{equation}
We next estimate $d_r(t)-d_b(t)$, where
\begin{align*}
d_r(t)=\dfrac{C_r}{n_r-m_r(t)\cdot e}, \qquad d_b(t)=\dfrac{C_b}{n_b-m_b(t)\cdot e},
\end{align*}
by \eqref{eq:formula for dbt}.
From Lemma \ref{lem:fixed point}, since $f_r(t_0)=f_b(t_0)$, $\nu(t_0)=e=(0,0,1)$. Then by the Snell law $m_r(t_0)=m_b(t_0)=e$, and from the parametrization of $f_r$ and $f_b$, $d_r(t_0)=d_b(t_0):=d_0$. Hence
\[
\dfrac{C_b}{n_b-1}=\dfrac{C_r }{n_r-1}=d_0.
\]
We then obtain
\begin{align*}
&d_r(t)-d_b(t)\\
&=\dfrac{d_0}{(n_r-m_r(t)\cdot e)(n_b-m_b(t)\cdot e)}\left((n_b-m_b(t)\cdot e)(n_r-1)-(n_r-m_r(t)\cdot e)(n_b-1)\right)\\
&=\dfrac{d_0}{(n_r-m_r(t)\cdot e)(n_b-m_b(t)\cdot e)}\,\Delta.
\end{align*}
Now
\begin{align*}
\Delta&=
n_r-n_b+\(m_b(t)-m_r(t)\)\cdot e+n_b\,m_r(t)\cdot e-n_r\,m_b(t)\cdot e\\
&=
n_r-n_b+\(m_b(t)-m_r(t)\)\cdot e+(n_b-n_r)\,m_r(t)\cdot e+n_r\,\(m_r(t)-m_b(t)\)\cdot e,
\end{align*}
so from \eqref{eq:estimate of mr-mb}
\[
|\Delta|\leq C(n_r,n_b)\,|n_r-n_b|.
\]
Since $(n_r-m_r(t)\cdot e)(n_b-m_b(t)\cdot e)\geq (n_r-1)(n_b-1)$, we obtain
\begin{equation}\label{eq:estimate difference dr minus db}
|d_r(t)-d_b(t)|\leq C' \,|n_r-n_b|,
\end{equation}
with $C'$ depending on $d_0,n_r,n_b$.
Finally write
\[
f_r(t)-f_b(t)=d_r(t)m_r(t)-d_b(t)m_b(t)=d_r(t)\(m_r(t)-m_b(t)\)-\(d_b(t)-d_r(t)\)m_b(t).
\]
Since $d_r(t)\leq C_r/(n_r-1)=d_0$, then the desired estimate follows from \eqref{eq:estimate of mr-mb} and \eqref{eq:estimate difference dr minus db}.
\end{proof}
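The bound \eqref{eq:estimate of mr-mb} can be spot-checked numerically in the plane of incidence; the indices below are sample values, not taken from the text:

```python
import math

n_b, n_r = 1.53, 1.50       # sample refractive indices, n_b > n_r > 1
bound = (abs(n_b - n_r)/(n_b*n_r))*(
    2 + (n_b + n_r)/(n_r*math.sqrt(n_b**2 - 1) + n_b*math.sqrt(n_r**2 - 1)))

def refr(n, c, s):
    # refracted direction in the incidence plane, with e = (0,1), nu = (-s, c):
    # m = (e - lam*nu)/n = (lam*s/n, (1 - lam*c)/n)
    lam = c - math.sqrt(n*n - 1 + c*c)
    return (lam*s/n, (1 - lam*c)/n)

for c in (0.3, 0.6, 0.9, 1.0):           # c = e . nu
    s = math.sqrt(1 - c*c)
    mb, mr = refr(n_b, c, s), refr(n_r, c, s)
    diff = math.hypot(mb[0] - mr[0], mb[1] - mr[1])
    assert diff <= bound                 # |m_b - m_r| obeys the stated estimate
```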
We conclude this section analyzing the intersection of the upper surfaces of the single lenses $(u,f_r)$ and $(u,f_b)$.
\begin{proposition}\label{lm:grad u injective implies no lens exist}
Let $w=e$. If $\nu_u(t)\neq e$ for all $t\in \Omega$, and $\nabla u$ is injective in $\Omega$, then $S_r\cap S_b=\emptyset$, where $S_r=f_r(\Omega)$ and $S_b=f_b(\Omega)$.
That is, the upper surfaces of the single lenses $(u,f_r)$ and $(u,f_b)$ are disjoint.
\end{proposition}
\begin{proof}
Suppose $P\in S_r\cap S_b$, i.e., there exist $t_0, t_1\in \Omega$ such that $f_r(t_0)=f_b(t_1)$. Then, as in the proof of \eqref{eq:equality of normals}, we get $\nu_u(t_0)=\nu_u(t_1)$, and therefore $\nabla u(t_0)=\nabla u(t_1)$.
Since $\nabla u$ is injective, $t_0=t_1$ and so $f_r(t_0)=f_b(t_0)$. Therefore from the proof of Lemma \ref{lem:fixed point} we conclude that $\nu_u(t_0)=e$, contradicting the hypothesis.
\end{proof}
\begin{remark}\label{rmk:parallel planes}\rm
When $\nabla u$ is not injective the upper surfaces of the single lenses $(u,f_r)$ and $(u,f_b)$ may or may not be disjoint.
We illustrate this with lenses bounded by parallel planes.
In fact, by Lemma \ref{lm:lemma equality of three normals}
such a lens refracts all rays blue and red into the vertical direction $e$.
Notice that if the planes are sufficiently far apart, depending on the refractive indices, then $f_r(\Omega)\cap f_b(\Omega)=\emptyset$. This is illustrated in Figure \ref{fig:parallel-planes}: if the lens is between the planes $A$ and $B$, then $f_r(\Omega)\cap f_b(\Omega)\neq\emptyset$; and if the lens is between the planes $A$ and $C$, then $f_r(\Omega)\cap f_b(\Omega)=\emptyset$.
\begin{figure}
\includegraphics[width=3in]{Parallel-planes}
\caption{The lens is between $A$ and $B$ or between $A$ and $C$.}
\label{fig:parallel-planes}
\end{figure}
\end{remark}
\section{First order functional differential equations}\label{sec:FOD}
\setcounter{equation}{0}
In this section we give a new and simpler proof of an existence theorem for functional differential equations
originally due to J. Rogers \cite[Sec. 2]{rogers1988:picardtypethemlensdesign}. It will be used in Section \ref{sec:one point source problem} to show the existence of a dichromatic lens when rays emanate from one point source.
Developing an extension of Picard's iteration method for functional equations, Van-Brunt and Ockendon gave another proof of that theorem, \cite{vanbrunt-ockendon:lenstwowavelengths}.
We present a topological proof that we believe has independent interest and uses the Banach fixed point theorem.
Let $H$ be a continuous map defined in an open domain in $\R^{4n+1}$ with values in $\R^n$ given by
\begin{equation}\label{eq:definition map H}
H:=H(X)=\(h_1(X),h_2(X),\cdots, h_{n}(X)\),
\end{equation}
with $X:=\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\), t\in \R; \zeta^0,\zeta^1,\xi^0,\xi^1\in \R^n.$\\
We are interested in solving the following system of functional differential equations
\begin{align}
Z'(t)&=H\(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t))\)\label{eq:System}\\
Z(0)&=0,\notag
\end{align}
with $Z(t)=(z_1(t),\cdots,z_n(t))$.
\begin{theorem}\label{thm: Existence}
Let $\|\cdot\|$ be a norm in $\R^n$.
Assume that the system
\begin{equation}\label{eq:system P}
P=H\(0;{\bf 0,0};P,P\),
\end{equation}
has a solution $P=\(p_1,p_2,\cdots,p_n\)$ such that
\begin{equation}\label{eq:First Component}
|p_1|\leq 1.
\end{equation}
Let $\mathcal P=\left(0;{\bf 0,0};P,P\right)\in R^{4n+1}$, and let
$$
N_{\varepsilon}(\mathcal P)=\left\{\(t; \zeta^0,\zeta^1;\xi^0,\xi^1\): |t|+ \|\zeta^0\|+\|\zeta^1\|+\|\xi^0-P\|+\|\xi^1-P\|\leq\varepsilon\right\}
$$
be a neighborhood of $\mathcal P$ such that
\begin{enumerate}[(i)]
\item $H$ is uniformly Lipschitz in the variable $t$, i.e., there exists $\Lambda>0$ such that
\begin{equation}\label{eq:Lip in t}
\left\|H\(\bar t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\)-H\( t;{\zeta^0, \zeta^1; \xi^0, \xi^1}\)\right\|\leq \Lambda\,|\bar t-t|,
\end{equation}
for all $\(\bar t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\), \( t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\)\in N_{\varepsilon}(\mathcal P)$;
\item $H$ is uniformly Lipschitz in the variables $\zeta^0$ and $\zeta^1$, i.e., there exist positive constants $L_0$ and $L_1$ such that
\begin{equation}\label{eq:Lip in zeta0,zeta1}
\left\|H\(t;{ \bar\zeta^0, \bar \zeta^1; \xi^0, \xi^1}\)-H\(t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\)\right\|\leq L_0\left\|\bar \zeta^0-\zeta^0\right\|+L_1\left\|\bar\zeta^1-\zeta^1\right\|,
\end{equation}
for all $\(t;{ \bar\zeta^0, \bar\zeta^1; \xi^0, \xi^1}\), \(t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\)\in N_{\varepsilon}(\mathcal P)$;
\item $H$ is a uniform contraction in the variables $\xi^0$ and $\xi^1$, i.e., there exist constants $C_0$ and $C_1$ such that
\begin{equation}\label{eq:Lip in xi0,xi1}
\left\|H\(t;{ \zeta^0, \zeta^1; \bar \xi^0, \bar \xi^1}\)-H\( t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\)\right\|\leq C_0\left\|\bar \xi^0-\xi^0\right\|+C_1\left\|\bar\xi^1-\xi^1\right\|,
\end{equation}
for all $\(t;{ \zeta^0, \zeta^1; \bar\xi^0, \bar\xi^1}\), \(t;{ \zeta^0, \zeta^1; \xi^0, \xi^1}\)\in N_{\varepsilon}(\mathcal P)$,
with
\begin{equation}\label{eq:Contraction}
C_0+C_1<1;
\end{equation}
\item For all $X\in N_{\varepsilon}(\mathcal P)$
\begin{equation}\label{eq:bd on h1}
|h_1(X)|\leq 1.
\end{equation}
\end{enumerate}
Under these assumptions, there exist $\delta>0$ and $Z\in C^1[-\delta,\delta]$ with
$$\(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t))\)\in N_{\varepsilon}(\mathcal P)$$
and $Z$
solving the system
\begin{equation}\label{eq:functional equations}
\begin{cases}
Z'(t)=H\(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t))\)\\
Z(0)=0,
\end{cases}
\end{equation}
for $|t|\leq \delta$, satisfying in addition $Z'(0)=P$.
\end{theorem}
\begin{proof}
Since $H$ is continuous, let
\begin{equation}\label{eq:Bound on H}
\alpha=\max\{\left\|H(X)\right\|:X\in N_{\varepsilon}(\mathcal P)\}.
\end{equation}
From \eqref{eq:system P}
\begin{equation}\label{eq:bound on P}
\|P\|=\|H(\mathcal P)\|\leq \alpha.
\end{equation}
Let $\mu$ be a constant such that
\begin{equation}\label{eq:lower bound on mu}
\mu\geq\dfrac{\Lambda+(L_0+L_1)\,\alpha}{1-C_0-C_1}.
\end{equation}
For any map $Z:\R\to\R^n$, we define the vector
$$V_Z(t)=\(t;Z(t), Z\(z_1(t)\); \(Z\)'(t),\(Z\)'\(z_1(t)\)\).$$
\begin{definition}\label{def:set C(delta)}
Let $C^1[-\delta,\delta]$ denote the class of all functions $Z:[-\delta,\delta]\to \R^n$ that are $C^1$ equipped with the norm $\|Z\|_{C^1[-\delta,\delta]}=\max_{[-\delta,\delta]}\|Z(t)\|+\max_{[-\delta,\delta]}\|Z'(t)\|$.
We define the set $\mathcal C=\mathcal C(\delta)$ as follows:
$Z\in \mathcal C$ if and only if
\begin{enumerate}
\item $Z\in C^{1}[-\delta,\delta]$,
\item $Z(0)=0$, $Z'(0)=P,$
\item $|z_1(t)|\leq |t|,$
\item $\|Z(t)-Z(\bar t)\|\leq \alpha\, |t-\bar t|,$
\item $|z_1(t)-z_1(\bar t)|\leq |t-\bar t|,$
\item $\|Z'(t)-Z'(\bar t)\|\leq \mu\, |t-\bar t|$,
\item $V_Z(t)\in N_{\varepsilon}(\mathcal P)$, for all $|t|\leq \delta$.
\end{enumerate}
\end{definition}
Define a map $T$ on $\mathcal C$ as follows:
$$
T\,Z(t)=\int_0^t H(s;Z(s),Z(z_1(s));Z'(s),Z'(z_1(s)))\,ds.
$$
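To illustrate the fixed-point mechanism on a hypothetical example (not from the text), consider the scalar equation $z'(t)=\tfrac12+\tfrac14\, z'(z(t))$, $z(0)=0$. The linear ansatz $z(t)=qt$ is preserved by $T$, since then $(TZ)(t)=(\tfrac12+\tfrac14 q)t$; so the Banach iteration reduces to $q\mapsto \tfrac12+\tfrac14 q$, whose fixed point $p=\tfrac23$ satisfies $P=H\(0;{\bf 0,0};P,P\)$ with $|p_1|\leq 1$ and contraction constants $C_0=0$, $C_1=\tfrac14$:

```python
# toy scalar functional ODE: z'(t) = 1/2 + (1/4) z'(z(t)), z(0) = 0
# for z(t) = q t one has (T z)(t) = (1/2 + q/4) t, so iterate on the slope q
q = 0.0
for _ in range(50):
    q = 0.5 + 0.25*q                      # Banach iteration q -> 1/2 + q/4
assert abs(q - 2.0/3.0) < 1e-12           # converges to the fixed point p = 2/3
assert abs((0.5 + 0.25*q) - q) < 1e-12    # q solves P = H(0; 0, 0; P, P)
```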
Our goal is to show that, for $\delta$ sufficiently small, $T$ maps $\mathcal C$ into itself and is a contraction; therefore,
by the Banach fixed point theorem, $T$ has a unique fixed point $Z\in \mathcal C$, and this $Z$ solves \eqref{eq:System}.
We will prove the theorem in a series of claims.
\begin{claim}\label{clm:Non empty}
There exists $\delta_0>0$ such that $\mathcal C(\delta)$ is nonempty for $\delta\leq \delta_0$; in fact, the function $Z^0(t)=t P$ belongs to $\mathcal C$.
\end{claim}
\begin{proof}
Obviously, $Z^0(0)=0$, $\(Z^0\)'=P$, and from \eqref{eq:First Component}
$\left|z_1^0(t)\right|=|p_1|\,|t|\leq |t|.$
Also from \eqref{eq:system P} and \eqref{eq:First Component}
$$
\|Z^0(t)-Z^0(\bar t)\|\leq\|P\|\, |t-\bar t|= \|H(\mathcal P)\|\, |t-\bar t|\leq \alpha \,|t-\bar t|,
$$
$$|z_1^0(t)-z_1^0(\bar t)|=|p_1|\,|t-\bar t|\leq |t-\bar t|,$$
and
$
\|\(Z^0\)'(t)-\(Z^0\)'(\bar t)\|=0\leq \mu\,|t-\bar t|.
$
It remains to show that $V_{Z^0}(t)\in N_{\varepsilon}(\mathcal P)$.
By definition
$
V_{Z^0}(t)=\( t;t\,P,t\,p_1\,P;P,P\),
$
and so $V_{Z^0}(t)\in N_\varepsilon \(\mathcal P\)$ if and only if
$
|t|+|t|\,\|P\|+|t|\,|p_1|\,\|P\|\leq \varepsilon,
$
which is equivalent to
\begin{equation}\label{eq:definition of delta zero}
|t|\leq \dfrac{\varepsilon}{1+\|P\|+|p_1|\,\|P\|}:=\delta_0.
\end{equation}
\end{proof}
\begin{claim}\label{clm:Completeness}
$\mathcal C(\delta)$ is complete for every $\delta>0$.
\end{claim}
\begin{proof}
Let $Z^k$ be a Cauchy sequence in $\mathcal C$. Since $C^1[-\delta,\delta]$ is complete, $Z^k$ converges uniformly to a function $Z\in C^{1}[-\delta,\delta]$, and $\(Z^{k}\)'$ converges uniformly to $Z'$. Since each $Z^k$ satisfies properties (1)-(7) in Definition \ref{def:set C(delta)}, Claim \ref{clm:Completeness} follows by uniform convergence.
\end{proof}
\begin{claim}\label{clm:First Component map}
If $Z\in \mathcal C$ and $W=TZ=(w_1,\cdots ,w_n)$, then
$$|w_1(t)|\leq |t|.$$
\end{claim}
\begin{proof}
From \eqref{eq:bd on h1}
and since $V_Z(t)\in N_{\varepsilon}(\mathcal P)$ for $|t|\leq \delta$, it follows that
$$|w_1(t)|=\left |\int_0^t h_1(V_Z(s))\, ds\right| \leq |t|.$$
\end{proof}
\begin{claim}\label{clm:Lip Continuity}
If $Z\in \mathcal C$ and $W=TZ$, then
$$\left\|W(t)-W(\bar t)\right\|\leq \alpha |t-\bar t|,$$
and
$$|w_1(t)-w_1(\bar t)|\leq |t-\bar t|$$
for every $t,\bar t\in[-\delta,\delta].$
\end{claim}
\begin{proof}
Since $V_{Z}(t)\in N_\varepsilon(\mathcal P)$, by \eqref{eq:Bound on H} for all $|t|\leq \delta$
\[
\left\|W(t)-W(\bar t)\right\|=\left\|\int_{\bar t}^{t} H(V_Z(s))\,ds\right\|\leq \alpha \,|t-\bar t|,
\]
and from \eqref{eq:bd on h1}, we get similarly the desired estimate for $w_1(t)-w_1(\bar t)$.
\end{proof}
\begin{claim}\label{clm:Lip Derivative for W}
If $Z\in \mathcal C(\delta)$ and $W=TZ$, then
$$\left\|W'(t)-W'(\bar t)\right\|\leq \mu |t-\bar t|,$$
for every $t,\bar t\in[-\delta,\delta]$.
\end{claim}
\begin{proof}
From the Lipschitz estimates for $H$
\begin{align*}
\left\|W'(t)-W'(\bar t)\right\|&=\left\|H(V_{Z}(t))-H(V_Z(\bar t))\right\|\\
&\leq \Lambda\, |t-\bar t|+L_0\,\|Z(t)-Z(\bar t)\|+L_1\,\|Z(z_1(t))-Z(z_1(\bar t))\|\\
&\qquad +C_0\,\|Z'(t)-Z'(\bar t)\|+C_1\,\|Z'(z_1(t))-Z'(z_1(\bar t))\|.
\end{align*}
Since $|z_1(t)|\leq |t|\leq \delta$ and $|z_1(\bar t)|\leq |\bar t|\leq \delta$, we get from the Lipschitz properties of $Z$, $z_1$, and $Z'$
\begin{align*}
\left\|W'(t)-W'(\bar t)\right\|&\leq \Lambda\,|t-\bar t|+L_0\,\alpha |t-\bar t|+L_1\,\alpha |z_1(t)-z_1(\bar t)|+C_0\,\mu |t-\bar t|+C_1\,\mu |z_1(t)-z_1(\bar t)|\\
&\leq \(\Lambda+(L_0+L_1)\alpha+(C_0+C_1)\mu\)\,|t-\bar t|.
\end{align*}
From \eqref{eq:lower bound on mu},
$\Lambda+(L_0+L_1)\alpha\leq (1-C_0-C_1)\mu,$
then Claim \ref{clm:Lip Derivative for W} follows.
\end{proof}
\begin{claim}\label{clm:automorphism}
For $\delta$ sufficiently small,
$W=TZ\in \mathcal C$ for each $Z\in \mathcal C$.
\end{claim}
\begin{proof}
From the previous steps, it remains to show that $V_{W}(t)\in N_{\varepsilon}(\mathcal P)$. Define
\begin{align*}
S_{Z}(t)&=|t|+\left\|Z(t)\right\|+\left\|Z(z_1(t))\right\|+\left\|Z'(t)-P\right\|+\left\|Z'(z_1(t))-P\right\|\\
S_{W}(t)&=|t|+\left\|W(t)\right\|+\left\|W(w_1(t))\right\|+\left\|W'(t)-P\right\|+\left\|W'(w_1(t))-P\right\|.
\end{align*}
Since $V_{Z}(t)\in N_{\varepsilon}(\mathcal P)$, we have $S_{Z}(t)\leq \varepsilon$. We shall prove that $S_{W}(t)\leq \varepsilon$ by choosing $\delta$ sufficiently small.
In fact, from \eqref{eq:Bound on H} for every $|t|\leq \delta$
\begin{equation}\label{eq:bound on W}
\left\|W(t)\right\|= \left\|\int_{0}^t H(V_Z(s))\,ds\right\|\leq \alpha \,|t|\leq \alpha\, \delta,
\end{equation}
and from \eqref{eq:bd on h1}
$$
\left|w_1(t)\right|= \left|\int_0^t h_1(V_Z(s))\,ds\right|\leq |t|\leq \delta.
$$
Hence
$
\left\|W(w_1(t))\right\|\leq \alpha\, \delta.
$
We next estimate $\left\|W'(t)-P\right\|$. Using the Lipschitz properties of $H$ we write
\begin{align*}
\left\|W'(t)-P\right\|&=\left\|H(V_{Z}(t))-H(\mathcal P)\right\|\\
&\leq \Lambda|t|+L_0\left\|Z(t)\right\|+L_1\left\|Z(z_1(t))\right\|+C_0\left\|Z'(t)-P\right\|+C_1\left\|Z'(z_1(t))-P\right\|.
\end{align*}
Notice that $\left\|Z(t)\right\|=\left\|Z(t)-Z(0)\right\|\leq \alpha |t|\leq \alpha \delta$,
and since $|z_1(t)|\leq |t|\leq \delta$ then
$\left\|Z(z_1(t))\right\|\leq \alpha \delta$.
Also
$$\left\|Z'(t)-P\right\|=\left\|Z'(t)-Z'(0)\right\|\leq \mu|t|\leq \mu\delta,$$
and
$$\left\|Z'(z_1(t))-P\right\|\leq \mu\delta.$$
Therefore, for $|t|\leq \delta$
$$\left\|W'(t)-P\right\|\leq \left[\Lambda+\alpha(L_0+L_1)+\mu(C_0+C_1)\right]\delta,$$
and since $|w_1(t)|\leq \delta $, we also get
$$\left\|W'(w_1(t))-P\right\|\leq \left[\Lambda+\alpha(L_0+L_1)+\mu(C_0+C_1)\right]\delta.$$
We conclude that
$$S_W(t)\leq \delta\,\(1+2\Lambda+2\alpha(1+L_0+L_1)+2\mu(C_0+C_1)\),
$$
so choosing $\delta\leq \dfrac{\varepsilon}{1+2\Lambda+2\alpha(1+L_0+L_1)+2\mu(C_0+C_1)}$, Claim \ref{clm:automorphism} follows.
\end{proof}
It remains to show that $T$ is a contraction.
\begin{claim}\label{stp:T is contraction}
If $Z^1,Z^2\in \mathcal C(\delta)$, with $\delta$ small enough as in the previous claims, then
$$\left\|TZ^1-TZ^2\right\|_{C^1[-\delta,\delta]}\leq q \,\left\|Z^1-Z^2\right\|_{C^1[-\delta,\delta]},$$
for some $q<1$.
\end{claim}
\begin{proof}
Let $W^1(t)=T Z^1(t),$ $W^2(t)=TZ^2(t)$. By the fundamental theorem of calculus we have for every $|t|\leq\delta$
\begin{align*}
\left\|W^1(t)-W^2(t)\right\| &\leq \left |\int_0^t \left\|\(W^1\)'(s)-\(W^2\)'(s)\right\|\,ds \right| \leq \delta \sup_{|t|\leq \delta} \left\|\(W^1\)'(t)-\(W^2\)'(t)\right\|\\
& \leq \delta \|W^1-W^2\|_{C^1[-\delta,\delta]},
\end{align*}
and similarly $\left\|Z^1(t)-Z^2(t)\right\| \leq \delta\, \|Z^1-Z^2\|_{C^1[-\delta,\delta]}$.
From the Lipschitz properties of $H$, for every $|t|\leq \delta$,
{
\begin{align*}
\left\|\(W^1\)'(t)-\(W^2\)'(t)\right\|&\leq \left\|H\(V_{Z^1}(t)\)-H\(V_{Z^2}(t)\)\right\|\\
&\leq L_0 \left\|Z^1(t)-Z^2(t)\right\|+L_1\left\|Z^1\(z_1^1(t)\)-Z^2\(z_1^2(t)\)\right\|\\
&\quad+C_0\left\|\(Z^1\)'(t)-\(Z^2\)'(t)\right\|+C_1\left\|\(Z^1\)'\(z_1^1(t)\)-\(Z^2\)'\(z_1^2(t)\)\right\|.
\end{align*}
}
We have
\begin{align*}
\left\|Z^1(t)-Z^2(t)\right\|
&\leq \delta\,\|Z^1-Z^2\|_{C^1[-\delta,\delta]}\\
\left\|Z^1(z_1^1(t))-Z^2(z_1^2(t))\right\|
&\leq \left\|Z^1(z_1^1(t))-Z^2(z_1^1(t))\right\|+\left\|Z^2(z_1^1(t))-Z^2(z_1^2(t))\right\|\\
&\leq \delta\,\|Z^1-Z^2\|_{C^1[-\delta,\delta]}+\alpha\, |z_1^1(t)-z_1^2(t)|\\
&\leq \delta\,\|Z^1-Z^2\|_{C^1[-\delta,\delta]}+\alpha C_{\|\cdot\|} \|Z^1(t)-Z^2(t)\|\\
&\leq \delta\,(\alpha C_{\|\cdot\|}+1)\,\|Z^1-Z^2\|_{C^1[-\delta,\delta]}\\
\left\|\(Z^1\)'(t)-\(Z^2\)'(t)\right\|&\leq \|Z^1-Z^2\|_{C^1[-\delta,\delta]}\\
\left\|\(Z^1\)'\(z_1^1(t)\)-\(Z^2\)'\(z_1^2(t)\)\right\|&\leq \left\|\(Z^1\)'\(z_1^1(t)\)-\(Z^2\)'\(z_1^1(t)\)\right\|+\left\|\(Z^2\)'\(z_1^1(t)\)-\(Z^2\)'\(z_1^2(t)\)\right\|\\
&\leq \left\|Z^1-Z^2\right\|_{C^1[-\delta,\delta]}+\mu |z_1^1(t)-z_1^2(t)|\\
&\leq \left\|Z^1-Z^2\right\|_{C^1[-\delta,\delta]}+\mu C_{\|\cdot\|} \left\|Z^1(t)-Z^2(t)\right\|\\
&\leq \(\mu \, C_{\|\cdot\|} \,\delta+1\)\left\|Z^1-Z^2\right\|_{C^1[-\delta,\delta]};
\end{align*}
here $C_{\|\cdot\|}$ is a constant larger than $1$ depending only on the choice of the norm in $\R^n$; such a constant exists since all norms in $\R^n$ are equivalent.
Combining the above inequalities, we obtain
\begin{align*}
&\left\|\(W^1\)'(t)-\(W^2\)'(t)\right\|\\
&\leq \left(L_0\,\delta+L_1\,\delta\,(\alpha C_{\|\cdot\|}+1)+C_0+C_1\(\mu C_{\|\cdot\|}\,\delta+1\)\right) \,\left\|Z^1-Z^2\right\|_{C^1}:=\(M\,\delta+C_0+C_1\)\left\|Z^1-Z^2\right\|_{C^1},
\end{align*}
and from the fundamental theorem of calculus
\[
\left\|W^1(t)-W^2(t)\right\|\leq \delta \sup_{|t|\leq \delta} \left\|\(W^1\)'(t)-\(W^2\)'(t)\right\|\leq \delta \,\(M\,\delta+C_0+C_1\)\left\|Z^1-Z^2\right\|_{C^1}.
\]
We conclude that
$$
\left\|W^1-W^2\right\|_{C^1[-\delta,\delta]}\leq \(1+\delta\)\,\left(M\,\delta+C_0+C_1\right)\,\left\|Z^1-Z^2\right\|_{C^1[-\delta,\delta]}.
$$
Since $C_0+C_1<1$, by choosing $\delta$ sufficiently small Claim \ref{stp:T is contraction} follows.
\end{proof}
We conclude that there exists $\delta^*>0$ small such that for $0<\delta\leq \delta^*$, the map $T:\mathcal C(\delta)\to \mathcal C(\delta)$ is a contraction and hence by the Banach fixed point theorem there is a unique $Z\in \mathcal C(\delta)$ such that
$$
Z(t)=TZ(t)=\int_0^t H(V_Z(s))\,ds.
$$
Differentiating with respect to $t$, we get that $Z$ solves \eqref{eq:functional equations} for $|t|\leq \delta$.
\end{proof}
We make the following observations about the assumptions in Theorem \ref{thm: Existence}.
\begin{remark}\label{rmk:non existence}\rm
We show that even for $H$ smooth, satisfying \eqref{eq:system P} with \eqref{eq:First Component}, and \eqref{eq:bd on h1}, the system \eqref{eq:System} might not have any real solutions in a neighborhood of $t=0$.
In fact, consider for example the following ode:
\begin{equation}\label{eq:Counterexample System}
z'(t)= z'(t)^2+\dfrac{1-t}{4},\qquad z(0)=0.
\end{equation}
In this case $n=1$ and $H:\R^5\to \R$ with
$$H\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)=\(\xi^0\)^2+\dfrac{1-t}{4}$$
analytic.
The system $P=H(0;0,0;P,P)$ has a unique solution $P=1/2$, and so \eqref{eq:system P} and \eqref{eq:First Component} hold.
Let $\mathcal P=(0;0,0;1/2,1/2)$. Since $H(\mathcal P)<1$, \eqref{eq:bd on h1} holds in a small neighborhood of $\mathcal P$.
On the other hand, \eqref{eq:Counterexample System} has no real solutions for $t<0$, and hence none in any neighborhood of $t=0$.
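Indeed, viewing \eqref{eq:Counterexample System} as a quadratic equation in $z'(t)$ and solving,
$$
z'(t)^2-z'(t)+\dfrac{1-t}{4}=0,\qquad z'(t)=\dfrac{1\pm\sqrt{1-(1-t)}}{2}=\dfrac{1\pm\sqrt{t}}{2},
$$
so $z'(t)$ is real only for $t\geq 0$.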
This shows that there cannot exist a norm $\|\cdot \|$ in $\R$ so that the contraction condition \eqref{eq:Contraction} is satisfied.
In particular, this shows that the conclusion of \cite[Lemma 2.2]{vanbrunt-ockendon:lenstwowavelengths} is in error.
\end{remark}
\begin{remark}\label{rmk:More C0+C1}\rm Let $H$ be a map from a domain in $\R^{4n+1}$ with values in $\R^n$, and let $P$ be a solution to the system $P=H(0;{\bf 0,0};P,P)$ satisfying \eqref{eq:First Component}. Assume $H$ is $C^1$ in a neighborhood of $\mathcal P=(0;{\bf 0,0};P,P)$. Given a norm $\|\cdot\|$ in $\R^n$, let $\left\|\left|\cdot\right|\right\|$ be the induced norm on the space of $n\times n$ matrices, i.e., for an $n\times n$ matrix $A$
$$\left\|\left|A\right|\right\|=\max\{\|Av\|:v\in \R^n, \|v\|=1\},$$
see \cite[Section 5.6]{horn-johnson-matrix-analysis}.
Since $H$ is $C^1$ then there exists a neighborhood $N_{\varepsilon}(\mathcal P)$ as defined in Theorem \ref{thm: Existence} such that \eqref{eq:Lip in t}, \eqref{eq:Lip in zeta0,zeta1}, \eqref{eq:Lip in xi0,xi1} are satisfied.
The following proposition shows estimates for $C_0+C_1$ in \eqref{eq:Lip in xi0,xi1}.
\begin{proposition}\label{eq:prop Bound on C0+C1}
We define the $n\times n$ matrices $$\nabla_{\xi^0}H=\left(\dfrac{\partial h_i}{\partial \xi^0_j}\right)_{1\leq i,j\leq n},\quad \nabla_{\xi^1}H=\left(\dfrac{\partial h_i}{\partial \xi^1_j}\right)_{1\leq i,j\leq n}.$$
If \eqref{eq:Lip in xi0,xi1} holds for some $C_0, C_1$, then
\begin{equation}\label{eq:lower bound on C0+C1}
C_0\geq \left\|\left|\nabla_{\xi^0}H(\mathcal P)\right|\right\|,\text{ and } C_1\geq\left\|\left|\nabla_{\xi^1}H(\mathcal P)\right|\right\|.
\end{equation}
In addition, \eqref{eq:Lip in xi0,xi1} holds with $C_0=\max_{N_{\varepsilon}(\mathcal P)}\left\|\left|\nabla_{\xi^0} H\right|\right\|$ and $C_1=\max_{N_{\varepsilon}(\mathcal P)}\left\|\left|\nabla_{\xi^1} H\right|\right\|$.
\end{proposition}
\begin{proof}
We first prove \eqref{eq:lower bound on C0+C1}.
Let $v\in \R^n$ with $\|v\|=1$; for $s>0$ small, the vector $(0;{\bf 0,0};P+s\,v,P)\in N_{\varepsilon}(\mathcal P)$. From \eqref{eq:Lip in xi0,xi1}, we get
$$
\left\|H(0;{\bf 0,0};P+s\,v,P)-H(0;{\bf 0,0};P,P)\right\|\leq C_0\, |s|.
$$
Dividing by $|s|$ and letting $s\to 0$ we obtain, by application of the mean value theorem on each component, that for every $v\in \R^n$ with $\|v\|=1$
$$C_0\geq \left\| \nabla_{\xi^0} H(\mathcal P)\, v^t\right\|.$$
Taking the supremum over all $v\in \R^n$ with $\|v\|=1$ we obtain
$C_0\geq \left\|\left|\nabla_{\xi^0}H(\mathcal P)\right|\right\|.$
Similarly we get $C_1\geq \left\|\left|\nabla_{\xi^1}H(\mathcal P)\right|\right\|$.
To show the second part of the proposition,
by the fundamental theorem of calculus,
we have for $\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\),\(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\in N_{\varepsilon}(\mathcal P)$ that
\begin{align*}
&H\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)-H\(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\\
&=\int_{0}^1 DH\left( (1-s) \(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)+s \(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\right)\, \(0;\bf{0,0};\xi^0-\bar\xi^0,\xi^1-\bar \xi^1\)^t\,ds\\
&= \int_{0}^1 \nabla_{\xi^0}H\left( (1-s) \(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)+s \(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\right)\, \(\xi^0-\bar \xi^0\)^t\,ds\\
&\qquad + \int_{0}^1 \nabla_{\xi^1}H\left( (1-s) \(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)+s \(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\right)\, \(\xi^1-\bar \xi^1\)^t\,ds,
\end{align*}
where $DH$ is the $n\times (4n+1)$ matrix of the first derivatives of $H$ with respect to all variables.
Then
\begin{align*}
&\left\|H\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)-H\(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\right\|\\
&\leq \int_{0}^{1} \left\|\left|\nabla_{\xi^0}H\left( (1-s) \(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)+s \(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\right)\right|\right\|\, \left\|\xi^0-\bar \xi^0\right\|\, ds \\
&\qquad +\int_{0}^{1} \left\|\left|\nabla_{\xi^1}H\left( (1-s) \(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)+s \(t;\zeta^0,\zeta^1;\bar\xi^0,\bar\xi^1\)\right)\right|\right\|\,\left\|\xi^1-\bar \xi^1\right\|\, ds \\
&\leq \max_{N_{\varepsilon}(\mathcal P)}\left\|\left|\nabla_{\xi^0} H\right|\right\|\,\, \left\|\bar \xi^0-\xi^0\right\|+
\max_{N_{\varepsilon}(\mathcal P)}\left\|\left|\nabla_{\xi^1} H\right|\right\|\,\, \left\|\bar \xi^1-\xi^1\right\|.
\end{align*}
The proof is then complete.
\end{proof}
Given an $n\times n$ matrix $A$, let $R_A$ be its spectral radius, i.e., $R_A$ is the largest absolute value of the eigenvalues of $A$. From \cite[Theorem 5.6.9]{horn-johnson-matrix-analysis} we have $R_A\leq \left\|\left|A\right|\right\|$ for any matrix norm $\left\|\left|\cdot \right|\right\|$. Then from \eqref{eq:lower bound on C0+C1} we get the following corollary that shows that the possibility of choosing a norm in $\R^n$ for which the contraction property \eqref{eq:Contraction} holds depends on the spectral radii of the matrices $\nabla_{\xi^0} H(\mathcal P), \nabla_{\xi^1} H(\mathcal P)$.
\begin{corollary}\label{cor:spectral radii}
Let $H$ be as above. Denote by $R_{\xi^0}$ and $R_{\xi^1}$ the spectral radii of the matrices $\nabla_{\xi^0} H(\mathcal P),$ and $\nabla_{\xi^1} H(\mathcal P)$ respectively. Then for any norm in $\R^n$ and any $C_0,C_1$ satisfying \eqref{eq:Lip in xi0,xi1} we have
$$C_0+C_1\geq R_{\xi^0}+R_{\xi^1}.$$
\end{corollary}
To apply Theorem \ref{thm: Existence}, we need $H$ to satisfy \eqref{eq:Lip in xi0,xi1} together with the contraction condition \eqref{eq:Contraction} for some norm $\|\cdot \|$ in $\R^n$.
It might not be possible to find such a norm. In fact, if an $n\times n$ matrix $A$ has spectral radius $R_A\geq 1$, then for each norm $\|\cdot \|$ in $\R^n$ the induced matrix norm satisfies $\||A|\|\geq 1$; see \cite[Theorem 5.6.9 and Lemma 5.6.10]{horn-johnson-matrix-analysis}.
So if the sum of the spectral radii of the Jacobian matrices $\nabla_{\xi^0}H(\mathcal P),$ $\nabla_{\xi^1}H(\mathcal P)$ is greater than or equal to one, then from Corollary \ref{cor:spectral radii} it is not possible to find a norm $\|\cdot \|$ in $\R^n$ for which \eqref{eq:Contraction} holds.
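As an illustration, consider again the scalar equation \eqref{eq:Counterexample System} from Remark \ref{rmk:non existence}, for which $n=1$, $H\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)=\(\xi^0\)^2+\dfrac{1-t}{4}$, and $\mathcal P=(0;0,0;1/2,1/2)$. Here
$$
\nabla_{\xi^0}H(\mathcal P)=2\,\xi^0\Big|_{\xi^0=1/2}=1,\qquad \nabla_{\xi^1}H(\mathcal P)=0,
$$
so $R_{\xi^0}+R_{\xi^1}=1$, and by Corollary \ref{cor:spectral radii} every choice of $C_0,C_1$ in \eqref{eq:Lip in xi0,xi1} satisfies $C_0+C_1\geq 1$; this is consistent with the nonexistence of solutions observed in Remark \ref{rmk:non existence}.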
\end{remark}
\subsection{Uniqueness of solutions} In this section we show the following uniqueness theorem.
\begin{theorem}\label{thm:Uniqueness}
Under the assumptions of Theorem \ref{thm: Existence}, the local solution to \eqref{eq:functional equations} with $Z'(0)=P$ is unique.
\end{theorem}
The theorem is a consequence of the following lemma.
\begin{lemma}\label{lm:uniqueness lemma}
Let $\mathcal C(\delta)$ be the set in Definition \ref{def:set C(delta)}.
If there exists $\delta>0$ such that $W$ solves \eqref{eq:functional equations} for $|t|\leq \delta$, with $W'(0)=P$, and $V_W(t)\in N_{\varepsilon}(\mathcal P)$ for $|t|\leq \delta$, then $W\in \mathcal C(\delta)$.
\end{lemma}
\begin{proof}
Since $w_1'(t)=h_1(V_{W}(t))$, \eqref{eq:bd on h1} implies
\begin{equation}\label{eq:first component W}
|w_1(t)|=\left|\int_{0}^t h_1(V_W(s))\,ds\right|\leq |t|.
\end{equation}
Also, for $|t|,|\bar t|\leq \delta$, since $W$ is a solution to \eqref{eq:functional equations}, \eqref{eq:Bound on H} yields
\begin{equation}\label{eq:lip W}
\left\|W(t)-W(\bar t)\right\|=\left\|\int_{\bar t}^t H(V_W(s))\, ds\right\|\leq \alpha |t-\bar t|
\end{equation}
and by \eqref{eq:bd on h1}
\begin{equation}\label{eq:Lip w1}
\left|w_1(t)-w_1(\bar t)\right|=\left|\int_{\bar t}^t h_1(V_W(s))\,ds\right|\leq |t-\bar t|.
\end{equation}
It remains to show the Lipschitz estimate on $W'$. Let $|t|,|\bar t|\leq \delta$, then by the Lipschitz properties of $H$
\begin{align*}
\left\|W'(t)-W'(\bar t)\right\|&= \left\|H(V_{W}(t))-H(V_{W}(\bar t))\right\|\\
&\leq \Lambda |t-\bar t|+L_0 \left\|W(t)-W(\bar t)\right\|+L_1\left\|W(w_1(t))-W(w_1(\bar t))\right\|\\
&\qquad +C_0 \left\|W'(t)-W'(\bar t)\right\|+C_1\left\|W'(w_1(t))-W'(w_1(\bar t))\right\|.
\end{align*}
Using \eqref{eq:lip W}, \eqref{eq:first component W}, and \eqref{eq:Lip w1}, we get that for every $|t|,|\bar t|\leq \delta$
\begin{equation}\label{eq:Estimate}
\left\|W'(t)-W'(\bar t)\right\|\leq (\Lambda+(L_0+L_1)\alpha)|t-\bar t|+C_0 \left\|W'(t)-W'(\bar t)\right\|+C_1\left\|W'(w_1(t))-W'(w_1(\bar t))\right\|.
\end{equation}
Fix $t$ and $\bar t$ and let $r=|t-\bar t|$. Let $\tau$, and $\bar \tau$ be such that $|\tau|,|\bar \tau|\leq \delta$ and $|\tau-\bar \tau|\leq r$, then by \eqref{eq:Lip w1}
$$\left|w_1(\tau)-w_1(\bar \tau)\right|\leq |\tau-\bar \tau|\leq r.$$
Hence applying \eqref{eq:Estimate} for $\tau$ and $\bar \tau$ we get for $|\tau|,|\bar \tau|\leq \delta$, $|\tau-\bar \tau|\leq r$ that
\begin{align*}
\left\|W'(\tau)-W'(\bar \tau)\right\|
&\leq (\Lambda+\(L_0+L_1\)\alpha)|\tau-\bar \tau|+C_0 \left\|W'(\tau)-W'(\bar \tau)\right\|+C_1\left\|W'(w_1(\tau))-W'(w_1(\bar \tau))\right\|\\
&\leq (\Lambda+\(L_0+L_1\)\alpha)\,r+(C_0+C_1)\sup_{|\tau|,|\bar \tau|\leq \delta, |\tau-\bar \tau|\leq r} \left\|W'(\tau)-W'(\bar \tau)\right\|.
\end{align*}
Hence taking the supremum on the left hand side of the inequality, and using \eqref{eq:lower bound on mu} we get
$$
\sup_{|\tau|,|\bar \tau|\leq \delta, |\tau-\bar \tau|\leq r} \left\|W'(\tau)-W'(\bar \tau)\right\| \leq \dfrac{\Lambda+\(L_0+L_1\)\,\alpha}{1-C_0-C_1} \,r\leq \mu \,r
$$
so for every $|t|,|\bar t|\leq \delta$
$$
\left\|W'(t)-W'(\bar t)\right\|\leq \sup_{|\tau|,|\bar \tau|\leq \delta, |\tau-\bar \tau|\leq |t-\bar t|} \left\|W'(\tau)-W'(\bar \tau)\right\| \leq \mu\, |t-\bar t|,
$$
and the lemma follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:Uniqueness}]
Let $\delta_1,\delta_2\leq \delta^*$ and let $W^i$ solve \eqref{eq:functional equations} for $|t|\leq \delta_i$, with $\(W^i\)'(0)=P$ and $V_{W^i}(t)\in N_{\varepsilon}(\mathcal P)$ for $|t|\leq \delta_i$, $i=1,2$. From Lemma \ref{lm:uniqueness lemma}, $W^i\in \mathcal C(\delta_i)$, and since they solve \eqref{eq:functional equations} we have $TW^i(t)=W^i(t)$ for $|t|\leq \delta_i$, $i=1,2$.
Let $\delta=\min\{\delta_1,\delta_2\}$. We have $\mathcal C(\delta_i)\subset \mathcal C(\delta)$, $i=1,2$.
Since $\delta\leq \delta^*$, from the proof of the existence theorem $T$ has a unique fixed point in $\mathcal C(\delta)$. But $TW^i=W^i$ for $|t|\leq \delta$, and so $W^1=W^2$ for $|t|\leq \delta$.
\end{proof}
\begin{remark}\rm
If the vector $P$ solution to the system \eqref{eq:system P}
does not satisfy \eqref{eq:First Component}, then the system \eqref{eq:functional equations} may have infinitely many solutions. This goes back to the paper by Kato and McLeod \cite[Thm. 2]{katomcleod:functionaldifferentialequations} about single functional differential equations.
We refer to \cite[Equation (1.7)]{fox-mayers-ockendon-tayler:funcdiffeqexplicitexample} for representation formulas for infinitely many solutions.
\end{remark}
\begin{remark}\rm\label{rmk:theorem for general m}
Theorem \ref{thm: Existence} has an extension to more variables, with essentially the same proof.
The setup and the result in this case are as follows.
We set $Z(t)=\(z_1(t),\cdots ,z_n(t)\)$ where $z_i(t)$ are real valued functions of one variable, and
$Z'(t)=\(z_1'(t),\cdots ,z_n'(t)\)$.
Let $H=\(h_1,\cdots ,h_n\)$ where
$$h_i=h_i\(t;\zeta^0,\zeta^1,\cdots ,\zeta^m;\xi^0,\xi^1,\cdots ,\xi^m\)$$
are real valued functions with $\zeta^j,\xi^j\in \R^n$ for $1\leq j\leq m$, so each $h_i$ has $1+2\,(m+1)\,n$ variables.
Let
\[
X=\(t;\zeta^0,\zeta^1,\cdots ,\zeta^m;\xi^0,\xi^1,\cdots ,\xi^m\).
\]
We assume $1\leq m\leq n$.
Set $Z(z_k(t))=\(z_1(z_k(t)),\cdots ,z_n(z_k(t))\)$ for $1\leq k\leq m$.
We want to find $Z(t)$ as above satisfying the following functional differential equation
\begin{align*}
Z'(t)&=H\(t;Z(t),Z(z_1(t)),\cdots ,Z(z_m(t));Z'(t),Z'(z_1(t)),\cdots ,Z'(z_m(t))\)\\
Z(0)&=(0,\cdots ,0)\in \R^n,
\end{align*}
for $t$ in a neighborhood of $0$.
We assume that there exists $P=(p_1,\cdots ,p_n)\in \R^n$ with $|p_i|\leq 1$ for $1\leq i\leq m$ solving
\[
P=H\(0;{\bf 0,\cdots ,0}; P,\cdots ,P\)
\]
where $P$ is repeated $m+1$ times.
Let $\mathcal P=\left(0;{\bf 0,\cdots,0};P,\cdots,P\right)\in \R^{2(m+1)n+1}$, and let $\|\cdot \|$ be a norm in $\R^n$ with
$$
N_{\varepsilon}(\mathcal P)=\left\{\(t;{ \zeta^0,\cdots,\zeta^m;\xi^0,\cdots,\xi^m}\); |t|+ \|\zeta^0\|+\cdots+\|\zeta^m\|+\|\xi^0-P\|+\cdots+\|\xi^m-P\|\leq\varepsilon \right\}
$$
a neighborhood of $\mathcal P$ such that
\begin{enumerate}[(i)]
\item $H$ is uniformly Lipschitz in the variable $t$, i.e., there exists $\Lambda>0$ such that
\[
\left\|H\(\bar t;{ \zeta^0, \cdots,\zeta^m; \xi^0,\cdots, \xi^m}\)-H\( t;{ \zeta^0,\cdots, \zeta^m; \xi^0, \cdots,\xi^m}\)\right\|\leq \Lambda|\bar t-t|,
\]
for all $\(\bar t;{ \zeta^0,\cdots, \zeta^m; \xi^0,\cdots, \xi^m}\), \(t;{ \zeta^0,\cdots, \zeta^m; \xi^0,\cdots, \xi^m}\)\in N_{\varepsilon}(\mathcal P)$;
\item $H$ is uniformly Lipschitz in the variables $\zeta^0,\cdots,\zeta^m$, i.e., there exist positive constants $L_0,\cdots,L_m$ such that
\[
\left\|H\(t;{ \bar\zeta^0,\cdots ,\bar \zeta^m; \xi^0,\cdots , \xi^m}\)-H\(t;{\zeta^0, \cdots,\zeta^m; \xi^0, \cdots,\xi^m}\)\right\|\leq L_0\left\|\bar \zeta^0-\zeta^0\right\|+\cdots+L_m\left\|\bar\zeta^m-\zeta^m\right\|,
\]
for all $\(t;{ \bar\zeta^0, \cdots,\bar\zeta^m; \xi^0,\cdots , \xi^m}\), \(t;{ \zeta^0,\cdots, \zeta^m; \xi^0,\cdots , \xi^m}\)\in N_{\varepsilon}(\mathcal P)$;
\item $H$ is a uniform contraction in the variables $\xi^0,\cdots,\xi^m$, i.e., there exist constants $C_0,\cdots, C_m$ such that
\[
\left\|H\(t;{ \zeta^0,\cdots, \zeta^m; \bar \xi^0,\cdots, \bar \xi^m}\)-H\( t;{ \zeta^0,\cdots, \zeta^m; \xi^0, \cdots,\xi^m}\)\right\|\leq C_0\left\|\bar \xi^0-\xi^0\right\|+\cdots+C_m\left\|\bar\xi^m-\xi^m\right\|,
\]
for all $\(t;{ \zeta^0,\cdots, \zeta^m; \bar\xi^0, \cdots ,\bar\xi^m}\), \(t;{\zeta^0,\cdots, \zeta^m; \xi^0, \cdots, \xi^m}\)\in N_{\varepsilon}(\mathcal P)$,
with
\[
C_0+\cdots+C_m<1;
\]
\item For all $X\in N_{\varepsilon}(\mathcal P)$
\begin{equation*}
|h_1(X)|\leq 1.
\end{equation*}
\end{enumerate}
Under these assumptions, there exists $\delta>0$, such that the system
\begin{equation*}
\begin{cases}
Z'(t)=H\(t;Z(t),Z(z_1(t)),\cdots,Z(z_m(t));Z'(t),Z'(z_1(t)),\cdots,Z'(z_m(t))\)\\
Z(0)=0,
\end{cases}
\end{equation*}
has a unique solution defined for $|t|\leq \delta$ satisfying $Z'(0)=P$.
\end{remark}
\section{One point source case: Problem B}\label{sec:one point source problem}
\setcounter{equation}{0}
The setup in this section is the following. We are given a unit vector $w\in \R^3$, and a compact domain $\Omega$ contained in the upper unit sphere $S^2$, such that $\Omega=x(D)$, where $D$ is a convex and compact domain in $\R^2$ with nonempty interior. Here $x(t)$ are for example spherical coordinates, $t\in D$. Dichromatic rays with colors b and r are now emitted from the origin with unit direction $x(t)$, $t\in D$.
From the results from \cite[Section 3]{gutierrez:asphericallensdesign} with $n_1=n_3=1$, $n_2=n_r$, and $e_1=w$ we have the following.
Consider a $C^2$ surface with a given polar parametrization $\rho(t)x(t)$ for $t\in D$, and the surface parametrized by
$f_r(t)=\rho(t)x(t)+d_r(t)m_r(t)$, with $m_r(t)=\dfrac{1}{n_r}\(x(t)-\lambda_r \nu_\rho(t)\)$, $\lambda_r(t)
=\Phi_{n_r}\(x(t)\cdot \nu_\rho(t)\)$ from \eqref{formulaforlambda}, $\nu_\rho(t)$ the outer unit normal at $\rho(t)x(t)$, and with
\begin{equation}\label{eq:formula for dbt point source}
d_r(t)=\dfrac{C_r-\rho(t)\(1-w\cdot x(t)\)}{n_r-w\cdot m_r(t)},
\end{equation}
for some constant $C_r$. Then the lens bounded between $\rho$ and $f_r$ refracts the rays with color r into the direction $w$ provided that $C_r$ is chosen so that $d_r(t)>0$ and $f_r$ has a normal at each point.
Likewise and for the color b the surface
$f_b(t)=\rho(t)x(t)+d_b(t)m_b(t)$, with similar quantities as before with r replaced by b,
does a similar refracting job for rays with color b.
As before, we assume $n_b>n_r>1$, and the medium surrounding the lens is vacuum. To avoid total reflection for each color, compatibility conditions between $\rho$ and $w$ are needed, see \cite[condition (3.8)]{gutierrez:asphericallensdesign} which in our case reads
\[
\lambda_r\,\nu_\rho(t)\cdot w\leq x(t)\cdot w-1,\text{ and } \lambda_b\,\nu_\rho(t)\cdot w\leq x(t)\cdot w-1.
\]
The problem we consider in this section is to determine if there exist $\rho$ and corresponding surfaces $f_r$ and $f_b$ for each color such that $f_r$ can be obtained by a re-parametrization of $f_b$. That is, if there exist a positive function $\rho\in C^2(D)$, real numbers $C_r$ and $C_b$, and a $C^1$ map $\p:D\to D$ such that the surfaces $f_r$ and $f_b$, corresponding to $\rho$, $C_r,C_b$, have normals at each point and
\begin{equation}\label{eq:equality by re parametrization point source}
f_r(t)=f_b(\p(t))\qquad \forall t\in D.
\end{equation}
We refer to this as {\it Problem B}. As in the collimated case, if a solution exists, then $f_r(D) \subseteq f_b(D)$. Again, from an optical point of view, this means that the lens sandwiched between $\rho$ and $f_b$ refracts both colors into $w$; however, there could be points in $f_b(D)$ that are not reached by red rays.
If $w\notin \Omega$, we will show in Theorem \ref{thm:nonexistence point source} that Problem B is not solvable.
On the other hand, when $w\in \Omega$ we shall prove, in dimension two, that problem B is locally solvable, Theorem \ref{thm:Existence last}.
Notice that by rotating the coordinates we may assume without loss of generality that $w=e$. Theorem \ref{thm:Existence last} will follow from Theorem \ref{thm: Existence} on functional differential equations, assuming an initial size condition on the ratio between the thickness of the lens and its distance to the origin. By local solution we mean that there exists an interval $[-\delta,\delta]\subseteq D$, a positive function $\rho\in C^2[-\delta,\delta]$, real numbers $C_r$ and $C_b$, and
$\varphi:[-\delta,\delta]\to [-\delta,\delta]$ $C^1$ such that the corresponding surfaces $f_r$ and $f_b$ have normals at every point and
\begin{equation*}
f_r(t)=f_b(\p(t))\qquad \forall t\in [-\delta,\delta].
\end{equation*}
We will also show a necessary condition for solvability of Problem B, Corollary \ref{cor: Sufficient Condition}.
We first state the following lemma whose proof is the same as that of Lemma \ref{lem:fixed point}.
\begin{lemma}\label{lem:fixed point normal}
Given a surface $\rho(t)x(t)$, $t\in D$, and $w$ a unit vector in $\R^3$, let $f_r$ and $f_b$ be the surfaces parametrized as above. If $f_r(t)=f_b(t)$ for some $t\in D$, then $\nu_\rho(t)=x(t)$. In addition $d_b(t)=d_r(t)$.
\end{lemma}
We next show nonexistence of solutions to Problem $B$ for $w\notin \Omega$.
\begin{theorem}\label{thm:nonexistence point source}
Let $w$ be a unit vector in $\R^3$. If Problem B is solvable, then $x(t)=w$ for some $t\in D$. Therefore, since $x(D)=\Omega$, Problem $B$ has no solutions for $w\notin \Omega$.
\end{theorem}
\begin{proof}
Suppose there exist $\rho$ and $\varphi:D\to D$ satisfying \eqref{eq:equality by re parametrization point source}. Since $D$ is a compact and convex domain, by the Brouwer fixed point theorem $\varphi$ has a fixed point $t_0$, and from \eqref{eq:equality by re parametrization point source} $f_r(t_0)=f_b(t_0)$. Therefore, by Lemma \ref{lem:fixed point normal} $\nu_\rho(t_0)=x(t_0)$, and by Snell's law at $\rho(t_0)x(t_0)$ we have $m_b(t_0)=m_r(t_0)=x(t_0)$. Using Snell's law again at $f_r(t_0)=f_b(t_0)$, since $n_r\neq n_b$ and both colors with direction $x(t_0)$ are refracted at $f_r(t_0)=f_b(t_0)$ into $w$, we obtain $x(t_0)=w$.
\end{proof}
From now on our objective is to show that problem B is locally solvable in dimension two when $w\in \Omega$, Theorem \ref{thm:Existence last}.
\subsection{Two dimensional case, $w\in \Omega$.} Let $w$ be a unit vector in $\R^2$, by rotating the coordinates we will assume that $w=e=(0,1)$. Let $\Omega$ be a compact domain of the upper circle, such that $\Omega=x(D)$ where $D$ is a closed interval in $(-\pi/2,\pi/2)$, and $x(t)=(\sin t,\cos t)$.
We will use the following expression for the normal to a parametric curve.
\begin{lemma}\label{lm:first normal}
If a curve is given by the polar parametrization $\rho(t)x(t)=\rho(t)\,(\sin t, \cos t)$, with $\rho\in C^1$, then the unit outer normal is
$$\nu(t)=\dfrac{1}{\sqrt{\rho^2(t)+\rho'(t)^2}}\left(\rho(t)\sin t-\rho'(t)\cos t, \rho'(t)\sin t+\rho(t)\cos t\right).$$
\end{lemma}
\begin{proof}
The tangent vector to the curve at the point $\rho(t)\,x(t)$, with $x(t)=(\sin t,\cos t)$, equals
$$
(\rho(t)\, x(t))'=\rho'(t)x(t)+\rho(t)x'(t)
=\left(\rho'(t)\sin t+\rho(t)\cos t, \rho'(t)\cos t-\rho(t)\sin t\right).
$$
Thus
\begin{align*}
|(\rho(t)\, x(t))'|^2
&=\rho(t)^2+\rho'(t)^2.
\end{align*}
Hence the normal
$$
\nu(t)=\pm\dfrac{1}{\sqrt{\rho^2(t)+\rho'(t)^2}}\left(\rho(t)\sin t-\rho'(t)\cos t, \rho'(t)\sin t+\rho(t)\cos t\right).
$$
Since $\nu(t)$ is outer, i.e., $x(t)\cdot \nu(t)\geq 0$, we take the positive sign above and
the lemma follows.
\end{proof}
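As a quick check of Lemma \ref{lm:first normal}, if $\rho(t)\equiv \rho_0$ is constant, the curve is a circle centered at the origin and the formula gives
$$
\nu(t)=\dfrac{1}{\sqrt{\rho_0^2+0}}\left(\rho_0\sin t-0,\, 0+\rho_0\cos t\right)=(\sin t,\cos t)=x(t),
$$
the radial direction, as expected.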
As a consequence, we obtain the following important lemma.
\begin{lemma}\label{lem:initial condition point source}
Assume Problem $B$ is solvable in the plane when $w=e$. Then $0\in D$,
$$\varphi(0)=0,\qquad d_b(0)=d_r(0), \qquad \text{and } \rho'(0)=0.$$
\end{lemma}
\begin{proof}
Using the proof of Theorem \ref{thm:nonexistence point source}, there exists $t_0\in D$ such that $\varphi(t_0)=t_0$ and $x(t_0)=e=(0,1)$, then $(\sin t_0,\cos t_0)=(0,1),$ and $t_0=0$. By Lemma \ref{lem:fixed point normal}, we get $d_b(0)=d_r(0)$, and $\nu_{\rho}(0)=e$. Therefore, Lemma \ref{lm:first normal} yields
$$
(0,1)=\dfrac{1}{\sqrt{\rho^2(0)+\rho'(0)^2}}(-\rho'(0),\rho(0)),
$$
and equating first components gives $\rho'(0)=0$.
\end{proof}
\subsection{Derivation of a system of functional equations from the solvability of Problem B in the plane.}\label{subsec:Problem B implies System}
Assume Problem B has a solution refracting rays of both colors b and r into the direction $e$, and recall Lemma \ref{lem:initial condition point source}.
\footnote{We are assuming that $0$ is an interior point of $D$. Otherwise, in our statements the interval $[-\delta,\delta]$ has to be replaced by either $[-\delta,0]$ or $[0,\delta]$.}
We set $\rho(0)=\rho_0$ and $d_b(0)=d_r(0)=d_0$, and prove the following theorem.
\begin{theorem}\label{thm:Problem B implies system}
Suppose there exist $\rho$ and $\varphi$ solving Problem $B$ in an interval $D$.
Let $Z(t)=\(z_1(t),z_2(t),z_3(t),z_4(t),z_5(t)\)\in \R^5$ with
\begin{equation}\label{eq:change of variables phi to z}
z_1(t)=\varphi(t),\quad z_2(t)=v_1(t)+\rho_0,\quad z_3(t)=v_2(t),\quad z_4(t)=v_1'(t),\quad z_5(t)=v_2'(t)-\rho_0,
\end{equation}
where $v_1(t)=-\rho(t)\cos t$, and $v_2(t)=\rho(t)\sin t$.
Let $\mathcal Z=(0;{\bf 0,0;}Z'(0),Z'(0))\in \R^{21}$.\footnote{$Z'(0)=\(\p'(0),0,\rho_0,-\rho''(0)+\rho_0,0\)$.}
There exists a neighborhood of $\mathcal Z$ and a map $H$ defined and smooth in that neighborhood with
$$
H:=H(t;\zeta^0,\zeta^1;\xi^0,\xi^1)=(h_1,\cdots,h_5)
$$
where $\zeta^0,\zeta^1,\xi^0,\xi^1\in\R^5$,
$
\zeta^i=\(\zeta^i_1,\zeta^i_2,\cdots,\zeta^i_5\), \xi^i=\(\xi^i_1,\xi^i_2,\cdots,\xi^i_5\),
$
and with the functions $h_1,\cdots ,h_5$ given by \eqref{eq:formula for h1}, \eqref{eq:formula for h2}, \eqref{eq:formula for h3}, \eqref{eq:formula for h4}, and
\eqref{eq:h5formula}, respectively, such that
$Z$ is a solution to the system of functional differential equations
\begin{align}
Z'(t)&=H\(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t))\)\label{eq:Optic System}\\
Z(0)&={\bf 0}\notag
\end{align}
for $t$ in a neighborhood of $0$. The map $H$ depends on the values $\rho_0$ and $d_0$.
\end{theorem}
\begin{proof}
From Lemma \ref{lem:initial condition point source}, $Z(0)={\bf 0}$.
We will derive the expressions for $h_i(t;\zeta^0,\zeta^1;\xi^0,\xi^1)$, $1\leq i\leq 5$, so that
$$z_i'(t)=h_i\(t;Z(t),Z(z_1(t)),Z'(t),Z'(z_1(t))\).$$
The first step is to express the quantities involved as functions of $$\(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t))\).$$
By Snell's law, a ray emitted from the origin with color r and direction $x(t)=(\sin t,\cos t)$ is refracted by the curve $\rho(t)x(t)$ into the medium with refractive index $n_r$ in the direction $m_r(t)$ satisfying
$
x(t)-n_r\, m_r(t)=\Phi_{n_r}(x(t)\cdot\nu(t))\,\nu(t),
$
where $\nu(t)$ is the outward unit normal to $\rho$ at $\rho(t)x(t)$.
From \eqref{formulaforlambda}
\[
\Phi_{n_r}(s)=s-\sqrt{n_r^2-1+s^2}=\dfrac{1-n_r^2}{s+\sqrt{n_r^2-1+s^2}},
\]
and from Lemma \ref{lm:first normal} $\nu(t)=\dfrac{v'(t)}{|v'(t)|},$ and $x(t)\cdot \nu(t)=\dfrac{\rho(t)}{\sqrt{\rho^2(t)+\rho'(t)^2}}=\dfrac{|v(t)|}{|v'(t)|}$, with $v(t)=\(v_1(t),v_2(t)\)$. So
\begin{align*}
\Phi_{n_r}(x(t)\cdot \nu(t))&=\dfrac{1-n_r^2}{\dfrac{|v(t)|}{|v'(t)|}+\sqrt{n_r^2-1+\dfrac{|v(t)|^2}{|v'(t)|^2}}}=\dfrac{(1-n_r^2)|v'(t)|}{|v(t)|+\sqrt{|v(t)|^2+(n_r^2-1)|v'(t)|^2}}.
\end{align*}
Hence
\begin{align}
m_r(t)&=\dfrac{1}{n_r}\left[x(t)-\Phi_{n_r}(x(t)\cdot\nu(t))\nu(t)\right]=\dfrac{1}{n_r}\left[x(t)-A_r\(v(t),v'(t)\)v'(t)\right]\label{eq:Snelllaw}\\
&:=\(m_{1r}(t),m_{2r}(t)\)\notag
\end{align}
with
\begin{equation}\label{eq:Anr formula}
A_r(v(t),v'(t))=\dfrac{1-n_r^2}{|v(t)|+\sqrt{|v(t)|^2+(n_r^2-1)|v'(t)|^2}}.
\end{equation}
Rewriting the last expressions in terms of the variables $z_i(t)$ introduced in \eqref{eq:change of variables phi to z}, and omitting the dependence on $t$ to simplify the notation, we obtain
\begin{align}
A_r(v(t),v'(t))&=\dfrac{1-n_r^2}{\left|(z_2-\rho_0,z_3)\right|+\sqrt{\left|(z_2-\rho_0,z_3)\right|^2+(n_r^2-1)\left|(z_4,z_5+\rho_0)\right|^2}}:=\mathcal A_r\left(Z(t)\right),\label{eq:Anr}\\
m_{1r}(t)&=\dfrac{1}{n_r}\left[\sin t-\mathcal A_r(Z)\,z_4\right]:=\mu_r\(t,Z(t)\),\label{eq:m1r}\\
m_{2r}(t)&=\dfrac{1}{n_r}\left[\cos t-\mathcal A_r(Z)\, (z_5+\rho_0)\right]:=\tau_r\(t,Z(t)\).\label{eq:m2r}
\end{align}
Notice that $\tau_r\(0,Z(0)\)=\tau_r\(0,\bf{0}\)=1$.
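This follows by direct computation: at $Z(0)={\bf 0}$ we have $\left|(z_2-\rho_0,z_3)\right|=\left|(z_4,z_5+\rho_0)\right|=\rho_0$, so from \eqref{eq:Anr}
$$
\mathcal A_r({\bf 0})=\dfrac{1-n_r^2}{\rho_0+\sqrt{\rho_0^2+(n_r^2-1)\rho_0^2}}=\dfrac{(1-n_r)(1+n_r)}{\rho_0(1+n_r)}=\dfrac{1-n_r}{\rho_0},
$$
and then from \eqref{eq:m2r}
$$
\tau_r(0,{\bf 0})=\dfrac{1}{n_r}\left[1-\dfrac{1-n_r}{\rho_0}\,\rho_0\right]=\dfrac{1}{n_r}\left[1-(1-n_r)\right]=1.
$$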
If for each $t$, the ray with direction $m_r(t)$ is refracted by the upper face of the lens into the direction $e=(0,1)$, then the upper face is parametrized by the vector
$$f_{r}(t)=\rho(t)x(t)+d_r(t)m_{r}(t):=\(f_{1r}(t),f_{2r}(t)\).$$
From \eqref{eq:formula for dbt point source}, and \eqref{eq:m2r}
\begin{align}
d_r(t)&=\dfrac{C_{r}-\rho(t)(1-\cos t)}{n_r-m_{2r}(t)}=\dfrac{C_{r}-|v(t)|-v_1(t)}{n_r-\tau_r(t,Z(t))}\label{eq:Dr}\\
&=\dfrac{C_{r}-\left|(z_2-\rho_0,z_3)\right|-z_2+\rho_0}{n_r-\tau_r\(t,Z(t)\)}:=D_r\(t,Z(t)\),\notag
\end{align}
with $C_r$ a constant.
Since $\rho$ and $\varphi$ solve Problem B, from Lemma \ref{lem:initial condition point source} we have $\rho'(0)=0$, and $\nu(0)=(0,1)$. So $m_r(0)=(0,1)$ and from \eqref{eq:formula for dbt point source} we get $C_r=(n_r-1)\,d_0$.
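The value of $C_r$ follows by evaluating \eqref{eq:formula for dbt point source} at $t=0$: there $x(0)=e=w$ and $m_r(0)=e$, so $w\cdot x(0)=w\cdot m_r(0)=1$ and
$$
d_0=d_r(0)=\dfrac{C_r-\rho_0\,(1-1)}{n_r-1}=\dfrac{C_r}{n_r-1},
$$
that is, $C_r=(n_r-1)\,d_0$.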
Hence
\begin{align}
f_{1r}(t)&=z_3+D_r(t,Z)\,\mu_r(t,Z):=F_{1r}(t,Z(t))\label{eq:F1r}\\
f_{2r}(t)&=-z_2+\rho_0+D_r(t,Z)\,\tau_r(t,Z):=F_{2r}(t,Z(t))\label{eq:F2r}.
\end{align}
In addition, by the Snell law at $f_r(t)$,
$
m_r(t)-\dfrac{1}{n_r}e=\lambda_{2,r}(t)\,\nu_r(t),
$
where $\nu_r$ is the normal to the upper surface at the point $f_r(t)$ (we are using here that the normal to $f_r$ exists since we are assuming Problem B is solvable).
Since $n_r>1$, then $\lambda_{2,r}>0$ and so taking absolute values in the last expression yields
\begin{align}
\lambda_{2,r}(t)&=\left|m_r(t)-\dfrac{1}{n_r}e\right|=\sqrt{1+\dfrac{1}{n_r^2}-\dfrac{2}{n_r}m_{2r}(t)}=\sqrt{1+\dfrac{1}{n_r^2}-\dfrac{2}{n_r}\tau_r(t,Z)}:=\Lambda_{2,r}(t,Z(t)).\label{eq:lambdar}
\end{align}
For $t\in \R$ and $\zeta=(\zeta_1,\cdots,\zeta_5)\in \R^5$ we let
\begin{equation}\label{eq:Formulas 1}
\begin{cases}
&\mathcal A_r(\zeta)=\dfrac{1-n_r^2}{\left|(\zeta_2-\rho_0,\zeta_3)\right|+\sqrt{\left|(\zeta_2-\rho_0,\zeta_3)\right|^2+(n_r^2-1)\left|(\zeta_4,\zeta_5+\rho_0)\right|^2}}\\%\label{eq:General Anr}\\
&\mu_r(t,\zeta)=\dfrac{1}{n_r}\left[\sin t-\mathcal A_r(\zeta)\,\zeta_4\right],\qquad
\tau_r(t,\zeta)=\dfrac{1}{n_r}\left[\cos t-\mathcal A_r(\zeta)\, (\zeta_5+\rho_0)\right]\\%.\label{eq:General tau}\\
&D_r(t,\zeta)=\dfrac{C_{r}-\left|(\zeta_2-\rho_0,\zeta_3)\right|-\zeta_2+\rho_0}{n_r-\tau_r\(t,\zeta\)},\qquad \text{with } C_r=(n_r-1)\,d_0\\%\label{eq:General Dr}\\
&F_{1r}(t,\zeta)= \zeta_3+D_r(t,\zeta)\,\mu_r(t,\zeta),\qquad
F_{2r}(t,\zeta)=-\zeta_2+\rho_0+D_r(t,\zeta)\,\tau_r(t,\zeta)\\%.\label{eq:General F2r}\\
&\Lambda_{2,r}(t,\zeta)=\sqrt{1+\dfrac{1}{n_r^2}-\dfrac{2}{n_r}\tau_r(t,\zeta)}.
\end{cases}
\end{equation}
The functions $\mathcal A_r,\mu_r,\tau_r$ are well defined and smooth for all $t\in \R$ and for all $\zeta=(\zeta_1,\zeta_2,\zeta_3,\zeta_4,\zeta_5)$ with $(\zeta_2,\zeta_3)\neq (\rho_0,0)$.
Since $\tau_r\(0,\bf{0}\)=1$, then all functions in \eqref{eq:Formulas 1} are well defined and smooth in a neighborhood of $t=0$ and $\zeta=\bf 0$.
Notice that the definitions of $\mathcal A_r(\zeta),\mu_r(t,\zeta),\tau_r(t,\zeta)$ and $\Lambda_{2,r}(t,\zeta)$ depend on the value of $\rho_0$, and the definitions of $D_r(t,\zeta),F_{1r}(t,\zeta)$ and $F_{2r}(t,\zeta)$ depend on the values of $\rho_0$ and $d_0$.
To later determine the functions $h_i$, we next calculate the derivatives of $A_r$, $m_{1r}$, $m_{2r}$, $d_r$, $f_{1r}$, $f_{2r}$, and $\lambda_{2,r}$ with respect to $t$. Differentiating \eqref{eq:Anr}, \eqref{eq:m1r}, \eqref{eq:m2r}, \eqref{eq:Dr}, \eqref{eq:F1r}, and \eqref{eq:F2r} with respect to $t$ yields
{\small
\begin{align}
\dfrac{d}{d\,t}A_r(v(t),v'(t))
&=\dfrac{n_r^2-1}{\(\left|(z_2-\rho_0,z_3)\right|+\sqrt{\left|(z_2-\rho_0,z_3)\right|^2+(n_r^2-1)\left|(z_4,z_5+\rho_0)\right|^2}\)^2}\label{eq:tildeAnr}\\
&\qquad \qquad \left[\dfrac{(z_2-\rho_0,z_3)\cdot (z_2',z_3')}{|(z_2-\rho_0,z_3)|}+\dfrac{(z_2-\rho_0,z_3)\cdot (z_2',z_3')+(n_r^2-1)(z_4,z_5+\rho_0)\cdot (z_4',z_5')}{\sqrt{\left|(z_2-\rho_0,z_3)\right|^2+(n_r^2-1)\left|\(z_4,z_5+\rho_0\)\right|^2}}\right]\notag\\
&=\dfrac{\mathcal A_r(Z)^2}{n_r^2-1}\left[ \dfrac{(z_2-\rho_0,z_3)\cdot (z_2',z_3')}{\left|(z_2-\rho_0,z_3)\right|}+\dfrac{(z_2-\rho_0,z_3)\cdot (z_2',z_3') +(n_r^2-1)(z_4,z_5+\rho_0)\cdot (z_4',z_5')}{\sqrt{\left|(z_2-\rho_0,z_3)\right|^2+(n_r^2-1)\left|(z_4,z_5+\rho_0)\right|^2}}\right]\notag\\
&:=\widetilde{\mathcal A}_r(Z(t),Z'(t)),\notag\\
\dfrac{d}{d\,t}m_{1r}(t)&=\dfrac{1}{n_r}\left[\cos t -\mathcal A_r(Z)\,z_4'-\widetilde{\mathcal A}_r(Z,Z')\,z_4\right]:=\widetilde{\mu}_r\(t,Z(t),Z'(t)\),\label{eq:tildemur}\\
\dfrac{d}{d\,t}m_{2r}(t)&=\dfrac{1}{n_r}\left[-\sin t-\mathcal A_r(Z)\,z_5'-\widetilde{\mathcal A}_r(Z,Z')\,(z_5+\rho_0)\right]:=\widetilde{\tau}_r\(t,Z(t),Z'(t)\)\label{eq:tildetaur},
\end{align}
\begin{align}
\dfrac{d}{d\,t}d_r(t)&=\dfrac{\left(-\dfrac{\(z_2-\rho_0,z_3\)\cdot(z_2',z_3')}{\left|\(z_2-\rho_0,z_3\)\right|}-z_2'\right)(n_r-\tau_r(t,Z))+\widetilde{\tau}_r(t,Z,Z')\left(C_{r}-\left|\(z_2-\rho_0,z_3\)\right|-z_2+\rho_0\right)}{\(n_r-\tau_r(t,Z)\)^2}\label{eq:tildeDr}\\
&=\dfrac{-\dfrac{(z_2-\rho_0,z_3)\cdot (z_2',z_3')}{\left|(z_2-\rho_0,z_3)\right|}-z_2'+\tilde \tau_r(t,Z,Z')\,D_r(t,Z)}{n_r-\tau_r(t,Z)}:=\widetilde {D}_r(t,Z(t),Z'(t))\notag
\end{align}
\begin{align}
\dfrac{d}{d\,t}f_{1r}(t)&=z_3'+D_r(t,Z)\,\widetilde{\mu}_r(t,Z,Z')+\widetilde{D}_r(t,Z,Z')\,\mu_r(t,Z):=\widetilde{F}_{1r}(t,Z(t),Z'(t))\label{eq:tildeF1r}
\end{align}
\begin{align}
\dfrac{d}{d\,t}f_{2r}(t)&=-z_2'+D_r(t,Z)\,\widetilde{\tau}_r(t,Z,Z')+\widetilde{D}_r(t,Z,Z')\,\tau_r(t,Z):=\widetilde {F}_{2r}(t,Z(t),Z'(t))\label{eq:tildeF2r}\\
\dfrac{d}{d\,t}\lambda_{2,r}(t)&=-\dfrac{\dfrac{1}{n_r}\widetilde{\tau}_r(t,Z,Z')}{\Lambda_{2,r}(t,Z)}:=\widetilde{\Lambda}_{2,r}(t,Z(t),Z'(t)).\label{eq:tildelambda}
\end{align}
}
For $t\in \R$ and $\zeta,\xi\in \R^5$ we let
{\small
\begin{equation}\label{eq:derivativeformula}
\begin{cases}
&\widetilde{\mathcal A}_r(\zeta,\xi)=\dfrac{\mathcal A_r(\zeta)^2}{n_r^2-1}\left[ \dfrac{(\zeta_2-\rho_0,\zeta_3)\cdot (\xi_2,\xi_3)}{\left|(\zeta_2-\rho_0,\zeta_3)\right|}+\dfrac{(\zeta_2-\rho_0,\zeta_3)\cdot (\xi_2,\xi_3) +(n_r^2-1)(\zeta_4,\zeta_5+\rho_0)\cdot (\xi_4,\xi_5)}{\sqrt{\left|(\zeta_2-\rho_0,\zeta_3)\right|^2+(n_r^2-1)\left|(\zeta_4,\zeta_5+\rho_0)\right|^2}}\right]\\
&\widetilde{\mu}_{r}(t,\zeta,\xi)=\dfrac{1}{n_r}\left[\cos t -\mathcal A_r(\zeta)\xi_4-\widetilde{\mathcal A}_r(\zeta,\xi)\zeta_4\right]\\
&\widetilde{\tau}_{r}(t,\zeta,\xi)=\dfrac{1}{n_r}\left[-\sin t-\mathcal A_r(\zeta)\xi_5-\widetilde{\mathcal A}_r(\zeta,\xi)(\zeta_5+\rho_0)\right]\\
&\widetilde{D}_{r}(t,\zeta,\xi)=\dfrac{-\dfrac{(\zeta_2-\rho_0,\zeta_3)\cdot (\xi_2,\xi_3)}{\left|(\zeta_2-\rho_0,\zeta_3)\right|}-\xi_2+\tilde \tau_r(t,\zeta,\xi)\,D_r(t,\zeta)}{n_r-\tau_r(t,\zeta)}\\
&\widetilde{F}_{1r}(t,\zeta,\xi)=\xi_3+D_r(t,\zeta)\widetilde{\mu}_r(t,\zeta,\xi)+\widetilde{D}_r(t,\zeta,\xi)\mu_r(t,\zeta)\\
&\widetilde{F}_{2r}(t,\zeta,\xi)=-\xi_2+D_r(t,\zeta)\widetilde{\tau}_r(t,\zeta,\xi)+\widetilde{D}_r(t,\zeta,\xi)\tau_r(t,\zeta)\\
&\widetilde{\Lambda}_{2,r}(t,\zeta,\xi)=-\dfrac{\dfrac{1}{n_r}\widetilde\tau_r(t,\zeta,\xi)}{\Lambda_{2,r}(t,\zeta)}.
\end{cases}
\end{equation}
}
As for \eqref{eq:Formulas 1}, all functions in \eqref{eq:derivativeformula} are well defined and smooth in a neighborhood of $t=0$ and $\zeta={\bf 0}$, and for any $\xi$.
We mention the following remark that will be used later.
\begin{remark}\label{rmk:derivative}\rm
Using the formulas in \eqref{eq:Formulas 1} and \eqref{eq:derivativeformula}, notice that for any differentiable map $U:V\to \R^5$ with $V$ a neighborhood of $t=0$ and with $U(0)={\bf 0}$, we have the following formulas valid for $t$ in a neighborhood $V'$ of $t=0$ (possibly smaller than $V$):
$$
\begin{array}{llrr}
\dfrac{d}{dt}[\mathcal A_r(U(t))]&=\widetilde {\mathcal A}_r(U(t),U'(t)),& \dfrac{d}{dt}[\mu_r(t,U(t))]&=\widetilde {\mu}_r(t,U(t),U'(t)), \\ \\ \dfrac{d}{dt}[\tau_r(t,U(t))]&=\widetilde {\tau}_r(t,U(t),U'(t)),&
\dfrac{d}{dt}[D_r(t,U(t))]&=\widetilde {D}_r(t,U(t),U'(t)), \\ \\
\dfrac{d}{dt}[F_{1r}(t,U(t))]&=\widetilde {F}_{1r}(t,U(t),U'(t)) ,
&\dfrac{d}{dt}[F_{2r}(t,U(t))]&=\widetilde {F}_{2r}(t,U(t),U'(t)),\\ \\
\dfrac{d}{dt}[\Lambda_{2,r}(t,U(t))]&=\widetilde {\Lambda}_{2,r}(t,U(t),U'(t)).
\end{array}
$$
\end{remark}
We also obtain the same formulas for the color $b$ with $n_r$ replaced by $n_b$.
We are now ready to calculate $h_i$, one by one, for $i=1,\cdots, 5$ to obtain the system of functional differential equations satisfied by $Z(t)=(\varphi(t),v_1(t)+\rho_0,v_2(t),v_1'(t),v_2'(t)-\rho_0)$.
Recall $h_i:=h_i\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)$, with $\zeta^0,\zeta^1,\xi^0,\xi^1\in \R^5$,
$\zeta^i=\(\zeta^i_1,\cdots,\zeta^i_5\)$, $\xi^i=\(\xi^i_1,\cdots,\xi^i_5\),$ $i=0,1$.
\\ \\
\textbf{Calculation of $h_1$.}
Since $z_1=\varphi$ satisfies $f_{r}(t)=f_{b}(\varphi(t))$, taking the first components and differentiating with respect to $t$ yields
\begin{equation}\label{eq:first components derivatives equal}
f_{1r}'(t)=\varphi'(t)\,f_{1b}'(\varphi(t)).
\end{equation}
We claim that $f_{1b}'(\varphi(t))\neq 0$ in a neighborhood of $t=0$.
\begin{proof}[Proof of the claim.]
By continuity of $f_{1b}'\circ \varphi$ and since $\p(0)=0$ by Lemma \ref{lem:initial condition point source}, it is equivalent to show that $f_{1b}'(0)\neq 0$.
Recall that $f_{1r}(t)=\rho(t)\sin t+d_r(t)m_{1r}(t)$, $\rho(0)=\rho_0$, $d_r(0)=d_0$, and $m_r(0)=(0,1)$. Then
$$f_{1r}'(0)=\rho_0+d_0\,m_{1r}'(0).$$
Also from \eqref{eq:tildemur}, since $Z(0)={\bf 0}$
$$m_{1r}'(0)=\dfrac{1}{n_r}\left(1-\mathcal A_{r}({\bf 0})z_4'(0)\right).$$
Notice that $z_4'(t)=v_1''(t)=\rho(t)\cos t-\rho''(t)\cos t+2\rho'(t)\sin t$, then $z_4'(0)=\rho_0-\rho''(0)$. Also from \eqref{eq:Anr}
$$
\mathcal A_{r}({\bf 0})=\dfrac{1-n_r^2}{(1+n_r)\rho_0}=\dfrac{1-n_r}{\rho_0}.
$$
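Indeed, since $Z(0)={\bf 0}$, the denominator of $\mathcal A_r$ in \eqref{eq:Formulas 1} evaluated at $\zeta={\bf 0}$ reduces to
$$\left|(-\rho_0,0)\right|+\sqrt{\left|(-\rho_0,0)\right|^2+(n_r^2-1)\left|(0,\rho_0)\right|^2}=\rho_0+n_r\,\rho_0=(1+n_r)\,\rho_0.$$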
We conclude that
\begin{align*}
m_{1r}'(0)&=\dfrac{1}{n_r}\left(1+\dfrac{n_r-1}{\rho_0}(\rho_0-\rho''(0))\right)=1-\dfrac{n_r-1}{n_r}\dfrac{\rho''(0)}{\rho_0}.
\end{align*}
Similarly, $f_{1b}(t)=\rho(t)\sin t+d_b(t)\,m_{1b}(t)$, $d_b(0)=d_0$, $m_b(0)=(0,1)$, and we get
$f_{1b}'(0)=\rho_0+d_0\,m_{1b}'(0)$, with
$$m_{1b}'(0)=1-\dfrac{n_b-1}{n_b}\dfrac{\rho''(0)}{\rho_0}.$$
Suppose by contradiction that $f_{1b}'(0)=0$. Then by \eqref{eq:first components derivatives equal} $f_{1r}'(0)=0$.
Hence from the calculations above $m_{1b}'(0)=m_{1r}'(0)=-\rho_0/d_0$. Since $n_r\neq n_b$, $m_{1b}'(0)=m_{1r}'(0)$ implies $\rho''(0)=0$. So $$m_{1b}'(0)=m_{1r}'(0)=1=-\rho_0/d_0<0,$$
a contradiction.
\end{proof}
Since $z_1=\p$ we then conclude that $f_{1b}'(z_1(t))\neq 0$ in a neighborhood of $t=0$, and
$$z_1'(t)=\varphi'(t)=\dfrac{f_{1r}'(t)}{f_{1b}'(z_1(t))}.$$
Applying formula \eqref{eq:tildeF1r} for both $b$ and $r$ yields
\begin{align}
z_1'(t)=\varphi'(t)&=\dfrac{\widetilde{F}_{1r}(t,Z(t),Z'(t))}{\widetilde{F}_{1b}(z_1(t),Z(z_1(t)),Z'(z_1(t)))}:=h_1(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t)))\notag
\end{align}
with
\begin{equation}\label{eq:formula for h1}
h_1(t;\zeta^0,\zeta^1;\xi^0,\xi^1)=\dfrac{\widetilde F_{1r}(t,\zeta^0,\xi^0)}{\widetilde F_{1b}(\zeta^0_1,\zeta^1,\xi^1)};
\end{equation}
$\widetilde{F}_{1r}$ is defined explicitly in \eqref{eq:derivativeformula}, and $\widetilde{F}_{1b}$ has a similar expression with $n_r$ replaced by $n_b$.
We next verify that $h_1$ is smooth in a neighborhood of $\mathcal Z=(0;{\bf 0,0};Z'(0),Z'(0))$, where $Z'(0)=\(\p'(0),0,\rho_0,-\rho''(0)+\rho_0,0\)$. From \eqref{eq:Formulas 1}, $\mathcal A_r$ is smooth in a neighborhood of ${\bf 0}\in \R^5$, and $\mu_r,\tau_r, D_r, F_{1r}, F_{2r}, \Lambda_{2,r}$ are smooth in a neighborhood of $(0,{\bf 0})\in \R^6$. Also, from \eqref{eq:derivativeformula}, $\widetilde{\mathcal A}_{r}$ is smooth in a neighborhood of $({\bf 0},Z'(0))\in \R^{10}$, and $\widetilde{\mu}_r,\widetilde{\tau}_r, \widetilde{D}_r, \widetilde{F}_{1r}, \widetilde{F}_{2r}, \widetilde{\Lambda}_{2,r}$ are smooth in a neighborhood of $(0,{\bf 0},Z'(0))\in \R^{11}$. The same smoothness holds for the corresponding functions with $n_b$ in place of $n_r$.
Therefore, from \eqref{eq:formula for h1}, to show that $h_1$ is smooth in a neighborhood of $\mathcal Z$, it is enough to prove that $\widetilde{F}_{1b}(0,{\bf 0},Z'(0))\neq 0$.
In fact, since $Z(0)={\bf 0}$ we obtain from \eqref{eq:tildeF1r} for $n_b$ and the claim above that
$$\widetilde {F}_{1b}\(0,{\bf 0},Z'(0)\)=\widetilde{F}_{1b}\(0, Z(0),Z'(0)\)= f_{1b}'(0)\neq 0.$$
\textbf{Calculation of $h_2$ and $h_3$.}
We have
\begin{equation}\label{eq:z2'}
z_2'(t)=v_1'(t)=z_4(t):=h_2(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t)))
\end{equation}
with
\begin{equation}\label{eq:formula for h2}
h_2(t;\zeta^0,\zeta^1;\xi^0,\xi^1)=\zeta^0_4.
\end{equation}
Similarly,
\begin{equation}\label{eq:z3'}
z_3'(t)=v_2'(t)=z_5(t)+\rho_0:=h_3(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t)))
\end{equation}
with
\begin{equation}\label{eq:formula for h3}
h_3(t;\zeta^0,\zeta^1;\xi^0,\xi^1)=\zeta^0_5+\rho_0.
\end{equation}
Trivially, $h_2$ and $h_3$ are smooth everywhere.
\textbf{Calculation of $h_4$.}
The rays $m_{r}(t)$ and $m_{b}(\varphi(t))$ are both refracted into $e=(0,1)$ at $f_r(t)$ (since Problem B is solvable, $f_r$ has a normal vector there); then by the Snell law
\begin{align*}
m_r(t)-\dfrac{1}{n_r}e&=\lambda_{2,r}(t)\nu_S(t)\\%\label{eq:Sr}\\
m_b(\varphi(t))-\dfrac{1}{n_b}e&=\lambda_{2,b}(\varphi(t))\nu_S(t)
\end{align*}
with $\nu_S$ the outer unit normal to $S=f_r(D)$.
If $\alpha_S$ denotes the first component of $\nu_S$, then
\begin{align*}
m_{1r}(t)=\lambda_{2,r}(t)\alpha_S(t),\qquad
m_{1b}(\varphi(t))=\lambda_{2,b}(\varphi(t))\alpha_S(t).
\end{align*}
Solving in $\alpha_S(t)$ and using \eqref{eq:m1r} and \eqref{eq:lambdar} yields
$$\mu_r(t,Z(t))\,\Lambda_{2,b}(z_1(t),Z(z_1(t)))=\mu_b(z_1(t),Z(z_1(t)))\,\Lambda_{2,r}(t,Z(t)).
$$
Differentiating the last expression with respect to $t$ and using Remark \ref{rmk:derivative} yields
\begin{align*}
&\tilde \mu_r(t,Z,Z')\,\Lambda_{2,b}(z_1,Z(z_1))
+z_1'\,\widetilde{\Lambda}_{2,b}(z_1,Z(z_1),Z'(z_1))\,\mu_r(t,Z)\\
&\qquad =z_1'\,\widetilde{\mu}_b(z_1,Z(z_1),Z'(z_1))\,\Lambda_{2,r}(t,Z)+\mu_b(z_1,Z(z_1))\,\widetilde{\Lambda}_{2,r}(t,Z,Z').
\end{align*}
Replacing \eqref{eq:tildemur} in the above expression
\begin{align*}
&\dfrac{1}{n_r}\left(\cos t -\mathcal A_r(Z)\,z_4'-\widetilde{\mathcal A}_r(Z,Z')\,z_4\right)\Lambda_{2,b}(z_1,Z(z_1))\\
&\quad=z_1'\left[\widetilde{\mu}_b(z_1,Z(z_1),Z'(z_1))\Lambda_{2,r}(t,Z)-\widetilde{\Lambda}_{2,b}(z_1,Z(z_1),Z'(z_1))\mu_r(t,Z)\right]+\mu_b(z_1,Z(z_1))\widetilde{\Lambda}_{2,r}(t,Z,Z').
\end{align*}
From \eqref{eq:Anr} and \eqref{eq:lambdar}, $\mathcal A_{r}(Z)\Lambda_{2,b}(z_1,Z(z_1))<0$. Then solving the last equation in $z_4'$ yields
{\tiny
\begin{align}
\hspace{-2.5cm}
z_4'&=-\dfrac{z_1'\left[\widetilde{\mu}_b(z_1,Z(z_1),Z'(z_1))\Lambda_{2,r}(t,Z)-\widetilde{\Lambda}_{2,b}(z_1,Z(z_1),Z'(z_1))\mu_{r}(t,Z)\right]+\mu_b(z_1,Z(z_1))\widetilde {\Lambda}_{2,r}(t,Z,Z')-\dfrac{1}{n_r}\left(\cos t-\widetilde{\mathcal A}_r(Z,Z')\,z_4\right)\Lambda_{2,b}(z_1,Z(z_1))}{\dfrac{1}{n_r}\mathcal A_r(Z)\Lambda_{2,b}(z_1,Z(z_1))}\label{eq:z4'}\\
&:=h_4(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t))),\notag
\end{align}
}
so
{\tiny
\begin{align}\label{eq:formula for h4}
h_4(t;\zeta^0,\zeta^1;\xi^0,\xi^1)&=-\dfrac{\xi^0_1\left[\widetilde \mu_b(\zeta^0_1,\zeta^1,\xi^1)\Lambda_{2,r}(t,\zeta^0)-\widetilde{\Lambda}_{2,b}(\zeta^0_1,\zeta^1,\xi^1)\mu_r(t,\zeta^0)\right]+\mu_b(\zeta^0_1,\zeta^1)\widetilde {\Lambda}_{2,r}(t,\zeta^0,\xi^0)-\dfrac{1}{n_r}\left(\cos t-\widetilde{\mathcal A}_r(\zeta^0,\xi^0)\zeta^0_4\right)\Lambda_{2,b}(\zeta^0_1,\zeta^1)}{\dfrac{1}{n_r}\mathcal A_r(\zeta^0)\Lambda_{2,b}(\zeta^0_1,\zeta^1)}.
\end{align}
}
The function $h_4$ is smooth in a neighborhood of $\mathcal Z$: indeed, $\mathcal A_{r}({\bf 0})\,\Lambda_{2,b}(0,{\bf 0})<0$, and, by the comments after \eqref{eq:Formulas 1} and \eqref{eq:derivativeformula}, all the functions appearing in the expression for $h_4$ are smooth in a neighborhood of $\mathcal Z$.
\\ \\
\textbf{Calculation of $h_5$.} We have $v_2(t)=-(\tan t) \,v_1(t)$.
Differentiating twice we get
$$v_2''(t)=-(\tan t) \,v_1''(t)-2\,\dfrac{v_1'(t)}{\cos ^2 t}-2\dfrac{\sin t}{\cos ^3 t}\,v_1(t).$$
Then
\begin{align}
z_5'(t)&=-(\tan t) \,z_4'(t)-\dfrac{2}{\cos ^2 t}\,z_4(t)-\dfrac{2 \sin t}{\cos ^3 t}\,(z_2(t)-\rho_0)\label{eq:z5'} \\
&:=h_5(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t)))\notag,
\end{align}
and so
\begin{equation}\label{eq:h5formula}
h_5(t;\zeta^0,\zeta^1;\xi^0,\xi^1)=-(\tan t)\,\xi^0_4-\dfrac{2}{\cos ^2 t}\,\zeta^0_4-\dfrac{2\sin t}{\cos^3 t}\,(\zeta^0_2-\rho_0).
\end{equation}
Since $t\in D\subset(-\pi/2,\pi/2)$, then $h_5$ is smooth in $D\times \R^{20}$.
The proof of Theorem \ref{thm:Problem B implies system} is then complete.
\end{proof}
\subsection{Solutions of \eqref{eq:Optic System} yield local solutions to the optical problem.} \label{subsec:converse functional implies optic}
In this section, we show how to obtain a local solution to Problem B by solving the system of functional differential equations \eqref{eq:Optic System}.
\begin{theorem}\label{thm:Converse}
Let $\rho_0,d_0>0$ be given, and suppose that $H$ is the corresponding map defined in Theorem \ref{thm:Problem B implies system}. Assume $P=(p_1,\cdots,p_n)$ is a solution to the system
$$P=H(0;{\bf 0,0};P,P),$$
such that $H$ is smooth in a neighborhood of $\mathcal P:=(0;{\bf 0,0};P,P)$, and
\begin{equation}\label{eq:additional}
0<|p_1|\leq 1.
\end{equation}
Let $Z(t)=(z_1(t),\cdots,z_5(t))$ be a $C^1$ solution to the system \eqref{eq:Optic System} in some open interval $I$ containing $0$, with $Z'(0)=P$ and
\begin{equation}\label{eq:condition on z1 point source}
|z_1(t)|\leq |t|.
\end{equation}
Define
\begin{equation}\label{eq:rho def}
\rho(t)=-\dfrac{z_2(t)-\rho_0}{\cos t},
\end{equation}
and $\varphi(t)=z_1(t)$.
Then there is $\delta>0$ sufficiently small, so that $\varphi:[-\delta,\delta]\to [-\delta,\delta]$ and
$$f_r(t)=f_b(\varphi(t)).$$
Here,
$f_r(t)=\rho(t)x(t)+d_r(t)m_r(t)$, and $f_b(t)=\rho(t)x(t)+d_b(t)m_b(t)$
where
\[d_r(t)=\dfrac{C_r-\rho(t)(1-\cos t)}{n_r-e\cdot m_r(t)},\qquad d_b(t)=\dfrac{C_b-\rho(t)(1-\cos t)}{n_b-e\cdot m_b(t)}
\]
with $C_r=(n_r-1)\,d_0$, $C_b=(n_b-1)\,d_0$; $m_r(t)$ and $m_b(t)$ are the refracted directions of the rays $x(t)$ by the curve $\rho(t)x(t)$ corresponding to each color r and b.
In addition, for $t\in [-\delta,\delta]$, $f_r$ and $f_b$ have normal vectors for every $t$ and
\begin{equation}\label{eq:positivity and internal reflection conditions}
\rho(t)>0,\quad d_r(t),d_b(t)>0,\quad m_r(t)\cdot e\geq 1/n_r,\quad m_b(t)\cdot e\geq1/n_b.
\end{equation}
\end{theorem}
Notice that \eqref{eq:positivity and internal reflection conditions} implies that the lens defined by $\rho(t)x(t)$, $f_r$ and $f_b$ is well defined and, moreover, that total internal reflection is avoided.
\begin{proof}
We obtain the theorem by proving a series of steps.
\begin{step}\label{clm:z3 and rho}
If $\rho$ is from \eqref{eq:rho def}, then
\begin{equation}\label{eq:z3 form}
z_3(t)=\rho(t)\sin t.
\end{equation}
\end{step}
\begin{proof}
Since $z_2(0)=0$, from \eqref{eq:rho def}
\begin{equation}\label{eq:rho at 0}
\rho(0)=\rho_0.
\end{equation}
Since $Z$ is a solution to \eqref{eq:Optic System}, $z_2'(t)=z_4(t)$ by the definition of $h_2$ in \eqref{eq:formula for h2}. Hence, from the definition of $h_5$ in \eqref{eq:h5formula},
\begin{align*}
z_5'(t)&=-(\tan t)\,z_4'(t)-\dfrac{2}{\cos ^2 t}\,z_4(t)-\dfrac{2\sin t}{\cos ^3 t}\,(z_2(t)-\rho_0)\\
&=-(\tan t)\,z_2''(t)-\dfrac{2}{\cos ^2 t}\,z_2'(t)-\dfrac{2\sin t}{\cos ^3 t}\,(z_2(t)-\rho_0).
\end{align*}
By \eqref{eq:rho def}, we have $z_2(t)=-\rho(t)\cos t+\rho_0$, then
\begin{align*}
z_2'(t)&=-\rho'(t)\cos t+\rho(t)\sin t\\
z_2''(t)&=-\rho''(t)\cos t+2\rho'(t)\sin t+\rho(t)\cos t.
\end{align*}
Replacing in the formula for $z_5'$ we get
\begin{align*}
z_5'(t)&=(\tan t)(\rho''(t)\cos t-2\rho'(t)\sin t-\rho(t)\cos t)+\dfrac{2}{\cos ^2 t}(\rho'(t)\cos t-\rho(t)\sin t)+\dfrac{2\sin t}{\cos^3t}\rho(t)\cos t\\
&=\rho''(t)\sin t+2\rho'(t)\cos t -\rho(t)\sin t\\
&=\dfrac{d^2}{dt^2}\left[\rho(t)\sin t\right].
\end{align*}
Hence
$z_5(t)=\dfrac{d}{dt}\left[\rho(t)\sin t\right]+c=\rho'(t)\sin t+\rho(t)\cos t+c.$
Since $z_5(0)=0$, then by \eqref{eq:rho at 0} $c=-\rho_0$, and therefore
\begin{equation}\label{eq:z5 form}
z_5(t)=\rho'(t)\sin t+\rho(t)\cos t-\rho_0.
\end{equation}
By the definition of $h_3$ in \eqref{eq:formula for h3}, we have
$$z_3'(t)=z_5(t)+\rho_0=\rho'(t)\sin t+\rho(t)\cos t=\dfrac{d}{dt}\left[\rho(t)\sin t\right],$$
and since $z_3(0)=0$, we conclude \eqref{eq:z3 form}.
\end{proof}
\begin{step}\label{clm:First Composition}
For each $t\in I$
$$F_{1r}(t,Z(t))=F_{1b}(z_1(t),Z(z_1(t))),$$
with $F_{1r}, F_{1b}$ from \eqref{eq:Formulas 1}.
\end{step}
\begin{proof}
From the definition of $h_1$ in \eqref{eq:formula for h1}
$$\widetilde{F}_{1r}(t,Z(t),Z'(t))=z_1'(t)\widetilde{F}_{1b}(z_1(t),Z(z_1(t)),Z'(z_1(t))).$$
Hence by Remark \ref{rmk:derivative}, integrating the above equality yields
$$F_{1r}(t,Z(t))=F_{1b}(z_1(t),Z(z_1(t)))+c.$$
Since $Z(0)={\bf 0}$, then from the formulas of $F_{1r}$ and $\mu_r$ in \eqref{eq:Formulas 1} we get
$F_{1r}(0,{\bf 0})=F_{1b}(0,{\bf 0})=0$. Hence $c=0$ and Step \ref{clm:First Composition} follows.
\end{proof}
\begin{step}\label{step:choice of delta for total reflection}
There is $\delta>0$ small so that \eqref{eq:positivity and internal reflection conditions} holds.
\end{step}
\begin{proof}
Since $\rho(0)=\rho_0>0$, by continuity there is $\delta>0$ with $[-\delta,\delta]\subseteq I$ so that $\rho(t)$ given by \eqref{eq:rho def} is positive for $t\in [-\delta,\delta]$.
Rays with colors r and b emitted from the origin with direction $x(t)=(\sin t,\cos t)$ are refracted at $\rho(t)x(t)$ into the directions $m_r(t)$, and $m_b(t)$.
For the upper faces $f_r$ and $f_b$ to be able to refract the rays $m_r$ and $m_b$ into $e$, they need to have a normal vector for each $t$, $d_r(t)$ and $d_b(t)$ must be positive for $t\in [-\delta,\delta]$, and $m_r$ and $m_b$ must satisfy the conditions $m_r(t)\cdot e\geq \dfrac{1}{n_r}$, $m_b(t)\cdot e\geq \dfrac{1}{n_b}$ to avoid total reflection.
Notice that from \eqref{eq:rho def}
$$\rho'(t)=-\dfrac{z_2'(t)\cos t+(z_2(t)-\rho_0)\sin t}{\cos ^2 t}.$$
From the definition of $h_2$ in \eqref{eq:formula for h2}, and since $z_4(0)=0$ we conclude that $\rho'(0)=0$. Hence from Lemma \ref{lm:first normal}, the normal to $\rho$ at $\rho(0)(0,1)$ is $\nu(0)=(0,1)$. Since $x(0)=\nu(0)=(0,1)$ we obtain from Snell's law that
\begin{equation}\label{eq:normal refraction}
m_r(0)=m_b(0)=(0,1).
\end{equation}
Thus, $e\cdot m_r(0)=e\cdot m_b(0)=1$, and so
\begin{equation}\label{eq:initial d}
d_r(0)=\dfrac{C_r}{n_r-1}=d_0=\dfrac{C_b}{n_b-1}=d_b(0)>0.
\end{equation}
So choosing $\delta$ small we get $d_r(t), d_b(t)>0$, $e\cdot m_r(t)\geq \dfrac{1}{n_r}$ and $e\cdot m_b(t)\geq \dfrac{1}{n_b}$ in $[-\delta,\delta]$.
Therefore, \eqref{eq:positivity and internal reflection conditions} holds for $\delta>0$ sufficiently small. The fact that $f_r$ and $f_b$ have normals for every $t$ will be proved in Step \ref{clm:regular points}.
\end{proof}
\begin{step}\label{eq:Identity Composition}
For each $t\in [-\delta,\delta]$
$$f_r(t)=\left(F_{1r}(t,Z(t)),F_{2r}(t,Z(t))\right),$$ and similarly $f_b(t)=\left(F_{1b}(t,Z(t)),F_{2b}(t,Z(t))\right)$.
\end{step}
\begin{proof}
We first show that
\begin{equation}\label{eq:mr recover}
m_r(t)=\(\mu_r(t,Z(t)),\tau_r(t,Z(t))\),
\end{equation}
and similarly for $m_b$. Since the direction $x(t)=(\sin t, \cos t)$ is refracted by $\rho$ into the unit direction $m_r(t)$,
by \eqref{eq:Snelllaw}
$$
m_r(t)=\dfrac{1}{n_r}\left[x(t)-A_r(v(t),v'(t))\,v'(t)\right]
$$
with $A_r$ given in \eqref{eq:Anr formula} and $v(t)=(-\rho(t)\cos t, \rho(t)\sin t)$.
From \eqref{eq:rho def} and \eqref{eq:z3 form},
$$v(t)=\left(z_2(t)-\rho_0,z_3(t)\right),\qquad v'(t)=\left(z_2'(t),z_3'(t)\right).$$
Moreover, by the definitions of $h_2$ in \eqref{eq:formula for h2} and of $h_3$ in \eqref{eq:formula for h3}, we have $z_2'=z_4$ and $z_3'=z_5+\rho_0$, so $v'(t)=\(z_4,z_5+\rho_0\)$.
Hence \eqref{eq:Anr formula} becomes
$$
A_r(v(t),v'(t))=\dfrac{1-n_r^2}{\left|(z_2(t)-\rho_0,z_3(t))\right|+\sqrt{\left|(z_2(t)-\rho_0,z_3(t))\right|^2+(n_r^2-1)\left|(z_4(t),z_5(t)+\rho_0)\right|^2}},
$$
and so by \eqref{eq:Formulas 1}, $A_r(v(t),v'(t))=\mathcal A_r(Z(t)).$
We conclude that
$$
m_r(t)=\dfrac{1}{n_r}\left[(\sin t,\cos t)-\mathcal A_r(Z(t))\left(z_4(t),z_5(t)+\rho_0\right)\right],
$$
and hence once again from \eqref{eq:Formulas 1}, the identity \eqref{eq:mr recover} follows.
We now show that $d_r(t)=D_r(t,Z(t))$. In fact, by definition of $v$ we have
$$\rho(t)=|v(t)|=\left|\left(z_2(t)-\rho_0,z_3(t)\right)\right|.$$
Then by \eqref{eq:mr recover}, \eqref{eq:rho def} and the formula for $D_r$ in \eqref{eq:Formulas 1} we get
\begin{align*}
d_r(t)=\dfrac{C_r-\rho(t)+\rho(t)\cos t}{n_r-m_{2r}(t)}=\dfrac{C_r-\left|(z_2(t)-\rho_0,z_3(t))\right|-z_2(t)+\rho_0}{n_r-\tau_r(t,Z(t))}=D_r(t,Z(t)).
\end{align*}
Hence from \eqref{eq:rho def}, \eqref{eq:z3 form}, \eqref{eq:mr recover}, and the formulas of $F_{1r},F_{2r}$ in \eqref{eq:Formulas 1}, we conclude
\begin{align*}
f_r(t)&=\rho(t)(\sin t,\cos t)+d_{r}(t)m_r(t)\\
&=\left(z_3(t),-z_2(t)+\rho_0\right)+D_r(t,Z(t))\left(\mu_r(t,Z(t)),\tau_r(t,Z(t))\right)\\
&=\left(F_{1r}(t,Z(t)),F_{2r}(t,Z(t))\right).
\end{align*}
A similar result holds for $f_b$.
\end{proof}
By assumption, $|z_1(t)|\leq |t|$ for $t\in I$; hence $z_1(t)\in [-\delta,\delta]$ for every $t\in [-\delta, \delta]$.
\begin{step}\label{clm:regular points}
For $\delta$ small, $f_{1r}'(t)\neq 0$ and $f_{1b}'(t)\neq 0$, and hence $f_r$ and $f_b$ have normal vectors for each $t\in [-\delta,\delta]$.
\end{step}
\begin{proof}
By continuity of $f_{1r}'$ and $f_{1b}'$, it is enough to show that $f_{1r}'(0)\neq 0$ and $f_{1b}'(0)\neq 0$.
Since $H$ is well defined at $\mathcal P$, from the definition of $h_1$ in \eqref{eq:formula for h1} we get $\widetilde {F}_{1b}(0,{\bf 0},P)\neq 0$. Since $Z'(0)=P$, from Remark \ref{rmk:derivative} and Step \ref{eq:Identity Composition}
$$f_{1b}'(0)=\widetilde{F}_{1b}(0,{\bf 0},Z'(0))=\widetilde{F}_{1b}(0,{\bf 0},P)\neq 0.$$
We next prove that $f_{1r}'(0)\neq 0$. From Steps \ref{clm:First Composition} and \ref{eq:Identity Composition}, $f_{1r}(t)=f_{1b}(z_1(t))$. Differentiating and letting $t=0$ yields
$$f_{1r}'(0)=z_1'(0) f_{1b}'(0).$$
Since $f_{1b}'(0)\neq 0$ and from \eqref{eq:additional} $z_1'(0)=p_1\neq 0$, we obtain $f_{1r}'(0)\neq 0$.
\end{proof}
\begin{step}\label{eq:Colinearity}
The vectors
$m_r(t)-\dfrac{1}{n_r}e$ and $m_b(z_1(t))-\dfrac{1}{n_b}e$ are colinear for every $t\in [-\delta,\delta]$.
\end{step}
\begin{proof}
Since $z_4'(t)=h_4(t;Z(t),Z(z_1(t));Z'(t),Z'(z_1(t)))$, from \eqref{eq:formula for h4} it follows that
\begin{align*}
\dfrac{1}{n_r}&\left(\cos t -\mathcal A_r(Z(t))z_4'(t)-\widetilde{\mathcal A}_r(Z(t),Z'(t))z_4(t)\right)\Lambda_{2,b}(z_1(t),Z(z_1(t)))=\\
&\qquad z_1'(t)\left[\widetilde{\mu}_{b}(z_1(t),Z(z_1(t)),Z'(z_1(t)))\Lambda_{2,r}(t,Z(t))-\widetilde{\Lambda}_{2,b}(z_1(t),Z(z_1(t)),Z'(z_1(t)))\mu_{r}(t,Z(t))\right]\\
&\qquad +\mu_{b}(z_1(t),Z(z_1(t)))\widetilde{\Lambda}_{2,r}(t,Z(t),Z'(t)).
\end{align*}
Hence from the formula of $\widetilde \mu_r$ in \eqref{eq:derivativeformula}, we obtain
\begin{align*}
\widetilde{\mu}_r&(t,Z(t),Z'(t))\,\Lambda_{2,b}(z_1(t),Z(z_1(t)))+z_1'(t)\,\widetilde{\Lambda}_{2,b}(z_1(t),Z(z_1(t)),Z'(z_1(t)))\,\mu_r(t,Z(t))\\
&\qquad=z_1'(t)\,\widetilde{\mu}_b(z_1(t),Z(z_1(t)),Z'(z_1(t)))\,\Lambda_{2,r}(t,Z(t))+\mu_b(z_1(t),Z(z_1(t)))\,\widetilde{\Lambda}_{2,r}(t,Z(t),Z'(t)).
\end{align*}
Integrating the resulting identity using Remark \ref{rmk:derivative}, and that $\mu_{r}(0,{\bf 0})=\mu_{b}(0,{\bf 0})=0$ from \eqref{eq:Formulas 1}, we obtain
\begin{equation}\label{eq:First Component bis}
\mu_r(t,Z(t))\,\Lambda_{2,b}\(z_1(t),Z(z_1(t))\)=\mu_b\(z_1(t),Z(z_1(t))\)\,\Lambda_{2,r}(t,Z(t)).
\end{equation}
On the other hand, from \eqref{eq:mr recover}
\begin{equation}\label{eq:unit}
\mu_r(t,Z(t))^2+\tau_r(t,Z(t))^2=1,
\end{equation}
then from \eqref{eq:Formulas 1}, $\Lambda_{2,r}$ can be written as follows
{\small
\begin{align*}
\Lambda_{2,r}(t,Z(t))&=\sqrt{1+\dfrac{1}{n_r^2}-\dfrac{2}{n_r}\tau_r(t,Z(t))}=\sqrt{1-\tau_r(t,Z(t))^2+\left(\dfrac{1}{n_r}-\tau_r(t,Z(t))\right)^2}\\
&=\sqrt{\mu_r(t,Z(t))^2+\left(\dfrac{1}{n_r}-\tau_r(t,Z(t))\right)^2},
\end{align*}
and similarly for $\Lambda_{2,b}$.}
Squaring \eqref{eq:First Component bis} and using the above identity for $n_r$ and $n_b$ we get
\begin{align*}
&\mu_r(t,Z(t))^2\left[\mu_b(z_1(t),Z(z_1(t)))^2+\left(\dfrac{1}{n_b}-\tau_b(z_1(t),Z(z_1(t)))\right)^2\right]\\
&=\mu_b(z_1(t),Z(z_1(t)))^2\left[\mu_r(t,Z(t))^2+\left(\dfrac{1}{n_r}-\tau_r(t,Z(t))\right)^2\right].
\end{align*}
Hence
$$
\mu_r(t,Z(t))^2 \left(\dfrac{1}{n_b}-\tau_b(z_1(t),Z(z_1(t)))\right)^2=\mu_b(z_1(t),Z(z_1(t)))^2\left(\dfrac{1}{n_r}-\tau_r(t,Z(t))\right)^2,
$$
and taking square roots
$$ \left| \mu_r(t,Z(t)) \left(\dfrac{1}{n_b}-\tau_b(z_1(t),Z(z_1(t)))\right) \right| =\left |\mu_b(z_1(t),Z(z_1(t)))\left(\dfrac{1}{n_r}-\tau_r(t,Z(t))\right)\right|.$$
From \eqref{eq:mr recover}, since $\delta$ is chosen so that \eqref{eq:positivity and internal reflection conditions} holds in $[-\delta,\delta]$, and $z_1(t)\in [-\delta,\delta]$, we have $1/n_b-\tau_b(z_1(t),Z(z_1(t)))=1/n_b-e\cdot m_b(z_1(t))\leq 0$ and $1/n_r-\tau_r(t,Z(t))=1/n_r-e\cdot m_r(t)\leq 0$. Moreover, since the functions $\Lambda_{2,b}$ and $\Lambda_{2,r}$ are both positive, by \eqref{eq:First Component bis} $\mu_r(t,Z(t))$ and $\mu_b(z_1(t),Z(z_1(t)))$ have the same sign. Removing the absolute values then gives
$$ \mu_r(t,Z(t)) \left(\tau_b(z_1(t),Z(z_1(t)))-\dfrac{1}{n_b}\right) =\mu_b(z_1(t),Z(z_1(t)))\left(\tau_r(t,Z(t))-\dfrac{1}{n_r}\right).$$
We conclude that the vectors
$$\left( \mu_r(t,Z(t)),\tau_r(t,Z(t))-\dfrac{1}{n_r}\right),\quad \left(\mu_b\(z_1(t),Z(z_1(t))\),\tau_b\(z_1(t),Z(z_1(t))\)-\dfrac{1}{n_b}\right)$$
are colinear and the claim follows from \eqref{eq:mr recover}.
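Indeed, the identity preceding the last display says precisely that the $2\times 2$ determinant formed by these two vectors vanishes,
$$\det\begin{pmatrix}\mu_r(t,Z(t)) & \tau_r(t,Z(t))-\dfrac{1}{n_r}\\ \mu_b\(z_1(t),Z(z_1(t))\) & \tau_b\(z_1(t),Z(z_1(t))\)-\dfrac{1}{n_b}\end{pmatrix}=0,$$
which is the standard criterion for collinearity of two planar vectors.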
\end{proof}
By Steps \ref{clm:First Composition} and \ref{eq:Identity Composition}, we obtain that $f_{1r}(t)=f_{1b}(z_1(t))$, so to conclude the proof of Theorem \ref{thm:Converse},
it remains to show the following.
\begin{step}\label{eq:Second Component}
We have for $|t|\leq \delta$ that
$$f_{2r}(t)=f_{2b}(z_1(t)).$$
\end{step}
\begin{proof}
From Step \ref{clm:regular points}, for $|t|\leq \delta$, $f_r$ and $f_b$ have normals at every point. From \eqref{eq:positivity and internal reflection conditions} and the definitions of $f_r$ and $f_b$, $m_r(t)$ and $m_b(t)$ are refracted into the direction $e$ at $f_r(t)$ and $f_b(t)$, for each $t$, respectively. So the Snell law at $f_r(t)$ and $f_b(t)$ implies that $m_r(t)-\dfrac{1}{n_r}e$ is orthogonal to the tangent vector $f_r'(t)$, and $m_b(z_1(t))-\dfrac{1}{n_b}e$ is orthogonal to $f_b'(z_1(t))$.
Hence by Step \ref{eq:Colinearity}, $f_r'(t)$ and $f_b'(z_1(t))$ are parallel.
From Remark \ref{rmk:derivative} and Steps \ref{clm:First Composition} and \ref{eq:Identity Composition} we also obtain that $f_{1r}'(t)=z_1'(t)f_{1b}'(z_1(t))$.
From Step \ref{clm:regular points} and assumption \eqref{eq:condition on z1 point source}, $f_{1r}'(t)\neq 0$ and $f_{1b}'(z_1(t))\neq 0$ for all $t\in [-\delta,\delta]$; hence
$$f_{2r}'(t)=z_1'(t)f_{2b}'(z_1(t)).$$
Integrating the last identity, we obtain $f_{2r}(t)=f_{2b}(z_1(t))+c.$
On the other hand, $z_1(0)=0$,
$f_{2r}(0)=\rho(0)+d_r(0)m_{2r}(0)$, and $f_{2b}(0)=\rho(0)+d_b(0)m_{2b}(0)$.
Hence from \eqref{eq:normal refraction} and \eqref{eq:initial d}, $f_{2r}(0)=f_{2b}(0)$ and so $c=0$, and Step \ref{eq:Second Component} follows.
\end{proof}
This completes the proof of Theorem \ref{thm:Converse}.
\end{proof}
\begin{remark}\label{rmk:self intersection}\rm
Notice that $f_b$ has no self-intersections in the interval $[-\delta,\delta]$. Indeed, if there existed $t_1\neq t_2$ in $[-\delta,\delta]$ such that
$f_b(t_1)=f_b(t_2)$, then by Rolle's theorem applied to $f_{1b}$ we would get $f_{1b}'(t_0)=0$ for some $t_0$ between $t_1$ and $t_2$, a contradiction with Step \ref{clm:regular points}. The issue of self-intersections in the monochromatic case is discussed in detail in \cite{gutierrez-sabra:freeformgeneralfields}.
This implies that $f_b$ is injective, and similarly $f_r$ is injective.
We also deduce that $\varphi$ is injective: in fact, if $\varphi(t_1)=\varphi(t_2)$ then $f_r(t_1)=f_b(\varphi(t_1))=f_b(\varphi(t_2))=f_r(t_2)$ and so $t_1=t_2$.
\end{remark}
\subsection{On the solvability of the algebraic system \eqref{eq:system P} }
In this section, we analyze for what values of $\rho_0$, $d_0$ the algebraic system $P=H(0;{\bf 0,0};P,P)$ has a solution, where $H$
is given in Theorem \ref{thm:Problem B implies system}. This analysis will be used to apply Theorem \ref{thm: Existence}, and to decide when Problem B has a local solution. Denote
\[
k_0=\dfrac{\rho_0}{d_0},\qquad \Delta_r=\dfrac{n_r}{n_r-1},\qquad \Delta_b=\dfrac{n_b}{n_b-1}.
\]
\begin{theorem}\label{thm:Algebraic system}
The algebraic system $P=H(0;{\bf 0,0};P,P)$ has a solution if and only if $k_0\leq \dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$\footnote{This expression is equal to $\dfrac{1}{4}\(\sqrt{\dfrac{\Delta_r}{\Delta_b}}-\sqrt{\dfrac{\Delta_b}{\Delta_r}}\)^2$.}.
In case of equality, the system has only one solution $P$ with $|p_1|=1$, and in case of strict inequality the system has two solutions $P$ and $P'$ with $0<|p_1|<1$ and $|p_1'|>1$.
\end{theorem}
\begin{proof}
Recall that $H=H\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)$.
Suppose $P=(p_1,\cdots,p_5)$ solves
$P=H\left(0;{\bf 0,0}; P,P\right)$. Then
from the definitions of $h_2$ in \eqref{eq:formula for h2}, of $h_3$ in \eqref{eq:formula for h3}, and of $h_5$ in
\eqref{eq:h5formula} we get
\begin{equation}
p_2=h_2\left(0;{\bf 0,0}; P,P\right)=0, \quad p_3=h_3\left(0;{\bf 0,0;} P,P\right)=\rho_0,\label{eq:p3}\quad
p_5=h_5\left(0;{\bf 0,0;} P,P\right)=0.
\end{equation}
Then, from \eqref{eq:Formulas 1} and \eqref{eq:derivativeformula}
{\small
\begin{equation}\label{eq:values at 0}
\begin{cases}
&\mathcal A_r\left({\bf 0}\right)=-\dfrac{n_r}{\Delta_r\,\rho_0},\qquad
\mathcal A_b\left({\bf 0}\right)=-\dfrac{n_b}{\Delta_b\,\rho_0}\\
&\mu_r\left(0,{\bf 0}\right)=\mu_b\left(0,{\bf 0}\right)=0\\
&\tau_r\left(0,{\bf 0}\right)=\tau_b\left(0,{\bf 0}\right)=1\\
&D_r\left(0,{\bf 0}\right)=D_b\left(0,{\bf 0}\right)=d_0\\
&F_{1r}\left(0,{\bf 0}\right)=F_{1b}\left(0,{\bf 0}\right)=0,\qquad
F_{2r}\left(0,{\bf 0}\right)=F_{2b}\left(0,{\bf 0}\right)=\rho_0+d_0\\
&\Lambda_{2,r}\left(0,{\bf 0}\right)=\dfrac{1}{\Delta_r},\qquad
\Lambda_{2,b}\left(0,{\bf 0}\right)=\dfrac{1}{\Delta_b}\\
&\widetilde{\mathcal A}_r\left({\bf 0}, P\right)=\widetilde{\mathcal A}_b\left({\bf 0}, P\right)=0\\
&\widetilde{\mu}_r\left(0,{\bf 0}, P\right)=\dfrac{1}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right),\qquad
\widetilde{\mu}_b\left(0,{\bf 0}, P\right)=\dfrac{1}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\\
&\widetilde {\tau}_r\left(0,{\bf 0}, P\right)=\widetilde {\tau}_b\left(0,{\bf 0}, P\right)=0\\
&\widetilde {D}_r\left(0,{\bf 0}, P\right)=\widetilde {D}_b\left(0,{\bf 0}, P\right)=0\\
&\widetilde {F}_{1r}\left(0,{\bf 0}, P\right)=\rho_0+\dfrac{d_0}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right),\qquad
\widetilde {F}_{1b}\left(0,{\bf 0}, P\right)=\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\\
&\widetilde {F}_{2r}\left(0,{\bf 0}, P\right)=\widetilde {F}_{2b}\left(0,{\bf 0}, P\right)=0\\
&\widetilde {\Lambda}_{2,r}\left(0,{\bf 0}, P\right)=\widetilde {\Lambda}_{2,b}\left(0,{\bf 0}, P\right)=0.
\end{cases}
\end{equation}
}
Hence by the definition of $h_1$ in \eqref{eq:formula for h1},
and since $h_1\left(0;{\bf 0,0}; P,P\right)$ is well defined, we get that $\widetilde{F}_{1b}\left(0,{\bf 0}, P\right)\neq 0$. That is, from \eqref{eq:values at 0}, $\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\neq 0$ and we have
\begin{equation}\label{eq:p1}
p_1=h_1\left(0;{\bf 0,0}; P,P\right)
=\dfrac{\widetilde {F}_{1r}\left(0,{\bf 0}, P\right)}{\widetilde{F}_{1b}\left(0,{\bf 0}, P\right)}
=\dfrac{\rho_0+\dfrac{d_0}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}.
\end{equation}
From the definition of $h_4$ in \eqref{eq:formula for h4}
\begin{align}\label{eq:p4}
p_4=h_4\left(0;{\bf 0,0}; P,P\right)&=\dfrac{\dfrac{p_1}{\Delta_b\Delta_r}\(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\)-\dfrac{1}{n_r\Delta_b}}{\dfrac{1}{\Delta_r\Delta_b\rho_0}}=\left[p_1\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)-\dfrac{\Delta_r}{n_r}\right]\rho_0.
\end{align}
To solve \eqref{eq:p4} for $p_1$,
first notice that $\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\neq 0$.
Otherwise, from \eqref{eq:p4} we would obtain $\dfrac{p_4}{\rho_0}=-\dfrac{\Delta_r}{n_r}$, and so $\dfrac{\Delta_r}{n_r}=\dfrac{\Delta_b}{n_b}$, hence $n_r=n_b$, a contradiction since $n_b>n_r$.
Then \eqref{eq:p4} yields
\begin{equation}\label{eq:second form p1}
p_1=\dfrac{\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}}{\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}}.
\end{equation}
Hence from \eqref{eq:p1} and \eqref{eq:second form p1} we get that
$$
\dfrac{\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}}{\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}}=\dfrac{\rho_0+\dfrac{d_0}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}
$$
Then
\begin{equation}\label{eq:Trick 0}
\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)\left(\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\right)=\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\left(\rho_0+\dfrac{d_0}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)\right).
\end{equation}
Simplifying,
$$
\rho_0\left(\dfrac{\Delta_r}{n_r}-\dfrac{\Delta_b}{n_b}\right)=\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\left(\dfrac{1}{\Delta_r}-\dfrac{1}{\Delta_b}\right)d_0.
$$
Notice that
\begin{equation}\label{eq:trick}
\dfrac{\Delta_r}{n_r}-\dfrac{\Delta_b}{n_b}=\dfrac{1}{n_r-1}-\dfrac{1}{n_b-1}=-1+\Delta_r+1-\Delta_b=\Delta_r-\Delta_b.
\end{equation}
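The identity \eqref{eq:trick} is used repeatedly in what follows; as a quick sanity check (outside the proof), it can be verified numerically. The refractive-index pairs below are illustrative values, not taken from the text.

```python
# Check of the identity Delta_r/n_r - Delta_b/n_b = Delta_r - Delta_b,
# where Delta = n/(n-1), for a few sample index pairs (illustrative values).
def Delta(n):
    return n / (n - 1)

for n_r, n_b in [(1.5, 1.6), (1.33, 1.52), (2.0, 3.0)]:
    lhs = Delta(n_r) / n_r - Delta(n_b) / n_b
    rhs = Delta(n_r) - Delta(n_b)
    assert abs(lhs - rhs) < 1e-12, (n_r, n_b)
```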
Then dividing by $\Delta_r-\Delta_b$\footnote{$\Delta_r-\Delta_b=\dfrac{n_b-n_r}{(n_r-1)(n_b-1)}> 0$}, and using the notation $k_0=\dfrac{\rho_0}{d_0}$ yields
\begin{equation}\label{eq:Useful}
k_0\Delta_r\Delta_b+\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)=0.
\end{equation}
Expanding we obtain that $p_4/\rho_0$ satisfies the following quadratic equation:
\begin{equation}\label{eq:quadratic}
\(\dfrac{p_4}{\rho_0}\)^2+\(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\)\dfrac{p_4}{\rho_0}+k_0\Delta_r\Delta_b+\dfrac{\Delta_r\Delta_b}{n_rn_b}=0.
\end{equation}
The discriminant of \eqref{eq:quadratic} is
\begin{equation}
\delta=\(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\)^2-4\dfrac{\Delta_r\Delta_b}{n_rn_b}-4k_0\Delta_r\Delta_b=\(\dfrac{\Delta_r}{n_r}-\dfrac{\Delta_b}{n_b}\)^2-4k_0\Delta_r\Delta_b,
\end{equation}
then \eqref{eq:trick} yields
\begin{equation}\label{eq:formula for delta}
\delta=(\Delta_r-\Delta_b)^2-4k_0\Delta_r\Delta_b.
\end{equation}
Hence \eqref{eq:quadratic} has real solutions if and only if $k_0\leq \dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$.
Therefore we have proved the necessity part in Theorem \ref{thm:Algebraic system}, and if $P$ solves the algebraic system, then
\begin{align*}
p_4&=\dfrac{-\left(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\right)-
\sqrt{\delta}}{2}\rho_0,\qquad
p_4'=\dfrac{-\left(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\right)
+\sqrt{\delta}}{2}\rho_0
\end{align*}
and by \eqref{eq:second form p1} and \eqref{eq:trick}, the corresponding $p_1$ and $p_1'$ are
\begin{align*}
p_1&=\dfrac{\Delta_r-\Delta_b- \sqrt{\delta}}{\Delta_b-\Delta_r-\sqrt{\delta}},\qquad
p_1'=\dfrac{\Delta_r-\Delta_b+ \sqrt{\delta}}{\Delta_b-\Delta_r+\sqrt{\delta}}.
\end{align*}
Therefore if $P=H(0;{\bf 0,0};P,P)$, then the solutions are
\begin{align}
P&=\(\dfrac{\Delta_r-\Delta_b- \sqrt{\delta}}{\Delta_b-\Delta_r-\sqrt{\delta}},0,\rho_0,\dfrac{-\left(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\right)-\sqrt{\delta}}{2}\rho_0,0\right)\label{eq:P}\\
P'&= \(\dfrac{\Delta_r-\Delta_b+ \sqrt{\delta}}{\Delta_b-\Delta_r+\sqrt{\delta}},0,\rho_0,\dfrac{-\left(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\right)+\sqrt{\delta}}{2}\rho_0,0\right)\label{eq:P'}
\end{align}
with $\delta$ given in \eqref{eq:formula for delta}.
Let us now prove that if $k_0\leq \dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$, then the system
$P=H(0;{\bf 0,0};P,P)$ is solvable.
In fact, from the assumption on $k_0$ there is $p_4$ solving \eqref{eq:quadratic}. We claim that this implies $\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\neq 0$.
Assume otherwise. Since \eqref{eq:quadratic} is equivalent to \eqref{eq:Trick 0} and $\rho_0>0$, $p_4$ then solves the system
\begin{align*}
\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)&=0\\
\rho_0+\dfrac{d_0}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)&=0
\end{align*}
Subtracting both identities we get
$$d_0\left(\dfrac{1}{n_b}-\dfrac{1}{n_r}\right)+\dfrac{p_4 d_0}{\rho_0}\left(\dfrac{1}{\Delta_b}-\dfrac{1}{\Delta_r}\right)=0.$$
Since $\dfrac{1}{\Delta_b}-\dfrac{1}{\Delta_r}=\dfrac{1}{n_r}-\dfrac{1}{n_b}$ and $n_r\neq n_b$, dividing by $d_0\left(\dfrac{1}{n_b}-\dfrac{1}{n_r}\right)$ we get
$1-\dfrac{p_4}{\rho_0}=0,$
and then $p_4=\rho_0$.
Replacing in the first equation of the system yields
$\rho_0=-\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+1\right)$,
and since $\rho_0,d_0>0$, we get a contradiction.
Now let $p_1$ be the corresponding value to this $p_4$ given by \eqref{eq:p1}, and $p_2,p_3,p_5$ from \eqref{eq:p3}; and $P$ be the point with these coordinates.
Hence by the formula for $\widetilde F_{1b}(0,{\bf 0},P)$ in \eqref{eq:values at 0}, we get that
$\widetilde F_{1b}(0,{\bf 0},P)\neq 0$, and therefore $h_1\left(0;{\bf 0,0}; P,P\right)$ is well defined, and $P$ solves the algebraic system. Then the possible values of $P$ solving the algebraic system are $P$ and $P'$ given by \eqref{eq:P} and \eqref{eq:P'}.
We now prove the last part of the theorem.
If $k_0=\dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$ then $\delta=0$ and $P=P'=\left(-1,0,\rho_0,\dfrac{-\left(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\right)}{2}\rho_0,0\right)$.
If $k_0<\dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$ then $P\neq P'$. Moreover, since $\Delta_r>\Delta_b$, from \eqref{eq:formula for delta} we have $0<\sqrt{\delta}<\Delta_r-\Delta_b$. Therefore
\begin{align*}
0<\left|p_1\right|&=\dfrac{\left |\Delta_r-\Delta_b- \sqrt{\delta}\right|}{\left|\Delta_b-\Delta_r-\sqrt{\delta}\right|}= \dfrac{\Delta_r-\Delta_b- \sqrt{\delta}}{\Delta_r-\Delta_b+\sqrt{\delta}}<1\\
\left |p_1'\right|&=\dfrac{\left|\Delta_r-\Delta_b+ \sqrt{\delta}\right|}{\left|\Delta_b-\Delta_r+\sqrt{\delta}\right|}=\dfrac{\Delta_r-\Delta_b+ \sqrt{\delta}}{\Delta_r-\Delta_b-\sqrt{\delta}}>1.
\end{align*}
\end{proof}
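The solutions \eqref{eq:P}--\eqref{eq:P'} can be verified numerically. In the following sketch the data $n_r=1.5$, $n_b=1.6$, $\rho_0=1$, $d_0=500$ are illustrative assumptions (not values from the text), chosen so that $k_0=\rho_0/d_0$ lies strictly below the threshold of the theorem.

```python
import math

# Sample data (illustrative): n_r < n_b, with k_0 below the threshold.
n_r, n_b = 1.5, 1.6
Dr, Db = n_r / (n_r - 1), n_b / (n_b - 1)          # Delta_r, Delta_b
rho0, d0 = 1.0, 500.0
k0 = rho0 / d0
assert k0 < (Dr - Db)**2 / (4 * Dr * Db)

delta = (Dr - Db)**2 - 4 * k0 * Dr * Db
s = math.sqrt(delta)
for sign in (-1.0, 1.0):                            # roots p_4 ("-") and p_4' ("+")
    p4 = (-(Dr / n_r + Db / n_b) + sign * s) / 2 * rho0
    # p_4/rho_0 satisfies the quadratic (eq:quadratic)
    q = (p4 / rho0)**2 + (Dr / n_r + Db / n_b) * (p4 / rho0) \
        + k0 * Dr * Db + Dr * Db / (n_r * n_b)
    assert abs(q) < 1e-12
    # the two expressions (eq:p1) and (eq:second form p1) for p_1 agree
    p1_a = (rho0 + d0 / Dr * (Dr / n_r + p4 / rho0)) \
        / (rho0 + d0 / Db * (Db / n_b + p4 / rho0))
    p1_b = (Dr / n_r + p4 / rho0) / (Db / n_b + p4 / rho0)
    assert abs(p1_a - p1_b) < 1e-8
    # |p_1| < 1 for the root giving P, |p_1'| > 1 for the root giving P'
    assert (abs(p1_b) < 1) == (sign < 0)
```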
The following corollary gives a necessary condition on $k_0$ for the existence of solutions to Problem $B$.
\begin{corollary}\label{cor: Sufficient Condition}
If $k_0>\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$ then Problem $B$ has no local solutions.
\end{corollary}
\begin{proof}
If Problem B has a solution then by Theorem \ref{thm:Problem B implies system}, the vector $Z(t)=(\varphi(t),v_1(t)+\rho_0,v_2(t),v_1'(t),v_2'(t)-\rho_0)$ solves the functional system \eqref{eq:Optic System} for $t$ in a neighborhood of $0$.
Plugging $t=0$ in \eqref{eq:Optic System} yields
$$Z'(0)=H(0;{\bf 0,0};Z'(0),Z'(0)).$$ Hence $Z'(0)$ is a solution to the algebraic system $P=H(0;{\bf 0,0};P,P)$, and from Theorem \ref{thm:Algebraic system} we have $k_0\leq \dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$.
\end{proof}
\begin{corollary}\label{cor:consequence on rho''}
If $k_0\leq \dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$ and a solution $\rho$ and $\varphi$ to Problem B exists, then
\begin{equation}\label{eq:necessary condition for existence Problem B}
\dfrac{\rho''(0)}{\rho_0}\in(\Delta_b,\Delta_r)
\end{equation}
In fact,
\begin{equation}\label{eq:formula for rho''}
\dfrac{\rho''(0)}{\rho_0}=\dfrac{2+\left(\dfrac{\Delta_r}{n_r}+\dfrac{\Delta_b}{n_b}\right)\pm\sqrt{\delta}}{2},
\end{equation}
with $\delta$ given in \eqref{eq:formula for delta}.
\end{corollary}
\begin{proof}
If $\rho$ and $\varphi$ solve Problem B, then from the proof of Corollary \ref{cor: Sufficient Condition}, $Z'(0)$ solves the algebraic system, with
$Z(t)=(\varphi(t),v_1(t)+\rho_0,v_2(t),v_1'(t),v_2'(t)-\rho_0)$. Using the proof of Theorem \ref{thm:Algebraic system}, it follows that $z_4'(0)$ satisfies \eqref{eq:Useful} then
$$\(\dfrac{\Delta_r}{n_r}+\dfrac{z_4'(0)}{\rho_0}\)\(\dfrac{\Delta_b}{n_b}+\dfrac{z_4'(0)}{\rho_0}\)=-k_0\Delta_r\Delta_b< 0,$$
and therefore $\dfrac{z_4'(0)}{\rho_0}\in \left(-\dfrac{\Delta_r}{n_r},-\dfrac{\Delta_b}{n_b}\right)$.
On the other hand, $z_4(t)=v_1'(t)=\rho(t)\sin t-\rho'(t)\cos t$, obtaining
$z_4'(0)=\rho_0-\rho''(0)$, so
\begin{equation}\label{eq:rho''andz4'}
\dfrac{\rho''(0)}{\rho_0}=1-\dfrac{z_4'(0)}{\rho_0}.
\end{equation}
We conclude that
$$\dfrac{\rho''(0)}{\rho_0}\in\left(1+\dfrac{\Delta_b}{n_b},1+\dfrac{\Delta_r}{n_r}\right)=(\Delta_b,\Delta_r).$$
Finally, from \eqref{eq:P}, \eqref{eq:P'}, and \eqref{eq:rho''andz4'}, we obtain \eqref{eq:formula for rho''}.
\end{proof}
\begin{remark}\label{rmk:three colors}\rm
\,The analogue of Problem B for three or more colors has no solution. In fact, if rays that are a superposition of three colors are emitted from $O$ and $n_r<n_j<n_b$ are the refractive indices inside the lens for each color, then \eqref{eq:necessary condition for existence Problem B} must be satisfied for the pairs $n_r,n_j$ and $n_j,n_b$. Hence $$\dfrac{\rho''(0)}{\rho(0)}\in\left(\Delta_j,\Delta_r\right)\cap\left(\Delta_b,\Delta_j\right),$$
which is impossible since the last intersection is empty.
\end{remark}
\subsection{Existence of local solutions to \eqref{eq:Optic System}}\label{sec:existence of local solution}
In order to prove existence of solutions to the system \eqref{eq:Optic System}, we will apply Theorem \ref{thm: Existence}. From Theorem \ref{thm:Algebraic system}, we must assume that $k_0\leq\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$. In this case, the algebraic system $P=H(0;{\bf 0,0};P,P)$ has a solution given by \eqref{eq:P} with $|p_1|\leq 1$, and therefore \eqref{eq:First Component} holds. Let $\mathcal P=(0;{\bf 0,0};P,P)$.
To show that $H$ satisfies all the hypotheses of Theorem \ref{thm: Existence}, it remains to show there is a norm in $\R^5$ so that $H$ satisfies conditions (i)-(iv) with respect to this norm in a neighborhood $N_\varepsilon(\mathcal P)$.
Our result is as follows.
\begin{theorem}\label{thm:Existence last}
There exists a positive constant $C(r,b)<\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$, depending only on $n_r$ and $n_b$, such that for $0<k_0<C(r,b)$, the system \eqref{eq:Optic System} has a unique local solution $Z(t)=(z_1(t),\cdots,z_5(t))$ with $Z'(0)=P$, and $|z_1(t)|\leq |t|$, with $P$ given in \eqref{eq:P}.
Hence from Theorem \ref{thm:Converse} and for those values of $k_0$, there exist unique $\rho$ and $\varphi$ solving Problem B.
\end{theorem}
\begin{proof}
Since by construction $h_i$ are all smooth in a small neighborhood of $\mathcal P$,
$H$ is Lipschitz in that neighborhood for any norm.
To apply Theorem \ref{thm: Existence}, we need to find a norm $\|\cdot \|$ in $\R^5$ and a neighborhood $N_{\varepsilon}(\mathcal P)$ so that $|h_1|\leq 1$, and $H$ satisfies the contraction condition \eqref{eq:Contraction}.
To prove the contraction property of $H$, we first calculate the following matrices $\nabla_{\xi^0}H=\left(\dfrac{\partial h_i}{\partial \xi^0_j}\right)_{1\leq i,j\leq 5}$, and $\nabla_{\xi^1}H=\left(\dfrac{\partial h_i}{\partial \xi^1_j}\right)_{1\leq i,j\leq 5}$ at the point $\mathcal P$.
\\
\textbf{Calculation of $\nabla_{\xi^0}H(\mathcal P)=\left(\dfrac{\partial h_i}{\partial \xi^0_j}(\mathcal P)\right)_{1\leq i,j\leq 5}$.} \\
From \eqref{eq:formula for h2} and \eqref{eq:formula for h3}, $h_2$ and $h_3$ do not depend on $\xi^0$, then
$$\partial_{\xi^0_j}h_2(\mathcal P)=\partial_{\xi^0_j}h_3(\mathcal P)=0,\qquad 1\leq j\leq 5.$$
Also from the definition of $h_5$ in \eqref{eq:h5formula}
$$\partial_{\xi^0_j}h_5(\mathcal P)=-\delta_{j}^5\,(\tan t)|_{\mathcal P}= 0,\qquad 1\leq j\leq 5.$$
We next calculate $\nabla_{\xi^0} h_1(\mathcal P)$. From \eqref{eq:formula for h1},
$$
\nabla_{\xi^0} h_1(\mathcal P)=\dfrac{1}{\widetilde {F}_{1b}\left(0,{\bf 0},P\right)}\nabla_{\xi^0} \widetilde {F}_{1r}(0,{\bf 0},P).
$$
Recall from \eqref{eq:values at 0}
$$\widetilde{ F}_{1b}\left(0,{\bf 0},P\right)=\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right),\qquad \mu_r(0,{\bf 0})=0,\qquad D_r(0,{\bf 0})=d_0.$$
Differentiating $\widetilde F_{1r}$ given in \eqref{eq:derivativeformula} with respect to $\xi_j^0$, we then get
$$
\dfrac{\partial\widetilde F_{1r}}{\partial \xi_j^0}(0,{\bf 0},P)=\delta_{j}^3+d_0\dfrac{\partial \widetilde{\mu}_r}{\partial\xi_j^0}(0,{\bf 0},P),\qquad 1\leq j\leq 5.
$$
From \eqref{eq:values at 0}, $\mathcal A_r({\bf 0)}=\dfrac{-n_r}{\Delta_r\rho_0}$, then differentiating $\widetilde \mu_r$ in \eqref{eq:derivativeformula} with respect to $\xi_j^0$ at $(0,{\bf 0},P)$ yields
\begin{equation}\label{eq:derivative of tilde mur}
\dfrac{\partial \widetilde{\mu}_r}{\partial\xi_j^0}(0,{\bf 0},P)=\dfrac{1}{\Delta_r \rho_0}\delta_j^4.
\end{equation}
Therefore
\begin{align}
\nabla_{\xi^0} \widetilde{F}_{1r}(0,{\bf 0}, P)&=\left(0,0,1,\dfrac{1}{k_0\Delta_r},0\right)\label{eq:grad tilde F1r}.
\end{align}
We conclude that
\begin{equation}\label{eq:grad h1 xi0}
\nabla_{\xi^0} h_1(\mathcal P)=\dfrac{1}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}\left(0,0,1,\dfrac{1}{k_0\Delta_r},0\right).
\end{equation}
We next calculate $\nabla_{\xi^0} h_4(\mathcal P)$. Recall from \eqref{eq:values at 0} that
{\small
$$
\mu_b(0,{\bf 0})=\mu_r(0,{\bf 0})=0,\quad \widetilde\mu_b(0,{\bf 0}, P)=\dfrac{1}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right),\quad \Lambda_{2,r}(0,{\bf 0})=\dfrac{1}{\Delta_r}, \quad\Lambda_{2,b}(0,{\bf 0})=\dfrac{1}{\Delta_b},\quad
\mathcal A_r({\bf 0})=-\dfrac{n_r}{\Delta_r\rho_0}.
$$
}
Then from \eqref{eq:formula for h4} it follows that
$$
\dfrac{\partial h_4}{\partial \xi_j^0}=\dfrac{\dfrac{1}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\dfrac{1}{\Delta_r}}{\dfrac{1}{\Delta_r \rho_0}\dfrac{1}{\Delta_b}}\delta_j^1=\rho_0\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)\delta_j^1,\quad 1\leq j\leq 5.
$$
Hence
\begin{equation}\label{eq:grad h4 xi0}
\nabla_{\xi^0} h_4(\mathcal P)=\rho_0\left( \dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0},0,0,0,0\right).
\end{equation}
We then conclude that
\begin{equation}\label{eq:Total derivative xi0}
\nabla_{\xi^0} H(\mathcal P)=
\begin{bmatrix}
0 & 0 & \dfrac{1}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}& \dfrac{\dfrac{1}{k_0\Delta_r}}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}& 0 \\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
\rho_0\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right) & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
\end{bmatrix}.
\end{equation}
To calculate the spectral radius of the matrix $\nabla_{\xi^0} H(\mathcal P)$,
set
\begin{equation}\label{eq:form of a c}
a=\dfrac{1}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)},\qquad
c=\rho_0\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right).
\end{equation}
The eigenvalues of $\nabla_{\xi^0} H(\mathcal P)$ are $0$ (with multiplicity $3$), and $\pm\sqrt{ \dfrac{ac}{k_0\Delta_r}}$.
Notice that from \eqref{eq:P}, and \eqref{eq:trick}
\begin{equation}\label{eq:trick 2}
\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}=\dfrac{\Delta_r-\Delta_b-\sqrt{(\Delta_r-\Delta_b)^2-4k_0\Delta_r\Delta_b}}{2}=\dfrac{2k_0\Delta_r\Delta_b}{\Delta_r-\Delta_b+\sqrt{(\Delta_r-\Delta_b)^2-4k_0\Delta_r\Delta_b}}>0,
\end{equation}
then by \eqref{eq:p1} and \eqref{eq:second form p1}
\begin{equation}\label{eq:trick 3}
\dfrac{ac}{k_0\Delta_r}=\dfrac{\dfrac{\rho_0}{k_0\Delta_r}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}=\dfrac{\dfrac{\rho_0}{k_0\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)}{\rho_0+\dfrac{d_0}{\Delta_r}\left(\dfrac{\Delta_r}{n_r}+\dfrac{p_4}{\rho_0}\right)}>0.
\end{equation}
Therefore all the eigenvalues of $\nabla_{\xi^0} H(\mathcal P)$ are real and the spectral radius of $\nabla_{\xi^0} H(\mathcal P)$ is
$R_{\xi^0}=\sqrt{\dfrac{ac}{k_0\Delta_r}}.$ We estimate $R_{\xi^0}$.
From \eqref{eq:formula for delta}, \eqref{eq:trick 2}, and \eqref{eq:trick 3}
\begin{equation}\label{eq:ac}
\dfrac{ac}{k_0\Delta_r}
=
\dfrac{\dfrac{2\Delta_b }{\Delta_r-\Delta_b+\sqrt{\delta}}}{1+\dfrac{2\Delta_b}{\Delta_r-\Delta_b+\sqrt{\delta}}}
=
\dfrac{2\,\Delta_b}{\Delta_r+\Delta_b+\sqrt{\delta}}
\leq
\dfrac{2\,\Delta_b}{\Delta_r+\Delta_b}:=\delta_0<1.
\end{equation}
Since $\sqrt{\delta}<\Delta_r-\Delta_b$, we conclude that
\begin{equation}\label{eq:Rxi0 bound}
\sqrt{\dfrac{\Delta_b}{\Delta_r}}< R_{\xi^0}=\sqrt{\dfrac{2\,\Delta_b}{\Delta_r+\Delta_b+\sqrt{\delta}}}\leq\sqrt{\dfrac{2\Delta_b}{\Delta_b+\Delta_r}}=\sqrt{\delta_0}<1.
\end{equation}
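The eigenvalue computation and the bound \eqref{eq:Rxi0 bound} can be illustrated numerically (same sample data $n_r=1.5$, $n_b=1.6$, $\rho_0=1$, $d_0=500$ as before, which are assumptions for illustration). Rather than relying on a library routine, the sketch exhibits explicit eigenvectors of the $5\times 5$ matrix for the eigenvalues $\pm\sqrt{ac/(k_0\Delta_r)}$.

```python
import math

# Build grad_{xi^0} H(P) for sample data and verify its nonzero eigenvalues.
n_r, n_b = 1.5, 1.6
Dr, Db = n_r / (n_r - 1), n_b / (n_b - 1)
rho0, d0 = 1.0, 500.0
k0 = rho0 / d0
delta = (Dr - Db)**2 - 4 * k0 * Dr * Db
p4 = (-(Dr / n_r + Db / n_b) - math.sqrt(delta)) / 2 * rho0   # p_4 from (eq:P)

a = 1.0 / (rho0 + d0 / Db * (Db / n_b + p4 / rho0))
c = rho0 * (Db / n_b + p4 / rho0)
M = [[0.0] * 5 for _ in range(5)]
M[0][2] = a
M[0][3] = a / (k0 * Dr)
M[3][0] = c

matvec = lambda A, v: [sum(Aij * vj for Aij, vj in zip(row, v)) for row in A]
R = math.sqrt(a * c / (k0 * Dr))            # claimed spectral radius
for lam in (R, -R):
    v = [1.0, 0.0, 0.0, c / lam, 0.0]       # eigenvector for eigenvalue lam
    Mv = matvec(M, v)
    assert all(abs(x - lam * y) < 1e-9 for x, y in zip(Mv, v))

# R equals sqrt(2 Delta_b/(Delta_r + Delta_b + sqrt(delta))), and the
# bound sqrt(Delta_b/Delta_r) < R < 1 of (eq:Rxi0 bound) holds.
assert abs(R - math.sqrt(2 * Db / (Dr + Db + math.sqrt(delta)))) < 1e-9
assert math.sqrt(Db / Dr) < R < 1
```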
\textbf{Calculation of $\nabla_{\xi^1}H(\mathcal P)=\left(\dfrac{\partial h_i}{\partial \xi^1_j}(\mathcal P)\right)_{1\leq i,j\leq 5}$.} Notice from \eqref{eq:formula for h2}, \eqref{eq:formula for h3} and \eqref{eq:h5formula} that $h_2$, $h_3$ and $h_5$ do not depend on $\xi^1$ then
$$\nabla_{\xi^1}h_2(\mathcal P)=\nabla_{\xi^1}h_3(\mathcal P)=\nabla_{\xi^1}h_5(\mathcal P)={\bf 0}.$$
We calculate $\nabla_{\xi^1}h_1(\mathcal P)$. From \eqref{eq:formula for h1}, and the fact that $p_1=h_1(\mathcal P)=\dfrac{\widetilde{F}_{1r}(0,{\bf 0}, P)}{\widetilde{F}_{1b}(0,{\bf 0}, P)}$ we get
\begin{align*}
\nabla_{\xi^1}h_1(\mathcal P)&=-\dfrac{\widetilde{F}_{1r}(0,{\bf 0}, P)}{\widetilde{F}_{1b}(0,{\bf 0}, P)^2}\nabla_{\xi^1} \widetilde {F}_{1b}(0,{\bf 0},P)=-\dfrac{p_1}{\widetilde F_{1b}(0,{\bf 0},P)}\nabla_{\xi^1} \widetilde {F}_{1b}(0,{\bf 0},P).
\end{align*}
Recall from \eqref{eq:values at 0}
$$\widetilde{F}_{1b}(0,{\bf 0},P)=\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right).$$
Similarly as in \eqref{eq:grad tilde F1r}
$$\nabla_{\xi^1}\widetilde{F}_{1b}(0,{\bf 0},P)=\left(0,0,1,\dfrac{1}{k_0\Delta_b},0\right).$$
Hence
\[
\nabla_{\xi^1}h_1(\mathcal P)=\dfrac{-p_1}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}\left(0,0,1,\dfrac{1}{k_0\Delta_b},0\right).
\]
We next calculate $\nabla_{\xi^1}h_4(\mathcal P)$.
Recall from \eqref{eq:values at 0}
$$\mathcal A_b({\bf 0})=-\dfrac{n_b}{\Delta_b\rho_0},\quad
\Lambda_{2,b}(0,{\bf 0})=\dfrac{1}{\Delta_b},\quad
\Lambda_{2,r}(0,{\bf 0})=\dfrac{1}{\Delta_r},\quad
\mu_r(0,{\bf 0})=\mu_b(0,{\bf 0})=0,
$$
and as in \eqref{eq:derivative of tilde mur} $\nabla_{\xi^1}\widetilde{\mu}_b(0,{\bf 0},P)=\dfrac{1}{\Delta_b \rho_0}\left(0,0,0,1,0\right)$,
then from \eqref{eq:formula for h4}
\[
\nabla_{\xi^1} h_4(\mathcal P)=\dfrac{p_1\dfrac{1}{\Delta_r}}{\dfrac{1}{\Delta_r\rho_0}\dfrac{1}{\Delta_b}}\dfrac{1}{\Delta_b\rho_0}(0,0,0,1,0)=p_1(0,0,0,1,0)
\]
We conclude that
\begin{equation}\label{eq:Total derivative xi1}
\nabla_{\xi^1} H(\mathcal P)=
\begin{bmatrix}
0 & 0 & \dfrac{-p_1}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}& \dfrac{\dfrac{-p_1}{k_0\Delta_b}}{\rho_0+\dfrac{d_0}{\Delta_b}\left(\dfrac{\Delta_b}{n_b}+\dfrac{p_4}{\rho_0}\right)}& 0 \\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & p_1 & 0\\
0 & 0 & 0 & 0 & 0\\
\end{bmatrix}.
\end{equation}
Notice that $\nabla_{\xi^1} H(\mathcal P)$ is an upper triangular matrix with eigenvalues $0$ (with multiplicity $4$) and $p_1$; hence the spectral radius of $\nabla_{\xi^1} H(\mathcal P)$ is
\begin{equation}\label{eq:spect of grad xi1}
R_{\xi^1}=|p_1|.
\end{equation}
\textbf{Choice of the norm.}
We are now ready to construct a norm for which $H$ is a contraction in the last two variables.
In fact,
we will construct a norm denoted by $\|\cdot\|_{k_0}$ in $\R^5$ depending on $k_0$ such that for small $k_0$
\begin{equation}\label{eq:sum of norms less than one}
\left|\left\|\nabla_{\xi^0}H(\mathcal P)\right\|\right|_{k_0}+\left|\left\|\nabla_{\xi^1}H(\mathcal P)\right\|\right|_{k_0}<1
\end{equation}
where $\left|\left\|\cdot\right\|\right|_{k_0}$ is the matrix norm in $R^{5\times 5}$ induced by $\|\cdot\|_{k_0}$. Recall that $k_0=\rho_0/d_0$.
We will show that for each $0<k_0\leq \dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$, there exist $\lambda_1,\cdots ,\lambda_5$ positive depending on $k_0$ such that the norm in $\R^5$ having the form
$$\|x\|_{k_0}=\max\left(\lambda_1|x_1|,\lambda_2|x_2|,\lambda_3|x_3|,\lambda_4|x_4|,\lambda_5|x_5|\right),$$
satisfies
\[
\left|\left\|\nabla_{\xi^0}H(\mathcal P)\right\|\right|_{k_0}<1.
\]
We first choose $\lambda_1=\lambda_2=\lambda_5=1$.
Assume $x\in \R^5$ with $\|x\|_{k_0}=1$, which implies $|x_i|\leq \dfrac{1}{\lambda_i}$. Then from \eqref{eq:Total derivative xi0}, and \eqref{eq:ac}
\begin{align*}
\left\|\nabla_{\xi^0} H(\mathcal P)x\right\|_{k_0}&=\max\left(\left|a\,x_3+\dfrac{a}{k_0\Delta_r}\,x_4\right|,\lambda_4 \,|c\, x_1|\right)\\
&\leq \max\left(\dfrac{1}{\lambda_3}\,|a|+\dfrac{1}{\lambda_4}\dfrac{|a|}{k_0\Delta_r},\lambda_4\,|c|\right)\leq
\max\left(\dfrac{|a|}{\lambda_3}+\dfrac{\delta_0}{\lambda_4\,|c|},\lambda_4\,|c|\right)
\end{align*}
with $a$ and $c$ defined in \eqref{eq:form of a c}. Hence
\[
\left|\left\|\nabla_{\xi^0}H(\mathcal P)\right\|\right|_{k_0}=\max_{\|x\|_{k_0}= 1}\left\|\nabla_{\xi^0} H(\mathcal P)x\right\|_{k_0}
\leq
\max\left\{\dfrac{|a|}{\lambda_3}+\dfrac{\delta_0}{\lambda_4\,|c|},\lambda_4\,|c|\right\}.
\]
We will choose $\lambda_3$ and $\lambda_4$ so that the last maximum is less than one.
Let $\delta_0<\delta_1<\delta_2<1$, with $\delta_0$ defined in \eqref{eq:ac}, $\lambda_4=\delta_2/|c|$ and $\lambda_3=N\,\lambda_4$, with $N$ to be determined depending only on $n_r$ and $n_b$.
Then
\[
\max\left\{\dfrac{|a|}{\lambda_3}+\dfrac{\delta_0}{\lambda_4\,|c|},\lambda_4\,|c|\right\}=\max \left\{\dfrac{1}{\lambda_4\,|c|}\left(\dfrac{a\,c}{N}+\delta_0 \right),\lambda_4\,|c| \right\},
\]
notice that $ac>0$ from \eqref{eq:trick 3}.
From \eqref{eq:ac}, $a\,c\leq k_0\,\Delta_r\leq \Delta_r\,\dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}:=B(r,b)$, we obtain
\[
\max\left\{\dfrac{|a|}{\lambda_3}+\dfrac{\delta_0}{\lambda_4\,|c|},\lambda_4\,|c|\right\}\leq \max \left\{\dfrac{1}{\delta_2}\left(\dfrac{B(r,b)}{N}+\delta_0 \right),\delta_2 \right\}.
\]
Now pick $N$, large depending only on $n_r$ and $n_b$, so that
$\dfrac{B(r,b)}{N}+\delta_0<\delta_1$. Therefore,
\[
\left|\left\|\nabla_{\xi^0}H(\mathcal P)\right\|\right|_{k_0}\leq \max\left\{\dfrac{\delta_1}{\delta_2},\delta_2 \right\}:=s_0<1,
\]
for all $0<k_0\leq \dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$.
It remains to show that with the above norm $\|\cdot \|_{k_0}$ we also have \eqref{eq:sum of norms less than one}. To do this, we need to choose $k_0$ sufficiently small.
In fact, from \eqref{eq:Total derivative xi1} and for $\|x\|_{k_0}= 1$ we have (since $\lambda_3=N\,\lambda_4$, $\lambda_4=\delta_2/|c|$, and $ac<k_0\Delta_r\leq B(r,b)$)
\begin{align*}
\left\|\nabla_{\xi^1} H(\mathcal P) x\right\|_{k_0}&=|p_1|\,\max\left(\left| a\,x_3+\dfrac{1}{k_0\Delta_b}a\,x_4\right|,\lambda_4\,|x_4|\right)\\
&\leq |p_1|\,\max\left(\dfrac{|a|}{\lambda_3}+\dfrac{1}{\lambda_4}\dfrac{|a|}{k_0\Delta_b},1\right)\\
&=
|p_1|\,\max\left(\dfrac{1}{\delta_2}\(\dfrac{a\,c}{N}+\dfrac{a\,c}{k_0\,\Delta_b}\),1\right)\\
&\leq
|p_1|\,\max\left(\dfrac{1}{\delta_2}\(\dfrac{B(r,b)}{N}+\dfrac{\Delta_r}{\Delta_b}\),1\right):=|p_1|\,s_1,
\end{align*}
for all $0<k_0\leq \dfrac{\(\Delta_r-\Delta_b\)^2}{4\Delta_r\Delta_b}$ with $s_1$ depending only on $n_r$ and $n_b$ ($s_1>1$).
Hence
\[
\left|\left\|\nabla_{\xi^1}H(\mathcal P)\right\|\right|_{k_0}
\leq
|p_1|\,s_1.
\]
Therefore
$$
\left|\left\|\nabla_{\xi^0} H(\mathcal P) \right\|\right|_{k_0}+\left|\left\|\nabla_{\xi^1} H(\mathcal P) \right\|\right|_{k_0}
\leq s_0+ |p_1|\,s_1.$$
From \eqref{eq:formula for delta} and \eqref{eq:P}, $p_1\to 0$ as $k_0\to 0$, and therefore we obtain \eqref{eq:sum of norms less than one} for $k_0$ close to $0$.
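The limit $p_1\to 0$ as $k_0\to 0$ can also be observed numerically (a sketch with the same illustrative indices $n_r=1.5$, $n_b=1.6$ used above, which are assumptions for illustration):

```python
import math

# |p_1| from (eq:P) decreases to 0 as k_0 -> 0, so s_0 + |p_1| s_1 < 1
# for k_0 small enough.
n_r, n_b = 1.5, 1.6
Dr, Db = n_r / (n_r - 1), n_b / (n_b - 1)

def p1(k0):
    s = math.sqrt((Dr - Db)**2 - 4 * k0 * Dr * Db)
    return (Dr - Db - s) / (Db - Dr - s)

vals = [abs(p1(k0)) for k0 in (1e-3, 1e-4, 1e-5, 1e-6)]
assert all(x > y for x, y in zip(vals, vals[1:]))   # decreasing in k_0
assert vals[-1] < 1e-3                              # tends to 0
```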
\textbf{Verification of \eqref{eq:Contraction}}.
Let us now show that $H$ satisfies the Lipschitz condition \eqref{eq:Lip in xi0,xi1} with constants satisfying \eqref{eq:Contraction} in a sufficiently small neighborhood of $\mathcal P$ and with respect to the norm chosen.
In fact, for $k_0$ sufficiently small, from \eqref{eq:sum of norms less than one}, there is $0<c_0<1$ so that
\[
\left|\left\|\nabla_{\xi^0}H(\mathcal P)\right\|\right|_{k_0}+\left|\left\|\nabla_{\xi^1}H(\mathcal P)\right\|\right|_{k_0}\leq c_0<1.
\]
Since $H$ is $C^1$, there exists
a norm-neighborhood $N_{\varepsilon}(\mathcal P)$ of $\mathcal P$ as in Theorem \ref{thm: Existence}, so that
\begin{align}\label{eq:contraction condition}
&\max_{\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\)\in N_{\varepsilon}(\mathcal P)}\left|\left\|\nabla_{\xi^0} H\(t;\zeta^0,\zeta^1;\xi^0,\xi^1\) \right\|\right|_{k_0}\\
&\qquad \qquad
+\max_{\(\hat t;\hat \zeta^0,\hat \zeta^1;\hat \xi^0,\hat \xi^1\)\in N_{\varepsilon}(\mathcal P)}\left|\left\|\nabla_{\xi^1} H\(\hat t;\hat \zeta^0,\hat \zeta^1;\hat \xi^0,\hat \xi^1\) \right\|\right|_{k_0}\leq c_1,\notag
\end{align}
for some $c_0<c_1<1$. Then by Proposition \ref{eq:prop Bound on C0+C1}, the inequalities \eqref{eq:Lip in xi0,xi1} and \eqref{eq:Contraction} hold with $C_0=\max_{N_{\varepsilon}(\mathcal P)}\left|\left\|\nabla_{\xi^0} H\right\|\right|_{k_0}$ and $C_1=\max_{N_{\varepsilon}(\mathcal P)}\left|\left\|\nabla_{\xi^1} H\right\|\right|_{k_0}$.
\textbf{Verification of \eqref{eq:bd on h1}.}
If $k_0<\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$, then from Theorem \ref{thm:Algebraic system} $|p_1|<1$, and so $|h_1(\mathcal P)|<1$. Hence by continuity there exists a neighborhood of $\mathcal P$ so that $\left|h_1(t;\zeta^0,\zeta^1;\xi^0,\xi^1)\right|< 1$ for all $(t;\zeta^0,\zeta^1;\xi^0,\xi^1)$ in that neighborhood. Therefore, condition \eqref{eq:bd on h1} in Theorem \ref{thm: Existence} is satisfied.
We conclude that for small values of $k_0$ Theorem \ref{thm: Existence} is applicable and therefore there exists a unique local solution to the system \eqref{eq:Optic System} with $Z'(0)=P$ and $P $ given in \eqref{eq:P}. Notice also that for such a solution $Z$, we have $z_1(0)=0$, and $\left| z_1'(0)\right|=\left| h_1(\mathcal P)\right|=\left|p_1\right|<1$, then by reducing the neighborhood of zero, if necessary, we have that the solution satisfies $|z_1(t)|\leq |t|$. Since we also showed in Theorem \ref{thm:Algebraic system} that $p_1\neq 0$,
Theorem \ref{thm:Converse} is applicable and
the proof of Theorem \ref{thm:Existence last} is then complete.
\end{proof}
Summarizing the question of solvability of Problem B in the plane in a neighborhood of zero, we have:
\begin{enumerate}
\item If $k_0>\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$ then Problem $B$ is not locally solvable.
\item Let $A=(0,\rho_0)$, $B=(0,\rho_0+d_0)$, and $0<k_0<C(r,b)<\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$.
Then there exists $\delta>0$
and a unique lens $(L,S)$ with lower face $L=\{\rho(t)x(t)\}_{t\in [-\delta,\delta]}$, $x(t)=(\sin t,\cos t)$, and upper face $S=\{f_b(t)\}_{t\in [-\delta,\delta]}$, $f_b$ defined in Theorem \ref{thm:Converse}, such that $L$ passes through $A$, $S$ passes through $B$, and so that $(L,S)$ refracts all rays emitted from $O$ with colors r and b and direction $x(t)$, $t\in [-\delta,\delta]$ into the vertical direction $e=(0,1)$.
\end{enumerate}
\begin{remark}\label{rmk:final remark}\rm
In this final remark, we point out that Theorem \ref{thm: Existence} is not applicable to find solutions to the system \eqref{eq:Optic System} when $k_0\leq \dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$ and $k_0$ is away from $0$.
In this case we claim that there is no norm in $\R^5$ for which we can obtain \eqref{eq:Lip in xi0,xi1} with $C_0$ and $C_1$ satisfying \eqref{eq:Contraction}.
In fact, from \eqref{eq:formula for delta}, \eqref{eq:P}, \eqref{eq:Rxi0 bound}, and \eqref{eq:spect of grad xi1}
$$R_{\xi^0}+R_{\xi^1}=\sqrt{\dfrac{2\Delta_b}{\Delta_r+\Delta_b+\sqrt{\delta}}}+|p_1|\to \sqrt{\dfrac{2\Delta_b}{\Delta_r+\Delta_b}}+1>1,$$
as $k_0\to \dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$ from below.
Hence for $k_0$ close to $\dfrac{(\Delta_r-\Delta_b)^2}{4\Delta_r\Delta_b}$, we have $R_{\xi^0}+R_{\xi^1}>1$, and hence by Corollary \ref{cor:spectral radii} the claim follows.
\end{remark}
\section*{Acknowledgements}{\small C. E. G. was partially supported by NSF grant DMS--1600578, and A. S. was partially supported by Research Grant 2015/19/P/ST1/02618 from the National Science Centre, Poland, entitled "Variational Problems in Optical Engineering and Free Material Design".}\\
\vspace{-.6cm}
\begin{wrapfigure}{l}{0.2\textwidth}
\begin{center} \includegraphics[width=0.2\textwidth]{EUlogo2.png}
\end{center}
\end{wrapfigure}\\
{\footnotesize This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie grant agreement No. 665778.}\\ \\
\newcommand{\abs}[1]{\left| #1 \right|}
\renewcommand{\vec}[1]{\mathbf{#1}}
\section{ Lower Bound Constructions for $k=1$}
\label{lowerb}
On page 235 of \cite{schrijver}, an example is given that shows that the bound $2^n$ given by the Doignon-Bell-Scarf
theorem is tight. In this section, we present a construction of a polytope $P$ showing that
our upper bound for $k=1$ from Theorem \ref{k-doignon} is tight. This example, together with the verification
of its properties, establishes Theorem \ref{lowerboundtheorem}.
For notational convenience, given a set of natural numbers $N$,
let $l_N:= \min_{i\in N}\; i$ denote its least element.
We define the polyhedron as follows
\begin{alignat}{2}
P=\{ x\in \mathbb R^n:\qquad \sum_{i=1}^{j-1} \frac{1}{2^{i}} x_i + x_j + \sum_{i=j+1}^{n}
\frac{1}{2^{i-1}} x_i &\leq 1 \qquad&& j=1,...,n, \label{eq1}\\
- \sum_{i=1}^{j-1} \frac{1}{2^{i}} x_i - x_j - \sum_{i=j+1}^{n}
\frac{1}{2^{i-1}} x_i &\leq 1 \;&& j=1,...,n, \label{eq2}\\
-\frac{1}{\abs{N}} x_{l_N}+\sum_{i \in N, i \neq l_{N}} \frac{1}{\abs{N}} x_i - \sum_{i \not \in N}
\frac{1}{\abs{N}^n} x_i &\leq 1 \; \;&& \forall N \subseteq \{1,2,..,n\}; \; \; \abs{N} \geq 2, \label{eq3}\\
+\frac{1}{\abs{N}} x_{l_N}-\sum_{i \in N, i \neq l_{N}} \frac{1}{\abs{N}} x_i + \sum_{i \not \in N}
\frac{1}{\abs{N}^n} x_i &\leq 1 \; \;&& \forall N \subseteq \{1,2,..,n\}; \; \; \abs{N} \geq 2\quad \}.\label{eq4}
\end{alignat}
The rationale behind the construction of the polyhedron $P$ is the following.
First, it is constructed in such a way that $0$ is the only integer point in its interior.
Each of the inequalities \eqref{eq1}-\eqref{eq2} is tight at a unit vector $\pm e_i$
and excludes some integer points from $\{-1,0,1\}^n$.
Each of the inequalities \eqref{eq3}-\eqref{eq4} is tight at exactly one of the remaining integer points of $\{-1,0,1\}^n.$
We will prove Theorem \ref{lowerboundtheorem} through Lemmas \ref{lemmageq2} to \ref{allnecessary}.
Lemma \ref{lemmageq2} proves that the only valid integer points of $P$ are in $\{-1,0,1\}^n$. Lemma \ref{conditionfeasibility}
uses Lemma \ref{lemmageq2} to provide a necessary and sufficient condition of feasibility of integer points.
Lastly, Lemma \ref{allnecessary} shows that each inequality defining $P$ is necessary and contains exactly one tight integer point in its relative interior.
Because we have used only rational data, the polyhedron is in fact bounded, and thus a polytope: if it were unbounded, its recession cone
would contain a rational direction, which would force infinitely many integer points inside $P$.
\begin{lemma}
If $y \in \Z^n$ has at least one index $j$ such that $\abs{y_j} \geq 2$, then $y \not \in P$.
\label{lemmageq2}
\end{lemma}
\begin{proof}
Consider $y\in P\cap \mathbb Z^n$ and assume, for contradiction, that $|y_i|\geq 2$ for some index $i$; let $k$ be the largest such index.
We prove the case $y_k\geq 2$; the case $y_k\leq -2$ is symmetric and
omitted.
Add twice Inequality \eqref{eq1} with $j=k$ to Inequality \eqref{eq2}
with $j=1$ which yields
\begin{align}
\left( 2 - \frac{1}{2^{k-1}} \right) y_k + \sum_{i=k+1}^{n}
\frac{1}{2^{i-1}} y_i \leq 3.
\label{sumoftwoineq}
\end{align}
Since we have assumed that $|y_i|\leq 1$ for all indices $i\geq k+1$,
we can bound
\begin{align}
-\frac{1}{2^{k-1}} < \sum_{i=k+1}^n \frac{1}{2^{i-1}} y_i < \frac{1}{2^{k-1}}.
\label{boundrest}
\end{align}
Using \eqref{sumoftwoineq} and \eqref{boundrest}, we conclude that
\begin{align}
y_k < \frac{3 \cdot 2^{k-1}+1}{2^k-1}.
\label{contradic}
\end{align}
For $k\geq 3$, this provides a contradiction,
since the right-hand side of \eqref{contradic} can be shown to be smaller than $2$.
For $k=2$, it yields $y_2\leq 2$, and hence $y_2=2$.
For $k=1$, the inequalities \eqref{eq1}
and \eqref{eq2} with $j=1$, together with the bound $|\sum_{i=2}^n \frac{1}{2^{i-1}}y_i|< 1$
(which uses $|y_i|\leq 1$ for $i\geq 2$), yield
$|y_1|\leq 1$, a contradiction.
To finish the proof, it remains to rule out the case $k=2$, i.e. $y_2=2$.
If $y_1$ is nonnegative, $y$ violates
\eqref{eq1} with $j=2$.
If $y_1$ is negative, $y$ violates \eqref{eq3}
with $N=\{1,2\}$.
\cqfd
\end{proof}
\begin{definition}
Given a point $y \in \Z^n\setminus\{0\}$, let $l(y)$ be the least index of a nonzero entry of $y$,
i.e. $y_{l(y)}\neq 0$ and $y_i=0$ for all $i<l(y).$
\end{definition}
\begin{lemma}
Let $y \in \{-1,0,1\}^n$. Then $y$ is in $P$ if and only if one of the
following is true
\begin{enumerate}[(i)]
\item $y$ is the origin
\item $y_{l(y)} =1$ and $y_i \in \{-1, 0 \}$, for all $i \geq l(y)+1$
\item $y_{l(y)} = -1$ and $y_i \in \{1, 0 \}$, for all $i \geq l(y)+1$.
\end{enumerate}
\label{conditionfeasibility}
\end{lemma}
\begin{proof}
We first prove that if $y\in\{-1,0,1\}^n$ is feasible then it must satisfy one of the
three conditions.
Assume therefore that $y\in\{-1,0,1\}^n$ is feasible.
The point $y=0$ is trivially feasible (option (i)).
If $y$ is not the origin, there must be
some $y_j \neq 0$. Assume that $y_{l(y)} =1$. If there is a $k\neq l(y)$ with $y_k =1$, then $y$
violates Inequality \eqref{eq1} with $j=k$:
$$\sum_{i=1}^{k-1} \frac{1}{2^{i}} x_i + x_k + \sum_{i=k+1}^{n}
\frac{1}{2^{i-1}} x_i \leq 1$$
so $y$ satisfies (ii). The case $y_{l(y)} = -1$ is
symmetric, is omitted, and leads to option (iii).
Conversely, assume that $y$ satisfies one of the three conditions; we want to prove that it is feasible.
Obviously, if $y$ is the origin, it is feasible.
Assume that $y$ satisfies (ii), the other case is symmetric and omitted here.
We now prove that all inequalities are satisfied by $y$.
First consider \eqref{eq1}. The term with $x_{l(y)}$ is less than or equal to $1$,
whereas the remaining part of the summation is nonpositive, which makes the
left-hand side of \eqref{eq1} less than or equal to $1$.
Consider now \eqref{eq2}. If $j\leq l(y)$, the term with $x_{l(y)}$ is nonpositive,
whereas the sum of the remaining terms is less than or equal to $1$, which proves that the
inequality is satisfied. If $j\geq l(y)+1$, let us denote the inequality
by $\sum_{i=1}^n \alpha_i x_i \leq 1.$ Observe that $\alpha_{l(y)}=-\frac{1}{2^{l(y)}}$,
$\alpha_j=-1$ and $0>\alpha_i\geq -\frac{1}{2^{l(y)+1}}$ for all $i\geq l(y)+1, i\neq j$.
Therefore $\alpha_{l(y)} y_{l(y)} +\sum_{i\geq l(y)+1, i\neq j} \alpha_i y_i\leq 0$
and $\alpha_j y_j\leq 1$, which makes the
left-hand side of \eqref{eq2} less than or equal to $1$.
Consider Inequality \eqref{eq3}.
First observe that the second sum of \eqref{eq3} is bounded from above by $1/|N|$:
it is at most $(n-|N|)/|N|^n$, and $|N|\geq 2$ implies $|N|^{n-1}\geq 2^{n-1}\geq n$.
Concerning the first two terms, we distinguish two cases.
If $l(y)\in N$ and $l_N\neq l(y)$, then $l_N<l(y)$, so $y_{l_N}=0$,
which bounds $-\frac{1}{\abs{N}} x_{l_N}+\sum_{i \in N, i \neq l_{N}} \frac{1}{\abs{N}} x_i \leq \frac{1}{|N|}$.
Otherwise $-\frac{1}{\abs{N}} x_{l_N}\leq \frac{1}{|N|}$ and $\sum_{i \in N, i \neq l_{N}} \frac{1}{\abs{N}} x_i \leq 0$.
In both cases, since $|N|\geq 2$, the left-hand side of \eqref{eq3} is bounded from above by $2/|N|\leq 1$.
Consider Inequality \eqref{eq4}.
We distinguish two cases. In the first case, we assume that $l(y)=l_N$.
This implies that the first term of \eqref{eq4} equals $1/|N|$, the first sum is bounded
from above by $(|N|-1)/|N|$ as it contains $|N|-1$ terms, and the last sum is nonpositive
by condition (ii). Therefore the left-hand side of \eqref{eq4} is bounded from above by $1$.
In the second case, we assume that $l(y)\neq l_N$. Therefore the first term
of \eqref{eq4} is bounded from above by $0$, the first sum is bounded from above by
$(|N|-1)/|N|$, and the second sum is bounded from above by $1/|N|^n$, since $y_{l(y)}$ is the
only coordinate equal to $1$; the result follows.
\cqfd
\end{proof}
\begin{lemma}
Each of the $2(2^n-1)$ inequalities defining $P$ is necessary, i.e., the removal of any inequality from the description of $P$ results in the inclusion of at least one additional integer point in the interior of $P$.
\label{allnecessary}
\end{lemma}
\begin{proof}
We will show the lemma by proving that each facet of
$P$ contains exactly one integer feasible point in its relative interior.
Consider an inequality of type \eqref{eq1}. Observe that for any point satisfying
conditions (ii) or (iii) of Lemma \ref{conditionfeasibility},
$\sum_{i=1}^{j-1} \frac{1}{2^i} x_i + \sum_{i=j+1}^{n}
\frac{1}{2^{i-1}} x_i <1$. Therefore to make \eqref{eq1} tight
we need $x_j=1$ which implies $\sum_{i=1}^{j-1} \frac{1}{2^i} x_i + \sum_{i=j+1}^{n}
\frac{1}{2^{i-1}} x_i =0$. Since each coefficient here exceeds the sum of all smaller
coefficients, this in turn implies $x_i=0$ for all $i\neq j$.
We have therefore proven that, for each inequality of type \eqref{eq1}, only the
unit vector $e_j$ (where $e_j$ denotes the $j^{th}$ unit vector) is tight,
integer and valid. By symmetry, for each inequality of type \eqref{eq2},
only $-e_j$ is tight, integer and valid.
Consider an inequality of type \eqref{eq3}. Observe that for any point
satisfying conditions (ii) or (iii) of Lemma \ref{conditionfeasibility},
$- \sum_{i \not \in N}
\frac{1}{\abs{N}^n} x_i < \frac{1}{|N|}.$
Therefore to make \eqref{eq3} tight, we need
$-\frac{1}{\abs{N}} x_{l_N}+\sum_{i \in N, i \neq l_{N}} \frac{1}{\abs{N}} x_i = 1$,
which implies $x_{l_N}=-1$ and $x_i=1$ for all $i\in N\setminus \{l_N\}$,
and $x_i=0$ for $i\not\in N.$
Symmetrically, for each inequality of type \eqref{eq4}, only the point with
$x_{l_N}=1$, $x_i=-1$ for all $i\in N\setminus \{l_N\}$
and $x_i=0$ for $i\not\in N$ is tight, integer and valid.
By observing that the points shown to be tight
for the facet-defining inequalities of $P$ are pairwise distinct, the result follows.
Lastly, the fact that there are $2(2^n-1)$ inequalities follows from the fact
that they are in bijection with two copies of the nonempty subsets
of $\{1,\ldots, n\}$: the singletons correspond to \eqref{eq1}-\eqref{eq2} and the subsets of size at least $2$ to \eqref{eq3}-\eqref{eq4}.\cqfd
\end{proof}
In this section, we have dealt with the case $k=1$ and proven that the upper bound given in Theorem
\ref{k-doignon} is tight. Since the upper bound for $k=2$ matches that for $k=1$, it is natural to conjecture
that the bound is tight for $k=2$ as well. We know that for
$c(2,k)$ the bound is not tight for $k\geq 3$. We also believe that it is not tight for $n, k\ge 3$ and can further be improved.
\section{Value of Adjugate of Determinant}
Tags: Determinants
\begin{theorem}
Let $D$ be the [[Definition:Determinant of Matrix|determinant]] of order $n$.
Let $D^*$ be the [[Definition:Adjugate Matrix|adjugate]] of $D$.
Then $D^* = D^{n-1}$.
\end{theorem}
\begin{proof}
Let $\mathbf A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}\end{bmatrix}$ and $\mathbf A^* = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}\end{bmatrix}$.
Thus $\left({\mathbf A^*}\right)^\intercal = \begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\
A_{12} & A_{22} & \cdots & A_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
A_{1n} & A_{2n} & \cdots & A_{nn}\end{bmatrix}$ is the [[Definition:Transpose of Matrix|transpose]] of $\mathbf A^*$.
Let $c_{ij}$ be the typical element of $\mathbf A \left({\mathbf A^*}\right)^\intercal$.
Then $\displaystyle c_{ij} = \sum_{k \mathop = 1}^n a_{ik} A_{jk}$ by definition of [[Definition:Matrix Product (Conventional)|matrix product]].
Thus by the [[Expansion Theorem for Determinants/Corollary|corollary of the Expansion Theorem for Determinants]], $c_{ij} = \delta_{ij} D$.
So $\det \left({\mathbf A \left({\mathbf A^*}\right)^\intercal}\right) = \begin{vmatrix} D & 0 & \cdots & 0 \\
0 & D & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & D\end{vmatrix} = D^n$ by [[Determinant of Diagonal Matrix]].
From [[Determinant of Matrix Product]], $\det \left({\mathbf A}\right) \det \left({\left({\mathbf A^*}\right)^\intercal}\right) = \det \left({\mathbf A \left({\mathbf A^*}\right)^\intercal}\right)$
From [[Determinant of Transpose]]:
: $\det \left({\left({\mathbf A^*}\right)^\intercal}\right) = \det \left({\mathbf A^*}\right)$
Thus as $D = \det \left({\mathbf A}\right)$ and $D^* = \det \left({\mathbf A^*}\right)$ it follows that $DD^* = D^n$.
Now if $D \ne 0$, the result follows.
However, if $D = 0$ we need to show that $D^* = 0$.
Let $D^* = \begin{vmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}\end{vmatrix}$.
Suppose that at least one element of $\mathbf A$, say $a_{rs}$, is non-zero (otherwise the result follows immediately).
By the [[Expansion Theorem for Determinants]] and its corollary, we can expand $D$ by row $r$, and get:
:$\displaystyle D = 0 = \sum_{j \mathop = 1}^n A_{ij} t_j, \quad \forall i = 1, 2, \ldots, n$
where $t_1 = a_{r1}, t_2 = a_{r2}, \ldots, t_n = a_{rn}$.
But $t_s = a_{rs} \ne 0$.
So, by '''(work in progress)''':
:$D^* = \begin{vmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}\end{vmatrix} = 0$
{{WIP|One result to document, I've got to work out how best to formulate it.}}
[[Category:Determinants]]
\end{proof}
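As a quick numerical sanity check of the identity $D^* = D^{n-1}$ (a sketch in NumPy; the helper `cofactor_matrix` and the random test matrix are ours, and the comparison is in floating point):

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors A_ij = (-1)^(i+j) * det(minor_ij)."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(1)
n = 4
A = rng.integers(-3, 4, size=(n, n)).astype(float)

D = np.linalg.det(A)                         # D
D_star = np.linalg.det(cofactor_matrix(A))   # D*, the determinant of the adjugate
assert np.isclose(D_star, D ** (n - 1))      # D* = D^(n-1)
```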
TITLE: Relationship between homotopy pushout and ordinary pushout
QUESTION [3 upvotes]: I'm trying to understand the homotopy pushouts and currently looking at the homotopy cofiber.
For two maps $f \colon C \to A$ and $g \colon C \to B$ we defined the homotopy pushout to be the regular pushout after replacing the maps $f$ and $g$ by their cofibrations $C \to M_f$ and $C \to M_g$ where $M_f$ and $M_g$ are the mapping cylinders.
As far as I understand the homotopy cofiber (mapping cone) for a map $f \colon X \to Y$ is the pushout of the diagram $ * \leftarrow X \to M_f$ where the second map is a cofibration. This is the same as the pushout of, for example, $ D^2 \leftarrow X \to M_f$, which is the great thing about homotopy pushouts.
But what is the relationship with these pushouts to the regular pushout of $ * \leftarrow X \to Y$? If there is one in general?
For example if $X = S^1$ and $f$ is the attaching map of the 2-cell of $\mathbb{R} P^2$ to $S^1$ then the regular pushout of the maps $f$ and the inclusion $S^1 \to D^2$ is just $\mathbb{R} P^2$ but what happens when considering the homotopy pushouts?
Thanks in advance :)
REPLY [3 votes]: Given your specific model of homotopy pushout, we can construct a concrete "comparison map" from the homotopy pushout to the usual pushout. Given a span $S=(Y\stackrel{f}{\leftarrow} X \stackrel{g}{\rightarrow} Z)$, denote the pushout by $colim(S)$ and the homotopy pushout by $hocolim(S) := colim(M_f\leftarrow X \rightarrow M_g)$. By the universal property of the pushout for $hocolim(S)$, the compositions $M_f \stackrel{\simeq}{\to} Y \hookrightarrow colim(S)$ and $M_g \stackrel{\simeq}{\to} Z \hookrightarrow colim(S)$ induce a canonical continuous function $hocolim(S) \to colim(S)$. However this map depends quite a bit on the particular expression of $S$.
For example, consider the span $S=(*\leftarrow X \rightarrow *)$, whose pushout is a one-point space. The mapping cylinder of a constant map is the inclusion of the boundary of the cone $X\hookrightarrow CX$, so $$hocolim(S) = colim(CX\leftarrow X \rightarrow CX) = \Sigma X$$
where $\Sigma X$ is the suspension. In this case the comparison map $hocolim(S) \to colim(S)$ is just the constant map $\Sigma X\to *$.
The important point of the above example is that even though there is a homotopy equivalence of diagrams (i.e. homotopy equivalences between the spaces making the appropriate diagrams commute), the pushouts have different homotopy types. In fact one of the main motivations for the homotopy pushout is to have a construction where the homotopy type of the output is invariant under homotopy equivalences of diagrams, and it turns out that taking the pushout of a cofibrant replacement of the diagram does satisfy this invariance. (Here our model of cofibrant replacement is to replace the two target spaces with their mapping cylinders.) Moreover, there is a theorem (see for example Strom's Modern Classical Homotopy Theory Proposition 6.49) which says
If $S = (C\stackrel{f}{\leftarrow} A \stackrel{i}{\hookrightarrow} B)$ is a span such that $i$ is a cofibration, then the comparison map $hocolim(S) \to colim(S)$ is a homotopy equivalence.
In your particular example, the map $S^1 \to D^2$ is a cofibration, so the usual pushout $\mathbb{R}P^2$ actually "is" a homotopy pushout. In fact, the mapping cylinder of $S^1 \to \mathbb{R}P^1$ is the Möbius strip, so the pushout of the cofibrant replacement is even homeomorphic to $\mathbb{R}P^2$.
TITLE: multiplicative Euler's $\phi$ function
QUESTION [1 upvotes]: Here is the pdf and array
I am not understanding the proof that $\phi$ is a multiplicative function, i.e. for relatively prime $m,n$ we have $\phi(mn)=\phi(m)\phi(n)$.
There were three lemmas before the proof of the original theorem, which I understood:
$L_1: a \text{ is prime to } mn \Leftrightarrow (a,m)=1 \text{ and } (a,n)=1$
$L_2: \text{If } a=qn+r \text{ and } (r,n)=1, \text{ then } (a,n)=1$
$L_3:$ If $c$ is an integer and $(a,n)=1$, then the number of integers in the set $\{c,c+a,c+2a,\dots,c+(n-1)a\}$ that are prime to $n$ is $\phi(n)$
Then they arranged the $mn$ integers in $n$ rows and $m$ columns. I have not understood this arrangement, and the proof is based on it. Could anyone help me understand?
REPLY [1 votes]: I hope the following explanation makes the claim as clear as possible:
THM Assume that $(a,b)=1$. Then $$(a,y)=1 \text{ and } (b,x)=1\iff (ax+by,ab)=1$$
P We prove the contrapositive of each direction.
$(\Rightarrow)$ Suppose thus that there is a prime $p$ such that $p\mid (ax+by,ab)$. Then $p\mid ab$. Without loss of generality, assume $p \mid a$. Since $p\mid ax+by$, we have $p\mid by$, and since $(a,b)=1$, $p\mid y$. Thus $p\mid (a,y)\implies (a,y)>1$. We have thus proven, under the hypothesis that $(a,b)=1$, that $$(ax+by,ab)>1\implies (a,y)>1 \text{ or } (b,x)>1$$
since the other option would have been assuming that $p\mid b$.
$(\Leftarrow)$ Now suppose $(x,b)>1$. Then $(ax+by,ab)>1$ since $(x,b)\mid ax+by$. Analogously, $(a,y)>1$ implies $(ax+by,ab)>1$. $\blacktriangle$
COR Suppose that $(a,b)=1$, that $x$ ranges through the $\phi(b)$ numbers coprime to $b$, and that $y$ ranges through the $\phi(a)$ numbers coprime to $a$. Then $ax+by$ ranges, modulo $ab$, through the $\phi(a)\cdot\phi(b)$ numbers coprime to $ab$; hence $\phi(a\cdot b)=\phi(a)\cdot\phi(b)$.
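A quick numerical check of both the theorem and the multiplicativity (plain Python; the totient helper `phi` and the sample pair $a,b$ are ours):

```python
from math import gcd

def phi(n):
    """Euler's totient by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

a, b = 9, 20                  # any coprime pair
assert gcd(a, b) == 1

xs = [x for x in range(b) if gcd(x, b) == 1]   # x ranges over residues coprime to b
ys = [y for y in range(a) if gcd(y, a) == 1]   # y ranges over residues coprime to a

# THM: (a,y)=1 and (b,x)=1  <=>  (ax+by, ab)=1, so ax+by (mod ab)
# should hit each residue coprime to ab exactly once:
values = sorted((a*x + b*y) % (a*b) for x in xs for y in ys)
coprime_to_ab = [r for r in range(a*b) if gcd(r, a*b) == 1]

assert values == coprime_to_ab               # a bijection, as the corollary claims
assert phi(a*b) == phi(a) * phi(b)           # hence phi is multiplicative
print(phi(a), phi(b), phi(a*b))              # 6 8 48
```

The printed values illustrate $\phi(9)\,\phi(20)=\phi(180)$.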
TITLE: Question on Cardinality ..Help
QUESTION [1 upvotes]: a) Let $n$ be a positive integer. Define a relation on $\mathbb{Z} $, which yields a partition of $\mathbb{Z}$ with $n$ elements; and give the partition.
b) Deduce that $n\omega = \omega$ where $\omega$ is the cardinality of $\mathbb{Z}$.
I was thinking that I can define the mapping $f: \mathbb{Z} \rightarrow\mathbb{Z}^+ - \lbrace1,2,3\rbrace$ as $f(-n) = 2n$ if $n \in \mathbb{Z}^+ $
$ f(n) = 2n + 1$ if $n \in \mathbb{Z}^+ \cup \lbrace 0 \rbrace$
$f$ is thus a $1-1$ correspondence between the set $\mathbb{Z} $ and the set $\mathbb{Z}^+ - \lbrace1,2,3\rbrace$
but the cardinality of the subset $\mathbb{Z}^+ - \lbrace1,2,3\rbrace$ is not $n$ , thats the problem.That's all I have tried..any help anyone?
REPLY [0 votes]: Recall that a partition on a set $X$ is a set of non-empty subsets of $X$ such that each element of $X$ appears in exactly one of the subsets in the partition. So, what are some partitions of $\mathbb{Z}$. Well the easiest would be the partition $P_1=\{\mathbb{Z}\}$ which has one element. On the other end of the extreme scale, we could define the partition $$P_{\infty} = \{\ldots \{-2\}, \{-1\}, \{0\}, \{1\}, \{2\} \ldots\}=\{\{n\}\mid n\in\mathbb{Z}\}.$$ This partition has $\omega$ elements.
How might one go about making a partition of $\mathbb{Z}$ with two elements? Well we just need to split $\mathbb{Z}$ into exactly two non-empty, non-overlapping subsets - one way to do this would be the partition $P_2=\{\{0\},\mathbb{Z}\setminus\{0\}\}$, and another would be $P_2'=\{\{n\mid n\mbox{ even}\},\{n\mid n\mbox{ odd}\}\}$. Both $P_2$ and $P_2'$ are partitions and both have cardinality $2$, so either would work, and in fact there are (infinitely) many other ways to get a partition with two elements.
I think from here you might be able to think of a partition with $3$ elements. How about $n$ elements?
This seems to be the part of the question that you're having trouble with so I'll not write anything on part b unless you feel you need some hints for that as well. | {"set_name": "stack_exchange", "score": 1, "question_id": 761852} |
TITLE: Show that if $f(1)=1$, then there exists a constant $\alpha$ such that $f(x)=x^\alpha$ for all $x \in (0, +\infty)$.
QUESTION [1 upvotes]: Let $f: (0, +\infty) \to\mathbb R$ be a differentiable function such that $f(xy)=f(x)f(y)$ for all $x,y \in (0, +\infty)$.
Show that if $f(1)=1$, then there exists a constant $\alpha$ such that $f(x)=x^\alpha$ for all $x \in (0, +\infty)$.
So far:
$$f(x^\alpha)=f(\underbrace{x\cdot x\cdots x}_{\alpha\text{ times}})=\underbrace{f(x)\cdot f(x)\cdots f(x)}_{\alpha\text{ times}}=f(x)^\alpha$$
Hence, $f(x^\alpha)=f(x)^\alpha$ for positive integers $\alpha$.
From here I am not quite sure, but I believe I need to differentiate both sides, then move all terms to one side equaling zero and solve?
REPLY [1 votes]: First note that $f(y) > 0$: indeed $f(y)=f(\sqrt y\cdot\sqrt y)=f(\sqrt y)^2\geq 0$, and $f(y)f(1/y)=f(1)=1$ rules out $f(y)=0$. Now let $g(x) = \ln(f(a^x))$, where $a>0$, $a\neq 1$. We then have
$$g(x+y) = \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x) f(a^y)) = g(x) + g(y)$$
This is the Cauchy functional equation, and if $g(x)$ is continuous, the solution is
$$g(x) = cx$$
Hence, we have
$$f(a^x) = e^{cx} \implies f(y) = y^{c/\ln a} = y^t$$
Note that in the above proof, we only relied on the continuity of $f$.
Below are some similar problems based on Cauchy function equation:
Is there a name for function with the exponential property $f(x+y)=f(x) \cdot f(y)$?
Classifying Functions of the form $f(x+y)=f(x)f(y)$
If $f\colon \mathbb{R} \to \mathbb{R}$ is such that $f (x + y) = f (x) f (y)$ and continuous at $0$, then continuous everywhere
continuous functions on $\mathbb R$ such that $g(x+y)=g(x)g(y)$
What can we say about functions satisfying $f(a + b) = f(a)f(b) $ for all $a,b\in \mathbb{R}$?
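A short symbolic check (SymPy) that the power functions satisfy the functional equation, plus the ODE $x f'(x) = f'(1) f(x)$ obtained by differentiating $f(xy)=f(x)f(y)$ in $y$ at $y=1$ (the symbol names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', positive=True)
f = lambda z: z**t

# Power functions satisfy f(xy) = f(x) f(y) and f(1) = 1 for every exponent t:
assert sp.simplify(f(x*y) - f(x)*f(y)) == 0
assert f(sp.Integer(1)) == 1

# Differentiating f(xy) = f(x) f(y) in y at y = 1 gives x f'(x) = f'(1) f(x);
# its solutions are exactly the power functions:
g = sp.Function('g')
ode = sp.Eq(x*g(x).diff(x), t*g(x))      # here t plays the role of f'(1)
print(sp.dsolve(ode, g(x)))              # Eq(g(x), C1*x**t)
```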
TITLE: Linear system with probabilities (algebra)
QUESTION [2 upvotes]: I have a small problem that is delaying my project; it seems I have been stuck here for a long time. It is probably very easy, but I can't see it right now. I am very anxious about this, so please take a look:
\begin{align*}
P_0 &= \frac45P_0 + \frac34P_1\\
P_1 &= \frac15P_0 + \frac14P_1
\end{align*}
also I know this: $P_1+P_0=1$
The solution is $P_0= 15/19$ and $P_1= 4/19$.
I have a very similar problem, and I don't know how to continue from this point on, which delays my project! How am I supposed to solve this? For more probabilities like $P_1, P_2, P_3$, what can we do? Is there some general algorithm for this, with steps?
Thanks a lot!
REPLY [0 votes]: So let's do this:
We have $P_1=1-P_0$ as mentioned in my comment. Plugging this into the first equation leads to
$$P_0=\frac{4}{5}P_0+\frac{3}{4}(1-P_0)$$
which is equivalent to
$$P_0-\frac{4}{5}P_0+\frac{3}{4}P_0=\frac{3}{4}$$
Write the left hand side as
$$P_0(1-\frac{4}{5}+\frac{3}{4})=\frac{3}{4}$$
Now, $1-\frac{4}{5}+\frac{3}{4}=\frac{20-16+15}{20}=\frac{19}{20}$, so we have
$$P_0\frac{19}{20}=\frac{3}{4}$$
therefore we end up with
$$P_0=\frac{20\cdot 3}{4\cdot 19}=\frac{5\cdot 3}{19}=\frac{15}{19}$$
Going back to $P_1=1-P_0$ gives $P_1=1-\frac{15}{19}=\frac{4}{19}$.
By the way, to ping someone in a comment, like @n.c., don't use a "space" between @ and the name. Otherwise the person will not be notified.
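On the "general algorithm" question: equations of this shape are the stationarity equations $p = Tp$ of a Markov chain, and for $n$ states one standard recipe is to replace one (redundant) balance equation by the normalization $\sum_i P_i = 1$ and solve the resulting square system. A sketch in NumPy (the matrix below encodes the system above, with columns summing to 1):

```python
import numpy as np

# The system has the form p = T p with a column-stochastic T
# (column j lists the transition probabilities out of state j):
T = np.array([[4/5, 3/4],
              [1/5, 1/4]])

n = T.shape[0]
A = T - np.eye(n)        # stationarity: (T - I) p = 0  (these rows are dependent)
A[-1, :] = 1.0           # replace the last equation by  P_0 + ... + P_{n-1} = 1
b = np.zeros(n)
b[-1] = 1.0

p = np.linalg.solve(A, b)
print(p)                 # [0.78947368 0.21052632], i.e. [15/19, 4/19]
assert np.allclose(p, [15/19, 4/19])
```

The same recipe works unchanged for any number of states $P_0,\dots,P_{n-1}$.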
TITLE: Weak or strong Liapunov function
QUESTION [0 upvotes]: You are given the system
$$\dot{x}=-x-xy^2; \dot{y}=2x^2y-x^2y^3$$
(a) What does the linearization about $x^*=(0,0)$ tell us about the local behavior.
So $Df(x,y) = \begin{bmatrix}
-1-y^2 & -2xy \\[0.3em]
4xy-2xy^3 & 2x^2-3x^2y^2
\end{bmatrix}$
So at $x^*=(0,0)$ we have the following
$Df(0,0) = \begin{bmatrix}
-1 & 0 \\[0.3em]
0 & 0
\end{bmatrix}$
with $\lambda_1=-1$, $\lambda_2=0$
Note that Re $\lambda_i < 0$ for $i=1,2$
Then $x^*$ is asymptotically stable
(b) Look for the Liapinouv function of the form $V(x,y)=ax^2+by^2$ for the equilibrium point $(0,0)$. Is it weak or strong Liapunov function?
$$\dot{V}=2ax[-x-xy^2]+2by[2x^2y-x^2y^3]=-2ax^2-2ax^2y^2+4bx^2y^2-2bx^2y^4$$
I need help with part (b). I am not sure if the function is weak or strong
REPLY [1 votes]: Hint for part (a): The origin is not asymptotically stable, so the right answer is that the linearization tells us nothing about the stability, although it certainly tells us something about the local behavior. Spoiler: take $x=0$ and see what you get.
Hint for part (b): Note that
$$
-2ax^2-2ax^2y^2+4bx^2y^2-2bx^2y^4=-2x^2[a+(a-2b)y^2+by^4]
$$
and so it cannot be a "strong" Lyapunov function (as you call it, the name is rather unusual), because it vanishes for $x=0$. Spoiler for the rest: for $(x,y)$ very small, $a+(a-2b)y^2+by^4$ is approximately $a\ne0$, and so it is a "weak" Lyapunov function.
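A quick symbolic check of the computation and the factorization with SymPy (a sketch; we take $a=b=1$ for the sign inspection, assuming $a,b>0$ so that $V$ is positive definite):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)
f1 = -x - x*y**2             # x'
f2 = 2*x**2*y - x**2*y**3    # y'

V = a*x**2 + b*y**2
Vdot = sp.expand(sp.diff(V, x)*f1 + sp.diff(V, y)*f2)   # dV/dt along trajectories

# The factorization from the answer:
factored = -2*x**2*(a + (a - 2*b)*y**2 + b*y**4)
assert sp.simplify(Vdot - sp.expand(factored)) == 0

# With a = b = 1:  Vdot = -2x^2(1 - y^2 + y^4) <= 0, but it vanishes on the
# whole line x = 0, not just at the origin -- hence only a weak Lyapunov function.
Vdot1 = Vdot.subs({a: 1, b: 1})
assert Vdot1.subs({x: 0, y: sp.Rational(1, 2)}) == 0
```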
TITLE: Solution verification: Find the orthogonal trajectories of the family of curves for $x^2 + 2y^2 = k^2$
QUESTION [6 upvotes]: I need help with the following question:
Find the orthogonal trajectories of the family of curves for $x^2 + 2y^2 = k^2$
I have taken the following steps; are they correct? From what I understand, I have to do the following:
First differentiate to find the differential equation
Then write the differential equation in this
$$\frac{dy}{dx} = -\frac{1}{f(x,y)}$$
Then solve the new equation.
So, this is what I have:
The first step is to differentiate with respect to $x$ to find $\frac{dy}{dx}$.
\begin{align}
\frac{d}{dx}\left(x^2\right) + \frac{d}{dx}\left(2y^2\right) &= \frac{d}{dx}\left(k^2\right) \\
2x + \frac{dy}{dx}\cdot 4y &= 0 \\
\frac{dy}{dx} &= -\frac{x}{2y}
\end{align}
Now, the second step is to take the negative reciprocal.
$$\frac{dy}{dx} = \frac{2y}{x}$$
The third step is to solve the newly formed differential equation.
\begin{align}
dy \frac{1}{2y} &= \frac{1}{x} dx \\
\int \frac{1}{2y} dy &= \int \frac{1}{x} dx \\
\frac{\ln{y}}{2} &= \ln{x} + C \\
\ln y &= 2 \ln x + C\\
\ln y &= \ln x^2 + C \\
y &= Ax^2 \quad
\end{align}
In this case, $A = e^C$. Are my steps correct?
Thanks a bunch for your help!
EDIT:
I have found a minor error in the last few steps when multiplying the two. It should be:
\begin{align}
\ln y &= 2(\ln x + C) \\
\ln y &= 2\ln x + 2C \\
e^{\ln y} &= e^{\ln x^2 + 2C} \\
y &= x^2 \cdot e^{2C} \\
\end{align}
Here, we let $A = e^{2C}$ and say the final answer is $$y = Ax^2$$
REPLY [1 votes]: From the family of ellipses
$$x^2+2y^2=k^2 ......(1)$$
you obtained the family of parabolas
$$y=Ax^2......(2)$$
Setting $x=Y\sqrt{2}$ and $y=X/\sqrt{2}$ in (1), we obtain:
$$2Y^2+X^2=k^2 ......(3)$$
So we obtain another family of parabolas
$$Y=B_1X^2......(4)$$
which is equivalent to
$$x=By^2......(5)$$
This is because the original family of curves is symmetric under swapping $x$ and $y\sqrt{2}$.
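As a machine check of the orthogonality of the family $y=Ax^2$ found in the question (a SymPy sketch; symbol names are ours):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = sp.symbols('A', positive=True)
y = sp.Function('y')

# Orthogonal trajectories of x^2 + 2y^2 = k^2 solve y' = 2y/x:
ode = sp.Eq(y(x).diff(x), 2*y(x)/x)
print(sp.dsolve(ode))                    # Eq(y(x), C1*x**2)

# Slopes at a common point on y = A x^2:
yA = A*x**2
m_traj = sp.diff(yA, x)                  # trajectory slope: 2Ax
m_ell = -x/(2*yA)                        # ellipse slope -x/(2y) at the same point
assert sp.simplify(m_traj*m_ell) == -1   # perpendicular wherever x, y are nonzero
```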
TITLE: Can we check whether a Cantor set is self-similar or not?
QUESTION [6 upvotes]: Given a Cantor set $C$ on the real line, do we have some ways to determine whether it is self-similar or not? In particular, how can we check that $C$ is not self-similar?
Edited:
Definition: Let $\{f_i\}_i$ be a finite family of contracting similarity maps, i.e. $|f_i(x)-f_i(y)|=r_i|x-y|$ where $0<r_i<1$. $C$ is self-similar if there are such maps so that $C=\bigcup_i f_i(C)$.
To make things concrete, we may consider the following central Cantor set as an example.
Suppose $r_k = \frac{1}{k+2}$ for $k \geq 1$. Let $I_e = [0,1]$, $I_0 = [0, r_1]$, $I_1 = [1-r_1 , 1]$. For each $k \geq 1$, $w \in \{0,1\}^k$, let $I_w$ be a subinterval at level $k$. We take $I_{w0} , I_{w1}$ to be two subintervals of $I_w$ placed on the left and right with lengths $$|I_{w0}| = |I_{w1}| = |I_w| \cdot r_{k+1} . $$
Then $C= \bigcap_{k=1}^\infty \bigcup_{w \in \{0,1 \}^k} I_w$ is a Cantor set. Is it not self-similar?
REPLY [4 votes]: First of all, self-similar sets are not necessarily topological Cantor sets. For example, closed intervals are self-similar, as it is easy to check. However, if we assume (using your notation) that the pieces $f_i(E)$ are pairwise disjoint, then $E$ is a Cantor set. This is known as the strong separation condition. When dealing with self-similar sets, a weaker separation that is often used and assumed is the open set condition: there exists a nonempty open set $U$ such that $f_i(U)\subset U$ and $f_i(U)$ are pairwise disjoint. Closed intervals are self-similar sets under the open set condition, so the open set condition is not enough to guarantee a self-similar set is a Cantor set.
In general, there isn't a simple way to determine whether a given compact set (Cantor or not) is self-similar. However, self-similar sets are more regular than arbitrary compact sets. For example:
Different concepts of fractal dimension such as Hausdorff and box-counting (Minkowski) dimension coincide for arbitrary self-similar sets.
If a self-similar set $E$ is not a point, then it has strictly positive Hausdorff dimension. This can be seen as follows: after reordering, let $f_1, f_2$ be two of the maps generating $E$ with distinct fixed points (if all the maps have the same fixed point, then $E$ equals that fixed point). Let $F$ be the self-similar set corresponding to the maps $f_1^n, f_2^n$, where $n$ is large enough that $f_1^n(I)$ and $f_2^n(I)$ are disjoint intervals, where $I$ is the closed interval joining the fixed points of $f_1, f_2$. Then $F$ is a self-similar set with the strong separation condition, for which there is a well-known formula for the Hausdorff dimension which is always strictly positive (this can also be checked directly for $F$), and $F\subset E$.
Under the open set condition (but not in general) self-similar sets have positive and finite Hausdorff measure in their dimension.
The particular set $C$ defined in the question has Hausdorff dimension $0$. This can be seen by using the natural covers by sets $I_w, w\in\{0,1\}^k$, and is essentially a consequence of the fact that $r_k\to 0$, so that the relative size of intervals of generation $k+1$ inside intervals of generation $k$ tends to $0$. Since $C$ is not a point, by 2. above, $C$ cannot be self-similar.
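A numerical illustration of the last point: at level $k$ there are $2^k$ intervals of common length $\prod_{j=1}^{k} \frac{1}{j+2} = \frac{2}{(k+2)!}$, so the natural covering estimate $\log(\#\text{intervals})/\log(1/\text{length})$ of the dimension tends to $0$ (a quick sketch in Python):

```python
import math

# Level-k covering data for the central Cantor set with ratios r_k = 1/(k+2):
# 2^k intervals, each of length 2/(k+2)!.
def dim_estimate(k):
    log_count = k * math.log(2)
    log_inv_length = math.lgamma(k + 3) - math.log(2)   # log((k+2)!/2); lgamma(m+1) = log(m!)
    return log_count / log_inv_length

for k in (5, 10, 20, 40):
    print(k, round(dim_estimate(k), 4))   # decreasing: roughly 0.44, 0.36, 0.29, 0.24

# The estimate behaves like log(2)/log(k) -> 0, consistent with dim_H C = 0,
# which by point 2 of the answer rules out self-similarity.
assert dim_estimate(40) < dim_estimate(10) < dim_estimate(5)
```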
TITLE: Showing that the ring $\mathbb Z[ \sqrt{2}]$ has exactly $2$ automorphisms.
QUESTION [0 upvotes]: Here is the question whose solution I am trying to understand:
I am wondering:
1- Why does this proof show that the ring $\mathbb Z[ \sqrt{2}]$ has exactly $2$ automorphisms? Could anyone explain this to me, please?
2- What are the general steps for finding all automorphisms of a ring and confirming that they are all of them?
REPLY [1 votes]: To answer your question no. 1, if $\phi:\mathbb Z[\sqrt2] \longrightarrow \mathbb Z[\sqrt2]$ is an automorphism, then $\phi$ restricts to an automorphism of $\mathbb Z$ too. From the definition of ring homomorphism, we can write $\phi(m + n \sqrt 2) = \phi(m)+\phi(n)\phi(\sqrt 2)$. Now, find the automorphisms of $\mathbb Z$: the only one is $n \longmapsto n$ (since for any ring homomorphism, $1 \longmapsto 1$). Hence $\phi(m+n\sqrt 2) = m+n\phi(\sqrt 2)$. Your book says clearly why $\phi(\sqrt 2) = \pm\sqrt 2$ (in short, $\phi(\sqrt 2)^2=\phi(2)=2$).
To answer your question no. 2, at first take $1 \longmapsto 1$; it eases things. Usually we come across rings of the form $R[\xi]$, where $\xi$ is from some bigger ring containing $R$ and $R$ is some arbitrary ring. Generally we take $R$ to be unital and commutative, hence $R$ contains $\mathbb Z$ as a subring and hence in $R$, $a \longmapsto a$. Now if $\xi$ is algebraic/integral over $R$, take the minimal polynomial of $\xi$ over $R$ and check whether sending $\xi$ to its roots gives you automorphisms or not. If $\xi$ is transcendental, then I think we just have one choice, $\xi \longmapsto \xi$, although I'm not quite sure about it.
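A brute-force check (plain Python, representing $m+n\sqrt2$ as the pair $(m,n)$; the test range is arbitrary) that the conjugation $m+n\sqrt2\mapsto m-n\sqrt2$ respects the ring operations:

```python
import itertools

def mul(u, v):
    """(a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2, as pairs (m, n)."""
    (a, b), (c, d) = u, v
    return (a*c + 2*b*d, a*d + b*c)

def conj(u):
    """The candidate nontrivial automorphism  m + n*sqrt2  ->  m - n*sqrt2."""
    return (u[0], -u[1])

elems = list(itertools.product(range(-3, 4), repeat=2))
for u in elems:
    for v in elems:
        s = (u[0] + v[0], u[1] + v[1])                       # u + v
        assert conj(s) == (conj(u)[0] + conj(v)[0], conj(u)[1] + conj(v)[1])
        assert conj(mul(u, v)) == mul(conj(u), conj(v))      # respects multiplication
assert conj((1, 0)) == (1, 0)                                # fixes 1, hence all of Z
```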
TITLE: Resources for complexity results for optimization problems in restricted graph classes
QUESTION [5 upvotes]: I am specifically interested in optimization problems in graphs (minimum coloring, maximum clique, maximum matching etc.) and I need a resource (database) which contains complexity results on different graph classes. The best resource that I know is the following:
http://graphclasses.org/
I think this web site is great because it covers all the graph classes; however, the results presented on the site (recognition, independent set, domination) do not match my interests.
I am looking for similar resources.
Thanks.
REPLY [3 votes]: Get Spinrad's book on efficient graph representations:
http://www.amazon.com/Efficient-Representations-Fields-Institute-Monographs/dp/0821828150
Also check out Li and Vitányi's book on Kolmogorov complexity, An Introduction to Kolmogorov Complexity and Its Applications.
You will get an appreciation for each graph class by studying succinct data structures. Once you understand why certain classes of graphs take less storage than others, you gain a good sense of how to tailor optimization problems onto them.
TITLE: Example of a completely mixed bimatrix game.
QUESTION [0 upvotes]: I want an example of a completely mixed bimatrix game. I have no clue how to approach this. I guess it's a trial-and-error process.
A completely mixed game is one where every optimal strategy (equilibrium strategy) of either player (considering a 2-player game) is completely mixed.
It can't have a pure-strategy equilibrium, so the entries of the matrix have to be chosen accordingly.
It seems that Matching Pennies is one such celebrated example. Can someone come up with another such example just from the definition of a completely mixed bimatrix game? That is, I want the intuition behind coming up with such an example. For example, if the payoff matrix is $3\times 3$, then keeping track of all mixed strategies becomes difficult.
REPLY [0 votes]: Well, every game must have at least one equilibrium. So, to find a completely mixed game we just need to find a game with no pure-strategy equilibria. All this means is that the set of cells in the matrix that are best responses for player 1 is disjoint from the set of best-response cells for player 2.
So consider a blank $3\times 3$ matrix for player 1, where rows index player 2's choice (top/middle/bottom) and columns index player 1's choice (left/middle/right). Then fill in player 1's best responses. We will mark these with 1's. So for example we could have $$\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 &0 \end{bmatrix}$$ In that case player 1's best response is to play the middle unless player 2 plays the middle, in which case player 1 should play the left. Now, just pick spots for player 2's best responses (one in each column, choosing among top, middle, and bottom); the only requirement is that the two sets of cells have no intersections. So, for example, the best responses for player 2 could be $$\begin{bmatrix}1 & 0 & 0 \\ 0& 1 & 0 \\ 0 & 0 &1 \end{bmatrix}$$
Or anything else, as long as the two have no intersections. Then just fill in numbers that make the 1's in the matrices above the actual best responses. So for example: $$ \begin{bmatrix}(1,5) & (4,3) & (3,0) \\ (4,1)& (1,5) & (2,0) \\ (3,2) & (4,1) &(0,1) \end{bmatrix} $$
is a $3\times 3$ bimatrix game with no pure-strategy equilibria. But this method could easily generate infinitely many such games.
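The construction is easy to check mechanically: mark each player's best-response cells and verify that the two sets are disjoint. Here is a small Python sketch for the example game (player 1 picks a column, player 2 picks a row, matching the discussion above):

```python
A = [[1, 4, 3], [4, 1, 2], [3, 4, 0]]  # player 1's payoffs (picks a column)
B = [[5, 3, 0], [1, 5, 0], [2, 1, 1]]  # player 2's payoffs (picks a row)

n = 3
# Player 1 best-responds along each row (over columns),
# player 2 best-responds along each column (over rows).
br1 = {(i, j) for i in range(n) for j in range(n)
       if A[i][j] == max(A[i])}
br2 = {(i, j) for i in range(n) for j in range(n)
       if B[i][j] == max(B[k][j] for k in range(n))}

print(br1 & br2)   # set() -> no cell is a mutual best response
```

An empty intersection means no cell is a best response for both players at once, i.e., the game has no pure-strategy equilibrium.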
(By the way, one equilibrium of the example game has player 1 playing the left $\frac13$ of the time and the middle $\frac23$ of the time, while player 2 plays the top and the middle each $\frac12$ of the time: against that mix every column pays player 1 exactly $\frac52$, while the top and middle rows each pay player 2 $\frac{11}{3}$ and the bottom pays only $\frac43$.)
I hope that helps. | {"set_name": "stack_exchange", "score": 0, "question_id": 2203933} |
TITLE: What is the probability getting 2 jokers in 3 draws?
QUESTION [0 upvotes]: My friend and I are playing Dicey Dungeons (great game btw) and we are trying to reverse engineer combinatorics / permutations in our head.
The translation of our problem amounts to the title: What is the probability of getting 2 jokers in 3 draws? There are 54 cards, and 2 of them are jokers.
In our Dicey Dungeons problem, the setup is this: we have a deck of 14 cards, one of which is duplicated. We want to know the probability that we will get two of the same card in the first draw of 3 cards.
In the long run there are multiple cards with duplicates, so in our deck of 14 we might have something like AAABBBCCDEFGHH, but for now we assume we have ABCDEFGHIJKLMM, and we want to know the probability of drawing both M's in 3 draws. How do we calculate this?
REPLY [0 votes]: Initially, you have $54$ cards, two of which are jokers.
On the first draw, either you draw a joker (probability: $2/54$) or you do not (probability: $52/54$).
If your first draw is a joker, you now need the probability of drawing the remaining joker from the $53$ cards left. Drawing it on the first of your two remaining tries happens with probability $1/53$, and you do not care what the next card is, so you draw that with probability $52/52$. If you draw it on the second try instead, you draw some other card first, with probability $52/53$, and then draw the remaining joker, with probability $1/52$.
Alternatively, your first draw is not a joker, with probability $52/54$, and your next two draws are jokers, with probabilities $2/53$ and $1/52$, respectively.
Collecting these paths through drawing two jokers in three draws, we have
JJx: $(2/54)(1/53)(52/52)$
JxJ: $(2/54)(52/53)(1/52)$
xJJ: $(52/54)(2/53)(1/52)$
In all three cases, the numerators we multiply together are $1$, $2$, and $52$ and the denominators we multiply together are $54$, $53$, and $52$. So the total probability of two jokers in three draws is
$$ 3 \cdot \frac{1 \cdot 2 \cdot 52}{52 \cdot 53 \cdot 54} = \frac{1}{53 \cdot 9} = \frac{1}{477} \text{.} $$ | {"set_name": "stack_exchange", "score": 0, "question_id": 3352898} |
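The three-path sum is the same as the hypergeometric count: choose both jokers and any one of the remaining $52$ cards, out of all $\binom{54}{3}$ possible hands. A quick exact check in Python, which also answers the $14$-card version from the question (deck ABCDEFGHIJKLMM, drawing both M's in the first $3$ cards):

```python
from fractions import Fraction
from math import comb

# Jokers: choose both jokers and any 1 of the other 52 cards.
p_jokers = Fraction(comb(2, 2) * comb(52, 1), comb(54, 3))
assert p_jokers == Fraction(1, 477)

# Same count for the 14-card deck ABCDEFGHIJKLMM:
# both M's plus any 1 of the 12 other cards.
p_pair = Fraction(comb(2, 2) * comb(12, 1), comb(14, 3))
print(p_pair)   # 3/91
```

So for the 14-card deck, the chance of pairing the duplicated card in the opening 3-card draw is $3/91 \approx 3.3\%$.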
TITLE: volterra integral
QUESTION [1 upvotes]: Good day, I'm studying Volterra integral equations. What justifies the last two inequalities? If I am not wrong, the norm is the $\sup$-norm on $[0,1]$.
REPLY [1 votes]: The first one follows from moving the absolute value inside the integral (the triangle inequality for integrals) and applying the previous estimate. The second one follows from the Mean Value Theorem applied to $F(u)$.
\begin{document}
\maketitle
\begin{abstract}
We give the description of the first and second complex interpolation of vanishing Morrey spaces,
introduced
in \cite{AS, CF}.
In addition, we show that
the diamond subspace (see \cite{HNS}) and one of the function spaces in \cite{AS} are the same.
We also give several examples for showing that each of the complex interpolation of these spaces is different. \\
\noindent
{\bf Classification: 42B35, 46B70, 46B26}
\noindent
Keywords: Morrey spaces, vanishing Morrey spaces, complex interpolation
\end{abstract}
\section{Introduction}
Let $1\le q\le p<\infty$. The Morrey space
$\cM^p_q=\cM^p_q(\mathbb{R}^n)$, introduced in
\cite{Mo}, is defined as the set of all $f\in L^q_{\rm loc}(\mathbb{R}^n)$ for which
\[
\|f\|_{\cM^p_q}
:=
\sup_{r>0}
m(f,p,q;r)<\infty,
\]
where
\[
m(f,p,q;r)
:=
\sup_{x\in \mathbb{R}^n}
|B(x,r)|^{\frac1p}
\left(
\frac{1}{|B(x,r)|}
\int_{B(x,r)} |f(y)|^q \ dy \right)^{\frac1q},\ (r>0).
\]
Note that, for $p=q$, $\cM^p_q$ coincides with the Lebesgue space $L^p$.
Meanwhile, if $q<p$, then $\cM^p_q$ is strictly larger than $L^p$. For instance, the function $f(x):=|x|^{-\frac{n}{p}}$ belongs to $\cM^p_q$
but it is not in $L^p$.
As a generalization of Lebesgue spaces, one may inquire whether the interpolation of linear operators
in Morrey spaces also holds.
The first answer to this question was given by G. Stampacchia in \cite{St}. He proved a partial generalization of the Riesz--Thorin interpolation theorem in Morrey spaces, where the domain of the linear operator is assumed to be a Lebesgue space. However, when the domain of the linear operator is a Morrey space,
there are counterexamples to the interpolation of linear operators in Morrey spaces (see \cite{BRV, RV}). Although these examples show the lack of an interpolation property for Morrey spaces,
there are some recent results about the description of complex interpolation of Morrey spaces.
The first result in this direction can be found in \cite{CPP}, where the authors proved that if
\begin{align}\label{eq:p0q0}
\theta \in (0,1), 1\le q_0\le p_0<\infty, 1\le q_1\le p_1<\infty
\end{align}
and
\begin{align}\label{eq:pq}
\frac1p:=
\frac{1-\theta}{p_0}
+
\frac{\theta}{p_1}
\quad
{\rm and}
\quad
\frac1q:=
\frac{1-\theta}{q_0}
+
\frac{\theta}{q_1},
\end{align}
then
\begin{align}\label{eq:CPP}
[\cM^{p_0}_{q_0}, \cM^{p_1}_{q_1}]_\theta
\subseteq \cM^{p}_{q}.
\end{align}
Here, $[\cdot, \cdot]_\theta$ denotes the first complex interpolation space.
Assuming the additional assumption
\begin{align}\label{eq:prop}
\frac{p_0}{q_0}
=
\frac{p_1}{q_1},
\end{align}
Lu et al. \cite{LYY} proved that
\begin{align}
[\cM^{p_0}_{q_0}, \cM^{p_1}_{q_1}]_\theta
=
\overline{\cM^{p_0}_{q_0} \cap \cM^{p_1}_{q_1}}
^{\cM^p_q}.
\end{align}
The corresponding result on the second complex interpolation spaces was obtained by Lemari\'e-Rieusset \cite{L14}.
A generalization of the results in \cite{LYY, L14}
in the setting of the generalized Morrey spaces can be seen in \cite{HS, HS2}.
In addition to complex interpolation of Morrey spaces, there are several papers on the description of complex interpolation of some closed subspaces of Morrey spaces.
For instance, Yang et al. \cite{YYZ} proved that
\begin{align}\label{eq:YYZ}
[\overset{\circ}{\cM}{}^{p_0}_{q_0},
\overset{\circ}{\cM}{}^{p_1}_{q_1}]_\theta
=
\overset{\circ}{\cM}{}^{p}_{q},
\end{align}
where the parameters are given by \eqref{eq:p0q0} and \eqref{eq:pq}
and
$\overset{\circ}{\cM}{}^{p}_{q}$ denotes the closure in $\cM^p_q$ of the set of smooth functions with compact support.
Other results on complex interpolation of closed subspaces of Morrey spaces are considered in \cite{HS2, HNS, HS3, H, YSY}. In particular, the authors in \cite{HS2} consider the space $\overline{\mathcal{M}}{}^p_q:=\overline{L^\infty \cap \mathcal{M}^p_q}^{\mathcal{M}^p_q}$.
In this article, we shall investigate complex interpolation of vanishing Morrey spaces. These spaces were introduced in \cite{AS, CF}.
Let us recall their definition as follows.
\begin{definition}
Let $1\le q\le p<\infty$.
The vanishing Morrey space at the origin $V_0\cM^p_q$ and the vanishing Morrey space at infinity $V_\infty\cM^p_q$ are defined by
\[
V_0\cM^p_q:=
\{f\in \cM^p_q:
\lim_{r\to 0} m(f,p,q;r)=0\}
\]
and
\[
V_\infty\cM^p_q:=
\{f\in \cM^p_q:
\lim_{r\to \infty} m(f,p,q;r)=0\},
\]
respectively.
The third subspace is the space $V^{(*)}\cM^p_q$ which is defined to be the set of all functions $f\in \cM^p_q$ such that
\[
\lim_{N\to \infty}
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)}
|f(y)|^q \chi_{\mathbb{R}^n \setminus B(0,N)}(y) \ dy=0.
\]
\end{definition}
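The following standard scaling computation shows that, for $q<p$, the subspaces $V_0\cM^p_q$ and $V_\infty\cM^p_q$ are proper subspaces of $\cM^p_q$.

\begin{remark}
Let $1\le q<p<\infty$ and set $f_0(x):=|x|^{-\frac{n}{p}}$, so that $f_0\in \cM^p_q$. A change of variables yields
\[
m(f_0(\lambda\,\cdot\,),p,q;r)
=
\lambda^{-\frac{n}{p}}\,m(f_0,p,q;\lambda r)
\quad (\lambda, r>0),
\]
while the homogeneity $f_0(\lambda x)=\lambda^{-\frac{n}{p}}f_0(x)$ yields
$m(f_0(\lambda\,\cdot\,),p,q;r)=\lambda^{-\frac{n}{p}}\,m(f_0,p,q;r)$.
Comparing the two identities, we see that $m(f_0,p,q;r)$ is independent of $r$, so
\[
\lim_{r\to 0}m(f_0,p,q;r)
=
\lim_{r\to \infty}m(f_0,p,q;r)
=
\|f_0\|_{\cM^p_q}>0.
\]
Hence $f_0\in \cM^p_q \setminus (V_0\cM^p_q \cup V_\infty\cM^p_q)$.
\end{remark}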
Our main results are the following two theorems.
\begin{theorem}\label{thm:171213-1}
Assume \eqref{eq:p0q0}, \eqref{eq:prop}, and $q_0\neq q_1$.
Define $p$ and $q$ by \eqref{eq:pq}.
Then
\begin{align}\label{eq:171213-2}
[V_0{\mathcal M}^{p_0}_{q_0},V_0{\mathcal M}^{p_1}_{q_1}]_\theta
&=[{\mathcal M}^{p_0}_{q_0},{\mathcal M}^{p_1}_{q_1}]_\theta
\notag\\
&=
\left\{f\in \cM^p_q:
\lim_{N\to \infty}\|f-\chi_{\{\frac{1}{N}\le |f|\le N\}}f\|_{\cM^p_q}=0\right\},
\end{align}
\begin{align}\label{eq:171213-3}
[V_\infty{\mathcal M}^{p_0}_{q_0}, V_\infty {\mathcal M}^{p_1}_{q_1}]_\theta
&=
V_\infty\cM^p_q \cap [{\mathcal M}^{p_0}_{q_0},{\mathcal M}^{p_1}_{q_1}]_\theta
\notag \\
&=
\left\{f\in V_\infty\cM^p_q:
\lim_{N\to \infty}\|f-\chi_{\{\frac{1}{N}\le |f|\le N\}}f\|_{\cM^p_q}=0\right\},
\end{align}
and
\begin{align}\label{eq:171213-4}
[V^{(*)}{\mathcal M}^{p_0}_{q_0},V^{(*)}{\mathcal M}^{p_1}_{q_1}]_\theta
&=
V^{(*)}\cM^p_q \cap [{\mathcal M}^{p_0}_{q_0},{\mathcal M}^{p_1}_{q_1}]_\theta
\notag \\
&=
\left\{f\in V^{(*)}\cM^p_q:
\lim_{N\to \infty}\|f-\chi_{\{\frac{1}{N}\le |f|\le N\}}f\|_{\cM^p_q}=0\right\}.
\end{align}
\end{theorem}
\begin{theorem}\label{thm:171213-2}
Assume \eqref{eq:p0q0} and \eqref{eq:prop}.
Define $p$ and $q$ by \eqref{eq:pq}.
\begin{align}\label{eq:thm:171213-2-1}
[V_0{\mathcal M}^{p_0}_{q_0},V_0{\mathcal M}^{p_1}_{q_1}]^\theta
=
{\mathcal M}^p_q,
\end{align}
\begin{align}\label{eq:thm:171213-2-2}
[V_\infty{\mathcal M}^{p_0}_{q_0}, V_\infty{\mathcal M}^{p_1}_{q_1}]^\theta
=
\{f \in {\mathcal M}^p_q \,:
\chi_{\{a\le |f|\le b\}} f\in V_\infty\cM^p_q
{\ \rm for \ all } \ 0<a<b<\infty
\},
\end{align}
and
\begin{align}\label{eq:thm:171213-2-3}
[V^{(*)}{\mathcal M}^{p_0}_{q_0}, V^{(*)}{\mathcal M}^{p_1}_{q_1}]^\theta
=
\{f \in {\mathcal M}^p_q \,:
\chi_{\{a\le |f|\le b\}} f\in V^{(*)}\cM^p_q
{\ \rm for \ all } \ 0<a<b<\infty
\}.
\end{align}
\end{theorem}
Note that (\ref{eq:171213-2})
and
(\ref{eq:thm:171213-2-1}) are immediate once we notice
that $L^\infty \cap {\mathcal M}^p_q \subset V_0{\mathcal M}^p_q$
(see Lemma \ref{lem:171223-1}). In addition to the vanishing Morrey spaces, we discuss the space ${\mathbb M}^p_q$, that is,
the set of all functions $f\in \mathcal{M}^p_q$ for which
$\displaystyle
\lim_{|y| \to 0}f(\cdot+y)=f
$
in the topology of ${\mathcal M}^p_q$.
These spaces were first introduced in \cite{Zo}.
We show that ${\mathbb M}^p_q$ is equal to
the diamond space $\overset{\diamond}{\mathcal M}{}^p_q$, namely, the closure
in $\cM^p_q$ of all functions
$f$
such that
$\partial^\alpha f\in {\mathcal M}^p_q$
for all
$\alpha \in {\mathbb N}_0{}^n$ (see Theorem \ref{thm:171218-1} below).
As a consequence, the complex interpolation of
${\mathbb M}^p_q$ follows from the result in \cite{HNS}. Remark that the authors in \cite{AS} also introduced the space $V^{(*)}_{0,\infty}\cM^p_q$ which is defined by
\[
V^{(*)}_{0,\infty}\cM^p_q:=
V_0\cM^p_q \cap V_\infty\cM^p_q
\cap V^{(*)}\cM^p_q.
\]
Since this space is equal to $\overset{\circ}{\cM}{}^{p}_{q}$ and \eqref{eq:YYZ} holds, we do not consider the complex interpolation of this space.
The rest of this article is organized as follows. In Section 2 we recall the definition of the complex interpolation method and some previous results about complex interpolation of Morrey spaces and their subspaces.
We give the proof of Theorems \ref{thm:171213-1} and \ref{thm:171213-2} in Sections 3 and 4, respectively.
In Section 5, we show that $\mathbb{M}^p_q$ is equal to $\overset{\diamond}{\mathcal{M}}{}^p_q$.
Finally, we compare each subspace in Theorems
\ref{thm:171213-1} and \ref{thm:171213-2} and investigate their relation by giving several examples in Section 6.
\section{Preliminaries}
\subsection{The complex interpolation method}
Let us recall the definition of complex interpolation method, introduced in \cite{Calderon3}. We follow the presentation in the book \cite{Be}. Throughout this paper, we define the set $S:=\{z\in \mathbb{C}: 0<{\rm Re}(z)<1\}$
and we write $\overline{S}$ for its closure. First, we recall the following definitions.
\begin{definition}[Compatible couple]
A couple of Banach spaces $(X_0, X_1)$ is called compatible if there exists a Hausdorff topological vector space $Z$ for which $X_0$ and $X_1$ are continuously embedded into $Z$.
\end{definition}
\begin{definition}[The first complex interpolation functor]
Let $(X_0, X_1)$ be a compatible couple of Banach spaces. The space $\mathcal{F}(X_0, X_1)$ is defined to be the set of all bounded continuous functions $F:\overline{S} \to X_0+X_1$ for which
\begin{enumerate}
\item
$F$ is holomorphic in $S$;
\item
For each $k=0,1$,
the function $t\in \mathbb{R} \mapsto F(k+it) \in X_k$ is bounded and continuous.
\end{enumerate}
For every $F\in \mathcal{F}(X_0,X_1)$, we define the norm
\[
\|F\|_{\mathcal{F}(X_0, X_1)}
:=
\max_{k=0,1} \sup_{t\in \mathbb{R}}
\|F(k+it)\|_{X_k}.
\]
\end{definition}
\begin{definition}[The first complex interpolation space]
Let $\theta \in (0,1)$. The first complex interpolation of a compatible couple of Banach spaces $(X_0, X_1)$ is defined by
\[
[X_0, X_1]_\theta
:=\{F(\theta): F\in \mathcal{F}(X_0, X_1)\}.
\]
The norm on $[X_0, X_1]_\theta$ is defined by
\[
\|f\|_{[X_0, X_1]_\theta
}
:=\inf \{\|F\|_{\mathcal{F}(X_0,X_1)}:
f=F(\theta), F\in \mathcal{F}(X_0,X_1) \}.
\]
\end{definition}
\noindent
We shall use the following density result.
\begin{lemma}\label{lem:dense}{\rm \cite{Calderon3}}
Let $\theta \in (0,1)$ and given a compatible couple of Banach spaces $(X_0, X_1)$.
Then the space $X_0\cap X_1$ is dense in $[X_0, X_1]_\theta$.
\end{lemma}
We now consider the second complex interpolation method.
Let $X$ be a Banach space and recall that
the space ${\rm Lip}({\mathbb R},X)$ is defined to be the set of all $X$-valued functions $f$ on $\mathbb{R}$ for which
\[
\|f\|_{{\rm Lip}({\mathbb R},X)}
:=
\sup_{-\infty<t<s<\infty}
\frac{\|f(s)-f(t)\|_X}{|s-t|}
\]
is finite. The definition of the second complex interpolation space is given as follows.
\begin{definition}[The second complex interpolation functor]
Let $(X_0, X_1)$ be a compatible couple of Banach spaces.
The space $\mathcal{G}(X_0, X_1)$ is the set of all continuous functions $G:\overline{S} \to X_0+X_1$ for which
\begin{enumerate}
\item
$G|_{S}$ is holomorphic;
\item
$\displaystyle \sup\limits_{z\in \overline{S}} \frac{\left\|G(z)\right\|_{X_0+X_1}}{1+|z|} <\infty$;
\item
For each $k=0,1$, the function $t\in \mathbb{R} \mapsto G(k+it) \in X_k$ belongs to ${\rm Lip}({\mathbb R}, X_k)$.
\end{enumerate}
For every $G\in \mathcal{G}(X_0, X_1)$, we define
\[
\|G\|_{\mathcal{G}(X_0, X_1)}
:=
\max_{k=0,1}
\|G(k+i\cdot)\|_{{\rm Lip}(\R, X_k)}.
\]
\end{definition}
\begin{definition}[The second complex interpolation space]
Let $\theta \in (0,1)$ and $(X_0, X_1)$ be a compatible couple of Banach spaces. The second complex interpolation space $[X_0, X_1]^\theta$
is defined by
\[
[X_0, X_1]^\theta
:=
\{G'(\theta): G\in \mathcal{G}(X_0, X_1)\}.
\]
The space $[X_0, X_1]^\theta$ is equipped with the norm
\[
\|f\|_{[X_0, X_1]^\theta}
:=
\inf\{\|G\|_{\mathcal{G}(X_0, X_1)}:
f=G'(\theta), G\in \mathcal{G}(X_0, X_1)
\}.
\]
\end{definition}
We shall utilize the following relation between the first and second complex interpolation method.
\begin{lemma}\label{lem:160417-3}
{\rm \cite[Lemma 2.4]{HS2}}
Let $(X_0, X_1)$ be a compatible couple of Banach spaces and
let $G\in \cG(X_0,X_1)$ be fixed.
For every $z\in \overline{S}$,
and $k\in \N$, set
\begin{align}\label{eq:160417-3}
H_k(z):=\frac{G(z+2^{-k}i) -G(z)}{2^{-k}i}.
\end{align}
Then, $H_k(\theta)\in [X_0,X_1]_\theta$, for every $\theta \in (0,1)$.
\end{lemma}
\subsection{Previous results on complex interpolation of Morrey spaces}
First, let us recall the results on the second complex interpolation method of Morrey spaces.
\begin{proposition}\label{prop:Lemarie-HS}{\rm \cite{HS, L14}}
Keep the same assumption as in Theorem
\ref{thm:171213-2}.
Let $f\in \mathcal{M}^p_q$.
Define the functions $F$ and $G$ on $\overline{S}$ by
\begin{align}\label{eq:171220-20}
F(z):={\rm sgn}(f) |f|^{p\left(\frac{1-z}{p_0}+\frac{z}{p_1}\right)}, \
(z\in \overline{S})
\end{align}
and
\begin{align}\label{eq:171220-210}
G(z)
:=
(z-\theta)
\int_0^1 F(\theta+(z-\theta)t)\ dt,\ (z\in \overline{S}).
\end{align}
Then, for every $z\in \overline{S}$, we have
\begin{align}\label{eq:171220-27}
|G(z)|\le
(1+|z|)\left(|f|^{\frac{p}{p_0}}+|f|^{\frac{p}{p_1}}\right).
\end{align}
Moreover,
$G\in \mathcal{G}(\mathcal{M}^{p_0}_{q_0}, \mathcal{M}^{p_1}_{q_1})$.
\end{proposition}
\begin{theorem}
\label{thm:Lemarie}{\rm \cite{L14}}
Keep the same assumption as in Theorem
\ref{thm:171213-2}. Then $$[\mathcal{M}^{p_0}_{q_0}, \mathcal{M}^{p_1}_{q_1}]^\theta =\mathcal{M}^p_q.$$
\end{theorem}
The description of complex interpolation of some closed subspaces of Morrey spaces is given as follows.
\begin{theorem}\label{thm:HS2}{\rm \cite{HS2}}
Keep the same assumption as in Theorem
\ref{thm:171213-1}.
Then
\[
[\overline{\mathcal{M}}{}^{p_0}_{q_0},
\overline{\mathcal{M}}{}^{p_1}_{q_1}]_\theta
=
[\mathcal{M}^{p_0}_{q_0}, \mathcal{M}^{p_1}_{q_1}]_\theta
=
\{f\in \mathcal{M}^p_q: \lim_{N\to \infty}
\|f-\chi_{\{1/N\le |f|\le N\}}f\|_{\mathcal{M}^p_q}=0\}.
\]
\end{theorem}
\begin{theorem}\label{thm:HS2-2}{\rm \cite{HS2}}
Keep the same assumption as in Theorem
\ref{thm:171213-2}. Then
\[
[\overline{\mathcal{M}}{}^{p_0}_{q_0},
\overline{\mathcal{M}}{}^{p_1}_{q_1}]^\theta
=
{\mathcal{M}}{}^p_q.
\]
\end{theorem}
We now recall the complex interpolation results of the diamond spaces in \cite{HNS}.
To state these results, we recall the following notation.
\begin{definition}\label{d-1}
Let $\psi\in C^\infty_{\rm c}(\mathbb{R}^n)$
satisfy
$\chi_{Q(4)}\leq \psi\leq \chi_{Q(8)}$,
where $Q(r):=[-r,r]^n$.
Set $\varphi_0:=\psi$ and for $j\in\N$, define
\begin{equation*}\label{eq:151113-21}
\varphi_j:=\psi(2^{-j}\cdot)-\psi(2^{-j+1}\cdot).
\end{equation*}
We also define $\varphi_j(D)f:=\mathcal{F}^{-1}(\varphi_j \cdot \mathcal{F}f)$, where $\mathcal{F}$ and $\mathcal{F}^{-1}$
denote the Fourier transform and its inverse.
For $a\in (0,1)$, $J\in \mathbb{N}$, and a measurable function $f$, we define
\[S(f):=\left(\sum_{j=0}^\infty|\varphi_j(D)f|^2\right)^{\frac12}
\
{\rm and}
\
S(f;a,J):=
\chi_{\{a\le S(f)\le a^{-1}\}}
\left(\sum_{j=J}^\infty|\varphi_j(D)f|^2\right)^{\frac12}.
\]
\end{definition}
Using the notation in Definition \ref{d-1}, let us state the description of complex interpolation of diamond spaces.
\begin{theorem}
{\rm \cite[Theorem 1.4]{HNS}}
\label{thm:HNS}
Let $\theta \in (0,1)$, $1<q_0\le p_0<\infty$,
and $1<q_1\le p_1<\infty$.
Assume the condition \eqref{eq:prop}.
Define $p$ and $q$ by \eqref{eq:pq}.
Then
\begin{align*}
[\overset{\diamond}{{\mathcal M}}{}^{p_0}_{q_0},
\overset{\diamond}{{\mathcal M}}{}^{p_1}_{q_1}]_\theta
=
\{f\in
\overset{\diamond}{{\mathcal M}}{}^{p}_{q}
: \lim_{N\to \infty}
\|f-\chi_{\{1/N\le |f|\le N\}}f\|_{\cM^p_q}=0
\}
\end{align*}
and
\begin{align*}
[\overset{\diamond}{{\mathcal M}}{}^{p_0}_{q_0},
\overset{\diamond}{{\mathcal M}}{}^{p_1}_{q_1}]^\theta
=
\bigcap_{0<a<1}
\left\{
f \in {\mathcal M}^p_q
\,:\,
\lim_{J \to \infty}
\|S(f;a,J)\|_{{\mathcal M}^p_q}
=0
\right\}.
\end{align*}
\end{theorem}
\section{The first complex interpolation of vanishing Morrey spaces}
\begin{lemma}\label{lem:171223-1}
Let $1\le q\le p<\infty$. Then, $\overline{\mathcal{M}}{}^{p}_q \subseteq V_0\mathcal{M}^p_q$.
\end{lemma}
\begin{proof}
Let $g\in L^\infty \cap \mathcal{M}^p_q$. Then, for every $r>0$, we have
\[
m(g, p, q; r)\lesssim \|g\|_{L^\infty} r^{\frac{n}{p}},
\]
so $\lim\limits_{r\to 0^+} m(g, p, q; r)=0$. Hence, $g\in V_0\mathcal{M}^p_q$.
Thus, $L^\infty \cap \mathcal{M}^p_q \subseteq V_0\mathcal{M}^p_q$.
Since $V_0\mathcal{M}^p_q$ is a closed subspace of $\mathcal{M}^p_q$, we conclude that $\overline{\mathcal{M}}{}^{p}_q \subseteq V_0\mathcal{M}^p_q$.
\end{proof}
\begin{lemma}\label{lem:171227-1}
Let $1\le q< p<\infty$. Then $L^\infty_{\rm c} \subseteq V_\infty\mathcal{M}^p_q \cap V^{(*)}\mathcal{M}^p_q$.
\end{lemma}
\begin{proof}
Let $f\in L^\infty_{\rm c}$. Then, for every $r>0$, we have
\[
m(f, p, q;r)
\lesssim
r^{\frac{n}{p}-\frac{n}{q}}
\|f\|_{L^q}.
\]
Consequently, $\lim\limits_{r\to \infty}m(f, p, q;r)=0$. Therefore, $f\in V_\infty \mathcal{M}^p_q$. We now show that
\begin{align}\label{eq:171227-4}
f\in V^{(*)} \mathcal{M}^p_q.
\end{align}
For every $N\in \mathbb{N}$, we have
\[
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)} |f(y)|^q \chi_{\mathbb{R}^n \setminus B(0,N)}(y) \ dy\le \|f\chi_{\mathbb{R}^n \setminus B(0,N)}\|_{L^q}^q.
\]
Therefore,
since the right-hand side is zero for large $N$,
we have
\[
\lim_{N\to \infty}
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)} |f(y)|^q \chi_{\mathbb{R}^n \setminus B(0,N)}(y) \ dy=0,
\]
which implies \eqref{eq:171227-4}.
\end{proof}
\begin{lemma}\label{lem:171218-1}
Let $\theta \in (0,1)$, $1\le q_0\le p_0<\infty$,
and $1\le q_1\le p_1<\infty$.
Define $p$ and $q$ by
\[
\frac1p:=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}
\quad
{\rm and}
\quad
\frac1q:=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}.
\]
Then we have the following inclusions:
\[
V_\infty\cM^{p_0}_{q_0}
\cap
V_\infty\cM^{p_1}_{q_1}
\subseteq
V_\infty\cM^{p}_{q},
\quad
{\rm and}
\quad
V^{(*)}\cM^{p_0}_{q_0}
\cap
V^{(*)}\cM^{p_1}_{q_1}
\subseteq
V^{(*)}\cM^{p}_{q}.
\]
\end{lemma}
\begin{proof}
We only prove the first inclusion.
The proof of the other
inclusion is similar. Let $f\in V_\infty\cM^{p_0}_{q_0}
\cap
V_\infty\cM^{p_1}_{q_1}$. Then
\begin{align}\label{eq:171213-1}
\lim_{r\to \infty}
m(f,p_0,q_0;r)
=0
\quad
{\rm and}
\quad
\lim_{r\to \infty}
m(f,p_1,q_1;r)
=0.
\end{align}
By H\"older's inequality, for every $r>0$, we have
\[
m(f,p,q;r)
\le
m(f,p_0,q_0;r)^{1-\theta}
m(f,p_1,q_1;r)^{\theta}.
\]
Combining this inequality and \eqref{eq:171213-1}, we get $\lim\limits_{r\to \infty}m(f,p,q;r)=0$, so $f\in V_\infty\cM^p_q$, as desired.
\end{proof}
\begin{proposition}\label{pr:171218-1}
Let $f\in V_\infty\cM^p_q$ be such that
$\displaystyle
f=\lim_{N \to \infty}\chi_{\{1/N\le |f|\le N\}}f
$
in ${\mathcal M}^p_q$.
For a fixed $N\in \mathbb{N}$, define
\begin{align}\label{eq:FN}
F_N(z)={\rm sgn}(f)|f|^{p\frac{1-z}{p_0}+p\frac{z}{p_1}}\chi_{\{1/N\le |f| \le N\}} \quad
(z\in \overline{S}).
\end{align}
Then $F_N(\theta)\in [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta$.
\end{proposition}
\begin{proof}
Observe that \eqref{eq:pq} and \eqref{eq:prop} imply
\begin{align}\label{eq:171220-10}
\frac{p_0}{q_0}
=
\frac{p_1}{q_1}
=
\frac{p}{q}.
\end{align}
Without loss of generality, assume that $p_0>p_1$.
Define $F_{N,0}(z):=\chi_{\{|f|\le 1\}}F_N(z)$ and
$F_{N,1}(z):=F_N(z)-F_{N,0}(z)$. Since
\[
|F_{N,0}(z)|=
\chi_{\{\frac1N\le |f|\le 1\}}
|f|^{\frac{p}{p_0}}
|f|^{\left(\frac{p}{p_1}-\frac{p}{p_0}\right){\rm Re}(z)}
\le |f|^{\frac{p}{p_0}},
\]
by using \eqref{eq:171220-10},
we have
\begin{align}\label{eq:171220-7}
m(F_{N,0}(z), p_0, q_0;r)
\le
m\left(|f|^{\frac{p}{p_0}}, p_0, q_0;r\right)
=
m(f, p, q;r)^{\frac{p}{p_0}}.
\end{align}
Letting $r\to \infty$ and using the fact that
$f\in V_\infty\mathcal{M}^p_q$, we have $F_{N,0}(z)\in V_\infty\mathcal{M}^{p_0}_{q_0}$. Moreover, by \eqref{eq:171220-7}, we also have
\begin{align}\label{eq:171220-8}
\|F_{N,0}(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}}
\le
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}.
\end{align}
By a similar argument, we have $F_{N,1}(z)\in V_\infty\mathcal{M}^{p_1}_{q_1}$ and
\begin{align}\label{eq:171220-9}
\|F_{N,1}(z)\|_{V_\infty\mathcal{M}^{p_1}_{q_1}}
\le
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_1}}.
\end{align}
Combining \eqref{eq:171220-8} and \eqref{eq:171220-9}, we have $F_{N}(z)\in V_\infty\mathcal{M}^{p_0}_{q_0}+V_\infty\mathcal{M}^{p_1}_{q_1}$ and
\begin{align}\label{eq:171220-11}
\sup_{z\in \overline{S}}
\|F_N(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}+V_\infty\mathcal{M}^{p_1}_{q_1}}
\le
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}
+\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_1}}
<\infty.
\end{align}
We now show the continuity of $F_N$. Let $z\in \overline{S}$ and $h\in \mathbb{C}$ be such that $z+h\in \overline{S}$. Since
\begin{align*}
|F_{N,0}(z+h)-F_{N,0}(z)|
&=
\left|
|f|^{h\left(\frac{p}{p_1}-\frac{p}{p_0}\right)}
-1
\right|
|F_{N,0}(z)|
\\
&\le
\left(
e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right)|\log |f||}-1
\right)
|F_{N,0}(z)|
\\
&\le
\left(
e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right)\log N}-1
\right)
|F_{N,0}(z)|,
\end{align*}
by using \eqref{eq:171220-8}, we have
\begin{align}\label{eq:171220-12}
\|F_{N,0}(z+h)-F_{N,0}(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}}
\le
\left(
e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right)\log N}-1
\right)
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}.
\end{align}
Similarly,
\begin{align}\label{eq:171220-13}
\|F_{N,1}(z+h)-F_{N,1}(z)\|_{V_\infty\mathcal{M}^{p_1}_{q_1}}
\le
\left(
e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right)\log N}-1
\right)
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_1}}.
\end{align}
Combining \eqref{eq:171220-12} and \eqref{eq:171220-13}, we get
\begin{align*}
\|F_{N}(z+h)-F_{N}(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}+V_\infty\mathcal{M}^{p_1}_{q_1}}
\le
\left(
e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right)\log N}-1
\right)
\left(\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}
+
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_1}}
\right).
\end{align*}
This implies
\[
\lim_{h\to 0}
\|F_{N}(z+h)-F_{N}(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}+V_\infty\mathcal{M}^{p_1}_{q_1}}
=0.
\]
Hence, $F_N$ is continuous on $\overline{S}$.
The proof of holomorphicity of $F_N$ in $S$ goes as follows.
For every $z\in \overline{S}$, define
\[
F_{N,0}'(z):=F_{N,0}(z)\left(\frac{p}{p_1}-\frac{p}{p_0}\right) \log |f|,
\
F_{N,1}'(z):=F_{N,1}(z)\left(\frac{p}{p_1}-\frac{p}{p_0}\right) \log |f|,
\
\]
and $F_{N}'(z):=F_{N,0}'(z)+F_{N,1}'(z)$.
As a consequence of \eqref{eq:171220-8}
and \eqref{eq:171220-9}, we have
\begin{align*}
\|F_N'(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}
+V_\infty\mathcal{M}^{p_1}_{q_1}}
&\le
\|F_{N,0}'(z)\|_{V_\infty\mathcal{M}^{p_0}_{q_0}}
+
\|F_{N,1}'(z)\|_{V_\infty\mathcal{M}^{p_1}_{q_1}}
\\
&\le
\left(\frac{p}{p_1}-\frac{p}{p_0}\right)
(\log N) \left(\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}
+\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_1}}
\right),
\end{align*}
so $F_N'(z)\in V_\infty\mathcal{M}^{p_0}_{q_0}
+V_\infty\mathcal{M}^{p_1}_{q_1}$. Now, let $z\in S$ and $h\in \mathbb{C}\setminus \{0\}$ be such that $z+h\in \overline{S}$. Then
\begin{align}\label{eq:171221-1}
&\left|
\frac{F_{N,0}(z+h)-F_{N,0}(z)}{h}
-F_{N,0}'(z)
\right|
\nonumber
\\
&\qquad\le
|F_{N,0}(z)|
\left(\frac{p}{p_1}-\frac{p}{p_0}
\right)
|\log |f||
\left( e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right) |\log |f||}
-1\right)
\nonumber
\\
&\qquad\le
|F_{N,0}(z)|
\left(\frac{p}{p_1}-\frac{p}{p_0}
\right)
(\log N)
\left( e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right) \log N}
-1\right).
\end{align}
Combining \eqref{eq:171220-8} and \eqref{eq:171221-1}, we get
\begin{align}\label{eq:171221-2}
\left\|
\frac{F_{N,0}(z+h)-F_{N,0}(z)}{h}
-F_{N,0}'(z)
\right\|_{V_\infty\mathcal{M}^{p_0}_{q_0}}
\lesssim
\left( e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right) \log N}
-1\right)\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}.
\end{align}
Similarly,
\begin{align}\label{eq:171221-3}
\left\|
\frac{F_{N,1}(z+h)-F_{N,1}(z)}{h}
-F_{N,1}'(z)
\right\|_{V_\infty\mathcal{M}^{p_1}_{q_1}}
\lesssim
\left( e^{|h|\left(\frac{p}{p_1}-\frac{p}{p_0}\right) \log N}
-1\right)\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_1}}.
\end{align}
Here the implicit constants
in (\ref{eq:171221-2}) and (\ref{eq:171221-3})
can depend on $N$.
Hence, it follows from \eqref{eq:171221-2} and
\eqref{eq:171221-3} that
\[
\lim_{h\to 0}
\left\|
\frac{F_{N}(z+h)-F_{N}(z)}{h}
-F_{N}'(z)
\right\|_{V_\infty\mathcal{M}^{p_0}_{q_0}+V_\infty\mathcal{M}^{p_1}_{q_1}}
=0,
\]
so $F_N$ is holomorphic in $S$.
Finally, we show the boundedness and continuity of
the function $t\in \mathbb{R} \mapsto F_N(k+it)\in V_\infty\mathcal{M}^{p_k}_{q_k}$ for each $k\in \{0,1\}$. Note that, by using \eqref{eq:171220-10}, we have
\[
m(F_N(k+it), p_k, q_k;r)
\le
m(|f|^{\frac{p}{p_k}}, p_k, q_k;r)
=
m(f, p, q;r)^{\frac{p}{p_k}},
\]
so $F_{N}(k+it)\in V_\infty\mathcal{M}^{p_k}_{q_k}$
and
\begin{align}\label{eq:171221-4}
\sup_{t\in \mathbb{R}}
\|F_N(k+it)\|_{V_\infty\mathcal{M}^{p_k}_{q_k}}
\le
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_k}}.
\end{align}
Hence, $t\in \mathbb{R} \mapsto F_N(k+it)\in V_\infty\mathcal{M}^{p_k}_{q_k}$ is bounded.
Let $t_0\in \mathbb{R}$ be fixed. Then, by \eqref{eq:171221-4}, for every $t\in \mathbb{R}$, we have
\begin{align*}
\|F_N(k+it)-
F_N(k+it_0)\|_{V_\infty\mathcal{M}^{p_k}_{q_k}}
&=
\left\|F_N(k+it_0)\left(|f|^{i\left(\frac{p}{p_1}-\frac{p}{p_0}\right)(t-t_0)}-1\right)
\right\|_{V_\infty\mathcal{M}^{p_k}_{q_k}}
\\
&\le
\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_k}}
\left(
e^{\left(\frac{p}{p_1}-\frac{p}{p_0}\right) (\log N)|t-t_0|}
-1\right),
\end{align*}
so
\[
\lim_{t\to t_0}
\|F_N(k+it)-
F_N(k+it_0)\|_{V_\infty\mathcal{M}^{p_k}_{q_k}}
=0.
\]
This shows that $t\in \mathbb{R} \mapsto F_N(k+it)\in V_\infty\mathcal{M}^{p_k}_{q_k}$ is continuous.
Hence, we have shown that $F_N\in \mathcal{F}(V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1})$.
Thus, $F_N(\theta)\in [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta$.
\end{proof}
\begin{remark}
A similar result is also valid when $V_\infty$ is replaced by $V^{(*)}$.
\end{remark}
We are now ready to prove Theorem \ref{thm:171213-1}.
\begin{proof}[Proof of Theorem \ref{thm:171213-1}]
According to Lemma \ref{lem:171223-1}, we have
$\overline{\mathcal{M}}{}^{p_0}_{q_0} \subseteq V_0\mathcal{M}^{p_0}_{q_0} \subseteq \mathcal{M}^{p_0}_{q_0}$
and
$\overline{\mathcal{M}}{}^{p_1}_{q_1} \subseteq V_0\mathcal{M}^{p_1}_{q_1} \subseteq \mathcal{M}^{p_1}_{q_1}$.
Consequently, by virtue of Theorem \ref{thm:HS2-2}, we have \eqref{eq:171213-2}.
Next, we only prove \eqref{eq:171213-3} because the proofs of \eqref{eq:171213-3} and \eqref{eq:171213-4} are similar.
Let $f\in [V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty \mathcal{M}^{p_1}_{q_1}]_\theta$.
Then, by virtue of Lemma \ref{lem:dense}, we can choose $\{f_j\}_{j=1}^\infty \subseteq V_\infty \mathcal{M}^{p_0}_{q_0} \cap V_\infty \mathcal{M}^{p_1}_{q_1}$ such that
\begin{align}\label{eq:171219-1}
\lim_{j\to \infty}
\|f-f_j\|_{[V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty \mathcal{M}^{p_1}_{q_1}]_\theta}
=0.
\end{align}
According to Lemma \ref{lem:171218-1}, we have $\{f_j\}_{j=1}^\infty \subseteq V_\infty\mathcal{M}^p_q$.
Combining $[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta \subseteq [\mathcal{M}^{p_0}_{q_0}, \mathcal{M}^{p_1}_{q_1}]_\theta \subseteq \mathcal{M}^p_q$ and
\eqref{eq:171219-1}, we get
\begin{align}\label{eq:171219-2}
\lim_{j\to \infty}
\|f-f_j\|_{\mathcal{M}^p_q}
=0,
\end{align}
so $f\in V_\infty\mathcal{M}^p_q$.
Consequently,
\begin{align*}
[V_\infty{\mathcal M}^{p_0}_{q_0}, V_\infty{\mathcal M}^{p_1}_{q_1}]_\theta
\subseteq
V_\infty \cM^p_q
\cap
[{\mathcal M}^{p_0}_{q_0}, {\mathcal M}^{p_1}_{q_1}]_\theta
\end{align*}
Conversely, let
$f\in V_\infty \cM^p_q
\cap
[{\mathcal M}^{p_0}_{q_0}, {\mathcal M}^{p_1}_{q_1}]_\theta$.
Then, by virtue of Theorem \ref{thm:HS2}, we have
\begin{align}\label{eq:171220-1}
\lim_{N\to \infty}\|f-\chi_{\{1/N\le |f|\le N\}}f\|_{\mathcal{M}^p_q}=0.
\end{align}
Define $F_N(z)$ by \eqref{eq:FN}. Then, by virtue of Proposition
\ref{pr:171218-1}, we have
$$F_N(\theta)\in [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta.$$ Moreover, for every $M, N \in \mathbb{N}$ with $M>N$, we have
\begin{align*}
\|F_M(\theta)-F_N(\theta)\|_{[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta}
\le
\max_{k=0,1}
\|f\chi_{\{|f|<\frac1N\}\cup\{|f|>N\}}\|_{\mathcal{M}^p_q}^{\frac{p}{p_k}},
\end{align*}
so by \eqref{eq:171220-1}, we see that
$\{F_N(\theta)\}_{N=1}^\infty$ is a Cauchy sequence in $[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta$.
Consequently, there exists $g\in [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta$ such that
\begin{align}\label{eq:171220-2}
\lim_{N\to \infty}
\|F_N(\theta)-g\|_{[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta}
=0.
\end{align}
By using $[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta \subseteq [\mathcal{M}^{p_0}_{q_0}, \mathcal{M}^{p_1}_{q_1}]_\theta \subseteq \mathcal{M}^p_q$ again, we see that \eqref{eq:171220-2} implies
\begin{align}\label{eq:171220-3}
\lim_{N\to \infty}
\|F_N(\theta)-g\|_{\mathcal{M}^p_q}
=0.
\end{align}
Since $f-F_N(\theta)=f-\chi_{\{1/N\le |f|\le N\}}f$, we may combine \eqref{eq:171220-1} and \eqref{eq:171220-3} to obtain $f=g$, so $f\in [V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta$.
This completes the proof of Theorem \ref{thm:171213-1}.
\end{proof}
\section{The second complex interpolation of vanishing Morrey spaces}
First, we show that the vanishing Morrey
space $V_\infty\mathcal{M}^p_q$ is a Banach lattice on $\mathbb{R}^n$.
\begin{lemma}\label{lem:171220-1}
Let $1\le q\le p<\infty$. If $f\in V_\infty\mathcal{M}^p_q$ and $|g|\le |f|$, then
$g\in V_\infty\mathcal{M}^p_q$.
\end{lemma}
\begin{proof}
The assertion follows immediately from the inequality
\[
m(g, p, q; r)\le
m(f, p, q;r),
\]
for every $r>0$.
\end{proof}
\begin{remark}\label{rem:171220-1}
By a similar argument, we can also show that
$V_0\mathcal{M}^p_q$ and $V^{(*)}\mathcal{M}^p_q$ are Banach lattices on $\mathbb{R}^n$.
\end{remark}
We now prove the following inclusion result, whose proof is similar to that of \cite[Lemma 8]{HS}.
\begin{lemma}\label{lem:171220-2}
Keep the same assumption as in Theorem \ref{thm:171213-2}. Then
\[
\mathcal{M}^p_q
\cap
\overline{V_\infty\mathcal{M}^p_q}^{\mathcal{M}^{p_0}_{q_0}+\mathcal{M}^{p_1}_{q_1}}
\subseteq
\bigcap_{0<a<b<\infty}
\{f\in \mathcal{M}^p_q:
\chi_{\{a\le |f|\le b\}}f \in V_\infty\mathcal{M}^p_q\}.
\]
\end{lemma}
\begin{proof}
We may assume that $q_0>q_1$.
Then $q_0>q>q_1$.
Let $f\in \mathcal{M}^p_q
\cap
\overline{V_\infty\mathcal{M}^p_q}^{\mathcal{M}^{p_0}_{q_0}+\mathcal{M}^{p_1}_{q_1}}
$. We shall show that, for every $0<a<b<\infty$,
\begin{align}\label{eq:171221-7}
\chi_{\{a\le |f|\le b\}}f \in V_\infty\mathcal{M}^p_q.
\end{align}
In view of Lemma \ref{lem:171220-1}, we can prove \eqref{eq:171221-7} by showing that
\begin{align}\label{eq:171221-8}
\chi_{\{a\le |f|\le b\}}\Theta(|f|) \in V_\infty\mathcal{M}^p_q,
\end{align}
where $\Theta:[0,\infty) \to [0,\infty)$ is defined by
\begin{align*}
\Theta(t):=
\begin{cases}
0, &\quad 0\le t< \frac{a}{2} \ {\rm or}\ t>2b,
\\
2t-a, &\quad \frac{a}{2}\le t\le a,
\\
a, &\quad a\le t\le b,
\\
-\frac{a}{b}t+2a, &\quad b<t\le 2b.
\end{cases}
\end{align*}
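We note in passing (an editorial aside) that $\Theta$ is continuous and piecewise linear with slopes $0$, $2$, and $-a/b$, and that $0\le \Theta\le a$; consequently,

```latex
|\Theta(t_1)-\Theta(t_2)|
\le \min\bigl(a, 2|t_1-t_2|\bigr)
\le \max(2,a)\,\min\bigl(1, |t_1-t_2|\bigr),
\qquad t_1, t_2\ge 0,
```

which is the bound $|\Theta(t_1)-\Theta(t_2)|\lesssim \min(1,|t_1-t_2|)$ invoked later in the proof, with implicit constant depending only on $a$.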
Since $f\in
\overline{V_\infty\mathcal{M}^p_q}^{\mathcal{M}^{p_0}_{q_0}+\mathcal{M}^{p_1}_{q_1}}$,
we can choose
$\{f_j\}_{j=1}^\infty \subseteq V_\infty\mathcal{M}^p_q$,
$\{g_j\}_{j=1}^\infty \subseteq \mathcal{M}^{p_0}_{q_0}$,
and
$\{h_j\}_{j=1}^\infty \subseteq \mathcal{M}^{p_1}_{q_1}$ such that $f=f_j+g_j+h_j$,
\begin{align}\label{eq:171221-10}
\lim_{j\to \infty}
\|g_j\|_{\mathcal{M}^{p_0}_{q_0}}
=0
\
{\rm and}
\
\lim_{j\to \infty}
\|h_j\|_{\mathcal{M}^{p_1}_{q_1}}
=0.
\end{align}
Note that, by virtue of Lemma \ref{lem:171220-1} and the inequality
\[
\chi_{\{a\le |f|\le b\}}
\Theta(|f_j|)
\le |f_j|,
\]
we have
$
\chi_{\{a\le |f|\le b\}}
\Theta(|f_j|)
\in V_\infty\mathcal{M}^p_q.
$
Therefore, \eqref{eq:171221-8} is valid once we can show that
\begin{align}\label{eq:171221-9}
\lim_{j\to \infty}
\|\chi_{\{a\le |f|\le b\}}
(\Theta(|f_j|)-\Theta(|f|))\|_{\mathcal{M}^p_q}
=0.
\end{align}
Since
\[
|\Theta(t_1)-\Theta(t_2)|
\lesssim
\min(1, |t_1-t_2|),
\]
for every $t_1, t_2\ge 0$, we have
\begin{align}\label{eq:171221-12}
\|\chi_{\{a\le |f|\le b\}}
(\Theta(|f_j|)-\Theta(|f|))\|_{\mathcal{M}^p_q}
&\lesssim
\|\chi_{\{a\le |f|\le b\}}
\min(1, |g_j|)\|_{\mathcal{M}^p_q}
\nonumber
\\
&\quad+
\|\min(1,|h_j|)\|_{\mathcal{M}^p_q}.
\end{align}
By using $q>q_1$, we have
\begin{align}\label{eq:171221-17}
\|\min(1,|h_j|)\|_{\mathcal{M}^p_q}
\le \||h_j|^{q_1/q}\|_{\mathcal{M}^p_q}
=
\|h_j\|_{\mathcal{M}^{p_1}_{q_1}}^{q_1/q}.
\end{align}
Meanwhile, by using the H\"older inequality, we get
\begin{align}\label{eq:171221-18}
\|\chi_{\{a\le |f|\le b\}}
\min(1, |g_j|)\|_{\mathcal{M}^p_q}
&\le
\|g_j\|_{\mathcal{M}^{p_0}_{q_0}}^{1-\theta}
\|\chi_{\{a\le |f|\le b\}}
\min(1, |g_j|)\|_{\mathcal{M}^{p_1}_{q_1}}^{\theta}
\nonumber
\\
&\lesssim
\|g_j\|_{\mathcal{M}^{p_0}_{q_0}}^{1-\theta}
\|f\|_{\mathcal{M}^{p}_{q}}^{\frac{\theta q}{q_1}}.
\end{align}
Combining \eqref{eq:171221-10},
\eqref{eq:171221-12}, \eqref{eq:171221-17},
and
\eqref{eq:171221-18}, we obtain
\eqref{eq:171221-9}.
\end{proof}
We are now ready to prove Theorem \ref{thm:171213-2}.
\begin{proof}[Proof of Theorem \ref{thm:171213-2}]
By virtue of Theorem \ref{thm:HS2-2} and Lemma \ref{lem:171223-1}, we have
\[
\mathcal{M}^p_q
\subseteq
[V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]^\theta.
\]
Combining this, $[V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]^\theta
\subseteq [\mathcal{M}^{p_0}_{q_0}, \mathcal{M}^{p_1}_{q_1}]^\theta$, and Theorem \ref{thm:Lemarie}, we have
\eqref{eq:thm:171213-2-1}.
Next, we only show \eqref{eq:thm:171213-2-2} because we can prove
\eqref{eq:thm:171213-2-3} by a similar argument.
Let $f\in [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta$.
Then there exists $G\in \mathcal{G}(V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1})$ such that
\begin{align*}
f=G'(\theta).
\end{align*}
Consequently,
\begin{align}\label{eq:171221-15}
\lim_{k\to \infty}
\|f-H_k(\theta)\|_{\mathcal{M}^{p_0}_{q_0}+\mathcal{M}^{p_1}_{q_1}}
=0,
\end{align}
where $H_k$ is defined in Lemma \ref{lem:160417-3}. Combining \eqref{eq:171221-15}
with
the second part of
Theorem \ref{thm:171213-1}, Lemma \ref{lem:160417-3} and
$[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta \subseteq \mathcal{M}^p_q$, we have
\[
f\in \mathcal{M}^p_q
\cap \overline{V_\infty\mathcal{M}^p_q}^{\mathcal{M}^{p_0}_{q_0}+\mathcal{M}^{p_1}_{q_1}}.
\]
Therefore, by virtue of Lemma \ref{lem:171220-2}, we have $f\in \mathcal{M}^p_q$ and $\chi_{\{a\le|f|\le b\}}f\in V_\infty\mathcal{M}^p_q$ for every $0<a<b<\infty$.
We now show that
\begin{align}\label{eq:171220-19}
\bigcap_{0<a<b<\infty}
\{f\in \mathcal{M}^p_q:
\chi_{\{a\le |f|\le b\}}f\in V_\infty\mathcal{M}^p_q\}
\subseteq
[V_\infty\mathcal{M}^{p_0}_{q_0},
V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta.
\end{align}
Suppose that $f$ belongs to the set on the left-hand side of \eqref{eq:171220-19}.
Let $F$ and $G$ be defined by \eqref{eq:171220-20} and \eqref{eq:171220-210}, respectively. In view of Proposition \ref{prop:Lemarie-HS}, we only need to show that
\begin{align}\label{eq:171220-23}
G(z)\in V_\infty\mathcal{M}^{p_0}_{q_0}
+V_\infty\mathcal{M}^{p_1}_{q_1} \quad (z\in \overline{S})
\end{align}
and
\begin{align}\label{eq:171220-21}
G(k+it)-G(k)\in V_\infty\mathcal{M}^{p_k}_{q_k}
\quad
(k\in \{0,1\}, t\in \mathbb{R}).
\end{align}
The proof of \eqref{eq:171220-23} goes as follows.
Define $G_0(z):=\chi_{\{|f| \le 1\}}G(z)$ and
$G_1(z):=G(z)-G_0(z)$. For every $\varepsilon\in (0,1)$, we write $G_\varepsilon(z):=\chi_{\{|f|\ge \varepsilon\}}G_0(z)$. Then, by \eqref{eq:171220-27}, we have
\[
|G_\varepsilon(z)|
\le 2(1+|z|)
\chi_{\{\varepsilon\le |f|\le 1\}}
\le
\frac{2(1+|z|)}{\varepsilon}
\chi_{\{\varepsilon\le |f|\le 1\}}|f|.
\]
Since $\chi_{\{\varepsilon\le |f|\le 1\}}f\in V_\infty\mathcal{M}^{p_0}_{q_0}$,
by virtue of Lemma \ref{lem:171220-1}, we have
$G_\varepsilon(z)\in V_\infty\mathcal{M}^{p_0}_{q_0}$. Meanwhile,
\begin{align*}
|G_0(z)-G_\varepsilon(z)|
=
\left|
\chi_{\{|f|<\varepsilon\}}
\frac{F(z)-F(\theta)}{\left(\frac{p}{p_1}-\frac{p}{p_0}\right) \log |f|}
\right|
\lesssim
\frac{|f|^{\frac{p}{p_0}}}{-\log \varepsilon}.
\end{align*}
This implies
\[
\|G_0(z)-G_\varepsilon(z)\|_{\mathcal{M}^{p_0}_{q_0}}
\lesssim
\frac{\|f\|_{\mathcal{M}^p_q}^{\frac{p}{p_0}}}{-\log \varepsilon}.
\]
Therefore, $\lim\limits_{\varepsilon\to 0^{+}} \|G_0(z)-G_\varepsilon(z)\|_{\mathcal{M}^{p_0}_{q_0}}=0$. Consequently, $G_0(z)\in V_\infty\mathcal{M}^{p_0}_{q_0}$. By a similar argument, we also have $G_1(z)\in V_\infty\mathcal{M}^{p_1}_{q_1}$.
Since $G(z)=G_0(z)+G_1(z)$, we obtain \eqref{eq:171220-23}.
We now prove \eqref{eq:171220-21}.
For every $N\in \mathbb{N}$, we define
\[
H_N(k+it)
:=(G(k+it)-G(k))\chi_{\{N^{-1}\le |f|\le N\}}.
\]
It follows from \eqref{eq:171220-27} that
\[
|H_N(k+it)|
\le C_{k, N, t} \chi_{\{N^{-1}\le |f|\le N\}}|f|.
\]
Therefore, by virtue of Lemma \ref{lem:171220-1}, we have
$H_N(k+it)\in V_\infty\mathcal{M}^{p_k}_{q_k}$.
Moreover,
\begin{align*}
\|G(k+it)-G(k)-H_N(k+it)\|_{\mathcal{M}^{p_k}_{q_k}}
&\lesssim
\|
\chi_{\{|f|<N^{-1}\}\cup \{|f|>N\}}
\frac{|F(k+it)|+|F(k)|}{|\log |f||}
\|_{\mathcal{M}^{p_k}_{q_k}}
\\
&\lesssim
\frac{\left\||f|^{\frac{p}{p_k}}\right\|_{\mathcal{M}^{p_k}_{q_k}}}{\log N}
=
\frac{\|f\|_{\mathcal{M}^{p}_{q}}^{\frac{p}{p_k}}}{\log N}.
\end{align*}
Consequently,
$
\lim\limits_{N\to \infty}
\|G(k+it)-G(k)-H_N(k+it)\|_{\mathcal{M}^{p_k}_{q_k}}=0$.
Combining this and $H_N(k+it)\in V_\infty\mathcal{M}^{p_k}_{q_k}$, we obtain \eqref{eq:171220-21}, as desired.
\end{proof}
\section{Complex interpolation of $(\mathbb{M}^{p_0}_{q_0}, \mathbb{M}^{p_1}_{q_1})$}
\begin{theorem}\label{thm:171218-1}
Let $1 \le q \le p<\infty$. Then
$\mathbb{M}^p_q=\overset{\diamond}{\mathcal M}{}^p_q$.
\end{theorem}
\begin{proof}
Let $f \in \overset{\diamond}{\mathcal M}{}^p_q$.
Then there exists
$\{f_j\}_{j=1}^\infty \subset {\mathcal M}^p_q$
such that
$\partial^\alpha f_j\in {\mathcal M}^p_q$
for all
$\alpha \in {\mathbb N}_0{}^n$ and $j \in {\mathbb N}$
and that
$\displaystyle
\lim_{j \to \infty}f_j=f
$
in the topology of ${\mathcal M}^p_q$.
Let $y \in {\mathbb R}^n$.
We observe
\begin{align*}
\|f(\cdot+y)-f\|_{{\mathcal M}^p_q}
&\le
\|f_j(\cdot+y)-f_j\|_{{\mathcal M}^p_q}
+
\|f(\cdot+y)-f_j(\cdot+y)\|_{{\mathcal M}^p_q}
+
\|f-f_j\|_{{\mathcal M}^p_q}\\
&\le
\|f_j(\cdot+y)-f_j\|_{{\mathcal M}^p_q}
+2
\|f-f_j\|_{{\mathcal M}^p_q}.
\end{align*}
We note that
each $f_j$ is smooth in view of the fact that
$f \in {\rm BUC}$
whenever
$\partial^\alpha f \in {\mathcal M}^p_q$
for all $\alpha$ such that $|\alpha| \le \dfrac{n}{p}+1$.
So,
by the mean value theorem,
\begin{align*}
\|f_j(\cdot+y)-f_j\|_{{\mathcal M}^p_q}
&=
\left\|
\int_0^1 y \cdot \nabla f_j(\cdot+t y)\,dt
\right\|_{{\mathcal M}^p_q}\\
&\le
\int_0^1 |y| \cdot \|\nabla f_j(\cdot+t y)\|_{{\mathcal M}^p_q}\,dt\\
&=|y| \cdot \|\nabla f_j\|_{{\mathcal M}^p_q}.
\end{align*}
Thus,
\begin{align*}
\|f(\cdot+y)-f\|_{{\mathcal M}^p_q}
&\le
|y| \cdot \|\nabla f_j\|_{{\mathcal M}^p_q}
+2
\|f-f_j\|_{{\mathcal M}^p_q}.
\end{align*}
If we let $y \to 0$,
then we obtain
\[\limsup_{y \to 0}
\|f(\cdot+y)-f\|_{{\mathcal M}^p_q}
\le
2\|f-f_j\|_{{\mathcal M}^p_q}.
\]
It remains to let $j \to \infty$.
Conversely let $f \in {\mathbb M}^p_q$.
Choose a non-negative function $\rho \in C^\infty(\{|y|<1\})$
with $\|\rho\|_{L^1}=1$.
Set $\rho_j=j^n \rho(j \cdot)$
for $j \in {\mathbb N}$.
We set
$f_j=\rho_j*f$.
Then we have
\[
\|f-f_j\|_{{\mathcal M}^p_q}
\le
\int_{{\mathbb R}^n}\rho_j(y)\|f-f(\cdot-y)\|_{{\mathcal M}^p_q}\,dy
\le
\sup_{|y| \le j^{-1}}\|f-f(\cdot-y)\|_{{\mathcal M}^p_q}.
\]
As a result, letting $j \to \infty$,
we obtain
$\displaystyle
\lim_{j \to \infty}f_j=f
$
in the topology of ${\mathcal M}^p_q$.
Since $\partial^\alpha f_j=(\partial^\alpha \rho_j)*f \in {\mathcal M}^p_q$,
we obtain
$f \in \overset{\diamond}{\mathcal M}{}^p_q$.
\end{proof}
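As an illustration of the mollification step (an editorial aside, not part of the paper), here is a minimal one-dimensional numerical sketch. It assumes the standard bump $\rho(y)=e^{-1/(1-y^2)}$ on $(-1,1)$, normalized discretely, and checks that $\sup_x |\rho_j * f - f|$ decreases as $j$ grows for the uniformly continuous test function $f=|\sin|$; the grid sizes and tolerances are illustrative choices.

```python
import math

def rho(y):
    # Standard smooth bump supported in (-1, 1); normalization is handled below.
    return math.exp(-1.0 / (1.0 - y * y)) if abs(y) < 1 else 0.0

def mollify(f, x, j, dy=1e-3):
    # Discrete version of (rho_j * f)(x) with rho_j(y) = j * rho(j * y),
    # integrating over the support |y| <= 1/j.
    n = int(1.0 / (j * dy)) + 1
    num = den = 0.0
    for k in range(-n, n + 1):
        y = k * dy
        w = j * rho(j * y)
        num += w * f(x - y)
        den += w
    return num / den  # dividing by den enforces unit mass of the discrete kernel

f = lambda t: abs(math.sin(t))       # uniformly continuous, with kinks at multiples of pi
xs = [0.01 * i for i in range(700)]  # sample points in [0, 7)
err = {j: max(abs(mollify(f, x, j) - f(x)) for x in xs) for j in (4, 16)}
print(err)  # the sup-norm error shrinks as the mollification scale 1/j shrinks
```

The decay of the sup-norm error mirrors the bound $\|f-f_j\|\le \sup_{|y|\le j^{-1}}\|f-f(\cdot-y)\|$ used in the proof.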
As a corollary of Theorems \ref{thm:HNS} and \ref{thm:171218-1}, we have the following result.
\begin{corollary}
Keep the same assumption as in Theorem \ref{thm:HNS}. Then
\begin{align*}
[{\mathbb M}^{p_0}_{q_0},
{\mathbb M}^{p_1}_{q_1}]_\theta
=
\{f\in
{\mathbb M}^{p}_{q}
: \lim_{N\to \infty}
\|f-\chi_{\{1/N\le |f|\le N\}}f\|_{\cM^p_q}=0
\}
\end{align*}
and
\begin{align*}
[{\mathbb M}^{p_0}_{q_0},
{\mathbb M}^{p_1}_{q_1}]^\theta
=
\bigcap_{0<a<1}
\left\{
f \in {\mathcal M}^p_q
\,:\,
\lim_{J \to \infty}
\|S(f;a,J)\|_{{\mathcal M}^p_q}
=0
\right\}.
\end{align*}
\end{corollary}
\section{Examples}
In this section, we examine the relation between the subspaces in Theorems
\ref{thm:171213-1} and \ref{thm:171213-2} and compare them by giving several examples.
Let $\theta \in (0,1)$ and assume that
\begin{align}\label{eq:171227-8}
1\le q_0<p_0<\infty, \ 1\le q_1<p_1<\infty, \ {\rm and} \ \frac{p_0}{q_0}=\frac{p_1}{q_1}.
\end{align}
Let $p$ and $q$ be defined by \eqref{eq:pq}.
Define
$$
E(p,q):=\{(y_k+(R-1)(a_1+R a_2+\cdots))_{k=1, \ldots, n}\,:\,
\{a_j\}_{j=1}^\infty \in \{0,1\}^{\infty} \cap \ell^1({\mathbb N}), y\in [0,1]^n\},
$$
where $R>2$ solves $R^{\frac{n}{p}-\frac{n}{q}}2^{\frac{n}{q}}=1$,
so that each connected component of $E(p,q)$ is a closed cube with volume $1$.
It is known that $\chi_{E(p,q)} \in {\mathcal M}^p_q({\mathbb R}^n)$;
see \cite{SST11-1}.
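As an aside, the defining relation for $R$ can be solved in closed form: taking logarithms and using $q<p$ from \eqref{eq:171227-8},

```latex
R^{\frac{n}{p}-\frac{n}{q}}\,2^{\frac{n}{q}}=1
\quad\Longleftrightarrow\quad
\Bigl(\frac{n}{q}-\frac{n}{p}\Bigr)\log R=\frac{n}{q}\log 2
\quad\Longleftrightarrow\quad
R=2^{\frac{p}{p-q}},
```

and $p>q$ indeed gives $R=2^{p/(p-q)}>2$.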
We also define
\[
E_m:=\bigcup_{j=0}^{m-1} [j, j+m^{-nq/p}]\times [0,m]^{n-1}
\]
for $m\in \mathbb{N}$.
For $x \in {\mathbb R}^n$,
we let
\begin{align*}
f_1(x)&:=\chi_{E(p,q)}(x)\\
f_2(x)&:=\sum_{j=1}^\infty \chi_{j!{\bf e}_1+[0,1]^n}(x)\\
f_3(x)&:=|x|^{-n/p}\\
f_4(x)&:=\sum_{m=1}^\infty \chi_{E_m}(x-m!e_1).
\end{align*}
We have the following table of memberships:
\[
\begin{tabular}{|
@{\hspace{5pt}}c@{\hspace{5pt}}||
@{\hspace{5pt}}c@{\hspace{5pt}}|
@{\hspace{5pt}}c@{\hspace{5pt}}|
@{\hspace{5pt}}c@{\hspace{5pt}}|
@{\hspace{5pt}}c@{\hspace{5pt}}|
}
\hline
& $f_1$ & $f_2$ & $f_3$ & $f_4$\\
\hline
$[V_0{\mathcal M}^{p_0}_{q_0},V_0{\mathcal M}^{p_1}_{q_1}]_\theta=
[{\mathcal M}^{p_0}_{q_0},{\mathcal M}^{p_1}_{q_1}]_\theta$&$\circ$&$\circ$&$\times$&$\circ$
\\
\hline
$[V_\infty{\mathcal M}^{p_0}_{q_0},V_\infty{\mathcal M}^{p_1}_{q_1}]_\theta$&$\times$&$\circ$&$\times$&$\times$
\\
\hline
$[V^{(*)}{\mathcal M}^{p_0}_{q_0},V^{(*)}{\mathcal M}^{p_1}_{q_1}]_\theta$&$\times$&$\times$&$\times$&$\circ$\\
\hline
$[V_0{\mathcal M}^{p_0}_{q_0},V_0{\mathcal M}^{p_1}_{q_1}]^\theta={\mathcal M}^p_q$&$\circ$&$\circ$&$\circ$&$\circ$\\
\hline
$[V_\infty{\mathcal M}^{p_0}_{q_0},V_\infty{\mathcal M}^{p_1}_{q_1}]^\theta$&$\times$&$\circ$&$\circ$&$\times$\\
\hline
$[V^{(*)}{\mathcal M}^{p_0}_{q_0},V^{(*)}{\mathcal M}^{p_1}_{q_1}]^\theta$&$\times$&$\times$&$\circ$&$\circ$\\
\hline
\end{tabular}
\]
In the table,
$\circ$ indicates membership,
while
$\times$ means that the function does not belong to the corresponding space. The detailed verification of this table is given in the following corollaries.
\begin{corollary}\label{cor:171226-1}
Let $\theta \in (0,1)$ and assume \eqref{eq:171227-8}.
Then,
$f_1$ belongs to
$[V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]_\theta$, but
$$f_1 \notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]_\theta \cup [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta.$$
\end{corollary}
\begin{proof}
Observe that, for every $N\in \mathbb{N}$, we have
\[
f_1-\chi_{\{\frac1N\le |f_1|\le N\}} f_1=f_1\chi_{\{|f_1|<\frac1N\}\cup \{|f_1|>N\}}=0.
\]
Therefore,
\begin{align}\label{eq:171226-1}
\lim_{N\to \infty}
\|f_1-\chi_{\{\frac1N\le |f_1|\le N\}} f_1\|_{\mathcal{M}^p_q}=0.
\end{align}
Since $f_1\in \mathcal{M}^p_q$ satisfies \eqref{eq:171226-1}, by virtue of Theorem \ref{thm:171213-1}, we have $f_1 \in [V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]_\theta$. By Theorem \ref{thm:171213-1} again, it remains to show that
\begin{align}\label{eq:171226-19}
f_1\notin V^{(*)}\mathcal{M}^p_q \cup V_\infty\mathcal{M}^p_q.
\end{align}
Let $N\in \mathbb{N}$. Then there exists a closed cube $Q=Q_N\subseteq E(p,q)$ of side length $1$ such that $Q\subseteq \mathbb{R}^n \setminus B(0, 2N)$. Since
\begin{align*}
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)} |f_1(y)|^q \chi_{\mathbb{R}^n\setminus B(0,N)}(y) \ dy
&\ge \int_{B(c_Q, 1)} \chi_Q(y) \chi_{\mathbb{R}^n \setminus B(0, N)}(y) \ dy
\\
&=|Q \cap B(c_Q, 1)|
\ge \min\left(1,\frac{2}{\sqrt{n}}\right)^n,
\end{align*}
we see that
\[
\lim_{N\to \infty}
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)} |f_1(y)|^q \chi_{\mathbb{R}^n\setminus B(0,N)}(y) \ dy
\neq 0.
\]
Hence, $f_1\notin V^{(*)} \mathcal{M}^p_q$.
We now show that $f_1\notin V_\infty\mathcal{M}^p_q$.
Let $k\in \mathbb{N}$. By a geometric observation, $[0, R^k]^n \cap E(p,q)=\bigcup_{j=1}^{2^{kn}} Q_j$, where $\{Q_j\}_{j=1}^{2^{kn}}$ is a collection of closed cubes of side length $1$. Therefore,
\begin{align*}
m\left(f_1, p, q; \frac{\sqrt{n}R^k}{2}\right)
&\gtrsim
|B(c_{[0, R^k]^n}, R^k)|^{\frac1p-\frac1q}
\left(
\int_{[0, R^k]^n} |f_1(y)|^q \ dy\right)^{\frac1q}
\\
&\sim
R^{k\left(\frac{n}{p}-\frac{n}{q}\right)}
|[0, R^k]^n \cap E(p,q)|^{\frac1q}
=
R^{k\left(\frac{n}{p}-\frac{n}{q}\right)}
2^{\frac{kn}{q}}=1.
\end{align*}
Since $k$ is arbitrary, we see that
$\lim\limits_{r\to \infty} m(f_1, p, q; r)\ne 0$, so $f_1\notin V_\infty\mathcal{M}^p_q$, as desired.
\end{proof}
\begin{corollary}\label{cor:171226-2}
Keep the same assumption as in Corollary \ref{cor:171226-1}.
Then, $f_1$ belongs to
$[V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]^\theta$, but
$$f_1\notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta \cup [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta.$$
\end{corollary}
\begin{proof}
The first assertion follows from Corollary \ref{cor:171226-1} and
\[
[V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]_\theta \subseteq [V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]^\theta.
\]
Combining \eqref{eq:171226-19}, Theorem \ref{thm:171213-2},
and the identity
\[
\chi_{\{1/2\le |f_1|\le 1\}} f_1=f_1,
\]
we conclude that $f_1\notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta \cup [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta.$
\end{proof}
\begin{corollary}\label{cor:171226-3}
Keep the same assumption as in Corollary \ref{cor:171226-1}.
Then,
$f_2$ belongs to
$[V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta$, but
$f_2\notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]_\theta$.
\end{corollary}
\begin{proof}
By a similar argument as in the proof of \cite[Theorem 4.1]{AS}, we have $f_2\in V_\infty\mathcal{M}^p_q$ but $f_2 \notin V^{(*)} \mathcal{M}^p_q$.
Moreover, $f_2$ satisfies
\[
\lim_{N\to \infty}
\|f_2-f_2\chi_{\{1/N\le |f_2|\le N\}}\|_{\mathcal{M}^p_q}=0.
\]
Therefore, by virtue of Theorem \ref{thm:171213-1},
we have the desired conclusion.
\end{proof}
\begin{corollary}\label{cor:171226-4}
Keep the same assumption as in Corollary \ref{cor:171226-1}.
Then,
$f_2$ belongs to
$[V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta$, but
$f_2\notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta$.
\end{corollary}
\begin{proof}
Note that $f_2 \in [V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta$ is a consequence
of Corollary \ref{cor:171226-3} and
\[
[V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta
\subseteq
[V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta.
\]
Meanwhile, by the identity
\[
\chi_{\{1/2\le |f_2|\le 1\}}f_2=f_2
\]
and $f_2 \notin V^{(*)}\mathcal{M}^p_q$, we have
$f_2 \notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta$.
\end{proof}
\begin{corollary}\label{cor:171226-5}
Keep the same assumption as in Corollary
\ref{cor:171226-1}. Then
\begin{align}\label{eq:171226-10}
f_3 \in [V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]^\theta
\setminus [V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]_\theta,
\end{align}
\[
f_3 \in [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta
\cap [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta
\]
and
\begin{align}\label{eq:171226-5}
f_3 \notin [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]_\theta
\cup [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta.
\end{align}
\end{corollary}
\begin{proof}
Since $f_3\in \mathcal{M}^p_q$, by virtue of Theorem \ref{thm:171213-2}, we have $f_3 \in [V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]^\theta$.
Note that $f_3$ fails to belong
to $[V_0{\mathcal M}^{p_0}_{q_0},V_0{\mathcal M}^{p_1}_{q_1}]_\theta$
because
\[
f_3=\lim_{N \to \infty}\chi_{\{N^{-1}\le |f_3|\le N\}}f_3
\]
fails in ${\mathcal M}^p_q$.
Let $0<a<b<\infty$. Since $\chi_{\{a\le |f_3|\le b\}}f_3 \in L^\infty_{\rm c}$, by virtue of Lemma \ref{lem:171227-1}, we have $\chi_{\{a\le |f_3|\le b\}}f_3\in V^{(*)}\mathcal{M}^p_q \cap V_\infty \mathcal{M}^p_q$. Therefore, according to Theorem \ref{thm:171213-2}, we have
$f_3 \in [V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta \cap [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta$.
Meanwhile, \eqref{eq:171226-5} follows immediately from
\[
([V^{(*)}\mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]_\theta
\cup [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]_\theta)
\subseteq
[V_0\mathcal{M}^{p_0}_{q_0}, V_0\mathcal{M}^{p_1}_{q_1}]_\theta
\]
and \eqref{eq:171226-10}.
\end{proof}
\begin{corollary}\label{cor:171226-6}
Keep the same assumption as in Corollary \ref{cor:171226-1}.
Then,
$f_4$ belongs to
$[V^{(*)} \mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta$, but
$f_4\notin [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty \mathcal{M}^{p_1}_{q_1}]^\theta$.
\end{corollary}
\begin{proof}
Let $x\in \mathbb{R}^n$ and $N\in \mathbb{N}$.
Note that, if $m\in \mathbb{N}$ satisfies
\[
(m!+m)^2+(n-1)m^2<N,
\]
then for every
$y \in [m!, m!+m]\times [0, m]^{n-1}$, we have
$|y|<N$. Consequently,
\begin{align}\label{eq:171227-7}
\int_{B(x,1)} &|f_4(y)|^q \chi_{\mathbb{R}^n\setminus B(0, N)}(y) \ dy
\nonumber
\\
&=
\sum_{m\in \mathbb{N}}
\int_{B(x,1)} \chi_{E_m}(y-m!e_1) \chi_{\mathbb{R}^n\setminus B(0, N)}(y) \ dy
\nonumber
\\
&=
\sum_{m\in \mathbb{N}, (m!+m)^2+(n-1)m^2\ge N}
\int_{B(x,1)} \chi_{E_m}(y-m!e_1) \chi_{\mathbb{R}^n\setminus B(0, N)}(y) \ dy
\nonumber
\\
&\le
\sum_{m\in \mathbb{N}, (m!+m)^2+(n-1)m^2\ge N}
\int_{B(x,1)} \chi_{E_m}(y-m!e_1) \ dy.
\end{align}
By a geometric observation, we see that
\begin{align}\label{eq:171227-17}
\int_{B(x,1)} \chi_{E_m}(y-m!e_1) \ dy
&=
\int_{\mathbb{R}^n}
\chi_{E_m}(y) \chi_{B(x,1)}(y+m!e_1) \ dy
\nonumber
\\
&=
\int_{\mathbb{R}^n}
\chi_{E_m}(y) \chi_{B(x-m!e_1,1)}(y) \ dy
\nonumber
\\
&=
|E_m \cap B(x-m!e_1,1)|
\nonumber
\\
&\lesssim
\frac{|E_m|}{m^n}
=
\frac{m^{-\frac{nq}{p}} \cdot m^{n-1}}{m^n}
=m^{-\frac{nq}{p}-1}.
\end{align}
Combining \eqref{eq:171227-7} and \eqref{eq:171227-17}, we get
\[
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)} |f_4(y)|^q \chi_{\mathbb{R}^n\setminus B(0, N)}(y) \ dy
\lesssim
\sum_{m\in \mathbb{N}, (m!+m)^2+(n-1)m^2\ge N}
m^{-\frac{nq}{p}-1}.
\]
Since $\sum\limits_{m\in \mathbb{N}}
m^{-\frac{nq}{p}-1}<\infty$, we have
\[
\lim_{N\to \infty}
\sup_{x\in \mathbb{R}^n}
\int_{B(x,1)} |f_4(y)|^q \chi_{\mathbb{R}^n\setminus B(0, N)}(y) \ dy
=0,
\]
so $f_4\in V^{(*)}\mathcal{M}^p_q$. According to Remark \ref{rem:171220-1}, we have
\[
\chi_{\{a\le |f_4|\le b\}}f_4 \in V^{(*)}\mathcal{M}^p_q,
\]
for every $0<a<b<\infty$.
Therefore, by virtue of \eqref{eq:thm:171213-2-3}, we conclude that
$$f_4 \in [V^{(*)} \mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]^\theta.$$
We now prove that
$f_4 \notin [V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta$.
For every $N\in \mathbb{N}$, we define
$$Q:=[N!, N!+N]\times [0, N]^{n-1}.$$
Since $|E_N|=N^{n-\frac{nq}{p}}$, we have
\begin{align*}
m(f_4, p, q;\sqrt{n}N/2)
&\gtrsim
N^{\frac{n}{p}-\frac{n}{q}}
\left(
\int_{B(c_Q, \sqrt{n}N/2)}
|f_4(y)|^q \ dy
\right)^{\frac1q}
\\
&\ge
N^{\frac{n}{p}-\frac{n}{q}}
\left(
\int_{Q}
\chi_{E_N}(y-N!e_1) \ dy
\right)^{\frac1q}
\\
&=
N^{\frac{n}{p}-\frac{n}{q}}
|E_N|^{\frac1q}=1,
\end{align*}
so $\lim\limits_{r\to \infty} m(f_4, p, q;r)\neq 0$.
Therefore, $f_4 \notin V_\infty\mathcal{M}^p_q$.
Combining this with \eqref{eq:thm:171213-2-2} and
\[
\chi_{\{1/2\le |f_4|\le 1\}}f_4=f_4,
\]
we conclude that $f_4 \notin [V_\infty \mathcal{M}^{p_0}_{q_0}, V_\infty\mathcal{M}^{p_1}_{q_1}]^\theta$.
\end{proof}
\begin{corollary}\label{cor:171226-7}
Keep the same assumption as in Corollary \ref{cor:171226-1}.
Then,
$f_4$ belongs to
$[V^{(*)} \mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]_\theta$, but
$f_4\notin [V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty \mathcal{M}^{p_1}_{q_1}]_\theta$.
\end{corollary}
\begin{proof}
In the proof of Corollary \ref{cor:171226-6}, it is shown that $f_4 \in V^{(*)}\mathcal{M}^p_q$. Moreover, for every $N\in \mathbb{N}$, we have
\[
\chi_{\{|f_4|<1/N\}\cup \{|f_4|> N\}}f_4=0,
\]
so $\lim\limits_{N\to \infty} \|f_4-\chi_{\{1/N\le |f_4|\le N\}}f_4\|_{\mathcal{M}^p_q}=0$. Therefore, by \eqref{eq:171213-4}, we have $$f_4 \in [V^{(*)} \mathcal{M}^{p_0}_{q_0}, V^{(*)}\mathcal{M}^{p_1}_{q_1}]_\theta.$$
The second assertion follows from
\[
[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty \mathcal{M}^{p_1}_{q_1}]_\theta
\subseteq
[V_\infty\mathcal{M}^{p_0}_{q_0}, V_\infty \mathcal{M}^{p_1}_{q_1}]^\theta
\]
and Corollary \ref{cor:171226-6}.
\end{proof}
TITLE: Dipole matrix elements for Bloch wavefunctions?
QUESTION [2 upvotes]: Suppose we have a one-dimensional periodic system with lattice constant $a_0$. From Bloch's theorem, we can express the wavefunction for an electron in band $m$ with crystal momentum $k$, $\left\langle x \middle| \psi_{m,k} \right\rangle$, as follows:
$$
\left\langle x \middle| \psi_{m,k} \right\rangle = e^{i k x} u_{m,k}(x),
$$
where $u_{m,k}(x + a_0) = u_{m,k}(x)$. I don't understand the following expression for the matrix elements of the position operator:
$$
\langle \psi_{m,k} | x | \psi_{m',k'}\rangle = i \delta_{m,m'} \delta_{k,k'} \frac{\partial}{\partial k} + i \delta_{k,k'} X_{m,m'},
$$
where
$$
X_{m,m'}= i N \int_0^{a_0} e^{i (k - k') x} u^*_{m,k}(x) \frac{\partial}{\partial k} u_{m',k'}(x) dx.
$$
The second term is easy enough to understand. The first term, however... how can the expectation value between two eigenstates be a derivative? Am I missing something obvious here?
REPLY [0 votes]: The second term is easy enough to understand. The first term, however... how can the expectation value between two eigenstates be a derivative? Am I missing something obvious here?
There is nothing special going on here $-$ this is a pretty common feature.
For contrast, consider the matrix element of the position operator between two (free-particle) momentum eigenstates:
$$
⟨p|\hat x|p'⟩ = i\hbar \frac{\partial}{\partial p} \delta(p-p'),
$$
where you do need to take due care about where the derivative is acting on, but which is basically identical to your expression.
(Note that it is often easier to encapsulate all the complexity by "calculating" the derivative as $\delta'(p-p')$, as mike stone's answer does, but if you do this then it is essential to keep in mind that the derivative of the Dirac delta is a distribution and needs to be treated as such, i.e., not as a function, but as a functional.)
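For completeness, with the (assumed) normalization $\langle x|p\rangle=e^{ipx/\hbar}/\sqrt{2\pi\hbar}$, the free-particle statement follows in one line:

```latex
\langle p|\hat x|p'\rangle
=\int dx\,\frac{e^{-ipx/\hbar}}{\sqrt{2\pi\hbar}}\,x\,\frac{e^{ip'x/\hbar}}{\sqrt{2\pi\hbar}}
=i\hbar\,\frac{\partial}{\partial p}\int\frac{dx}{2\pi\hbar}\,e^{i(p'-p)x/\hbar}
=i\hbar\,\frac{\partial}{\partial p}\,\delta(p-p'),
```

using $x\,e^{i(p'-p)x/\hbar}=i\hbar\,\partial_p\,e^{i(p'-p)x/\hbar}$. The Bloch-wave version in the question arises the same way, except that the $k$-derivative also hits the $k$-dependence of $u_{m',k'}$, which is what produces the extra $X_{m,m'}$ term.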
Something equivalent happens for the matrix element of momentum between two position eigenstates:
$$
⟨x|\hat p|x'⟩ = -i\hbar \frac{\partial}{\partial x} \delta(x-x').
$$
In other words, the behaviour you're seeing there is generic.
TITLE: Limits of complex numbers
QUESTION [0 upvotes]: "We say $z_n \rightarrow \infty$ if, for each positive number $M$ (no
matter how large), there is an integer $N$ such that $|z_n |>M$
whenever $n > N$; similary $\lim_{z\rightarrow z_0}f(z) =\infty$ means
that for each positive number $M$, there is a $\delta >0$ such that
$|f(z)|>M$ whenever $0<|z-z_0|<\delta$."
I don't understand what this means. I think it says that when the magnitude goes to infinity, so do the complex numbers, but what exactly does that mean?
REPLY [3 votes]: The precise definitions are given in what you wrote. If you want a more intuitive explanation, the first clause states that a sequence $\{z_n\}$ approaches $\infty$ if given any $M$, there is some index $N$ in the sequence such that every term after that point in the sequence is at least distance $M$ away from the origin in the complex plane.
The second clause explains that the notation $\lim_{z\to z_0}f(z)=\infty$ means that for any $M$, there exists a $\delta>0$ such that if $z$ is a point contained within the circle (besides possibly $z_0$) of radius $\delta$ around $z_0$, then the image $f(z)$ of $z$ is at least distance $M$ away from the origin.
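A concrete example for each clause may help. For the sequence version, take $z_n=n(1+i)$: given $M>0$, choose $N=\lceil M\rceil$; then for $n>N$,

```latex
|z_n| = n\sqrt{2} > n > N \ge M .
```

For the function version, take $f(z)=1/z$ and $z_0=0$: given $M>0$, choose $\delta=1/M$; then $0<|z|<\delta$ implies $|f(z)|=1/|z|>1/\delta=M$. In neither case do the real and imaginary parts need to approach anything individually; only the modulus has to grow without bound.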
TITLE: What is the approach to this problem?
QUESTION [0 upvotes]: I am trying to show that for every $z,w \in \mathbb{C}$ the following holds: $$
|1-z \bar{w}|^{2}+|z+w|^{2}=\left(1+|z|^{2}\right)\left(1+|w|^{2}\right)
$$
I am not sure if my approach here is valid. I notice that I can write $$z=a+bi$$ $$w=c+di$$ and then substitute this into the above expression, but this seems like too much algebra, and I think it is not the point of the problem. (I am not even sure whether that approach works.)
REPLY [1 votes]: The approach you're suggesting is right. And yes, it would involve some algebra. However, there is an identity you can use to solve it and the exercise seems to be an application of that identity. I'll provide that much and leave the rest to you. Use the fact that for any $z \in \mathbb{C}$, $z\bar{z}=|z|^2$.
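For what it's worth, here is a quick numerical sanity check (plain Python with random samples; purely illustrative, not a proof):

```python
import random

random.seed(0)

def residual(z, w):
    # |1 - z*conj(w)|^2 + |z + w|^2  minus  (1 + |z|^2)(1 + |w|^2)
    lhs = abs(1 - z * w.conjugate()) ** 2 + abs(z + w) ** 2
    rhs = (1 + abs(z) ** 2) * (1 + abs(w) ** 2)
    return abs(lhs - rhs)

def rand_c(scale=5.0):
    return complex(random.uniform(-scale, scale), random.uniform(-scale, scale))

max_err = max(residual(rand_c(), rand_c()) for _ in range(1000))
print(max_err)  # tiny: the identity holds up to floating-point round-off
```

Once you trust the identity numerically, expanding each squared modulus via $u\bar{u}=|u|^2$ as suggested above settles it in a few lines.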
\begin{document}
\maketitle
\section{Introduction}
Integrability in physical systems is an extremely powerful tool, often allowing one to extract exact results in very complicated systems.
One such example is maximally supersymmetric Yang-Mills theory in $4$D ($\lN=4$ SYM) whose spectral problem, due to integrability, has been reduced to a simple set of equations on a handful of Baxter Q-functions called Quantum Spectral Curve (QSC) \cite{Gromov:2013pga}.
At the same time, in integrable spin chains, the Q-functions serve as building blocks of the model's wave functions and correlation functions in a special
basis called
separation of variables (SoV) basis \cite{SklyaninFBA} leading one to believe that the same should be true in $\lN=4$ SYM.
Remarkably, certain three point correlation functions in $\lN=4$ SYM have indeed been shown to take an incredibly simple form when expressed in terms of
the QSC Q-functions \cite{Cavaglia:2018lxi,Giombi:2018qox,McGovern:2019sdd} and the resulting expressions are reminiscent of
correlation functions in integrable spin chains when expressed in separated variables. This observation has been one of the main driving factors in the development
of SoV methods for higher rank \cite{Gromov:2016itr,Maillet:2018bim,Ryan:2018fyo,Maillet:2018czd,Ryan:2020rfk,Maillet:2020ykb} integrable spin chains which was, until recently, only
applicable to the simplest rank one models with $\sl(2)$ symmetry, see \cite{Ryan:2022ybk} for a recent comprehensive review.
The operator-based SoV (OSoV) construction, going back to the original ideas of Sklyanin \cite{Sklyanin:1984sb}, has recently been supplemented with a \textit{functional} SoV (FSoV) construction \cite{Cavaglia:2019pow} allowing one to compute highly
non-trivial quantities such as scalar products and form factors directly in separated variables bypassing the explicit operator-based construction of the SoV bases. This makes the functional approach particularly attractive in settings where an explicit
construction of the SoV bases is complicated. Such examples include infinite-dimensional systems without a highest-weight state.
For instance, FSoV has been already used to compute non-perturbative correlators directly in $\lN=4$ SYM and its cousin 4D conformal fishnet theory\footnote{The operator-based SoV construction has also been used to compute Basso-Dixon correlators in $2D$ fishnet CFT \cite{Derkachov:2018rot} and has recently seen remarkable extensions to the $4$D setting \cite{Derkachov:2019tzo,Derkachov:2020zvv,Derkachov:2021ufp}. The crucial difference with our approach is that in those papers the SoV is applied in the ``mirror" channel. This simplifies the problem of finding the SoV basis dramatically. At the same time it requires one to consider each Feynman diagram separately rather than giving a resummed non-perturbative result.} \cite{Cavaglia:2018lxi,Cavaglia:2021mft}. In
particular, the functional SoV approach allows one to naturally compute a family of diagonal form factors $\langle
\Psi|\partial_p \hat{I}|\Psi\rangle$, where $p$ is some parameter of the model and $\hat{I}$ is an integral of motion, using standard quantum mechanical perturbation theory
arguments \cite{Cavaglia:2019pow,Gromov:2020fwh,Cavaglia:2021mft}. Interesting quantities which can be extracted using
this approach are the form-factors of various \textit{local} operators, including a family of non-trivial Feynman diagrams \cite{Cavaglia:2021mft} in conformal fishnet theory. Unfortunately, it was not initially clear how to advance beyond the computation of diagonal form-factors.
In this
paper we advance the study of correlators using the FSoV approach by tackling this issue with a novel {\it character projection} technique
and by identifying a set of $(N-1)\times (N+1)$
distinguished operators $\pr_{a,r}(u)$, which we call \textit{principal}. The operators $\pr_{a,r}(u)$
are polynomials in $u$, whose coefficients are certain linear combinations of elements of the spin chain monodromy matrices in anti-symmetric representations of $\sl(N)$.
In general, they do not commute with the integrals of motion, but, nevertheless,
with the help of the character projection method, we managed to compute their off-diagonal matrix elements in a simple determinant form in terms of the Q-functions.
Even more generally, we show that the same determinant form holds true for the
form factor $\langle \Psi_A|\pr_{a,r}(u)|\Psi_B\rangle$, where
$|\Psi_B\rangle$ and $\langle \Psi_A|$ are two general factorisable
states, which also includes off-shell Bethe states.
Generalising our results further, we show that the form-factors of certain anti-symmetric combinations of the principal operators also take a determinant form. Importantly, a particular case of such combinations is the SoV ${\bf B}(u)$ operator, which is at the heart of the operatorial SoV construction \cite{Sklyanin:1992sm,Smirnov2001,Gromov:2016itr,Gromov:2018cvh,Ryan:2018fyo,Ryan:2020rfk}.
The SoV ${\bf B}$ operator has long been known as a rather mysterious object having been initially obtained by Sklyanin \cite{Sklyanin:1992sm} in analogy with SoV for classical integrable models \cite{Sklyanin:1992eu}. In higher rank systems, its relation to quantum SoV was only recently understood in \cite{Gromov:2016itr} but the precise reason for its structure remained unclear, despite its many nice properties \cite{Ryan:2018fyo,Ryan:2020rfk}. In this paper we demonstrate that ${\bf B}$ naturally follows from the interplay between the FSoV construction and the approach presented in \cite{Maillet:2018bim,Ryan:2018fyo,Ryan:2020rfk} and we derive its explicit form directly. This closes an important conceptual gap in the existing literature.
Finally, we also compute the SoV representation of all the principal operators, which allows one to construct arbitrary combinations of these operators (not only anti-symmetric). In particular we show that those operators generate the complete set of the spin chain Yangian operators $T_{ij}(u)$.
Note that at least in the finite-dimensional case, this implies, via the ``quantum inverse transform'' \cite{Slavnov:2018kfx},
that we have access to all local symmetry generators
$\ee_{ij}^{(\alpha)}$ from which one can in turn build any physical observables in this system. We also believe this to be the case in general but we do not have a simple proof of this.
This paper is organised as follows. In section \ref{sec:review} we review basic aspects of $\sl(N)$ spin chains and elements of the operatorial SoV construction which we will use throughout the paper. In section \ref{sec3} we review the functional SoV method, which is the main tool used in this paper, and we use it to approach the computation of diagonal form-factors. In section \ref{sec:sl3disc} we tackle the computation of off-diagonal correlators for the so-called \textit{principal operators} and introduce the character projection trick in the simplest but highly non-trivial setting of $\sl(3)$ spin chains. In section \ref{sec5} we extend our construction to include correlators of multiple principal operators, and find the SoV $\bf B$ and $\bf C$ operators. The general $\sl(N)$ case is an almost trivial extension, performed in section \ref{sec:slnextension}, which demonstrates the power of our construction. Finally, in section \ref{sec7} we prove that the principal operators form a complete basis of the observables. Three appendices supplement the main text. Appendix \ref{app:alt} contains an alternative derivation of a key relation used in the main text. Appendix \ref{dict} contains a derivation of the matrix elements of principal operators in the SoV bases. Appendix \ref{deriveB} discusses how the FSoV method, together with the SoV bases built using the ideas of \cite{Maillet:2018bim}, allows one to deduce Sklyanin's SoV $B$ operator, which has long been a gap in the literature. We have attached an ancillary Mathematica file with the arXiv submission of this paper which computes the SoV matrix elements of the principal operators. The file is named \verb"CodeForFormfactorsInSoVbasis.nb".
\section{Lightning review of $\sl(N)$ spin chains and separation of variables}\label{sec:review}
Here we give a speedy review of the main notations and formulate our set-up
for the $\sl(N)$ spin chain.
\subsection{$\sl(N)$ spin chain and transfer matrices}\label{slnalg}
In order to keep our exposition short we only write the basic formulas we use throughout the paper with special emphasis on $\sl(3)$ spin chains. In this paper we consider mainly the same set-up as in \cite{Gromov:2020fwh}.
\paragraph{$\gl(N)$ algebra.}
To define the $\sl(N)$ spin chain we introduce $L$ copies of the $\gl(N)$ algebra, one per site of the spin chain, with each copy generated by $\ee_{ij}^{(\alpha)}$ subject to the commutation relations
\beq
\label{algebrasl}
[\mathbb{E}^\alpha_{ij}\, ,\mathbb{E}^\beta_{kl}]=\delta^{\alpha\beta}(\delta_{jk}\mathbb{E}^\alpha_{il}-\delta_{li}\mathbb{E}^\alpha_{kj})\,.
\eeq
We will consider a spin-$\bs$ highest-weight representation of this algebra with highest-weight state $|0\rangle$ satisfying
\begin{equation}
\begin{split}
& \mathbb{E}^\alpha_{ij} |0\rangle = 0,\quad i<j \\
& \mathbb{E}^\alpha_{ii} |0\rangle = \omega_{i}|0\rangle \,,
\end{split}
\end{equation}
where $\omega_1=-\bs$ and $\omega_i=+\bs$ for $i\geq 2$. This is the simplest non-compact representation which can be considered and we have chosen it for simplicity to illustrate our main results, but we believe all the main statements can be easily extended to more general representations.
One can build the representation space of one site $\alpha$ of the spin chain as polynomials of $N-1$ variables $z^\alpha_1, \dots z^\alpha_{N-1}$. The full representation space will just be the tensor product of the representations at each site, for a total of $L(N-1)$ degrees of freedom. The explicit form of the generators can be found in \cite{Gromov:2020fwh} for $N=2$ and $N=3$ and the general $N$ construction can be found for example in \cite{Derkachov:2006fw,antonenko2021gelfand}.
Using the $\gl(N)$ algebra we define a Lax operator acting on the $\alpha$-th site
\beq
\label{Laxop}
\mathcal{L}^{(\alpha)}_{ij}(u)=u\,\delta_{ij}+i\,\mathbb{E}^\alpha_{ji}\,.
\eeq
As usual, the $\alpha$-th Lax operator will act on the tensor product of a quantum space, i.e. the representation space of the $\alpha$-th site of the spin chain, and an auxiliary space $\mathbb{C}^N$. We can then build the monodromy matrix by taking a product in the auxiliary space of the Lax operators at every site
\begin{equation}\label{transfer11}
T_{ij}(u)=\sum_{k_1}\dots\sum_{k_{L-1}} \mathcal{L}_{i k_1}^{(1)}\left(u-\theta_{1}\right) \mathcal{L}_{k_{1} k_{2}}^{(2)}\left(u-\theta_{2}\right) \ldots \mathcal{L}_{k_{L-1} j}^{(L)}\left(u-\theta_{L}\right)
\end{equation}
where $\theta_\alpha$ denote the spin chain inhomogeneities, which we take to be real. The monodromy matrix satisfies the following commutation relations, known as the RTT relations
\begin{equation}\label{yangiangens}
-i(u-v)[T_{jk}(u),T_{lm}(v)] = T_{lk}(v)T_{jm}(u)-T_{lk}(u)T_{jm}(v) \,,
\end{equation}
which defines the Yangian algebra $\mathcal{Y}(\gl(N))$ with generators $T_{ij}(u)$.
The fundamental transfer matrix $\T(u)$ which generates integrals of motion is defined to be the trace of the monodromy matrix
\begin{equation}
\T(u)={\rm tr}\left(T(u)\right)=T_{11}(u) + \dots + T_{NN}(u)
\end{equation}
and satisfies $[\T(u),\T(v)]=0$. Taking the trace, however, will result in integrals of motion which have a degenerate spectrum (due to the preserved $\sl(N)$ symmetry). To lift the degeneracies we introduce a twist $G$ in the monodromy matrix and instead define
\begin{equation}
\bbT(u) = {\rm tr}(T(u)G)
\end{equation}
where $G$ is an $N\times N$ matrix. The twisting preserves the commutativity, $[\bbT(u),\bbT(v)]=0$, guaranteeing that the twisted transfer matrix continues to produce commuting integrals of motion.
For a diagonalisable $G$, without loss of generality we can choose $G$ to be diagonal with distinct eigenvalues $\lambda_j$, $j=1,\dots,N$. As we will see, for our purposes it is much more convenient, by adjusting the frame with a suitable $\sl(N)$ rotation, to choose $G$ to be the so-called companion matrix with entries
\begin{equation}\label{companion}
G_{ij}=(-1)^{j-1}\chi_j\delta_{i1}+\delta_{i,j+1}\,,
\end{equation}
where $\chi_j$ are the elementary symmetric polynomials in the eigenvalues $\lambda_i$
\begin{equation}
\prod_{j=1}^N (t+\lambda_j) =\displaystyle \sum_{r=0}^N t^{N-r}\chi_r\,.
\end{equation}
Note that $\chi_r$ are the characters of the anti-symmetric representations of $GL(N)$. The usefulness of this choice of twist in the SoV framework has now been extensively demonstrated \cite{Ryan:2018fyo,Ryan:2020rfk,Gromov:2020fwh}. Most notably, the separated variable bases producing factorised wave functions for the integrals of motion are independent of the twist eigenvalues in this frame. We will return to this point later.
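To make \eqref{companion} explicit, for $N=3$ the companion twist matrix reads
\begin{equation}
G=\begin{pmatrix}
\chi_1 & -\chi_2 & \chi_3\\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}\,,
\end{equation}
and a direct computation gives $\det\left(t\,\mathbb{1}-G\right)=t^3-\chi_1 t^2+\chi_2 t-\chi_3=\prod_{j=1}^3(t-\lambda_j)$, confirming that the companion matrix has the same eigenvalues $\lambda_j$ as the diagonal twist.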
From the definition of the Lax operator~\eqref{Laxop} we see that the transfer matrix is a polynomial in the spectral parameter $u$ of degree $L$. Therefore we can think of the coefficients of the powers of $u$ as integrals of motion (IoM). Since there are only $L$ independent IoMs in $\mathbb{T}$, we are still missing $L(N-2)$ IoMs to guarantee integrability\footnote{The Hilbert space of the spin chain is the space of polynomials in $L(N-1)$ variables.}. Hence we need to introduce additional transfer matrices in anti-symmetric representations of $\sl(N)$, denoted as $\mathbb{T}_{a}(u)$\footnote{Higher transfer matrices are usually denoted $\T_{a,s}(u)$ with $s=1$ for anti-symmetric representations. Since we will not use any other transfer matrices we simply denote $\T_{a,1}=\T_{a}$.} with $\T(u)=\T_1(u)$. These are easily obtained by the fusion procedure \cite{Zabrodin:1996vm} for the Lax operator, where we take anti-symmetric products of the fundamental one \eqref{Laxop} with shifts in the spectral parameter as:
\beq
\label{Laxfusion}
\mathcal{L}^{(a)}_{\bar{j}\bar{k}}(u)=\mathcal{L}^{j_{1}}{}_{[k_{1}}\left(u+i \frac{a-1}{2}\right) \mathcal{L}^{j_{2}}{}_{k_{2}}\left(u+i \frac{a-3}{2}\right) \ldots \mathcal{L}^{j_{a}}{}_{k_{a}]}\left(u-i \frac{a-1}{2}\right)\,,
\eeq
where $\bar{j}=\{j_1,\dots,j_a\}$ and $\bar{k}=\{k_1,\dots,k_a\}$ are multi-indices of the fused auxiliary space and the square brackets denote anti-symmetrisation over the indices $k_1,\dots,k_a$.
We also need to fuse the twist matrix in a similar way, obtaining:
\beq\label{fusedtwist}
G_{\bar{j}, \bar{k}}^{(a)}=G_{[k_{1}}^{j_{1}} G_{k_{2}}^{j_{2}} \ldots G_{k_{a}]}^{j_{a}}\,.
\eeq
Note that although only the lower indices are explicitly anti-symmetrised, the anti-symmetrisation is automatically present in the upper indices as well.
A feature of the fused twist matrix which will play a very important role later in the text is the following -- when $G$ is the companion twist matrix \eqref{companion} then all fused twists $G^{(a)}$ are linear in characters $\chi_r$, $r=0,\dots,N$ with $\chi_0=1$. Clearly this is true for $G$ itself but the fact that it holds for all $G^{(a)}$ is a bit surprising as these are degree $a$ polynomials in the entries of $G$. To prove this consider \eqref{companion}: the $\delta_{i1}$ ensures that $\chi_r$ can only appear in \eqref{fusedtwist} when an index in the set $j_1,\dots,j_a$ is equal to $1$. Any term which is at least quadratic in characters would require at least two such terms in this set to be $1$, but such a term would then vanish due to the anti-symmetry of these indices. It is also easy to verify that for any $a=1,\dots,N-1$ each character $\chi_r$ appears with non-zero coefficient in the twist matrix.
Finally, the transfer matrix in the antisymmetric representation $\mathbb{T}_{a}$ is obtained in the same way as the fundamental one:
\beq\label{highertransfer}
\mathbb{T}_{a}(u)=\sum_{\bar{b}, \bar{b}_{i}} \mathcal{L}_{\bar{b} \bar{b}_{1}}^{a(1)}\left(u-\theta_{1}\right) \mathcal{L}_{\bar{b}_{1} \bar{b}_{2}}^{a(2)}\left(u-\theta_{2}\right) \ldots \mathcal{L}_{\bar{b}_{L-1} \bar{b}_{L}}^{a(L)}\left(u-\theta_{L}\right) G_{\bar{b}_{L} \bar{b}}^{(a)}\,.
\eeq
The transfer matrices generate a mutually commuting family of integrals of motion
\begin{equation}
[\T_{a}(u),\T_b(v)]=0\,.
\end{equation}
Since $\mathcal{L}^{(a)}$ is a polynomial in $u$ of degree $a$, the corresponding transfer matrix will have degree $a L$. Since in principle we can have $a=1,\dots,N$, it looks like we might have too many IoMs. However, it turns out that there are some trivial prefactors of $u$ in $\mathbb{T}_{a}$ for $a>1$. For example, the so-called quantum determinant $\mathbb{T}_{N}$ is completely non-dynamical, i.e. it is proportional to the identity operator. By removing all such scalar prefactors the family of transfer matrices $\mathbb{T}_{a}$, $a={1,\dots,N-1}$ contains precisely $L(N-1)$ nontrivial IoMs, matching the number of degrees of freedom and guaranteeing integrability of our spin chain. We can introduce reduced transfer matrices $t_{a}(u)$, which are polynomials of degree $L$ related to the original transfer matrices as
\begin{equation}\label{transfertrivial}
\T_{a}(u) = t_{a} \left(u+\frac{i}{2}(a-1) \right)\prod_{k=1}^{a-1}Q_\theta^{[2\bs-2k+a-1]}(u)\,,
\end{equation}
where we introduced the polynomial $Q_\theta(u)=\prod_{\alpha=1}^L(u-\theta_\alpha)$ and use the standard notation for shifts of the spectral parameter
\begin{equation}
f^{[n]}(u):=f\left(u+\frac{i}{2}n\right),\quad f^\pm(u) =f(u\pm\tfrac{i}{2}), \quad f^{\pm\pm}(u) =f(u\pm i)\,.
\end{equation}
We can expand $t_{a}$ into a family of mutually commuting integrals of motion $\hat{I}_{a,\beta}$
\begin{equation}\label{integralsofmotion}
t_{a}(u) = \chi_a\, u^L+\displaystyle \sum_{\beta=1}^{L} u^{\beta-1}\hat{I}_{a,\beta}\,.
\end{equation}
We will define the right (left) eigenstates of the mutually commuting transfer matrices as $|\Psi\rangle$ ($\langle\Psi|$), and the corresponding eigenvalue of $t_{a}$ as $\tau_{a}$, so that:
\beq
t_{a}(u)|\Psi\rangle=\tau_a(u)|\Psi\rangle,\quad \langle\Psi| t_a(u)=\langle\Psi| \tau_a(u)\,.
\eeq
Similarly for the integrals of motion $\hat{I}_{a,\beta}$ we have
\beq
\hat{I}_{a,\beta}|\Psi\rangle=I_{a,\beta}|\Psi\rangle,\quad \langle\Psi| \hat{I}_{a,\beta}=I_{a,\beta}\langle\Psi|\;.
\eeq
\subsection{Principal operators}
\label{principalop}
A major goal in this paper will be to compute the matrix elements of (sums of) certain monodromy matrix entries between two transfer matrix eigenstates and their generalisation to arbitrary factorisable states. We will refer to these particular monodromy matrix entries as \textit{principal operators}.
The principal operators are defined as follows. As was demonstrated above, each of the fused companion twist matrices $G^{(a)}$ is linear in the characters $\chi_r$. As such, each of the transfer matrices $t_a(u)$ admits an expansion\footnote{In \LaTeX\ one can use the \textbackslash textpeso command to generate this symbol.}
\begin{equation}
\label{genfunc}
t_a(u) \equiv \displaystyle \sum_{r=0}^N \chi_r\, \pr_{a,r}(u)\,.
\end{equation}
We call the operators $\pr_{a,r}(u)$ principal and the reason for their importance will become clear in section \ref{sec:sl3disc}. Note that they are independent of the twist eigenvalues $\lambda_j$ as all twist dependence of the transfer matrices is contained in the characters $\chi_r$.
For example, the transfer matrix $t_1(u)$ can be expanded as
\begin{equation}
t_1(u) = \displaystyle \sum_{j=1}^{N-1}\chi_0 T_{j,j+1}(u) + \displaystyle \sum_{r=1}^{N} \chi_r (-1)^{r-1}T_{r1}(u)\,,
\end{equation}
where $\chi_0=1$.
Similar expansions can be performed for the higher transfer matrices $t_{a}(u)$. For this we need to introduce \textit{quantum minors} defined by
\begin{equation}
T\left[^{i_1\dots i_a}_{j_1\dots j_a}\right](u) = \displaystyle \sum_{\sigma} (-1)^{|\sigma|} T_{i_1 j_{\sigma(1)}}^{[a-1]}(u)\dots T_{i_a j_{\sigma(a)}}^{[-a+1]}(u)
\end{equation}
where the sum is over all elements $\sigma$ of the permutation group on $a$ indices.
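For instance, for $a=2$ the definition reduces to a $2\times 2$ quantum determinant with opposite shifts of the spectral parameter in the two rows,
\begin{equation}
T\left[^{i_1 i_2}_{j_1 j_2}\right](u) = T^{+}_{i_1 j_1}(u)\,T^{-}_{i_2 j_2}(u)-T^{+}_{i_1 j_2}(u)\,T^{-}_{i_2 j_1}(u)\,,
\end{equation}
which, after the shift $u\to u-\frac{i}{2}$, is precisely the type of combination $T_{ij}\,T^{--}_{kl}-T_{il}\,T^{--}_{kj}$ appearing in the $\sl(3)$ principal operators $\pr_{2,r}$ below.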
The transfer matrices $\T_a(u)$ in the $a$-th antisymmetric representation are then given by
\begin{equation}
\T_a(u) = \displaystyle \sum_{1\leq i_1 < \dots< i_a \leq N} T\left[^{i_1\dots i_a}_{j_1\dots j_a}\right](u)G_{j_1 i_1}\dots G_{j_a i_a}\,.
\end{equation}
As a result of the summation condition $1\leq i_1 < \dots < i_a \leq N$, the coefficient of each $\chi_r$ is a sum of quantum minors with distinct upper indices, which cannot cancel each other; hence the coefficient of each $\chi_r$ is non-zero as long as $1\leq a \leq N-1$.
While most principal operators are given by large sums over quantum minors, things simplify for $a=N-1$, as the monodromy matrix in the $(N-1)$-th anti-symmetric representation is simply equal to the quantum-inverse matrix of $T(u)$ divided by a trivial factor. We introduce the notation $T^{ij}$ for these operators, defined by
\begin{equation}\label{monupup}
T^{ij}(u)\, \prod_{k=1}^{N-1}Q_\theta^{[2(\bs-k)]}(u)= T\left[^{1\, \dots\, \hat{j}\,\dots\, N}_{1\, \dots\, \hat{i}\,\dots\, N} \right]\left(u-\frac{i}{2}(N-2) \right)
\end{equation}
where the notation $\hat{i}$, $\hat{j}$ means that the corresponding index is removed. It is then easy to derive
\begin{equation}
t_{N-1}(u) = \displaystyle \sum_{r=0}^{N-1}\chi_r\, T^{r+1,N}(u) - \chi_N \displaystyle \sum_{j=1}^{N-1} T^{j+1,j}(u)\,.
\end{equation}
We will now write out explicitly the principal operators in terms of the monodromy matrix elements $T_{ij}$ for the special cases of $\sl(2)$ and $\sl(3)$.
\paragraph{$\sl(2)$ case.}
In this case we have
\begin{equation}
t_1(u) = T_{12}(u) + \chi_1 T_{11}(u) - \chi_2 T_{21}(u)
\end{equation}
and hence
\begin{equation}\label{sl2principal}
\pr_{1,0}(u) =T_{12}(u),\quad \pr_{1,1}(u) =T_{11}(u),\quad \pr_{1,2}(u) = -T_{21}(u)\,.
\end{equation}
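This expansion can be verified directly from the companion twist \eqref{companion}: for $N=2$ one has
\begin{equation}
G=\begin{pmatrix}\chi_1 & -\chi_2\\ 1 & 0\end{pmatrix}\,,\qquad
{\rm tr}\left(T(u)G\right)=\sum_{i,j}T_{ij}(u)\,G_{ji}=T_{12}(u)+\chi_1 T_{11}(u)-\chi_2 T_{21}(u)\,,
\end{equation}
reproducing \eqref{sl2principal} term by term.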
\paragraph{$\sl(3)$ case.} For the special case of $\sl(3)$ there are only two non-trivial transfer matrices $t_1(u)$ and $t_2(u)$ which in the notations described above admit the expansions of table~\ref{table}, where $t_2$ is written both in terms of the original monodromy elements $T_{ij}$ and the elements $T^{ij}$ defined in~\eqref{monupup}
\renewcommand{\arraystretch}{1.7}
\begin{center}\label{table}
\begin{tabular}{| c | l | l |}
\hline
$\pr_{1,0}(u)=$ & $+T_{12}+T_{23} $ &\\
$\pr_{1,1}(u)=$ & $+T_{11} $ &\\
$\pr_{1,2}(u)=$ & $-T_{21} $ &\\
$\pr_{1,3}(u)=$ & $+T_{31} $ &\\
\hline
\hline
$\pr_{2,0}(u)=$
& $\left(T_{12}T_{23}^{--}-T_{13}T_{22}^{--}\right)/Q_\theta^{[2\bs-2]}$
& $+T^{13}/Q_\theta^{[2\bs-2]}$\\
$\pr_{2,1}(u)=$
& $\left(T_{11}T_{23}^{--}-T_{13}T_{21}^{--}\right)/Q_\theta^{[2\bs-2]}$
& $+T^{23}/Q_\theta^{[2\bs-2]}$ \\
$\pr_{2,2}(u)=$
& $\left(T_{11}T_{22}^{--}-T_{12}T_{21}^{--}\right)/Q_\theta^{[2\bs-2]}$& $+T^{33}/Q_\theta^{[2\bs-2]}$\\
$\pr_{2,3}(u)=$
& $
\left(-T_{11}T_{32}^{--}+T_{12}T_{31}^{--}-T_{21}T_{33}^{--}+T_{23}T_{31}^{--}\right)/Q_\theta^{[2\bs-2]}
$& $-(T^{21}+T^{32})/Q_\theta^{[2\bs-2]}$\\
\hline
\end{tabular}
\end{center}
Since the transfer matrices $t_a(u)$ admit the expansion \eqref{integralsofmotion} into integrals of motion $\hat{I}_{a,\alpha}$, it clearly follows that each $\hat{I}_{a,\alpha}$ also admits a linear expansion into characters $\chi_r$. We will denote the coefficients of the characters in this expansion $\hat I_{a,\alpha}^{(r)}$ and so
\beq\la{Ichi}
\hat I_{a,\alpha}=\sum_{r=0}^N
\chi_r\, \hat I_{a,\alpha}^{(r)}\;.
\eeq
Finally, since the transfer matrices commute for different values of the spectral parameters $[t_a(u),t_b(v)]=0$ we see that by expanding into principal operators we obtain the relation
\begin{equation}
\la{commP}
\sum_{r,s}\chi_r \chi_s [\pr_{a,r}(u),\pr_{b,s}(v)]=0\,.
\end{equation}
As this should hold for arbitrary twists $\lambda$ it is easy to see\footnote{For example one can change variables from $\lambda_i,\;i=1,\dots,N$ to $\chi_i,\;i=1,\dots,N$. The Jacobian of such transformation is simply a Vandermonde determinant of $\lambda$'s so this is always possible for generic $\lambda$'s.
After that \eq{commP} becomes a quadratic polynomial in the $N$ independent variables $\chi_i,\;i=1,\dots,N$ which is identically zero, which is only possible if all coefficients vanish.
}
that the above expression implies
$[\pr_{a,r}(u),\pr_{b,s}(v)]+[\pr_{a,s}(u),\pr_{b,r}(v)]=0$, which in particular gives $[\pr_{a,r}(u),\pr_{b,r}(v)]=0$; that is, principal operators corresponding to the same character index $r$ form a commutative family.
\subsection{Baxter equations}
The spectrum of the transfer matrices can be determined by means of the Baxter equations. These are finite-difference equations for functions denoted $Q_i$ and $Q^i$, $i=1,\dots,N$, called Q-functions.
The Baxter equations can be conveniently written as
\begin{equation}\la{bax}
\lO Q_i =0,\quad \lO^\dagger Q^i=0\;.
\end{equation}
We refer to $\lO$ as the Baxter operator and $\lO^\dagger$ as the dual Baxter operator.
They are finite-difference operators defined as
\begin{equation}\label{dualbax}
\lO^\dagger = \sum_{a=0}^N (-1)^a \tau_a(u)\lD^{N-2a}\;\;,\;\;
\lO = \sum_{a=0}^N (-1)^a \lD^{2a-N}\tau_a(u)\varepsilon(u)
\end{equation}
where $\lD$ is the shift operator satisfying $\lD\,f(u)=f(u+\frac{i}{2})$, $\tau_a,\,\,a=1,\dots,N-1$ are the eigenvalues of the reduced transfer matrices $t_a$ and we have denoted:
\begin{equation}
\tau_0(u) = Q_\theta^{[2\bs]},\quad \tau_N(u) = \chi_N Q_\theta^{[-2\bs]},\quad Q_{\theta}(u)=\prod_{\alpha=1}^L(u-\theta_{\alpha})\,,
\end{equation}
and finally $\varepsilon(u)$ is the function
\begin{equation}
\varepsilon(u) = \prod_{\beta=1}^L \frac{\Gamma(\bs-i(u-\theta_\beta))}{\Gamma(1-\bs-i(u-\theta_\beta))}\,.
\end{equation}
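As a simple illustration, for $N=2$ the dual Baxter equation $\lO^\dagger\, Q^j=0$ reduces to the familiar $TQ$-relation
\begin{equation}
Q_\theta^{[2\bs]}\,Q^{j\,[2]}-\tau_1\, Q^{j}+\chi_2\, Q_\theta^{[-2\bs]}\,Q^{j\,[-2]}=0\,,
\end{equation}
obtained by setting $N=2$ in \eqref{dualbax} and using the values of $\tau_0$ and $\tau_2$ above.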
The Q-functions can be characterised by their large-$u$ asymptotics which are related to the twist eigenvalues $\lambda_j$. We label the Q-functions so that their asymptotics read
\begin{equation}
Q_j \sim \lambda_j^{iu}u^{M_j},\quad Q^j \sim \lambda_j^{-iu}u^{M^j}
\end{equation}
where $M_j$ and $M^j$ are some real quantum numbers determined by the state in question. It is also possible to choose the Q-functions so that one of the Q-functions $Q_i$, which we take to be $Q_1$, and $N-1$ of the Q-functions $Q^i$, which we take to be $Q^{1+a}$, $a=1,\dots,N-1$, are \textit{twisted polynomials}, meaning that they have the structure
\begin{equation}
Q_1(u) = \lambda_1^{iu}q_1(u),\quad Q^{1+a}(u) = \lambda_{1+a}^{-iu} q^{1+a}(u)\,,
\end{equation}
where $q_1(u)$ and $q^{1+a}(u)$ are polynomials. Requiring that the equations \eq{bax} have twisted-polynomial solutions $Q_1$ and $Q^{2},\dots,Q^N$ constrains the possible values of the coefficients $\tau_a(u)$, which gives the spectrum of the IoMs for the spin chain in this approach \cite{Krichever:1996qd}.
\subsection{SoV bases and wave functions}
The SoV bases for the representations considered in this work were constructed in \cite{Gromov:2020fwh}. The left SoV basis $\bra{\svx}$ is obtained by diagonalising the $\bB$ operator \cite{Sklyanin:1992sm,Smirnov2001,Gromov:2016itr}
and the right SoV basis $|\svy\rangle$ is obtained by diagonalising the $\bC$ operator \cite{Gromov:2019wmz,Gromov:2020fwh}, see \cite{Gromov:2020fwh} for definitions of the $\bB$ and $\bC$ operators for the $\sl(N)$ case. For the specific case of $\sl(3)$ we have
\beqa
{\bf B}(u)&=&
-T_{11}(T_{11}^{--}T_{22}-T_{21}^{--}T_{12})-
T_{21}(T_{11}^{--}T_{23}-T_{21}^{--}T_{13})\;,\\
{\bf C}(u)&=&
-T_{11}(T_{11}T_{22}^{++}-T_{21}T_{12}^{++})-
T_{21}(T_{11}T_{23}^{++}-T_{21}T_{13}^{++})\;.
\eeqa
Using the RTT relations \eqref{yangiangens} it is possible to rewrite these expressions in a slightly different form:
\beqa\label{BandC}
{\bf B}(u)&=&
-T_{11}(T_{11}T_{22}^{--}-T_{12}T^{--}_{21})-
(T_{11}T_{23}^{--}-T_{13}T_{21}^{--})T_{21}\;,\\
{\bf C}(u)&=&
-T_{11}(T_{11}^{++}T_{22}-T^{++}_{12}T_{21})-
(T^{++}_{11}T_{23}-T^{++}_{13}T_{21})T_{21}\;.
\eeqa
This simple rewriting allows us to express the ${\bf B}$ and ${\bf C}$ operators in terms of the principal operators (after removing the trivial non-dynamical factor) in an ordering which will be convenient later
\beq\la{BCinP}
-\frac{{\bf B}(u)}{Q_\theta^{[2\bs-2]}}\equiv
{{\bf b}(u)}
=\pr_{1,1}\pr_{2,2}-\pr_{2,1}\pr_{1,2}\;\;,\;\;
-\frac{{\bf C}(u)}{Q_\theta^{[2\bs]}}
\equiv
{{\bf c}(u)}
=
\pr_{1,1}\pr^{++}_{2,2}-\pr^{++}_{2,1}\pr_{1,2}\;.
\eeq
The spectrum of $\bB(u)$ was first found in \cite{Gromov:2016itr}
and then generalised for general representations in \cite{Ryan:2018fyo}.
In our case we get
\beq
\langle \svx|{\bf b}(u) = \langle \svx|\prod_{\alpha=1}^L\prod_{a=1}^{N-1}(u-\svx_{\alpha,a})\;\;,\;\;
{\bf c}(u)|\svy\rangle = \prod_{\alpha=1}^L\prod_{a=1}^{N-1}(u-\svy_{\alpha,a})|\svy\rangle
\eeq
where each SoV basis element $\langle \svx|$ and $|\svy\rangle$ is parameterised by $L(N-1)$ numbers $\svx_{\alpha,a}$ and $\svy_{\alpha,a}$ respectively, with $\alpha=1,\dots, L$ and $a=1,\dots,N-1$, which are of the form
\begin{equation}\label{svxsvy}
\svx_{\alpha,a}=\theta_\alpha + i(\bs+n_{\alpha,a}),\quad \svy_{\alpha,a}=\theta_\alpha + i(\bs+m_{\alpha,a}+1-a)\,,
\end{equation}
where $n_{\alpha,a}$ and $m_{\alpha,a}$ are non-negative integers subject to the constraints $n_{\alpha,1}\geq \dots \geq n_{\alpha,N-1}\geq 0 $ and $m_{\alpha,1}\geq \dots \geq m_{\alpha,N-1}\geq 0$ with each possible configuration corresponding to a basis state. In the polynomial representation described above in section \ref{slnalg} the SoV ground states $\langle 0|$ and $|0\rangle$ (with all $n$'s or $m$'s being zero) can be shown to be constant polynomials.
It is convenient to fix their normalization to be
\beq\la{xy0}
\langle 0|=1\;\;,\;\;|0 \rangle = 1\;.
\eeq
\paragraph{SoV charge.}
A useful object proposed in \cite{Gromov:2019wmz} is the so-called SoV charge operator ${\bf N}$. It commutes with the ${\bf B}(u)$ and ${\bf C}(u)$ operators, is diagonalised in both SoV bases $|\svy\rangle$ and $\langle \svx|$, and counts the number of ``excitations'' above the SoV ground state. More precisely:
\begin{equation}\label{SoVcharge}
{\bf N}|\svy\rangle = \left(\displaystyle \sum_{\alpha,a}m_{\alpha,a}\right)|\svy\rangle ,\quad \langle\svx| {\bf N}= \langle\svx|\left(\displaystyle \sum_{\alpha,a}n_{\alpha,a}\right)\,.
\end{equation}
It can be obtained as the first non-trivial coefficient in the large $u$ expansion of $\bB(u)$ or $\bC(u)$.
\paragraph{Wave function factorisation.}
The $\langle \svx|$ basis factorises the wave functions $\Psi(\svx)$ of the right transfer matrix eigenstates $|\Psi\rangle$ whereas the basis $|\svy\rangle$ factorises the wave functions $\Psi(\svy)$ of the left transfer matrix eigenstates $\langle\Psi|$.
The right wave functions (i.e. eigenfunctions of the transfer matrices) are then given explicitly by
\begin{equation}\label{wave1}
\Psi(\svx):=\langle \svx|\Psi\rangle = \displaystyle \prod_{\alpha=1}^L \prod_{a=1}^{N-1} Q_1(\svx_{\alpha,a})
\end{equation}
and the left wave functions are given by
\begin{equation}\label{wave2}
\Psi(\svy):=\langle \Psi|\svy\rangle = \displaystyle \prod_{\alpha=1}^L \det_{1\leq a,b\leq N-1}Q^{a+1}\left(\svy_{\alpha,b}+\frac{i}{2}(N-2)\right)\;.
\end{equation}
For the above to be correct one should of course fix the normalisation of $\Psi$, for example by fixing $\langle 0|\Psi\rangle $ and $\langle \Psi|0\rangle $ in agreement with \eq{wave1} and \eq{wave2}. After that \eq{wave1} and \eq{wave2} stay true for any element of the SoV basis \cite{Gromov:2016itr,Ryan:2018fyo,Ryan:2020rfk,Gromov:2020fwh}.
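To illustrate the determinant structure of \eq{wave2}: for $N=2$ it degenerates to a single factor $Q^{2}(\svy_{\alpha,1})$ per site, mirroring \eq{wave1}, while for $N=3$ each site $\alpha$ contributes
\begin{equation}
\det_{1\leq a,b\leq 2}Q^{a+1}\!\left(\svy_{\alpha,b}+\tfrac{i}{2}\right)
=Q^{2}\left(\svy_{\alpha,1}+\tfrac{i}{2}\right)Q^{3}\left(\svy_{\alpha,2}+\tfrac{i}{2}\right)
-Q^{3}\left(\svy_{\alpha,1}+\tfrac{i}{2}\right)Q^{2}\left(\svy_{\alpha,2}+\tfrac{i}{2}\right)\,.
\end{equation}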
The scalar product between two states, normalised as described above, is then given by
\begin{equation}\label{SoVscalarprod}
\langle\Psi_A |\Psi_B\rangle = \displaystyle \sum_{\svx,\svy} \Psi_A(\svy)\mathcal{M}_{\svy,\svx}\Psi_B(\svx)\,.
\end{equation}
Here $\mathcal{M}_{\svy,\svx}$ is the measure in the SoV basis.
It can be also written in terms of the dual bases $|\svx\rangle$
and $\langle\svy|$, which are defined such that $\langle\svx|\svx'\rangle=\delta_{\svx,\svx'}$
and $\langle\svy|\svy'\rangle=\delta_{\svy,\svy'}$, as
the overlap $\mathcal{M}_{\svy,\svx} = \langle\svy|\svx\rangle$.
In general the overlaps $\langle\svy|\svx\rangle$ are not diagonal and so the matrix $\mathcal{M}_{\svy,\svx}$ could be potentially quite complicated. Nevertheless it is known explicitly from \cite{Gromov:2020fwh}, and we review its structure next.
\subsection{SoV measure}
The explicit form of the measure, worked out in \cite{Gromov:2020fwh}, is given by \footnote{There is a typo in \cite{Gromov:2020fwh} where the sign factor $s_{\bf L}$ does not appear. However, it is correctly included in the Mathematica code contained in that paper.}
\beq\label{measure}
\lM_{\svy,\svx}=
s_{\bf L}\sum_{k}
\left.
{\rm sign}(\sigma)
\(\prod_{a=1}^{N-1}
\frac{
\Delta_a
}{
\Delta_\theta
}\)
\prod_{\alpha=1}^L\prod_{a=1}^{N-1} \frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}}\right|_{\sigma_{\alpha,a}=k_{\alpha,a}-m_{\alpha,a}+a}.
\eeq
We will now summarise the notations we use, following \cite{Gromov:2020fwh}. $s_{\bf L}$ is a simple sign factor
\begin{equation}
s_{\bf L}=(-1)^{\frac{L}{4}(L-1)(N^2+N-2)}\,.
\end{equation}
$\sigma$ denotes a permutation of $L$ copies of the numbers $\{1,2,\dots,N-1\}$
\begin{equation}\la{ini}
\{\underbrace{1,\dots,1}_{L},\dots,\underbrace{N-1,\dots,N-1}_{L}\}
\end{equation}
with $\sigma_{\alpha,a}$ denoting the number at position $a+(\alpha-1)(N-1)$. $\sigma^0$ denotes the identity permutation on this set, so that $\sigma^0_{\alpha,a}=a$. The signature ${\rm sign}(\sigma)$ of the permutation is $\pm 1 $ depending on the parity of the number of elementary transpositions needed to bring the ordered set $u_{\sigma^{-1}(1)}\cup u_{\sigma^{-1}(2)}\cup\dots \cup
u_{\sigma^{-1}(N-1)}$ to the canonical order $u_{1,1},u_{1,2},\dots,u_{L,N-1}$, where $u_{\sigma^{-1}(a)}=\{u_{\alpha,b}: \sigma_{\alpha,b}=a\}$. Whereas ${\rm sign}(\sigma)$
could be ambiguous due to different possible orderings inside $\sigma^{-1}(a)$, the combination with the Vandermonde determinants $\Delta_b$ is well defined. There are $\frac{((N-1)L)!}{(L!)^{N-1}}$ such permutations $\sigma$, and if $\sigma$ is not of this form we define ${\rm sign}(\sigma)=0$.
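As a quick sanity check of this counting (an illustrative sketch, not part of the paper's Mathematica code; the helper names are ours), one can enumerate the distinct orderings of the multiset $\{1^L,\dots,(N-1)^L\}$ by brute force and compare with $\frac{((N-1)L)!}{(L!)^{N-1}}$:

```python
from itertools import permutations
from math import factorial

def count_multiset_perms(N, L):
    """Count distinct orderings of the multiset {1^L, 2^L, ..., (N-1)^L}
    by brute-force enumeration (feasible only for small N and L)."""
    base = tuple(a for a in range(1, N) for _ in range(L))
    return len(set(permutations(base)))

def formula(N, L):
    # ((N-1)L)! / (L!)^(N-1)
    return factorial((N - 1) * L) // factorial(L) ** (N - 1)

for N, L in [(2, 3), (3, 2), (4, 2)]:
    assert count_multiset_perms(N, L) == formula(N, L)
    print(N, L, formula(N, L))  # e.g. N=3, L=2 gives 6 orderings of {1,1,2,2}
```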
Since the SoV charge operator~\eqref{SoVcharge} commutes with both ${\bf b}(u)$ and ${\bf c}(u)$, $\lM_{\svy,\svx}$ is only non-zero if the states $\langle\svx|$ and $|\svy\rangle$ have the same SoV charge eigenvalue. Furthermore, $\lM_{\svy,\svx}$ is only non-zero if there exists a permutation $\sigma$ of the numbers \eq{ini} such that
\begin{equation}\label{measperms}
m_{\alpha,a}=n_{\alpha,a}-\sigma_{\alpha,a}+a
\end{equation}
for each $\alpha,a$. There can be distinct dual basis states $|\svx\rangle$ with the same values of $n_{\alpha,a}$, and hence multiple permutations satisfying \eqref{measperms}. We denote such inequivalent permutations (within each $\alpha$) by $k$, which we then sum over. The sum over $k$ is needed only in a limited number of cases; for example, in the $\sl(3)$ case only $k=n$ is possible.
In~\eqref{measure}, $\Delta_b$, which depends on $\sigma$, denotes the Vandermonde determinant constructed from all $\svx_{\alpha,a}$ for which $\sigma_{\alpha,a}=b$ and $\Delta_\theta$ denotes the Vandermonde determinant built from $\theta$'s
\begin{equation}
\Delta_\theta=\displaystyle \prod_{\alpha<\beta}(\theta_\alpha-\theta_\beta)\,.
\end{equation}
Finally, the function $r_{\alpha,n}$ is defined as
\beq \label{resmu}r_{\alpha, n}=-\frac{1}{2 \pi} \prod_{\beta=1}^{L}\left(n+1-i \theta_{\alpha}+i \theta_{\beta}\right)_{2 \mathbf{s}-1}\,,
\eeq
where $(z)_s=\frac{\Gamma(s+z)}{\Gamma(z)}$ is the Pochhammer symbol.
An explicit Mathematica implementation of the measure is provided in \cite{Gromov:2020fwh}. Although the scalar product can be expressed as the sum \eqref{SoVscalarprod} it is most conveniently expressed using the functional SoV (FSoV) formalism which we now review.
\section{Functional Separation of Variables method}\la{sec3}
In this section we review the key idea of the functional separation of variables method of~\cite{Cavaglia:2019pow}. We will then extend this method in section \ref{sec:sl3disc} by introducing the character projection tool.
\subsection{Functional orthogonality and scalar product}
The key relation in the functional SoV approach is the adjointness condition \cite{Cavaglia:2019pow,Gromov:2019wmz,Gromov:2020fwh}
\begin{equation}\label{adjointness}
\bl f \lO^\dagger g \br_\alpha = \bl g M_\alpha \lO f \br_\alpha \,,
\end{equation}
where the bracket $\bl f(w) \br_\alpha$ is defined by
\begin{equation}
\bl f(w) \br_\alpha = \int^\infty_{-\infty} {\rm d}w\,\mu_\alpha(w) f(w)\,,
\end{equation}
the measure factor $\mu_\alpha$ is given by \cite{Gromov:2020fwh}
\begin{equation}
\la{mualpha}
\mu_\alpha(w) = \frac{1}{1-e^{2\pi(w-\theta_\alpha-i\bs)}}\displaystyle \prod_{\beta=1}^L \frac{\Gamma(\bs-i(w-\theta_\beta))}{\Gamma(1-\bs-i(w-\theta_\beta))}\,,
\end{equation}
and $M_\alpha$ is a factor which does not depend on the functions $f$ and $g$ and is irrelevant for what follows. By appropriate redefinitions of the Baxter equations it is possible to set $M_\alpha=1$, but this is nontrivial in our current conventions.
We will be interested in particular in the case where the functions $f$ and $g$ are certain Q-functions $Q_1$ and $Q^2,\dots,Q^N$, or functions with similar analytic properties.
The way to compute these integrals is to close the contour in the upper half-plane and write them as a sum of residues. However, we need to ensure that the integrals actually converge and that the contour can be closed in this way without changing the result. To guarantee this, it is sufficient to impose the following constraints on the twists appearing in the Q-functions, as in \cite{Gromov:2020fwh}:
\begin{equation}
0<{\rm arg}\lambda_a-{\rm arg}\lambda_1<2\pi,\quad a=2,\dots,N\,.
\end{equation}
Once this is done,
we can replace the integral by the sum of the residues in the upper half-plane. Since the Q-functions are analytic everywhere, the only contribution comes from the simple poles of the measure factor \eqref{mualpha}. These poles are situated at $w=\theta_{\alpha}+i\bs+i n,\,n\in\mathbb{Z}_{\geq 0}$. As such we can write the bracket as an infinite sum over the residues at the poles of the measure:
\beq\la{brsum}
\bl
f(w)
\br_\alpha =\sum_{n=0}^\infty \frac{r_{\alpha,n}}{r_{\alpha,0}} f(\theta_{\alpha}+i\bs+i n)\;,
\eeq
with $r_{\alpha,n}$ being the residue of $\mu_{\alpha}$ at the pole $\theta_{\alpha}+i \bs+i n$:
\beq
r_{\alpha,n}=-\frac{1}{2\pi}\prod_{\beta=1}^L (n+1-i\theta_\alpha+i\theta_\beta)_{2\bs-1}\;,
\eeq
where $(z)_s=\frac{\Gamma (s+z)}{\Gamma (z)}$ denotes the Pochhammer symbol and we have included the overall normalisation $r_{\alpha,0}$ for convenience.
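The truncated residue sum is straightforward to evaluate numerically. Below is a minimal sketch (the helper names \texttt{pochhammer}, \texttt{residue} and \texttt{bracket} are ours, not from the Mathematica implementation of \cite{Gromov:2020fwh}), assuming $2\bs-1$ is a non-negative integer so that the Pochhammer symbol reduces to a finite product:

```python
from math import pi
from cmath import exp

def pochhammer(z, m):
    """(z)_m = z (z+1) ... (z+m-1) for non-negative integer m and complex z,
    equal to Gamma(z+m)/Gamma(z)."""
    out = 1 + 0j
    for k in range(m):
        out *= z + k
    return out

def residue(alpha, n, thetas, s):
    """r_{alpha,n}: residue of mu_alpha at w = theta_alpha + i*s + i*n,
    assuming 2s-1 is a non-negative integer."""
    m = round(2 * s - 1)
    prod = 1 + 0j
    for tb in thetas:
        prod *= pochhammer(n + 1 - 1j * thetas[alpha] + 1j * tb, m)
    return -prod / (2 * pi)

def bracket(f, alpha, thetas, s, nmax=100):
    """Truncation of the residue sum defining the bracket <<f>>_alpha."""
    r0 = residue(alpha, 0, thetas, s)
    return sum(residue(alpha, n, thetas, s) / r0
               * f(thetas[alpha] + 1j * s + 1j * n) for n in range(nmax))

# sanity check: for L=1, theta=0, s=1 one has r_{1,n} = -(n+1)/(2*pi)
assert abs(residue(0, 2, [0.0], 1) + 3 / (2 * pi)) < 1e-12
# example: bracket of a rapidly decaying test function
print(bracket(lambda w: exp(2j * pi * w), 0, [0.0], 1))
```

In practice the truncation order \texttt{nmax} must be chosen so that the decay of $f$ beats the polynomial growth of $r_{\alpha,n}/r_{\alpha,0}$.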
\subsection{Basic idea of Functional SoV}
To demonstrate the basic idea of the FSoV notice that the adjointness condition \eqref{adjointness} implies in particular
\begin{equation}
\bl f \lO^\dagger Q^{1+a}\br_\alpha=0= \bl Q_1 \lO^\dagger g\br_\alpha,\quad \alpha=1,\dots,L,\quad a=1,\dots,N-1
\end{equation}
and so if we pick $Q^{1+a}_A$ and $Q_1^B$ to be the Q-functions associated to two transfer matrix eigenstates $\ket{\Psi_A}$ and $\ket{\Psi_B}$ we have:
\begin{equation}
\bl Q_1^B(\lO^\dagger_A-\lO^\dagger_B)Q^{1+a}_A\br_\alpha=0,\, \alpha=1,\dots,L,\, a=1,\dots,N-1\,.
\end{equation}
Now if we insert the explicit form of $\lO^\dagger$ \eqref{dualbax} for the two states $A$ and $B$ we obtain the following system of equations:
\begin{equation}\label{linsystem}
\displaystyle \sum_{\beta=1}^{L}\sum_{b=1}^{N-1}\bl Q_1^B u^{\beta-1} Q^{1+a\, [N-2b]}_A\br_\alpha I^{AB}_{b,\beta}=0,\, \alpha=1,\dots,L,\, a=1,\dots,N-1
\end{equation}
where we have defined $I^{AB}_{b,\beta}=(-1)^{b}(I_{b,\beta}^A - I_{b,\beta}^B)$. Here $I^A_{b,\beta}$ ($I^B_{b,\beta}$) are the eigenvalues of the integrals of motion $\hat{I}_{b,\beta}$ (defined in \eqref{integralsofmotion}) evaluated on the state $|\Psi_A\rangle$ ($|\Psi_B\rangle$). All other terms of the Baxter operator cancel out since they do not depend on the state. Since the collection of integrals of motion $I_{b,\beta}$ has non-degenerate spectrum at least one of the differences $I^{AB}_{b,\beta}$ must be non-zero for the two distinct states and so in order for the linear system \eqref{linsystem} to have a non-trivial solution we must have\footnote{A row in this matrix is labelled by the pair $(a,\alpha)$ and a column is labelled by the pair $(b,\beta)$. The pairs of indices $(a,\alpha)$ and $(b,\beta)$ are ordered lexicographically.}
\begin{equation}\label{functionalorth}
\det_{(a,\alpha),(b,\beta)} \bl Q_1^B u^{\beta-1} Q^{1+a\, [N-2b]}_A\br_\alpha\ \propto\ \delta_{AB}\,.
\end{equation}
This is the functional orthogonality relation. It reproduces a crucial feature of the scalar product between two Bethe states, namely that it vanishes for two distinct states. In fact, it can be shown \cite{Gromov:2020fwh} to be exactly \textit{identical} to the scalar product \eqref{SoVscalarprod} by including a state-independent normalisation $\lN$ which should be chosen to ensure that $\mathcal{M}_{0,0}=1$ and so we have
\begin{equation}\label{funscalarprod}
\langle\Psi_A|\Psi_B\rangle = \frac{1}{\lN}\det_{(a,\alpha),(b,\beta)} \bl Q_1^B u^{\beta-1} Q^{1+a\, [N-2b]}_A\br_\alpha\,,
\end{equation}
where the normalisation factor $\lN$ is given by
\begin{equation}
\lN =\prod_{\alpha>\beta}(\theta_\alpha-\theta_\beta)^{N-1} = (-1)^{\frac{L}{2}(L-1)(N-1)}\Delta_\theta^{N-1}
\end{equation}
where $\Delta_\theta$ is the Vandermonde determinant
\begin{equation}
\Delta_\theta:=\prod_{\alpha<\beta}(\theta_\alpha-\theta_\beta)\,.
\end{equation}
\subsection{Scalar product between arbitrary factorisable states}
\label{scalarprod}
The functional orthogonality relation \eqref{functionalorth}, together with the orthogonality conditions for the vacuum state $\mathcal{M}_{0,\svx}=\delta_{0,\svx}$ and $\mathcal{M}_{\svy,0}=\delta_{\svy,0}$, allows one to completely determine all matrix elements $\mathcal{M}_{\svy,\svx}$ of the measure from the knowledge of the determinant form of the scalar product \eqref{funscalarprod}. In fact, by considering all possible pairs of different states $A$ and $B$, we obtain a system of linear equations for every matrix element. A rigorous counting can even be carried out in the infinite-dimensional case \cite{Gromov:2020fwh}.
As was noticed in \cite{Gromov:2020fwh}
the fact that the determinant \eqref{funscalarprod}
reproduces the sum \eqref{SoVscalarprod} is independent of whether or not the functions $Q_1$ and $Q^{1+a}$ actually solve the Baxter equation. As a result, we can consider any so-called {\it factorisable} states $|\Phi\rangle$ and $\langle \Theta|$ with wave functions
\begin{equation}\label{factorisable}
\Phi(\svx)= \displaystyle \prod_{\alpha=1}^L\prod_{a=1}^{N-1} F_\alpha(\svx_{\alpha,a}),\quad \Theta(\svy)= \displaystyle \prod_{\alpha=1}^L\displaystyle \det_{1\leq a,b\leq N-1} G_\alpha^{1+a}\left(\svy_{\alpha,b}+\frac{i}{2}(N-2)\right)\,,
\end{equation}
where $F_\alpha$ and $G_\alpha^{1+a}$ can be any functions (chosen such that the infinite sum over SoV states converges) and their scalar product will still be given by the determinant \eqref{funscalarprod}, where the bracket is understood as the sum over residues \eq{brsum}.
A useful and non-trivial example to consider is the case of the scalar product between eigenstates of two transfer matrices built with different twists. Concretely, we consider a family of transfer matrices $\T_{a}$ corresponding to \eqref{highertransfer} and another family of transfer matrices $\tilde{\T}_{a}$
with $G$ replaced by $\tilde{G}$, obtained by replacing the twist parameters $\lambda_i$ of $G$ with a new set $\tilde{\lambda}_i$. It was first demonstrated in \cite{Ryan:2018fyo} and further explored in \cite{Ryan:2020rfk,Gromov:2020fwh} that the SoV bases
are independent of the twist parameters $\lambda_j$ after appropriate normalisation. As a result, the same SoV bases serve to factorise the wave functions of transfer matrices built with \textit{any} twist matrix of the form \eqref{companion} such as $\tilde{G}$ and so we have
\begin{equation}\label{SoVscalarprodtwist}
\langle\Psi_A |\tilde{\Psi}_B\rangle= \displaystyle \sum_{\svx,\svy} \Psi_A(\svy)\mathcal{M}_{\svy,\svx}\tilde{\Psi}_B(\svx)\,,
\end{equation}
where we have denoted a right eigenstate of the transfer matrices $\tilde{\T}_a$ by $|\tilde{\Psi}_B\rangle$.
This means that we can easily compute scalar products between eigenstates of transfer matrices with different twists via determinants of Q-functions. In particular we get:
\begin{equation}\la{scalar}
\langle\Psi_A |\tilde{\Psi}_B\rangle=\frac{1}{\lN} \det_{(a,\alpha),(b,\beta)} \bl \tilde{Q}_1^B u^{\beta-1} Q^{1+a\, [N-2b]}_A\br_\alpha
\end{equation}
where $\tilde{Q}_1^B$ are the Q-functions associated to the state $|\tilde{\Psi}_B\rangle$ and the transfer matrices $\tilde{\T}_a(u)$.
\subsection{Correlators from variation of spin chain parameters}\la{sec:det}
The functional SoV approach allows one to extract a host of diagonal form-factors by varying the integrals of motion with respect to some parameter $p$ of the spin chain, such as the twists $\lambda_j$, the inhomogeneities $\theta_\alpha$, or even the local representation weights \cite{Cavaglia:2021mft}. The construction is based on standard quantum mechanical perturbation theory and we review it here for completeness.
The starting point is the trivial relation $\bl Q_1 \lO^\dagger Q^{1+a}\br=0$
with $Q^{1+a}$ being an on-shell Q-function, i.e. satisfying the dual Baxter equation $\lO^\dagger Q^{1+a}=0$.
This obviously remains true if we consider a variation $p\rightarrow p+\delta p$ of the parameter $p$ in $Q^{1+a}$ and $\lO$, resulting in $\bl Q_1 (\lO^\dagger +\delta_p \lO^\dagger) (Q^{1+a}+\delta_p Q^{1+a})\br=0$. Expanding to first order in $\delta p$, using the adjointness property
of $\lO^\dagger$
and also assuming that $\lO Q_1=0$ we obtain at the leading order in the perturbation
\begin{equation}
\bl Q_1 \partial_p\lO^\dagger Q^{1+a}\br_\alpha=0\,.
\end{equation}
By expanding out $\partial_p \lO^\dagger$ this relation allows one to obtain an inhomogeneous linear system for the derivatives $\partial_p I_{b,\beta}$ of the integral of motion eigenvalues $I_{b,\beta}$. As a result we have the relation, following from Cramer's rule,
\begin{equation}\la{ratio}
\frac{\langle\Psi|\partial_p \hat{I}_{b',\beta'}|\Psi\rangle}{\langle \Psi|\Psi\rangle} = \partial_p I_{b',\beta'} = \frac{\displaystyle \det_{(a,\alpha),(b,\beta)} m'_{(a,\alpha),(b,\beta)}}{\displaystyle \det_{(a,\alpha),(b,\beta)} m_{(a,\alpha),(b,\beta)}}\,,
\end{equation}
where $m_{(a,\alpha),(b,\beta)}=\bl Q_1 u^{\beta-1}\lD^{N-2b} Q^{1+a} \br_\alpha$ and $m'$ is obtained from $m$ by replacing the column $(b',\beta')$ with
$y_{(a, \alpha)} \equiv\bl Q_{1} \hat{Y}_{p} \circ Q^{1+a}\br_{\alpha}$,
where $\hat{Y}_p$ is the part of $\partial_p\lO^\dagger$ which does not depend on the integrals of motion, given by:
\beq
\hat{Y}_{p}=-\left(\partial_{p} Q_{\theta}^{[-2 \bs]} \mathcal{D}^{-N}+(-1)^{N} \partial_{p} Q_{\theta}^{[+2 \bs]} \mathcal{D}^{+N}\right)-\sum_{b=1}^{N-1}(-1)^{b+1} \partial_{p} \chi_{b} u^{L} \mathcal{D}^{-2 b+N}\,.
\eeq
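The ratio of determinants in \eq{ratio} is nothing but Cramer's rule. As a generic illustration of the column-replacement mechanics (a toy linear system with made-up entries, not the spin-chain data), one can sketch:

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion (fine for the small sizes used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(M, y, col):
    """Solve M x = y for the single component x[col] as det(M')/det(M),
    where M' is M with column `col` replaced by the inhomogeneity y --
    the same mechanics as eq. (ratio)."""
    Mp = [row[:] for row in M]
    for i in range(len(M)):
        Mp[i][col] = y[i]
    return det(Mp) / det(M)

M = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]
y = [Fraction(5), Fraction(10)]
x0, x1 = cramer(M, y, 0), cramer(M, y, 1)
assert [2 * x0 + x1, x0 + 3 * x1] == y
print(x0, x1)  # 1 3
```

In \eq{ratio} the matrix entries are the brackets $m_{(a,\alpha),(b,\beta)}$ and the inhomogeneity is $y_{(a,\alpha)}$ built from $\hat Y_p$.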
We introduce the short-hand notation for the determinants as follows
\beq
[o_{b,\beta}] \equiv
\det_{1\leq(a,\alpha),(b,\beta)\leq(N-1,L)}\bl \tilde Q_{1}^B\, o_{b,\beta}\, Q^A_{1,1+a}\br_\alpha\;,
\eeq
where $o_{b,\beta}$ is some finite difference operator. Since the l.h.s. makes no reference to the twists or to the indices $A$ and $B$ carried by the Q-functions, these should be inferred from context. In this notation the scalar product is given by
\begin{equation}
\langle \Psi_A | \tilde{\Psi}_B\rangle = \frac{1}{\lN} [w^{\beta-1} \lD^{3-2b}]\,.
\end{equation}
We will also use the replacement notation
\beqa
[(b',\beta')\to o]\,,
\eeqa
which corresponds to replacing $w^{\beta'-1}\lD^{3-2b'}$ in the determinant $[w^{\beta-1} \lD^{3-2b}]$ with the finite difference operator $o$. For instance the numerator of \eq{ratio} becomes
\beq
\displaystyle \det_{(a,\alpha),(b,\beta)} m'_{(a,\alpha),(b,\beta)} \equiv [(b',\beta')\to \hat Y]\;.
\eeq
Since the scalar product $\langle \Psi|\Psi\rangle$ in our normalisation is proportional to the denominator of the right hand side (see \eq{SoVscalarprodtwist}) we have
\begin{equation}\label{diagonalff}
\langle\Psi|\partial_p \hat{I}_{b',\beta'}|\Psi\rangle =\frac{1}{\lN}[(b',\beta')\to \hat Y]\,.
\end{equation}
It is appealing to assume then that the operator $\d_p \hat I_{b',\beta'}$ can be characterised by this particular modification of the structure of the determinant as compared to the identity operator given by \eq{SoVscalarprodtwist}.
One can also notice that for the identity operator in
\eq{SoVscalarprodtwist} we managed to obtain a more general relation with the left and right states corresponding to two different eigenvalues of the transfer matrix or, even more generally, to the transfer matrices with different twists.
It is thus very tempting to upgrade the relation \eq{diagonalff} by replacing $\langle \Psi|$ and $Q^{1+a}$ accordingly by those corresponding to a different state. Whereas this does give the correct result in some cases, as was noticed in \cite{Cavaglia:2018lxi}, in general this strategy unfortunately fails, as we verified explicitly for some small-length cases.
However, for the case when the parameters $p$ are the twist angles this naive approach gives the correct result as we prove in the next section where we also provide generalisations of this result.
At this point, however, it is easy to announce the main observation of the next section, which we prove rigorously there. Namely, we noticed that for the case $p=\lambda_a$
the equation \eq{diagonalff} survives a series of upgrades. Firstly, it works for two arbitrary left and right factorisable states. Secondly, and perhaps most surprisingly, it still works for multiple derivatives in the twist parameters:
\begin{equation}\label{diagonalff1}
\langle\Psi^A|\partial_{\lambda_{a_1}}\dots \partial_{\lambda_{a_k}} \hat{I}_{b',\beta'}|\Psi^B\rangle =\frac{1}{\lN}\[(b',\beta')\to -\sum_{b=1}^{N-1}(-1)^{b+1} \partial_{\lambda_{a_1}}\dots \partial_{\lambda_{a_k}} \chi_{b} u^{L} \mathcal{D}^{-2 b+N}\]\,.
\end{equation}
In the next section we will derive this identity using the character projection extension of the FSoV method.
We will also see more explicitly how the operators of the type \eq{diagonalff1} are related to the principal operators introduced earlier.
\section{Character projection}\label{sec:sl3disc}
In this section we extend the FSoV method, introduced in the previous section, in order to obtain form-factors of non-trivial operators between two arbitrary factorisable states.
We will use these results in the next section
to extract the matrix elements of a set of observables in the SoV bases in a similar way to the measure, which then allows us to efficiently compute the expectation values of a complete set of physical observables.
For simplicity, in this section we restrict ourselves to the $\sl(3)$ case.
\subsection{Derivation}\la{sec:der}
We start from the conjugate Baxter operator $\lO^\dagger$. A common notation for the Q-functions of the $\sl(3)$ case is $Q^{1+a}:=Q_{1,1+a}$ and we will use this notation here. $\lO^\dagger$ gives $0$ when applied to the $Q_{1,1+a}$ functions as they satisfy the Baxter equation \eq{bax}, which in the $\sl(3)$ case becomes:
\beq\la{dBax}
\lO^\dagger = Q_\theta^{[2\bs]}\cD^3
-
\tau_1 \cD^{1}
+
\tau_2 \cD^{-1}
-\chi_{3}Q_\theta^{[-2\bs]}\cD^{-3}\;\;,\;\;
\lO^\dagger Q_{1,1+a} = 0\;.
\eeq
This implies that for any $g$, chosen such that the integral in the scalar product is convergent, we have:
\beq\label{baxtereq2}
\bl g \lO_A^\dagger Q^A_{1,a+1}\br_\alpha = 0\;\;,\;\;\alpha=1,\dots,L\;\;,\;\;a=1,2\;.
\eeq
For definiteness we take $g=\tilde Q_1^B$, which is a Q-function
corresponding to a state of a transfer matrix with generic twist $\tilde\lambda_a$, different from that of the state $A$, which we denote as $\lambda_a$.
The corresponding characters are denoted as $\tilde \chi_r$
and $\chi_r$.
We consider the set of $2L$ equations in~\eqref{baxtereq2}
as equations on the $2L$ integrals of motion $I_{b,\beta}^A,\;b=1,2,\;\beta=1,\dots,L$, which are the non-trivial coefficients in $\tau_2(u)$ and $\tau_1(u)$. More explicitly we have
\beq\la{eqI}
\sum_{\beta,b}(-1)^{b} \bl \tilde Q_{1}^B u^{\beta-1}\lD^{3-2b} Q^A_{1,a+1}\br_\alpha
I^A_{b,\beta}
= -\sum_{r=0}^3\chi_r
\bl \tilde Q_{1}^B \lO_{(r)}^\dagger Q^A_{1,a+1}\br_\alpha\;,
\eeq
where we introduced the following notations for the non-dynamical terms in the dual Baxter equation \eq{dBax}:
\begin{equation}
\lO^\dagger_{(0)} = Q_\theta^{[2\bs]}\lD^3 \;\;,\;\;
\lO^\dagger_{(1)} = -u^L\lD \;\;,\;\;
\lO^\dagger_{(2)} = u^L\lD^{-1} \;\;,\;\;
\lO^\dagger_{(3)} = -Q_\theta^{[-2\bs]}\lD^{-3}
\;.
\end{equation}
The solution to \eq{eqI} can be written as a ratio of determinants. In the notations of
section~\ref{sec:det} we have
\beq\la{eqA}
I_{b',\beta'}^A=(-1)^{b'+1}
\sum_{r=0}^3\chi_r
\frac{
[(b',\beta')\to \lO^\dagger_{(r)} ]}{
[w^{\beta-1}\lD^{3-2b}]
}\;.
\eeq
At the same time, since $I_{b',\beta'}^A$ is the eigenvalue of the operator $\hat I_{b',\beta'}$
on the left eigenstate $\langle\Psi^A|$
we have
\beq\la{eqB}
I_{b',\beta'}^A=\frac{\langle \Psi^A|\hat I_{b',\beta'}|\tilde\Psi^B\rangle}
{\langle \Psi^A|\tilde \Psi^B\rangle}
={\cal N}\frac{\langle \Psi^A|\hat I_{b',\beta'}|\tilde \Psi^B\rangle}
{[w^{\beta-1}\lD^{3-2b}]
}
\eeq
where in the last identity we used the expression for the scalar product of two factorisable states \eq{scalar}. Comparing
\eq{eqA} and \eq{eqB} we
get
\beq\la{eqII}
\langle \Psi^A|\hat I_{b',\beta'}|\tilde \Psi^B\rangle =
\frac{(-1)^{b'+1}}{{\cal N}}
\sum_{r=0}^3\chi_r\;
[(b',\beta')\to \lO^\dagger_{(r)} ]\;.
\eeq
The next step, which we call {\it character projection} is quite crucial. As we discussed in
Section~\ref{principalop} the IoMs, as operators, depend non-trivially on the twist of the spin chain $\lambda_a$, but when expressed in terms of the characters this dependence is linear in $\chi_r$, see \eqref{Ichi}.
We also notice that the r.h.s. of~\eqref{eqII} has explicit linear dependence on $\chi_r$. However, notice that both sides of \eq{eqII} have an additional implicit dependence on the twists through the eigenstate $\langle \Psi^A|$ and the corresponding Q-function $Q^A_{1,1+a}$. In order to remove this dependence we use the result of section~\ref{scalarprod}, which states that
the determinants in the r.h.s. of \eq{eqII} can be written in the form
\beq\la{compA}
\frac{(-1)^{b'+1}}{{\cal N}}[(b',\beta')\to \lO^\dagger_{(r)} ]
=\sum_{\svx,\svy}\Psi^A(\svy) M^{{(r);b',\beta'}}_{\svy,\svx}
\tilde\Psi^B(\svx)
\eeq
which is analogous to \eq{SoVscalarprodtwist}, with $M^{{(r);b',\beta'}}_{\svy,\svx}$ being independent of the states $A$ and $B$.
In section~\ref{matrixelements} we compute the coefficients $M^{{(r);b',\beta'}}_{\svy,\svx}$ explicitly.
The expression \eq{compA} is obtained by
expanding the determinant and comparing the combinations of the Q-functions with those appearing in $\tilde\Psi^B(\svx) $ and $\Psi^A(\svy)$ as shown in \eq{wave1} and \eq{wave2}.
At the same time for the l.h.s. of \eq{eqII} we have
\beq\la{compB}
\langle \Psi^A|\hat I_{b',\beta'}|\tilde\Psi^B\rangle=
\sum_{\svx,\svy}
\langle \Psi^A|\svy\rangle\langle\svy|\hat I_{b',\beta'}|\svx\rangle\langle\svx|\tilde\Psi^B\rangle
\eeq
by using completeness of SoV bases. The operator $\hat I_{b',\beta'}$ can be decomposed into terms corresponding to different characters $\chi_r$ as $\hat I_{b',\beta'}=\sum_{r=0}^3 \chi_r \hat I_{b',\beta'}^{(r)}$, see \eq{Ichi}. By comparing
\eq{compA} and \eq{compB}
we get
\beq\la{PPxy}
\sum_{\svx,\svy}
\langle \Psi^A|\svy\rangle
\langle\svx|\tilde\Psi^B\rangle
\[\sum_{r=0}^3\chi_r\(
\langle\svx|\hat I_{b',\beta'}^{(r)}|\svy\rangle-M^{{(r);b',\beta'}}_{\svx,\svy}
\)\]=0\,.
\eeq
Note that the expression in the square brackets does not depend on the state $A$ and carries the information about the twist of this state only through the characters $\chi_r$. For simplicity, consider an arbitrary finite-dimensional case with a representation of dimension $D$ at each site.
Treating the expression in the square brackets as a collection of $D^L\times D^L$ unknowns, one for each pair $(\svx,\svy)$, we obtain a system of linear equations for these unknowns.
There are $D^L$ states $\langle\Psi^A|$ and $D^L$ states $|\tilde\Psi^B\rangle$, so we have as many equations as unknowns. Furthermore, the matrix $\langle\Psi^A|\svy\rangle\langle\svx|\tilde\Psi^B\rangle$
is the overlap matrix between the two complete bases $\langle \Psi^A|\otimes |\tilde\Psi^B\rangle$ and $\langle\svx|\otimes |\svy\rangle$ of the double copy of the initial Hilbert space $H\otimes H^\dagger$, and is thus non-degenerate.
In fact we have many more equations, since $|\tilde\Psi^B\rangle$
contains its own set of independent continuous twist parameters. As a consequence, the homogeneous linear system admits only the trivial solution, and the square bracket must vanish identically
\beq\label{roundbr}
\sum_{r=0}^3\chi_r\(
\langle\svx|\hat I_{b',\beta'}^{(r)}|\svy\rangle-M^{{(r);b',\beta'}}_{\svx,\svy}
\)=0\;.
\eeq
The above equation remains true also in the infinite-dimensional case, as argued in Appendix \ref{dict}, where the coefficients $M^{{(r);b',\beta'}}_{\svx,\svy}$ are explicitly computed.
Another way to arrive at \eq{roundbr}
from \eq{PPxy} is by multiplying the l.h.s. by $\langle \svy'|\Psi^A\rangle
\langle\tilde\Psi^B|\svx'\rangle
$ and summing over the complete bases of eigenstates $\Psi^A$ and $\tilde\Psi^B$. Due to the completeness relations we have $\sum_A\langle \svy'|\Psi^A\rangle \langle \Psi^A|\svy\rangle = \delta_{\svy\svy'}$, which removes the dependence on the wave functions and leads to \eq{roundbr}.
Next, the round bracket in \eqref{roundbr} does not depend on the twists,
and the only way the above identity stays true for arbitrary values of twists is if
\beq\la{Isov}
\langle\svx|\hat I_{b',\beta'}^{(r)}|\svy\rangle=M^{{(r);b',\beta'}}_{\svx,\svy}\;.
\eeq
Thus we obtain a set of $4\times 2\times L$ observables $\hat{I}_{a,\alpha}^{(r)}$ explicitly in the SoV basis, which are precisely the coefficients of the principal operators $\pr_{a,r}(u)$
\beq\la{eqpr}
\pr_{a,r}(u)=
\sum_{\beta=1}^{L} \hat I^{(r)}_{a,\beta}u^{\beta-1}+u^L\delta_{a,r}\;.
\eeq
In section~\ref{completeness}
we prove that this set of observables is complete and we will explicitly compute the SoV matrix elements for $\pr_{a,r}(u)$ in section~\ref{principalsovbasis}.
Finally, after obtaining the relations \eqref{Isov} for the individual operators in the SoV basis we can reverse the logic and multiply \eq{Isov} by $\sum_{\svx,\svy}
\langle \Psi^A|\svy\rangle
\langle\svx|\tilde\Psi^B\rangle
$ to obtain the {\it character projected} version of the equation \eq{eqII}
\beq\la{eqIIpr}
\boxed{\langle \Psi^A|\hat I^{(r)}_{b',\beta'}|\tilde \Psi^B\rangle =
\frac{(-1)^{b'+1}}{{\cal N}}
[(b',\beta')\to \lO^\dagger_{(r)}]}\;,
\eeq
which constitutes the main result of this section.
To summarise, we obtained a determinant form of form-factors of all operators $\hat{I}^{(r)}_{b,\beta}$ between two
arbitrary factorisable states.
It is easy to see that \eq{eqIIpr} is equivalent to \eq{diagonalff1}.
Before closing this subsection a comment is in order. A key step in our derivation relied on the denominator in \eqref{eqB} being non-zero. This is indeed the case as long as $|\tilde{\Psi}_B\rangle$ is not orthogonal to $\langle\Psi_A|$, which holds when $|\tilde{\Psi}_B\rangle$ is a generic factorisable state or when the twists in $|\tilde{\Psi}_B\rangle$ are independent of those in $\langle\Psi_A|$. The expressions \eqref{eqIIpr} for the form-factors are then valid for any choice of twists, or indeed any factorisable states. However, it is possible to recast the derivation in an alternative way which avoids this step completely, and we present it in Appendix \ref{app:alt}. Finally, the counting argument presented above relied on the representation being finite-dimensional. The results remain true when extended to the infinite-dimensional case, as discussed in Appendix~\ref{dict}.
\subsection{Form-factors for $\sl(3)$ principal operators}
\label{formfactors}
In the previous section we found the form-factors of the coefficients $\hat{I}_{a,\alpha}^{(r)}$ of the $u$-expansion of the principal operators $\pr_{a,r}(u)$. In this section we derive compact determinant expressions for the form-factors of $\pr_{a,r}(u)$ themselves as functions of the spectral parameter $u$.
We will use $w$ for the dummy spectral parameter appearing inside the determinants to avoid confusion with $u$ -- the argument of $\pr_{a,r}(u)$.
Let us start from $\pr_{1,1}(u)=T_{11}(u)$. From \eq{eqpr} we see this principal operator is a generating function for the set of operators $\hat I_{1,\alpha}^{(1)}$ with $\alpha=1,\dots,L$. From \eq{eqIIpr}
we thus have
\beqa\la{sumdet}
\langle \Psi^A|T_{11}(u)|\tilde{\Psi}^B\rangle &=&
u^L
\langle \Psi^A|\tilde{\Psi}^B\rangle
-\frac{1}{\cN}
\sum_{\beta'=1}^{L} u^{\beta'-1}
[(1,\beta')\rightarrow w^L \lD]\;.
\eeqa
This expression appears to be a sum over determinants. Let us show that it can be compressed into a single determinant.
Let us write the determinants in the sum \eq{sumdet} more explicitly by introducing the notation
\begin{equation}
[o_{b,\beta}] = [o_{1,1},\dots,o_{1,L},o_{2,1},\dots,o_{2,L}]\,,
\end{equation}
obtaining
\beqa
{\cal N}\langle \Psi^A|\pr_{1,1}(u)|\tilde{\Psi}^B\rangle&=&\\
\nn&-&{\color{red}u^0}[{\color{blue}w^L\lD},w\lD,w^2\lD,\dots,
w^{L-1}\lD,
\lD^{-1},w\lD^{-1},\dots,w^{L-1} \lD^{-1}]\\
\nn&-&{\color{red}u^1}[\lD,{\color{blue}w^L\lD},w^2\lD,\dots,
w^{L-1}\lD,
\lD^{-1},w\lD^{-1},\dots,w^{L-1} \lD^{-1}]\\
\nn&-&{\color{red}u^2}[\lD,w\lD,{\color{blue}w^L\lD},\dots,
w^{L-1}\lD,
\lD^{-1},w\lD^{-1},\dots,w^{L-1} \lD^{-1}]\\
\nn&\dots\\
\nn&+&{\color{red}u^L}[\lD,w\lD,w^2\lD,\dots,
w^{L-1}\lD,
\lD^{-1},w\lD^{-1},\dots,w^{L-1} \lD^{-1}]\,,
\eeqa
where in the last term we also wrote the overlap of the states in the determinant form \eq{scalar}. By a simple rearrangement of the columns we get
$
(-1)^{L}[\{(w^j-u^j)\lD\}_{j=1}^L,
\{w^{j-1}\lD^{-1}\}_{j=1}^L]
$
or equivalently
$
(-1)^{L}[\{(w-u) w^{j-1}\lD\}_{j=1}^L,
\{w^{j-1}\lD^{-1}\}_{j=1}^L]
$. Hence we arrive at the following expression as a single determinant
\beqa\label{T11}
\boxed{
\langle \Psi^A|\pr_{1,1}(u)|\tilde{\Psi}^B\rangle =
\frac{(-1)^{L}}{\cN}[\{(w-u)w^{j-1}\lD\}_{j=1}^L,
\{w^{j-1}\lD^{-1}\}_{j=1}^L]\;.
}
\eeqa
We will now introduce a very convenient shorthand notation. For ordered sets ${\bf u}_a$ and four integers $L_a$, $a=0,1,2,3$, we define the following object
\beqa\la{brf}
&&\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_\Psi=\frac{1}{\cal N}\times\\
\nn&&
\Big[\Big\{
\frac{\Delta_{{\bf u}_0\cup w}}{
\Delta_{{\bf u}_0}
} w^{j} \lD^{3}\Big\}_{j=0}^{L_0-1},
\Big\{
\frac{\Delta_{{\bf u}_1\cup w}}{
\Delta_{{\bf u}_1}
} w^{j} \lD^{1}\Big\}_{j=0}^{L_1-1},
\Big\{
\frac{\Delta_{{\bf u}_2\cup w}}{
\Delta_{{\bf u}_2}
} w^{j} \lD^{-1}\Big\}_{j=0}^{L_2-1},
\Big\{
\frac{\Delta_{{\bf u}_3\cup w}}{
\Delta_{{\bf u}_3}
} w^{j} \lD^{-3}\Big\}_{j=0}^{L_3-1}
\Big]\,,
\eeqa
where $\Delta_{\bf v}$ for some ordered set $\bf v$ is a Vandermonde determinant
\beq
\Delta_{\bf v} = \prod_{i<j} (v_i-v_j)
\eeq
and ${\bf v}\cup w$ means that we append the element $w$ at the end of the ordered set ${\bf v}$. Note that the ratio $\Delta_{{\bf v}\cup w}/\Delta_{\bf v}=\prod_{i}(v_i-w)$ is simply a polynomial in $w$ with roots at the points $v_i$.
For example equation \eq{T11} can be written as
\beqa\la{P11}
\langle \pr_{1,1}(u)\rangle =
\Big[ 0;\Big|L;u\Big |L;\Big |0;\Big]_\Psi\;.
\eeqa
Here and below we will systematically omit the states
$\Psi^A$ and $\tilde \Psi^B$ from the notation.
Note that the determinant in the r.h.s. of
\eq{P11} implicitly contains the Q-functions of the corresponding states.
Using a similar strategy to the above, we derived the following single-determinant expressions for the principal operators between two arbitrary factorisable states
\beqa
\bea{lcllllllll}
\langle \hat 1\rangle =
&\Big[&0;
&\Big|&L;
&\Big|&L;
&\Big|&0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{1,0}
(u)\rangle = -
&\Big[&
1;\theta-i\bs
&\Big|&L-1;
u
&\Big|&
L;
&\Big|&
0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{1,1}
(u)\rangle =
&\Big[&0;
&\Big|&
L;u
&\Big|&
L;
&\Big|&0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{1,2}
(u)\rangle = (-1)^L
&\Big[&0;
&\Big|&
L-1;u
&\Big|&
L+1;
&\Big|&0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{1,3}
(u)\rangle =
-&\Big[&0;
&\Big|&
L-1;u
&\Big|&
L;
&\Big|&
1;\theta+i\bs
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{2,0}
(u)\rangle = (-1)^L
&\Big[&
1;\theta-i\bs
&\Big|&
L;
&\Big|&
L-1;u
&\Big|&0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{2,1}
(u)\rangle =
(-1)^{L-1}
&\Big[&0;
&\Big|&
L+1;
&\Big|&
L-1;u
&\Big|&0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{2,2}
(u)\rangle =
&\Big[&0;
&\Big|&
L;
&\Big|&
L;u
&\Big|&0;
&\Big]_\Psi
\\ \vspace{1mm}
\langle \pr_{2,3}
(u)\rangle = (-1)^L
&\Big[&0;
&\Big|&
L;
&\Big|&
L-1;u
&\Big|&1;\theta+i\bs
&\Big]_\Psi
\eea
\eeqa
Here we have defined $\theta\pm i\bs:=\{\theta_1\pm i\bs,\dots,\theta_L\pm i\bs\}$. In the next section we will use these expressions to obtain the matrix elements in the SoV basis of the principal operators.
\subsection{Form-factors for $\sl(2)$ principal operators}
In order to compare with previous results in the literature
we also write form-factors for the principal operators in the case of the $\sl(2)$ spin chain in a form similar to those of the previous section.
We start from the $\sl(2)$ Baxter operator $\lO^\dagger=Q_{\theta}^{[2s]}\lD^2-\tau_1+\chi_2Q_{\theta}^{[-2s]}\lD^{-2}$. For the $\sl(2)$ spin chain, we only have the fundamental transfer matrix $t_1(u)$, so we only have the principal operators $\pr_{1,r}(u),\;r=0,1,2$. The notation \eq{brf} in the $\sl(2)$ case becomes
\beqa
&&\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big]_\Psi=\frac{1}{\cal N}\times\\
\nn&&
\Big[\Big\{
\frac{\Delta_{{\bf u}_0\cup w}}{
\Delta_{{\bf u}_0}
} w^{j} \lD^{2}\Big\}_{j=0}^{L_0-1},
\Big\{
\frac{\Delta_{{\bf u}_1\cup w}}{
\Delta_{{\bf u}_1}
} w^{j}\Big\}_{j=0}^{L_1-1},
\Big\{
\frac{\Delta_{{\bf u}_2\cup w}}{
\Delta_{{\bf u}_2}
} w^{j} \lD^{-2}\Big\}_{j=0}^{L_2-1}
\Big]\;.
\eeqa
Following exactly the same steps as for $\sl(3)$ we find that the matrix elements for the principal operators and the identity operator are given by
\renewcommand{\arraystretch}{1.7}
\beqa
\label{sl2insert}
\bea{lllllllllll}
\langle\hat 1\rangle=
& \Big[0;&\big|&L;&\big|&0; & \Big]_\Psi \\
\langle \pr_{1,0}(u)\rangle=
+\langle T_{12}(u)\rangle
=-& \Big[1;\theta-i \bs&\big|&L-1;u&\big|&0;& \Big]_\Psi \\
\langle\pr_{1,1}(u)\rangle =
+\langle T_{11}(u)\rangle =
& \Big[0;&\big|&L;u&\big|&0;&\Big]_\Psi \\
\langle\pr_{1,2}(u)\rangle =-\langle T_{21}(u)\rangle =(-1)^L&
\Big[0;&\big|&L-1;u&\big|&1;\theta+i \bs &\Big]_\Psi\;.
\eea
\eeqa
Here we used \eqref{sl2principal} to relate the principal operators to the elements of the monodromy matrix. From these equations it is already easy to see that $T_{11}(u)=\bB(u)$ is the SoV $\bf B$-operator which, acting on the factorised wave function, replaces $Q(w)\;\to\;(u-w)Q(w)$. We will analyse the action of the remaining operators on the SoV basis in the next section.
\subsection{Principal operators in SoV basis}\label{principalsovbasis}
The goal of this section is to convert the form factors we have derived in section~\ref{formfactors} to the SoV basis. The general strategy is simple: starting from a form factor $\langle \Psi^A|\hat{O}|\tilde{\Psi}^B\rangle$, for some operator $\hat{O}$, which we assume can be expressed as
\begin{equation}
\langle \Psi^A|\hat{O}|\tilde{\Psi}^B\rangle= \Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_\Psi
\end{equation}
we insert two resolutions of the identity $\sum_{\svx}|\svx\rangle\langle\svx|=\sum_{\svy}|\svy\rangle\langle\svy|=1$:
\beq
\label{eqformfactors}
\langle \Psi^A|\hat{O}|\tilde{\Psi}^B\rangle=\sum_{\svx,\svy}\langle\Psi^A|\svy\rangle\, \langle\svx|\tilde{\Psi}^B\rangle\,\langle\svy|\hat{O}|\svx\rangle\,.
\eeq
We then use~\eqref{wave1} and~\eqref{wave2} to write the r.h.s. in terms of Q-functions. Since the l.h.s. can be written in terms of determinants of Q-functions as proven in section~\ref{formfactors},
we can treat~\eqref{eqformfactors} as a linear system, where the unknowns are precisely the form factors in the SoV basis. It is then immediate to read off the matrix elements $\langle\svy|\hat{O}|\svx\rangle$.
It is straightforward to deduce a general formula, derived in Appendix~\ref{dict}, which reads
\begin{equation}\label{psixy}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_\Psi = \displaystyle\sum_{\svx\svy} \tilde{\Psi}_B(\svx)\Psi_A(\svy) \Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_{\svx\svy}
\end{equation}
where we have introduced the notation
\begin{equation}\label{xybracket}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_{\svx\svy}
:= \left.\frac{s_{\bf L}}{\Delta_{\theta}^2}\displaystyle \sum_{k} {\rm sign}(\sigma)\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \prod_b\frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{\sigma_{a,\alpha} = k_{a,\alpha}-m_{\alpha,a}+a}\,.
\end{equation}
The notation used here is identical to that used for the measure \eqref{measure}, the only difference being the sign factor $s_{\bf L}$, which for $\sl(N)$ is defined as
\begin{equation}
s_{\bf L} := (-1)^{\frac{LN}{4}(L-1)(N-1)+\sum_{n=0}^N \frac{L_n}{2}(L_n-1)}
\end{equation}
and now $\sigma$ in \eqref{xybracket} is a permutation of the set
\begin{equation}\label{sigmaset}
\{\underbrace{0,\dots,0}_{L_0}, \underbrace{1,\dots,1}_{L_1},\underbrace{2,\dots,2}_{L_2},\underbrace{3,\dots,3}_{L_3}\}
\end{equation}
and as before $\sigma_{\alpha,a}$ denotes the number in position $a+2(\alpha-1)$.
\paragraph{Selection rules}
One can show that the SoV charge operator \eqref{SoVcharge} imposes selection rules on the states $\langle\svy|$ and $|\svx\rangle$ for which the matrix elements $\langle\svy|\hat{O}|\svx\rangle$ can be non-zero. As we explain in Appendix \ref{dict}, the overlap can only be non-zero if there exists some permutation $\rho^\alpha$ of $\{1,2\}$ such that
\begin{equation}
m_{\alpha,a} =n_{\alpha,\rho^\alpha_a}-\sigma_{\alpha,\rho^{\alpha}_a}+a
\end{equation}
for some fixed $\sigma$. We now sum over all values of $(\alpha,a)$ and denote the SoV charge of the state $\langle\svy|$ ($|\svx\rangle$) by ${\bf N}_\svy$ (${\bf N}_\svx$). We obtain
\begin{equation}
{\bf N}_\svy-{\bf N}_\svx = 3L - \displaystyle\sum_{\alpha,a}\sigma_{\alpha,\rho^\alpha_a}\,.
\end{equation}
Since $\sigma$ is a permutation of \eqref{sigmaset} the sum $\displaystyle\sum_{\alpha,a}\sigma_{\alpha,\rho^\alpha_a}$ simply equates to $L_1+2L_2+3L_3$ and hence we see that $\langle\svy|\hat{O}|\svx\rangle$ is only non-zero if
\begin{equation}\label{selectionrule}
{\bf N}_\svy-{\bf N}_\svx =3L -\displaystyle\sum_{n=0}^3 n\, L_n\,.
\end{equation}
Notice that this reproduces the observation of \cite{Gromov:2020fwh} that the measure $\lM_{\svy\svx}=\langle\svy|\svx\rangle$ is only non-zero if ${\bf N}_\svx={\bf N}_{\svy}$. Indeed, for the measure we have $L_0=L_3=0$ and $L_1=L_2=L$. Plugging into \eqref{selectionrule} we immediately find ${\bf N}_\svx={\bf N}_{\svy}$.
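The selection rule \eqref{selectionrule} is easy to encode. The helper below (an illustrative Python sketch, with hypothetical names) verifies both the measure case just discussed and the explicit $L=2$ example worked out below in the text:

```python
def selection_rule_sl3(N_y, N_x, L, L_parts):
    """Check N_y - N_x = 3L - sum_n n*L_n for sl(3); L_parts = (L_0, L_1, L_2, L_3)."""
    return N_y - N_x == 3 * L - sum(n * Ln for n, Ln in enumerate(L_parts))

# the measure <y|x>: L_0 = L_3 = 0, L_1 = L_2 = L, so the rule forces N_y = N_x
assert selection_rule_sl3(4, 4, 2, (0, 2, 2, 0))
# the P_{1,0}(u) example: N_y = 5, N_x = 4, L = 2, (L_0, L_1, L_2, L_3) = (1, 1, 2, 0)
assert selection_rule_sl3(5, 4, 2, (1, 1, 2, 0))
```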
\subsubsection{$\sl(2)$ matrix elements}
Using the general formula \eqref{psixy} we will compute the SoV matrix elements of the $\sl(2)$ principal operators in order to make contact with existing results in the literature.
Modifying the notation \eqref{xybracket} to the case of $\sl(2)$ we define
\begin{equation}\label{sl2dict}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big]_\Psi= \displaystyle\sum_{\svx\svy} \tilde{\Psi}_B(\svx)\Psi_A(\svy) \Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big]_{\svx\svy}
\end{equation}
with
\begin{equation}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big]_{\svx\svy}= \left.\frac{s_{\bf L}}{\Delta_{\theta}}\displaystyle {\rm sign}(\sigma)\prod_{\alpha}\frac{r_{\alpha,n_{\alpha}}}{r_{\alpha,0}} \prod_b \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{\sigma_{\alpha} = n_{\alpha}-m_{\alpha}+1}
\end{equation}
where $\sigma$ is a permutation of the set
\begin{equation}
\{\underbrace{0,\dots,0}_{L_0}, \underbrace{1,\dots,1}_{L_1},\underbrace{2,\dots,2}_{L_2}\}
\end{equation}
with $\sigma_{\alpha}$ denoting the number at position $\alpha$. Notice that unlike in the higher rank case there is no sum over $k$ as only $k_\alpha=n_\alpha$ is possible.
We will now use this general formula to derive the SoV matrix elements of the $\sl(2)$ principal operators. We will begin with the operator $\pr_{1,1}(u)=T_{11}(u)$ for which we have
\begin{equation}
\langle \pr_{1,1}(u)\rangle = \Big[
0;
\Big|
L;u
\Big|
0;
\Big]_{\Psi}\,.
\end{equation}
In this case $\sigma$ is simply a permutation of $\{1,\dots,1\}$ and the only possibility is that it is the identity permutation with $\sigma_{\alpha}=1$. As a result we find that the non-zero matrix elements $\langle\svy|\pr_{1,1}(u)|\svx\rangle$ are given by
\begin{equation}
\langle\svy|\pr_{1,1}(u)|\svx\rangle = \left.\frac{1}{\Delta_\theta}\displaystyle \prod_{\alpha=1}^L (u-\svx_{\alpha})\prod_{\alpha>\beta}(\svx_\alpha-\svx_\beta)\prod_{\alpha=1}^L \frac{r_{\alpha,n_{\alpha}}}{r_{\alpha,0}}\right|_{m_{\alpha}=n_{\alpha}}\,.
\end{equation}
We then read off that \footnote{For $\sl(2)$ we see that the measure is diagonal and so $\langle\svy|\, \propto\, \langle \svx|$. We keep the notation $\langle\svy|$ in order to be consistent with higher rank.}
\begin{equation}
\langle\svy|\pr_{1,1}(u)|\svx\rangle = \prod_{\alpha=1}^L (u-\svx_\alpha) \langle\svy|\svx\rangle
\end{equation}
and hence the operator $\pr_{1,1}(u)=T_{11}(u)$ is diagonalised in the basis $|\svx\rangle$. This is not surprising as $T_{11}(u)$ coincides with Sklyanin's ${\bf B}$ operator when the twist is taken to be of the form \eqref{companion}. What is remarkable is that we \textit{derived} that this operator acts diagonally on the SoV basis directly from the FSoV construction. We will later see that this persists at higher rank.
Next we examine $\pr_{1,0}(u)=T_{12}(u)$ and have
\begin{equation}
\langle\pr_{1,0}(u)\rangle = -\Big[1;\theta-i \bs\big|L-1;u\big|0; \Big]_\Psi.
\end{equation}
Using the relation \eqref{sl2dict} we obtain
\begin{equation}
\Big[1;\theta-i \bs\big|L-1;u\big|0; \Big]_{\svx\svy}= \left.\frac{s_{\bf L}}{\Delta_{\theta}}\displaystyle {\rm sign}(\sigma) \frac{\Delta_{\theta-i\bs\cup \svx_{\sigma^{-1}(0)}}}{
\Delta_{\theta-i\bs}}\Delta_{u\cup \svx_{\sigma^{-1}(1)}}\prod_{\alpha}\frac{r_{\alpha,n_{\alpha}}}{r_{\alpha,0}}\right|_{\sigma_{\alpha} = n_{\alpha}-m_{\alpha}+1}
\end{equation}
where now $\sigma$ is a permutation of the set
\begin{equation}
\{0,1,\dots,1\}\,.
\end{equation}
We can characterise each $\sigma$ by the property $\sigma_\gamma=0$ for some $\gamma=1,\dots,L$ and there are $L$ such permutations. Hence, we obtain
\begin{equation}
\langle\svy|\pr_{1,0}(u)|\svx\rangle= \left. \displaystyle\displaystyle \frac{Q_\theta^{[2\bs]}(\svx_\gamma)}{\Delta_\theta}\prod_{\alpha\neq \gamma}\frac{u-\svx_\alpha}{\svx_\gamma-\svx_\alpha}\prod_{\alpha>\beta}(\svx_\alpha-\svx_\beta)\prod_{\alpha}\frac{r_{\alpha,n_{\alpha}}}{r_{\alpha,0}}\right|_{m_\gamma=n_\gamma-1, m_\alpha=n_\alpha}
\end{equation}
where we have used that $|\sigma|=\gamma-1$. The situation with $\pr_{1,2}(u)=-T_{21}(u)$ is identical. We have
\begin{equation}
\langle\svy|\pr_{1,2}(u)|\svx\rangle = (-1)^L
\Big[0;\big|L-1;u\big|1;\theta+i \bs \Big]_{\svx\svy}\,.
\end{equation}
Now, $\sigma$ is a permutation of
\begin{equation}
\{1,\dots,1,2\}
\end{equation}
Up to the fact that now $|\sigma|=L-\gamma$, the situation is identical to the previous case and we find
\begin{equation}
\langle\svy|\pr_{1,2}(u)|\svx\rangle =-\left. \displaystyle \displaystyle \frac{Q_\theta^{[-2\bs]}(\svx_\gamma)}{\Delta_\theta}\prod_{\alpha\neq \gamma}\frac{u-\svx_\alpha}{\svx_\gamma-\svx_\alpha}\prod_{\alpha>\beta}(\svx_\alpha-\svx_\beta)\prod_{\alpha}\frac{r_{\alpha,n_{\alpha}}}{r_{\alpha,0}}\right|_{m_\gamma=n_\gamma+1, m_\alpha=n_\alpha}
\end{equation}
which perfectly reproduces the well-known $\sl(2)$ results \cite{Sklyanin:1991ss}.
\subsubsection{$\sl(3)$ matrix elements - explicit example}
We now turn our attention to the matrix elements of the $\sl(3)$ principal operators. Since we have access to the general formula \eqref{psixy} we will not present the matrix elements $\langle\svy| \pr_{a,r}(u)|\svx\rangle$ for each principal operator explicitly. Instead we will demonstrate an explicit computation showing the formula \eqref{xybracket} being used in practice.
We consider an $\sl(3)$ spin chain of length $L=2$. The bases $\langle\svy|$ and $|\svx\rangle$ are labelled by non-negative integers $m_{\alpha,a}$ and $n_{\alpha,a}$ respectively, with $a,\alpha\in\{1,2\}$. Hence, we will use the notation
\begin{equation}
\langle\svy|:=\langle m_{1,1},m_{1,2};m_{2,1},m_{2,2}|,\quad |\svx\rangle = |n_{1,1},n_{1,2};n_{2,1},n_{2,2}\rangle\,.
\end{equation}
We will compute the following matrix element
\begin{equation}\label{overlapexample}
\langle 3,2;0,0 |\pr_{1,0}(u) |2,1;1,0\rangle\,.
\end{equation}
The starting point is the expression
\begin{equation}
\langle\Psi_A| \pr_{1,0}(u)|\tilde{\Psi}_B\rangle
= -\Big[
1;\theta-i\bs
\Big|L-1;
u
\Big|
L;
\Big|
0;
\Big]_\Psi\,.
\end{equation}
As a result of \eqref{psixy} we see that the SoV matrix elements are given by
\begin{equation}
\langle\svy|\pr_{1,0}(u)|\svx\rangle = -\Big[
1;\theta-i\bs
\Big|L-1;
u
\Big|
L;
\Big|
0;
\Big]_{\svy,\svx}\,.
\end{equation}
We will use the expression obtained in \eqref{xybracket} to explicitly compute \eqref{overlapexample}. Repeating it here for convenience, \eqref{xybracket} reads
\begin{equation}
\langle \svy|\hat{O}|\svx\rangle = s_{\bf L}\left.\displaystyle \sum_{k} \frac{(-1)^{|\sigma|}}{\Delta_\theta^2}\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \prod_b \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{\sigma_{\alpha,a} = k_{\alpha,a}-m_{\alpha,a}+a}\,.
\end{equation}
For the case at hand, we have $L=2$ and $L_0=L_1=1$, $L_2=2$ and $L_3=0$. Furthermore,
\begin{equation}
{\bf u}_0 = \theta-i\bs :=\{\theta_1-i\bs,\theta_2-i\bs\},\quad {\bf u}_1=\{u\}
\end{equation}
with both ${\bf u}_2$ and ${\bf u}_3$ empty.
First, in order to obtain a non-zero matrix element we need to check that the SoV charges of $\langle\svy|$ and $|\svx\rangle$ satisfy the SoV charge selection rule \eqref{selectionrule} which reads
\begin{equation}\label{selection2}
{\bf N}_\svy-{\bf N}_\svx =3L -\displaystyle\sum_{n=0}^3 n\, L_n
\end{equation}
with ${\bf N}_{\svx}=\sum_{\alpha,a}n_{\alpha,a}$ and ${\bf N}_{\svy}=\sum_{\alpha,a}m_{\alpha,a}$ and $L=2$. We have
\begin{equation}
{\bf N}_{\svx}=2+1+1 = 4,\quad {\bf N}_{\svy} = 3+2 = 5\,.
\end{equation}
For the operator $\pr_{1,0}(u)$ we have $L_0=1$, $L_1=1$, $L_2=2$ and $L_3=0$ and hence \eqref{selection2} is satisfied. As such, $\sigma$ in \eqref{xybracket} corresponds to a permutation of
\begin{equation}\label{sigmaset1}
\{0,1,2,2\}\,.
\end{equation}
We now need to construct permutations of the set $\{n_{1,1},n_{1,2},n_{2,1},n_{2,2}\}$ which only permute the $n_{\alpha,a}$ within each fixed $\alpha$. In general there are $4$ possible permutations which read
\begin{equation}
\begin{split}
& \{n_{1,1},n_{1,2},n_{2,1},n_{2,2}\},\quad \{n_{1,2},n_{1,1},n_{2,1},n_{2,2}\}, \\
& \{n_{1,1},n_{1,2},n_{2,2},n_{2,1}\},\quad \{n_{1,2},n_{1,1},n_{2,2},n_{2,1}\}
\end{split}
\end{equation}
but if there are degeneracies in $n_{\alpha,a}$ for fixed $\alpha$ there can be fewer permutations. In our case there are no degeneracies and we have the following permutations
\begin{equation}\label{kpermset}
\{2,1,1,0\},\quad \{1,2,1,0\},\quad \{2,1,0,1\},\quad\{1,2,0,1\}\,.
\end{equation}
The formula \eqref{xybracket} requires summing over all permutations in \eqref{kpermset} for which $\sigma_{\alpha,a} = k_{\alpha,a}-m_{\alpha,a}+a$ produces a valid permutation of \eqref{sigmaset1}. For each of the permutations in \eqref{kpermset} the corresponding $\sigma_{\alpha,a}$ are given by
\begin{equation}
\{0,1,2,2\},\quad \{-1,2,2,2\},\quad \{0,1,1,3\},\quad \{-1,2,1,3\}\,.
\end{equation}
Only the first set corresponds to a permutation of $\{0,1,2,2\}$, which has $|\sigma|=1$, and hence the only term in the sum over permutations of $n_{\alpha,a}$ for fixed $\alpha$ comes from $\{2,1,1,0\}$. Of course, in general there can be multiple such permutations which need to be taken into account.
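The filtering step just performed is purely combinatorial and easy to automate. The snippet below (an illustrative Python sketch) reproduces it for the example at hand: it enumerates the within-$\alpha$ permutations of the $n_{\alpha,a}$, builds $\sigma_{\alpha,a}=k_{\alpha,a}-m_{\alpha,a}+a$, and keeps only those giving a permutation of $\{0,1,2,2\}$.

```python
from itertools import permutations

m = {(1, 1): 3, (1, 2): 2, (2, 1): 0, (2, 2): 0}   # labels of <y|
n_by_alpha = {1: (2, 1), 2: (1, 0)}                # labels of |x>, grouped by alpha
target = sorted([0, 1, 2, 2])                      # admissible multiset for sigma

valid = []
for p1 in set(permutations(n_by_alpha[1])):        # permute n only within fixed alpha
    for p2 in set(permutations(n_by_alpha[2])):
        k = {(1, 1): p1[0], (1, 2): p1[1], (2, 1): p2[0], (2, 2): p2[1]}
        sigma = {key: k[key] - m[key] + key[1] for key in k}  # sigma = k - m + a
        if sorted(sigma.values()) == target:
            valid.append((k[(1, 1)], k[(1, 2)], k[(2, 1)], k[(2, 2)]))

assert valid == [(2, 1, 1, 0)]   # only one permutation survives
```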
From here, for this single $\sigma$, we can read off
\begin{equation}
\svx_{\sigma^{-1}(0)}=\svx_{1,1},\quad \svx_{\sigma^{-1}(1)}=\svx_{1,2},\quad \svx_{\sigma^{-1}(2)}=\{\svx_{2,1},\svx_{2,2}\}
\end{equation}
which results in
\begin{equation}
\prod_b \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}} =Q_\theta^{[2\bs]}(\svx_{1,1})(u-\svx_{1,2})(\svx_{2,1}-\svx_{2,2})\,.
\end{equation}
Finally we plug everything in, obtaining
\begin{equation}
\langle 3,2;0,0 |\pr_{1,0}(u) |2,1;1,0\rangle=-i(u-\theta_1-i(\bs+1))\frac{Q_\theta^{[2\bs]}(\theta_1+i(\bs+2))}{(\theta_1-\theta_2)^2}\frac{r_{1,2}}{r_{1,0}}\frac{r_{1,1}}{r_{1,0}}\frac{r_{2,1}}{r_{2,0}}\,,
\end{equation}
or more explicitly
{\small
\beqa
-\frac{8 \bs^3 (\bs+1) (2 \bs+1) \left(2 \bs-i \theta _{12}\right){}^2
\left(1-i \theta _{12}+2 \bs\right) \left(2-i \theta _{12}+2
\bs\right) \left(2 \bs+i \theta _{12}\right) \left(1-i \theta _{1}+\bs+i u\right)}{\theta _{12}^2 \left(i-\theta
_{12}\right) \left(i+\theta _{12}\right){}^2 \left(\theta
_{12}+2 i\right)}\;.
\eeqa
}
\normalsize
where we have defined $\theta_{12}=\theta_1-\theta_2$.
\section{Form-factors of Multiple Insertions}
\la{sec5} In the previous sections we derived various matrix elements of the principal operators. In this section we will extend this consideration to multiple insertions of the principal operators.
The most general case can be obtained by using the matrix elements in the SoV basis, however, this does not guarantee that the form-factor will have a simple determinant form. We consider this general case in section~\ref{matrixelements}. At the same time, for a large number of combinations of the principal operators we still managed to obtain determinant representations as we explain now.
\subsection{Antisymmetric combinations of principal operators}
The set-up in this section is similar to that of section~\ref{sec:der}. We consider the $\sl(3)$ case with two factorisable states $\langle\Psi^A|$ and $|\tilde\Psi^B\rangle$. In addition we assume that the state $\langle\Psi^A|$ is on-shell, meaning that it is an actual wave function of the spin chain which diagonalises the transfer matrix with twists $\lambda_a$.
Let us try to extend the previous method to general multiple insertions.
The starting point is again \eq{eqI}, which we write below for convenience
\beq\la{eqIcopy}
\sum_{b,\beta}(-1)^{b} \bl \tilde Q_{1}^B u^{\beta-1}\lD^{3-2b} Q^A_{1,a+1}\br_\alpha
I^A_{b,\beta}
= -\sum_{r=0}^3\chi_r
\bl \tilde Q_{1}^B \lO_{(r)}^\dagger Q^A_{1,a+1}\br_\alpha\;.
\eeq
We rewrite the above equation by modifying one term in the sum in the l.h.s. at $b,\beta=b'',\beta''$.
Namely, we replace
$\bl \tilde Q_{1}^B
(w^{\beta''-1}\lD^{3-2b''})
Q^A_{1,a+1}\br_\alpha$
by $\bl \tilde Q_{1}^B
{\cal O}_{(s)}^\dagger
Q^A_{1,a+1}\br_\alpha$. In order for the equality to hold we also have to change the r.h.s. accordingly
\beqa\la{rewriteEQ}
&&\sum_{\beta,b}(-1)^{b} \bl \tilde Q_{1}^B
\left.(w^{\beta-1}\lD^{3-2b})\right|_{w^{\beta''-1}\lD^{3-2b''}\to {\cal O}_{(s)}^\dagger}
Q^A_{1,a+1}\br_\alpha
I^A_{b,\beta}\\
\nn&&=
-\sum_{r=0}^3\chi_r
\bl \tilde Q_{1}^B \lO_{(r)}^\dagger Q^A_{1,a+1}\br_\alpha
+
(-1)^{b''} \bl \tilde Q_{1}^B
\[{\cal O}_{(s)}^\dagger-(w^{\beta''-1}\lD^{3-2b''}) \]
Q^A_{1,a+1}\br_\alpha
I^A_{b'',\beta''}
\;.
\eeqa
So far this is just an innocent rewriting.
Next we treat the r.h.s. as the inhomogeneous part of the linear system for $I_{b,\beta}^A$ and apply Cramer's rule. As we have two terms in the r.h.s. of \eq{rewriteEQ}, we obtain a sum of two ratios of determinants. As a result, for $(b',\beta')\neq (b'',\beta'')$ we have
\beqa
\label{rewriteEQ2}
I^A_{b',\beta'}&=&(-1)^{b'+1}\frac{
[(b'',\beta'')\to {\cal O}_{(s)}^\dagger,(b',\beta')\to \sum_r\chi_r {\cal O}_{(r)}^\dagger]
}{[(b'',\beta'')\to {\cal O}_{(s)}^\dagger]}\\
\nn&-&(-1)^{b'+b''}
I^A_{b'',\beta''}\frac{
[(b'',\beta'')\to {\cal O}_{(s)}^\dagger,(b',\beta')\to w^{\beta''-1}{\cal D}^{3-2b''}]
}{[(b'',\beta'')\to {\cal O}_{(s)}^\dagger]}\,.
\eeqa
Notice that the term with ${\cal O}^\dagger_{(s)}$ in the r.h.s. of \eq{rewriteEQ} disappears as it produces a zero determinant in the numerator. The last term in~\eqref{rewriteEQ2} can be simplified a bit as we first replace the $(b'',\beta'')$ column with ${\cal O}^\dagger_{(s)}$
and then insert into the column
$(b',\beta')$ the exact expression which was previously at the column $(b'',\beta'')$
\beqa
I^A_{b',\beta'}&=&(-1)^{b'+1}\frac{
[(b'',\beta'')\to {\cal O}_{(s)}^\dagger,(b',\beta')\to \sum_r\chi_r {\cal O}_{(r)}^\dagger]
}{[(b'',\beta'')\to {\cal O}_{(s)}^\dagger]}\\
\nn&+&(-1)^{b'+b''}
I^A_{b'',\beta''}\frac{
[(b',\beta')\to {\cal O}_{(s)}^\dagger]
}{[(b'',\beta'')\to {\cal O}_{(s)}^\dagger]}\,.
\eeqa
Next we use the previously derived \eq{eqIIpr},
which in the new notation becomes
$\left[(b',\beta')\to \lO^\dagger_{(r)}\right] =
{(-1)^{b'+1}}{{\cal N}}
\langle \Psi^A|\hat I^{(r)}_{b',\beta'}|\tilde \Psi^B\rangle$.
We get
\beqa
&&I^A_{b',\beta'}
\langle \Psi^A|\hat I^{(s)}_{b'',\beta''}|\tilde \Psi^B\rangle
-
I^A_{b'',\beta''}{
\langle \Psi^A|\hat I^{(s)}_{b',\beta'}|\tilde \Psi^B\rangle
}{
}\\ \nn
&&=\sum_r\chi_r \frac{(-1)^{b'+b''}}{{\cal N}}{
[(b'',\beta'')\to {\cal O}_{(s)}^\dagger,(b',\beta')\to {\cal O}_{(r)}^\dagger]
}\;.
\eeqa
Then we use that $I^A_{b',\beta'}
\langle \Psi^A|=
\langle \Psi^A|\hat I_{b',\beta'}
$ to plug the l.h.s. under one expectation value
\beqa
&&
\langle \Psi^A|\hat I_{b',\beta'}\hat I^{(s)}_{b'',\beta''}
-
\hat I_{b'',\beta''}
\hat I^{(s)}_{b',\beta'}|\tilde \Psi^B\rangle
=\sum_r\chi_r \frac{(-1)^{b'+b''}}{{\cal N}}{
[(b'',\beta'')\to {\cal O}_{(s)}^\dagger,(b',\beta')\to {\cal O}_{(r)}^\dagger]
}\;.
\eeqa
Finally, we apply the character projection trick to obtain
\beqa
&&
\boxed{
\langle \Psi^A|\hat I^{(r)}_{b',\beta'}\hat I^{(s)}_{b'',\beta''}
-
\hat I^{(r)}_{b'',\beta''}
\hat I^{(s)}_{b',\beta'}|\tilde \Psi^B\rangle
=\frac{(-1)^{b'+b''}}{{\cal N}}{
[(b'',\beta'')\to {\cal O}_{(s)}^\dagger,(b',\beta')\to {\cal O}_{(r)}^\dagger]
}}\;.
\eeqa
As before, once we have this expression we can remove the assumption that $\Psi^A$ is on-shell and replace it by a generic factorisable state following the same argument as in section~\ref{sec3}.
Finally, the derivation we outlined above can be iterated to get the following general expression for the multiple insertions of the principal operators antisymmetrised w.r.t. the multi-indices $(b,\beta)$
{\small\beqa\la{multinsertion}
&&
\boxed{
\langle \Psi^A|\hat I^{(s_1)}_{[b_1,\beta_1}\dots
\hat I^{(s_k)}_{b_k,\beta_k]}
|\tilde \Psi^B\rangle
=\frac{(-1)^{b_1+\dots+b_k+k}}{k!\;{\cal N}}{
[(b_1,\beta_1)\to {\cal O}_{(s_1)}^\dagger,\dots,
(b_k,\beta_k)\to {\cal O}_{(s_k)}^\dagger
]
}}\;.
\eeqa}
Note that the r.h.s. vanishes if any of the character indices $(s_i)$ coincide. Thus in order to get a nontrivial r.h.s. we can have at most $4$ antisymmetrised principal operators for the $\sl(3)$ case and $N+1$ for general $\sl(N)$.
The fact that the r.h.s. is antisymmetric in the character indices is also reflected on the l.h.s., where this is a consequence of the commutativity of transfer matrices. In fact, expanding the relation~\eqref{commP} in $u$ and $v$ we immediately get that $\hat{I}_{[b',\beta'}^{(r)}\hat{I}_{b'',\beta'']}^{(s)}=-\hat{I}_{[b',\beta'}^{(s)}\hat{I}_{b'',\beta'']}^{(r)}$. Since this can be done for any consecutive pair of character indices in the l.h.s. of~\eqref{multinsertion}, it follows that this quantity is completely antisymmetric in the character indices as a consequence of the RTT relations \eq{yangiangens}.
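The vanishing of the r.h.s. of \eq{multinsertion} for coinciding character indices is simply the elementary fact that a determinant with a repeated column vanishes, since the same column $\lO_{(s)}^\dagger$ would then be inserted twice. A one-line numerical reminder (illustrative Python):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
A[:, 4] = A[:, 1]   # two columns replaced by the same O^dagger_{(s)} insertion
assert abs(np.linalg.det(A)) < 1e-12
```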
Finally, like in section~\ref{formfactors} we can convert
the expression for the form-factor of the coefficients of the principal operators into the form-factor of the principal operators themselves. For example, we have:
\beqa\la{PPff}
{\small\bea{lcllllllll}
(-1)^L\dfrac{\langle \pr_{1,1}(u)
\pr_{1,2}(v)-
\pr_{1,1}(v)
\pr_{1,2}(u)
\rangle}{u-v} =
&\Big[&0;
&\Big|&L-1;u,v
&\Big|&
L+1;
&\Big|&0;
&\Big]_\Psi\\
\langle \pr_{1,1}(u)
\pr_{2,2}(v)-
\pr_{2,1}(v)
\pr_{1,2}(u)
\rangle =
&\Big[&0;
&\Big|&
L;u
&\Big|&
L;v
&\Big|&0;
&\Big]_\Psi\\
-\langle \pr_{1,0}(u)
\pr_{2,2}(v)-
\pr_{2,0}(v)
\pr_{1,2}(u)
\rangle =
&\Big[&
1;\theta-i\bs
&\Big|&
L-1;u
&\Big|&
L;v
&\Big|&0;
&\Big]_\Psi\\
(-1)^{L-1}\langle \pr_{1,0}(u)
\pr_{2,3}(v)-
\pr_{2,0}(v)
\pr_{1,3}(u)
\rangle =
&\Big[&
1;\theta-i\bs
&\Big|&
L-1;u
&\Big|&
L-1;v
&\Big|&
1;\theta+i\bs
&\Big]_\Psi\,.
\eea}
\eeqa
For a more complicated but nice-looking example of a triple insertion we get:
\beqa
\bea{lcccllllll}
\frac{\epsilon^{ijk}
\langle
\pr_{1,1}(u_{i})
\pr_{1,2}(u_{j})\pr_{1,3}(u_{k})
\rangle}{(u_1-u_2)(u_1-u_3)(u_2-u_3)} =(-1)^L
&\Big[&0;
&\Big|&
L-2;u_1,u_2,u_3
&\Big|&
L+1;
&\Big|&
1;\theta+i \bs
&\Big]_{\Psi}\,.
\eea
\eeqa
Notice that the second form-factor in \eq{PPff}
contains exactly the same combination that we found for the expressions for $\bB$ and $\bC$ operators in \eq{BCinP}! We will discuss the implications of this observation in section \ref{secBC}.
\subsection{Via Matrix elements in SoV basis}
\label{matrixelements}
In the above subsection we demonstrated how it is possible to write a large family of correlation functions with anti-symmetrised insertions of principal operators. However, this does not exhaust all possible correlators. On the other hand, we can in principle reduce the computation of correlators with any number of insertions to sums over products of form-factors with a single insertion by inserting a resolution of the identity over transfer matrix eigenstates. In practice this is not very useful as one would need to know the Q-functions for every state and not just those appearing in the wave functions.
This issue can be resolved by using the matrix elements of the principal operators in the SoV bases instead. Consider the double insertion
\begin{equation}
\langle\Psi_A|\pr_{a,r}(u)\pr_{b,s}(v)|\tilde{\Psi}_B\rangle\,.
\end{equation}
We now consider three resolutions of the identity
\begin{equation}
1 = \sum_{\svx} |\svx\rangle\langle\svx| = \sum_{\svy} |\svy\rangle\langle\svy| =\sum_{\svx,\svy} |\svx\rangle\langle\svy|(\lM^{-1})_{\svy,\svx}
\end{equation}
where $(\lM^{-1})_{\svy,\svx}$ denotes the components of the inverse SoV measure $\lM$ \eqref{measure} which appears in the resolution of the identity
\begin{equation}
1=\sum_{\svx,\svy} |\svy\rangle\langle\svx|\lM_{\svy,\svx}\,.
\end{equation}
We insert the three resolutions into the above correlator, obtaining
\beq
\langle\Psi_A| \pr_{a,r}(u) \pr_{b,s}(v)|\tilde{\Psi}_B\rangle = \displaystyle \sum_{\svx,\svx',\svy,\svy'}\Psi_A(\svy)\ \langle \svy|\pr_{a,r}(u)|\svx'\rangle\ \langle\svy'| \pr_{b,s}(v) |\svx\rangle\ (\lM^{-1})_{\svy',\svx'}\, \tilde{\Psi}_B(\svx)\,.
\eeq
At this point we see that the computation of multi-insertions becomes quite complicated. Indeed, for the rank-one $\sl(2)$ case the measure $\lM_{\svy,\svx}$ is diagonal, so the computation of the inverse measure $(\lM^{-1})_{\svy',\svx'}$ is trivial. For higher rank the measure is no longer diagonal and $(\lM^{-1})_{\svy',\svx'}$ needs to be computed. Nevertheless, this is possible since $\lM_{\svy,\svx}$ is explicitly known \eqref{measure} and furthermore $\lM_{\svy,\svx}$, in an appropriate ordering of $\svx$ and $\svy$, is an upper-triangular block-diagonal matrix where each block is finite-dimensional even in the case of non-compact $\sl(N)$ \cite{Gromov:2020fwh}.
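The mechanics of combining wave functions, single-insertion matrix elements and the inverse measure can be illustrated by a toy finite-dimensional analogue (generic dual bases in place of the actual SoV bases; illustrative Python, not the spin-chain construction itself):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
X = rng.normal(size=(d, d))    # columns: the kets |x>
W = rng.normal(size=(d, d))    # rows:    the bras <y|
Xd = np.linalg.inv(X)          # rows <x| dual to |x>:   sum_x |x><x| = 1
Wd = np.linalg.inv(W)          # columns |y> dual to <y|: sum_y |y><y| = 1
M = W @ X                      # "measure"  M_{y,x} = <y|x>
# mixed resolution of the identity built from the inverse measure
assert np.allclose(X @ np.linalg.inv(M) @ W, np.eye(d))

P, Q = rng.normal(size=(d, d)), rng.normal(size=(d, d))
A, B = rng.normal(size=d), rng.normal(size=d)
PsiA = A @ Wd                  # wave function A(y) = <A|y>
PsiB = Xd @ B                  # wave function B(x) = <x|B>
# double insertion assembled from single-insertion matrix elements <y|P|x>, <y|Q|x>
decomposed = PsiA @ (W @ P @ X) @ np.linalg.inv(M) @ (W @ Q @ X) @ PsiB
assert np.allclose(A @ P @ Q @ B, decomposed)
```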
\subsection{SoV ${\bf B}$ and ${\bf C}$ operators}\la{secBC}
In this section we will demonstrate that our results allow one to derive that the SoV ${\bf B}$ and ${\bf C}$ operators \eqref{BandC} are diagonalised in the SoV bases $|\svx\rangle$ and $\langle\svy|$ respectively. Structurally, the ${\bf B}$ and ${\bf C}$ operators are very similar. We recall the expressions \eqref{BCinP} which read
\begin{equation}
\begin{split}
& {\bf B}(u) =\pr_{1,1}(u)\pr_{2,2}(u)-\pr_{2,1}(u)\pr_{1,2}(u) \\
& {\bf C}(u) =\pr_{1,1}(u)\pr_{2,2}(u+i)-\pr_{2,1}(u+i)\pr_{1,2}(u)\,.
\end{split}
\end{equation}
Both of these expressions are special cases of the general double insertion $\pr_{1,1}(u)\pr_{2,2}(v)-\pr_{2,1}(v)\pr_{1,2}(u)$ appearing in \eqref{PPff}. We will denote this operator as $B(u,v)$, that is
\begin{equation}
B(u,v) = \pr_{1,1}(u)\pr_{2,2}(v)-\pr_{2,1}(v)\pr_{1,2}(u)\,.
\end{equation}
By using the relation \eqref{psixy} we can convert its matrix elements in the $\Psi$ basis in \eqref{PPff} to matrix elements in the $\svx,\svy$ basis. The result simply reads
\begin{equation}\label{Buvxy}
\langle \svy|B(u,v)|\svx\rangle = \left.\frac{s_{\bf L}}{\Delta_{\theta}^2}\displaystyle \sum_{k} {\rm sign}(\sigma)\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \Delta_{u\cup \svx_{\sigma^{-1}(1)}}\Delta_{v\cup \svx_{\sigma^{-1}(2)}}\right|_{\sigma_{a,\alpha} = k_{a,\alpha}-m_{\alpha,a}+a}
\end{equation}
where $\sigma$ is a permutation of
\begin{equation}
\{\underbrace{1,\dots,1}_{L},\underbrace{2,\dots,2}_{L} \}\,.
\end{equation}
We now examine the special cases $v=u$ and $v=u+i$, relevant for ${\bf B}$ and ${\bf C}$ respectively.
\paragraph{B operator}
The crucial point is that in \eqref{Buvxy} we have $\Delta_{u\cup \svx_{\sigma^{-1}(1)}}\Delta_{v\cup \svx_{\sigma^{-1}(2)}} = (u-\svx_{\sigma^{-1}(1)})(v-\svx_{\sigma^{-1}(2)})\Delta_1\Delta_2$ and hence, for $v=u$, we have
\beq
(u-\svx_{\sigma^{-1}(1)})(u-\svx_{\sigma^{-1}(2)})=\prod_{\alpha,a}(u-\svx_{\alpha,a})
\eeq
which is independent of $\sigma$. Hence, this factor can be pulled outside the sum over permutations and we obtain
\begin{equation}
\begin{split}
\langle \svy|B(u,u)|\svx\rangle & = \prod_{\alpha,a}(u-\svx_{\alpha,a})\left.\frac{s_{\bf L}}{\Delta_{\theta}^2}\displaystyle \sum_{k} {\rm sign}(\sigma)\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \Delta_1\Delta_2\right|_{\sigma_{a,\alpha} = k_{a,\alpha}-m_{\alpha,a}+a} \\ & = \prod_{\alpha,a}(u-\svx_{\alpha,a}) \langle\svy|\svx\rangle\,.
\end{split}
\end{equation}
Hence the operator ${\bf B}(u):=B(u,u)$ acts diagonally on $|\svx\rangle$ with eigenvalue $\prod_{\alpha,a}(u-\svx_{\alpha,a})$. This coincides precisely with the spectrum of Sklyanin's ${\bf B}(u)$ operator \cite{Gromov:2020fwh}.
\paragraph{C operator}
We will now show that ${\bf C}$ is diagonalised in the $|\svy\rangle$ basis in the same manner as we did for ${\bf B}$.
We start again from the expression:
\begin{equation}
\langle \svy|B(u,u+i)|\svx\rangle = \left.\frac{s_{\bf L}}{\Delta_{\theta}^2}\displaystyle \sum_{k} {\rm sign}(\sigma)\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \Delta_{u\cup \svx_{\sigma^{-1}(1)}}\Delta_{u+i\cup \svx_{\sigma^{-1}(2)}}\right|_{\sigma_{a,\alpha} = k_{a,\alpha}-m_{\alpha,a}+a}\,.
\end{equation}
We will now show that $\Delta_{u\cup \svx_{\sigma^{-1}(1)}}\Delta_{u+i\cup \svx_{\sigma^{-1}(2)}}=\prod_{\alpha,a}(u-\svy_{\alpha,a})\Delta_1\Delta_2$. We have
\begin{equation}
\Delta_{u\cup \svx_{\sigma^{-1}(1)}}\Delta_{u+i\cup \svx_{\sigma^{-1}(2)}} = (u-\svx_{\sigma^{-1}(1)})(u+i-\svx_{\sigma^{-1}(2)})\Delta_1\Delta_2\,.
\end{equation}
We now examine the factor $(u-\svx_{\sigma^{-1}(1)})(u+i-\svx_{\sigma^{-1}(2)})$ which can be rewritten as
\begin{equation}
\prod_{\alpha,a:\sigma_{a,\alpha}=1}(u-\svx_{\alpha,a})\prod_{\alpha,a:\sigma_{a,\alpha}=2}(u+i-\svx_{\alpha,a})\,.
\end{equation}
Next, we use that $\svx_{\alpha,a}=\theta_\alpha+i(\bs+n_{\alpha,a})$ and $\svy_{\alpha,a}=\theta_\alpha+i(\bs+m_{\alpha,a}+1-a)$ with $n_{\alpha,a}=m_{\alpha,a}+\sigma_{a,\alpha}-a$ to obtain
\begin{equation}
\prod_{\alpha,a:\sigma_{a,\alpha}=1}(u-\svx_{\alpha,a})\prod_{\alpha,a:\sigma_{a,\alpha}=2}(u+i-\svx_{\alpha,a})=\prod_{\alpha,a}\big(u-\theta_\alpha-i(\bs +m_{\alpha,a}+1-a)\big)\,.
\end{equation}
The final expression coincides with $\prod_{\alpha,a}(u-\svy_{\alpha,a})$ which is independent of $\sigma$. Hence we obtain
\beq
\langle \svy|{\bf C}(u)|\svx\rangle:=\langle \svy|B(u,u+i)|\svx\rangle=\prod_{\alpha,a}(u-\svy_{\alpha,a})\langle\svy|\svx\rangle\,,
\eeq
meaning that the operator ${\bf C}(u)$ acts diagonally on the $\langle\svy|$ basis with eigenvalue $\prod_{\alpha,a} (u-\svy_{\alpha,a})$.
\section{Extension to $\sl(N)$ spin chains}\label{sec:slnextension}
In this section we extend our results to the $\sl(N)$ case. The construction is a simple generalisation of the previous sections, where we focused mainly on the $\sl(2)$ and $\sl(3)$ cases.
We will briefly go through the main steps of the derivations.
\subsection{Determinant representation of form-factors}
We start again from the dual Baxter operator
\beq
\lO^\dagger_A = \sum_{a=0}^N(-1)^a \tau^A_a(u) \lD^{N-2a},\quad \lO^\dagger_A Q_A^{1+a}=0\,.
\eeq
Now we consider the usual trivial identity, where $\lO^\dagger_A$ is applied to $Q^{1+a}_A$:
\beq
\bl Q_1^B\lO_A^\dagger Q^{1+a}_A\br_{\alpha}=0\;\;, \quad a=1,\dots,N-1,\,\,\, \alpha=1,\dots,L
\eeq
We first expand the Baxter operator and the eigenvalues of the transfer matrices $\tau_a^A$ in powers of the spectral parameter $u$, obtaining
\beq
\sum_{b,\beta}(-1)^{b} \bl Q_{1}^B u^{\beta-1}\lD^{N-2b} Q^{1+a}_A\br_\alpha
I^A_{b,\beta}
= -\sum_{r=0}^N\chi^A_r
\bl Q_{1}^B \lO_{(r)}^\dagger Q^{1+a}_A\br_\alpha\;,
\eeq
where we have defined
\begin{equation}
\lO^\dagger_{(0)} = Q_\theta^{[2\bs]}\lD^N \;\;,\;\;
\lO^\dagger_{(r)} = (-1)^{r} u^L\lD^{N-2r},\,\,r=1,\dots,N-1 \;\;,\;\;
\lO^\dagger_{(N)} =(-1)^{N} Q_\theta^{[-2\bs]}\lD^{-N}
\;.
\end{equation}
Using Cramer's rule, we can compute the matrix elements of the integrals of motion exactly as in the $\sl(3)$ case, leading to
\begin{equation}
I_{b',\beta'} =(-1)^{b'+1}\frac{[(b',\beta')\rightarrow \sum_{r=0}^N \chi_r\, \lO^\dagger_{(r)}]}{[w^{\beta-1}\lD^{N-2b}]}\,.
\end{equation}
Since $\langle \Psi_A|$ is an eigenvector of $\hat{I}_{b,\beta}$ with eigenvalue $I_{b,\beta}$ we can rewrite the above as
\begin{equation}
\langle\Psi_A|\hat{I}_{b',\beta'}|\tilde{\Psi}_B\rangle = \frac{(-1)^{b'+1}}{\lN}\frac{[(b',\beta')\rightarrow \sum_{r=0}^N \chi_r\, \lO^\dagger_{(r)}]}{[w^{\beta-1}\lD^{N-2b}]}\,.
\end{equation}
The principal operator coefficients $\hat{I}^{(r)}_{b,\beta}$ are then introduced via the expansion into characters of the integrals of motion $\hat{I}_{b,\beta}$
\begin{equation}
\hat{I}_{b',\beta'} = \displaystyle\sum_{r=0}^N \chi_r \hat{I}_{b',\beta'}^{(r)}\,.
\end{equation}
Performing character projection we then obtain the form-factors
\begin{equation}
\langle \Psi_A|\hat{I}^{(r)}_{b',\beta'}|\tilde{\Psi}_B\rangle =
\frac{(-1)^{b'+1}}{\lN}[(b',\beta')\rightarrow \lO^\dagger_{(r)}]\,.
\end{equation}
We see that this relation is identical to that of the $\sl(3)$ case \eq{eqIIpr}.
In the same way as in $\sl(3)$ we can assemble the operators $\hat{I}_{b,\beta}^{(r)}$ into the generating functions $\pr_{b,r}(u)$ -- the principal operators. The form-factor of the generating function $\pr_{b',r}(u)$ defined by \eqref{genfunc} is then given by
\begin{equation}
\langle \Psi_A|\pr_{b',r}(u)|\tilde{\Psi}_B\rangle=\delta_{b'r}u^L [w^{\beta-1}\lD^{N-2b}] + \displaystyle \sum_{\beta'=1}^L(-1)^{b'+1} u^{\beta'-1}[(b',\beta')\rightarrow \lO^\dagger_{(r)}]\,.
\end{equation}
This result can be easily recast in determinant form using the same arguments as in the $\sl(3)$ case. We introduce the notation:
\beqa\la{brfslN}
&&\Big[
L_0;{\bf u}_0
\Big|\dots
\Big|
L_N;{\bf u}_N
\Big]_\Psi=\frac{1}{\cal N}\times\\
\nn&&
\Big[\Big\{
\frac{\Delta_{{\bf u}_0\cup w}}{
\Delta_{{\bf u}_0}
} w^{j} \lD^{N}\Big\}_{j=0}^{L_0-1},
\dots,
\Big\{
\frac{\Delta_{{\bf u}_N\cup w}}{
\Delta_{{\bf u}_N}
} w^{j} \lD^{-N}\Big\}_{j=0}^{L_N-1}
\Big]\;.
\eeqa
We now write explicit expressions for form-factors of the type $\langle \pr_{b',r}(u)\rangle$. We have:
\beqa
\bea{|l|lllllllllll|}
\hline
r = b'
& &\Big[0;&
&\Big|&
\dots
&\Big|&
(L)^r;u
&\Big|&\dots
&\Big|0;&\Big]_{\Psi} \\
\hline
r=0 &(-1)^{b'L+b'+L}&\Big[1;\theta-i \bs&
&\Big|&
\dots
&\Big|&
(L-1)^{b'};u
&\Big|&\dots
&\Big|0;&\Big]_{\Psi}\\
\hline
r=N&(-1)^{b'+L(N-b')+N+1}
&\Big[0;&
&\Big|&
\dots
&\Big|&
(L-1)^{b'};u
&\Big|&\dots
&\Big|1;\theta+i \bs;&\Big]_{\Psi}\\
\hline
\eea
\eeqa
\beqa
\bea{|l|llllllllllllll|}
\hline
r>b'&(-1)^{b'+r+1+L(r-b')}
&\Big[0;&\Big|& \dots &\Big|&
(L-1)^{b'};u
&
&\Big|&
\dots
&\Big|&
(L+1)^{r};
&\Big|&\dots
&\Big|0;\Big]_{\Psi}\\
\hline
r<b' & (-1)^{b'+r+L(b'-r)}
&\Big[0;&\Big|& \dots &\Big|&
(L+1)^{r};
&
&\Big|&
\dots
&\Big|&
(L-1)^{b'};u
&\Big|&\dots
&\Big|0;\Big]_{\Psi}\\ \hline
\eea
\eeqa
\paragraph{Multiple insertions}
The expression \eqref{multinsertion} for multiple insertions generalises without modification from the $\sl(3)$ case and we have
\begin{equation}
\langle \Psi^A|\hat I^{(s_1)}_{[b_1,\beta_1}\dots
\hat I^{(s_k)}_{b_k,\beta_k]}
|\tilde \Psi^B\rangle
=\frac{(-1)^{b_1+\dots+b_k+k}}{k!\;{\cal N}}{
[(\beta_1,b_1)\to {\cal O}_{(s_1)}^\dagger,\dots,
(\beta_k,b_k)\to {\cal O}_{(s_k)}^\dagger
]
}
\end{equation}
As mentioned in the $\sl(3)$ case, the l.h.s. is anti-symmetric in the character indices and so in order to get a non-zero correlator we require that $k\leq N+1$.
\subsection{Matrix elements in SoV bases}
We can repeat the arguments from the $\sl(3)$ section to compute all form-factors of the form $\langle\svy|\pr_{a,r}(u)|\svx\rangle$. We introduce the notation
\begin{equation}
\Big[
L_0;{\bf u}_0
\Big|\dots
\Big|
L_N;{\bf u}_N\Big]_{\svy,\svx}
\end{equation}
defined by the property
\begin{equation}\label{xysln}
\Big[
L_0;{\bf u}_0
\Big|\dots
\Big|
L_N;{\bf u}_N\Big]_{\Psi} = \displaystyle \sum_{\svx,\svy}\Psi_A(\svy)\Big[
L_0;{\bf u}_0
\Big|\dots
\Big|
L_N;{\bf u}_N\Big]_{\svy,\svx}\tilde{\Psi}_B(\svx)
\end{equation}
where we remind the reader that the SoV wave functions are given by
\begin{equation}
\Psi_A(\svy)=\displaystyle\prod_{\alpha=1}^L\det_{1\leq a,a'\leq N-1}Q_A^{1+a}(\svy_{\alpha,a'}+\tfrac{i}{2}),\quad \tilde{\Psi}_B(\svx)=\displaystyle\prod_{\alpha=1}^L \prod_{a=1}^{N-1}\tilde{Q}_1^B(\svx_{\alpha,a})\,.
\end{equation}
The explicit expression for \eqref{xysln} is worked out to be
\begin{equation}\label{slnxyexplicit}
\left.\frac{s_{\bf L}}{\Delta_{\theta}^{N-1}}\displaystyle \sum_{k} (-1)^{|\sigma|}\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \prod_b \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{\sigma_{a,\alpha} = k_{a,\alpha}-m_{\alpha,a}+a}\,.
\end{equation}
The index $b$ takes values in the set $\{0,1,\dots,N\}$, $a\in\{1,\dots,N-1\}$ and $\alpha\in\{1,\dots,L\}$, and the summation is over all permutations $k$ of the set $\{n_{\alpha,a}\}$ for fixed $\alpha$
for which $\sigma$, defined by $\sigma_{\alpha,a}=k_{\alpha,a}-m_{\alpha,a}+a$, is a permutation of the set
\begin{equation}
\{\underbrace{0,\dots,0}_{L_0},\dots, \underbrace{N,\dots,N}_{L_N} \}\,.
\end{equation}
The matrix element \eqref{slnxyexplicit} is only non-zero if the SoV charges ${\bf N}_\svx$ and ${\bf N}_{\svy}$ satisfy the relation
\begin{equation}
{\bf N}_{\svy}-{\bf N}_{\svx} = \frac{N}{2}(N-1)L - \displaystyle \sum_{n=0}^N n\, L_n\,.
\end{equation}
The details of the derivation are exactly the same as in the $\sl(3)$ case described in Appendix~\ref{dict}.
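As a simple consistency check of this selection rule, take $L_0=L_N=0$ and $L_r=L$ for $r=1,\dots,N-1$, the configuration which, as we will see shortly, corresponds to the ${\bf B}$ operator. Then
\beq
{\bf N}_{\svy}-{\bf N}_{\svx} = \frac{N}{2}(N-1)L - \sum_{n=1}^{N-1}n\,L=\frac{N(N-1)L}{2}-\frac{N(N-1)L}{2}=0\,,
\eeq
consistent with the fact that ${\bf B}(u)$ acts diagonally on $|\svx\rangle$ and hence preserves the SoV charge.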
\paragraph{B and C operators.}
Having access to the complete set of SoV matrix elements it is now easy to determine which operators correspond to the SoV ${\bf B}$ and ${\bf C}$ operators. Following the derivation in the $\sl(3)$ case it is trivial to work out that ${\bB}(u)$ corresponds to the operator with
\begin{equation}\label{slnB}
{\bf u}_0 = {\bf u}_N=\{\},\quad {\bf u}_r =\{u\},\quad r=1,\dots,N-1
\end{equation}
whereas ${\bf C}(u)$ corresponds to the operator with
\begin{equation}\label{slnC}
{\bf u}_0 = {\bf u}_N=\{\},\quad {\bf u}_r =\{u+i(r-1)\},\quad r=1,\dots,N-1\,.
\end{equation}
Indeed, by examining the matrix element \eqref{slnxyexplicit} as in the $\sl(3)$ case we immediately read off that the operator defined by \eqref{slnB} (\eqref{slnC}) acts diagonally on $|\svx\rangle$ ($\langle\svy|$) with eigenvalue given by $\prod_{\alpha,a}(u-\svx_{\alpha,a})$ ($\prod_{\alpha,a}(u-\svy_{\alpha,a})$) and hence coincides with ${\bf B}(u)$ (${\bf C}(u)$) respectively due to the non-degeneracy of these operators' spectra. It is possible to work out what these operators correspond to in terms of principal operators $\pr_{a,r}(u)$. They are given by
\begin{equation}
\begin{split}
& {\bf B}(u)=(N-1)!\varepsilon^{a_1\dots a_{N-1}}\pr_{a_1,1}(u)\dots \pr_{a_{N-1},N-1}(u) \\
& {\bf C}(u)=(N-1)!\varepsilon^{a_1\dots a_{N-1}}\pr_{a_1,1}(u)\dots \pr_{a_{N-1},N-1}(u+i(N-2))\,.
\end{split}
\end{equation}
The fact that these operators coincide with the ${\bf B}$ and ${\bf C}$ operators of \cite{Gromov:2020fwh} is not manifest -- application of the RTT relation \eqref{yangiangens} is required as was already demonstrated in the $\sl(3)$ case. Nevertheless, the fact that their spectra and eigenstates coincide guarantees that they are equal.
\section{Properties of principal operators}
\la{sec7} The main goal of this section is to demonstrate the completeness of the set of the principal operators. We show that any element of the Yangian can be obtained as a combination of the principal operators, which, at least in the finite-dimensional cases, guarantees that
all physical observables can be obtained in this way. In the last subsection we also give explicit expressions for the principal operators in the diagonal frame -- i.e. in the case when the twist matrix becomes diagonal.
\subsection{Completeness}
\label{completeness}
In this section we will demonstrate a crucial property of the operator basis, namely that knowledge of the matrix elements of each of our principal operators is equivalent to the knowledge of the matrix elements of every operator $T_{ij}(u)$ in the Yangian algebra \eqref{yangiangens}. More precisely we will show that any element of the Yangian $T_{ij}(u)$
can be constructed as a polynomial of degree at most $N+1$ in principal operators.
Knowing all $T_{ij}(u)$ is essentially equivalent to knowing the full algebra of observables.
For example, in the finite dimensional case i.e. when $\bs = -n/2,\;n\in {\mathbb Z}_+$ one can use the ``inverse scattering transform" \cite{Maillet:1999re} to construct local symmetry generators acting on a single site of the chain in terms of $T_{ij}(u)$.
The precise notion of completeness could be ambiguous -- to be precise, in this paper completeness of the system of principal operators means that any element of the Yangian can be generated from them in finitely many steps (independently of the length of the chain). Note that while we are not aware of any simple way to extract local operators in the infinite-dimensional case in terms of $T_{ij}(u)$ we would like to stress that these operators still contain all information about the system. For example, consider the infinite-dimensional highest-weight representation used in this paper and consider some local operator $\mathbb{E}^{(\alpha)}$. The key point is the existence of the SoV basis $\langle\svx|$ which is constructed by action of polynomials in $T_{ij}(u)$ on the SoV ground state $\langle 0|$ \cite{Gromov:2020fwh}. Hence, the action of $\mathbb{E}^{(\alpha)}$ on the SoV basis can be re-expressed as a sum over (finitely many\footnote{There are only finitely many states of a given SoV charge, and each local Lie algebra generator raises or lowers the SoV charge by some finite amount.}) SoV basis states $\langle\svx'|$ and hence the matrix elements of $\mathbb{E}^{(\alpha)}$ are completely fixed by the SoV matrix elements of the monodromy matrix $T_{ij}(u)$.
We now show that the principal operators generate the full Yangian. Our starting point is the large $u$ expansion of the operators $T_{ij}(u)$
\begin{equation}
T_{ij}(u) = u^L\delta_{ij} + u^{L-1} \left(i \lE_{ji} - \delta_{ij}\Theta \right)+ \mathcal{O}\left(u^{L-2} \right), \quad \Theta:=\displaystyle \sum_{\alpha=1}^L \theta_\alpha\,.
\end{equation}
Note that the indices on $\lE$ are swapped compared to those on $T$. The operators $\lE_{ij}$ are generators of the global $\gl(N)$ algebra
\begin{equation}
\lE_{ij} = \displaystyle \sum_{\alpha=1}^L \ee_{ij}^{(\alpha)}
\end{equation}
and satisfy the $\gl(N)$ commutation relations
\beq\la{comgln}
[\lE_{ij},\lE_{kl}]=\delta_{jk}\lE_{il} - \delta_{li}\lE_{kj}\;.
\eeq
We will now prove the following property: that any $T_{ij}(v)$ can be expressed as a commutator of a global $\gl(N)$ generator and a principal operator $T_{k1}(v)$. The key point is the RTT relation \eqref{yangiangens} expanded at large $u$ which reads
\begin{equation}
[\lE_{ji},T_{kl}(v)]= T_{kj}(v)\delta_{il}-T_{il}(v)\delta_{kj}\,.
\end{equation}
From here it is clear that we can write any operator $T_{ij}(v)$ as
\begin{equation}\label{globcom}
T_{ij}(v) = T_{11}(v)\delta_{ij} + [\mathcal{E}_{j1},T_{i1}(v)] = \pr_{1,1}(v)\delta_{ij} +(-1)^{i-1}[\mathcal{E}_{j1},\pr_{1,i}(v)]
\end{equation}
where the r.h.s. only contains principal operators and global $\gl(N)$ generators.
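The relation \eqref{globcom} is easy to verify in the simplest non-trivial setting: a single site ($L=1$) in the defining representation with $\theta_1=0$, where in the conventions of the $u\to\infty$ expansion above $T_{ij}(v)=v\,\delta_{ij}+i\,\ee_{ji}$ holds exactly and $\lE_{j1}$ is the matrix unit $\ee_{j1}$. A minimal numerical sketch (the values of $N$ and $v$ are arbitrary sample choices):

```python
import numpy as np

N, v = 4, 0.83                       # rank and an arbitrary value of the spectral parameter
def e(i, j):                         # matrix unit e_{ij} in the defining representation
    m = np.zeros((N, N), dtype=complex)
    m[i - 1, j - 1] = 1.0
    return m

def T(i, j):                         # single-site T_{ij}(v) = v δ_ij + i e_{ji}, with θ = 0
    return v * (i == j) * np.eye(N) + 1j * e(j, i)

comm = lambda a, b: a @ b - b @ a
# check T_{ij}(v) = T_{11}(v) δ_ij + [E_{j1}, T_{i1}(v)] for all i, j
for i in range(1, N + 1):
    for j in range(1, N + 1):
        assert np.allclose(T(i, j), (i == j) * T(1, 1) + comm(e(j, 1), T(i, 1)))
```

The same check goes through for any $L$ order by order in $u$, since it only uses the commutation relation displayed above.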
The family of principal operators includes the following global Lie algebra generators: $\lE_{1j}$ and $\lE^-=\displaystyle\sum_{j=1}^{N-1}\lE_{j+1,j}$. These appear in the asymptotics of the generating functions
\begin{equation}
\pr_{1,0}(u) = i u^{L-1} \lE^- + \mathcal{O}(u^{L-2}),\quad (-1)^{j-1}\pr_{1,j}(u) = u^L\delta_{j1} + u^{L-1} \left(i \lE_{1j} - \delta_{j1}\Theta \right)+ \mathcal{O}\left(u^{L-2} \right)\,.
\end{equation}
Hence, if we can prove that these operators can be used to generate the set of $\mathcal{E}_{j1}$ then it follows from \eqref{globcom} that knowing the matrix elements of all principal operators implies knowledge of the matrix elements of all $T_{ij}(u)$. From the commutation relations \eq{comgln} it is easy to see that
\beq
\lE_{j+1,1}=[\lE^-,\lE_{j1}]\;.
\eeq
Thus, we have
\begin{equation}
\lE_{j1} = \underbrace{[\lE^-,[\lE^-,[\dots,[\lE^-,\lE_{11}]]}_{j-1}\,,
\end{equation}
where the r.h.s. contains only principal operators.
Then from \eq{globcom} we generate all operators $T_{ij}(u)$, which completes the proof.
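The nested-commutator formula can be checked directly in the defining representation (a single site), where $\lE_{ij}$ is the matrix unit $e_{ij}$; a small numpy sketch with an arbitrary rank $N$:

```python
import numpy as np

N = 5
def e(i, j):                          # matrix unit in the defining rep (single site, L = 1)
    m = np.zeros((N, N))
    m[i - 1, j - 1] = 1.0
    return m

comm = lambda a, b: a @ b - b @ a
Eminus = sum(e(j + 1, j) for j in range(1, N))   # lowering operator E^- = sum_j E_{j+1,j}

# E_{j1} as the (j-1)-fold nested commutator of E^- with E_{11}
X = e(1, 1)
for j in range(2, N + 1):
    X = comm(Eminus, X)
    assert np.allclose(X, e(j, 1))
```

Since the identity only relies on the commutation relations \eq{comgln}, it holds verbatim for the global generators acting on a chain of any length.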
Let us remark that, despite the abundance of literature on SoV in $\sl(2)$ spin chains, the relation \eqref{globcom} does not seem to have been exploited. Indeed, the standard approach is to obtain the matrix elements of the one non-principal operator $T_{22}(u)$ in terms of the principal operators via the quantum determinant relation
\begin{equation}
{\rm qdet}T(u)= T_{11}^-T_{22}^+ - T_{21}^- T_{12}^+
\end{equation}
together with the known eigenvalue of the quantum determinant and the fact that $T_{11}(u)$ is invertible, see for example \cite{Niccoli:2020zla}. This produces a rather complicated expression for $T_{22}(u)$. On the other hand, using the relation \eqref{globcom} we see that $T_{22}(u)$ can be written in terms of principal operators simply as
\begin{equation}
T_{22}(u) = \pr_{1,1}(u) -[\mathcal{E}_{21},\pr_{1,2}(u)]\,.
\end{equation}
\subsection{Principal operators in the diagonal frame}
In the main part of the paper we used the frame with the twist matrix $G$ being of the special form \eq{companion}. Whereas this choice is extremely beneficial for the SoV approach, as the SoV basis does not depend on the twist eigenvalues $\lambda_a$, it is not the one most commonly used in the literature. A more standard choice is the diagonal twist $g={\rm diag}(\lambda_1,\dots,\lambda_N)$.
In this section we give an explicit way to relate those two conventions.
As we will see the basic consequence of changing the frame is that the explicit expressions for the principal operators $\pr_{r,s}$ in terms of the monodromy matrix elements $T_{ij}$ will slightly change in the frame where the twist matrix is diagonal.
In the companion twist frame the transfer matrix $\T_1(u)$ is given by $\T_1(u) = {\rm tr}\left( T(u) G\right)$ where $G$ is the companion twist matrix \eqref{companion}. We want to perform a similarity transformation $\Pi(S)$ on the Hilbert space of the spin chain where $S$ is some $GL(N)$ group element and $\Pi(S)$ denotes its representative on the spin chain so that the transfer matrix transforms as ${\rm tr}\left( T(u) G\right)\rightarrow {\rm tr}\left( T(u) g\right)$ where $g$ is the diagonal twist matrix with the same eigenvalues as $G$. As was established in \cite{Gromov:2020fwh} a possible choice for $S$ is given by the Vandermonde matrix
\begin{equation}
(S^{-1})_{ij}= \lambda_j^{N-i}\,.
\end{equation}
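A quick numerical illustration of why this $S$ works, assuming (our reading of \eqref{companion}, which is not displayed here) that the companion twist matrix carries $(-1)^{r+1}\chi_r$ along its first row and ones on the subdiagonal; the twist eigenvalues are hypothetical sample values:

```python
import numpy as np

lam = np.array([1.3, 2.1, -0.7])            # sample twist eigenvalues (hypothetical)
N = len(lam)

p = np.poly(lam)                            # coefficients of prod_j (x - lam_j): p_r = (-1)^r chi_r
G = np.zeros((N, N))
G[0, :] = -p[1:]                            # first row: chi_1, -chi_2, chi_3, ...
G[1:, :-1] = np.eye(N - 1)                  # ones on the subdiagonal

S_inv = np.vander(lam, N, increasing=False).T   # (S^{-1})_{ij} = lam_j^{N-i}
S = np.linalg.inv(S_inv)
assert np.allclose(S @ G @ S_inv, np.diag(lam)) # S G S^{-1} = g = diag(lam)
```

The columns of the Vandermonde matrix $S^{-1}$ are precisely the eigenvectors of the companion matrix, which is why the similarity transformation lands on the diagonal twist with the same eigenvalues.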
Under this transformation the monodromy matrix elements $T_{ij}(u)$ transform as
\begin{equation}\la{Ttrans}
T_{ij}(u)\rightarrow \Pi^{-1}T_{ij}(u)\Pi = (S^{-1}T(u) S)_{ij}\;\;,\;\;\Pi\equiv \Pi(S)
\end{equation}
with similar expressions holding for anti-symmetric monodromy matrices.
To summarise we have the wave-functions in the diagonal frame related to the wave-functions in the companion frame by
\beq
|\Psi^{\rm diag}\rangle = \Pi^{-1}|\Psi\rangle\;\;,\;\;
\langle \Psi^{\rm diag}| = \langle\Psi|\Pi\;.
\eeq
and they diagonalise the transfer matrices
${\mathbb T}_1^{{\rm diag}}(u)$ and ${\mathbb T}_1(u)$ respectively, which are related as
\beq
{\mathbb T}_1^{{\rm diag}}(u) =
\Pi^{-1}{\mathbb T}_1(u) \Pi=
{\rm tr} (S^{-1} T(u) S G)= {\rm tr} ( T(u) g )\;.
\eeq
Similarly we define $\pr^{\rm diag}_{a,r}=
\Pi^{-1}\pr_{a,r} \Pi$ so that
\beq
\langle \Psi_A^{\rm diag}
|\pr^{\rm diag}_{a,r}
|\Psi_B^{\rm diag}\rangle =
\langle \Psi_A|\pr_{a,r}
|\Psi_B\rangle = {\rm determinant}\,.
\eeq
Note that the above expression only holds for the states with the same twist unlike the expressions in the companion twist frame which hold for any twist on either state.
In general the expressions for the principal operators in the diagonal frame in terms of $T_{ij}$ are quite bulky, but straightforward to work out from \eq{Ttrans}.
For example, for $\sl(3)$ we have $\pr^{\rm diag}_{1,1}=(S^{-1} T S)_{1,1}$, which reads explicitly
\beqa
\pr^{\rm diag}_{1,1}&=&
\frac{\lambda _1^2 T_{11}}{\left(\lambda _1-\lambda
_2\right) \left(\lambda _1-\lambda
_3\right)}-\frac{\lambda _1^2 T_{12}}{\left(\lambda
_1-\lambda _2\right) \left(\lambda _2-\lambda
_3\right)}+\frac{\lambda _1^2 T_{13}}{\left(\lambda
_1-\lambda _3\right) \left(\lambda _2-\lambda
_3\right)}\\ \nn
&+&\frac{\lambda _2^2 T_{21}}{\left(\lambda
_1-\lambda _2\right) \left(\lambda _1-\lambda
_3\right)}-\frac{\lambda _2^2 T_{22}}{\left(\lambda
_1-\lambda _2\right) \left(\lambda _2-\lambda
_3\right)}+\frac{\lambda _2^2 T_{23}}{\left(\lambda
_1-\lambda _3\right) \left(\lambda _2-\lambda
_3\right)}\\ \nn
&+&\frac{\lambda _3^2 T_{31}}{\left(\lambda
_1-\lambda _2\right) \left(\lambda _1-\lambda
_3\right)}-\frac{\lambda _3^2 T_{32}}{\left(\lambda
_1-\lambda _2\right) \left(\lambda _2-\lambda
_3\right)}+\frac{\lambda _3^2 T_{33}}{\left(\lambda
_1-\lambda _3\right) \left(\lambda _2-\lambda
_3\right)}\;.
\eeqa
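Since this expression is linear in the entries $T_{ij}$, it can be sanity-checked numerically by replacing the operator-valued entries with a random matrix; the eigenvalues below are hypothetical sample values:

```python
import numpy as np

rng = np.random.default_rng(0)
l1, l2, l3 = lam = np.array([1.7, 0.4, -2.2])    # sample twist eigenvalues (hypothetical)
T = rng.standard_normal((3, 3))                  # numerical stand-in for the entries T_{ij}

V = np.vander(lam, 3, increasing=False).T        # this is S^{-1}: (S^{-1})_{ij} = lam_j^{3-i}
lhs = (V @ T @ np.linalg.inv(V))[0, 0]           # (S^{-1} T S)_{1,1}

rhs = sum(lam[k]**2 * ( T[k, 0] / ((l1 - l2) * (l1 - l3))
                      - T[k, 1] / ((l1 - l2) * (l2 - l3))
                      + T[k, 2] / ((l1 - l3) * (l2 - l3))) for k in range(3))
assert np.isclose(lhs, rhs)
```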
Note that whereas in the companion twist frame the principal operators were, by definition, independent of the twist eigenvalues, in the diagonal frame they explicitly depend on $\lambda_i$.
In order to get nice-looking expressions it is better to introduce the notation $T^{\rm good}(u)=S^{-1}T(u)SG=S^{-1}T(u)gS$ going back to~\cite{Gromov:2016itr}. It obeys ${\mathbb T}_1^{\rm diag}={\rm tr}(T^{\rm good})$ and is related in a simple way to the principal operators in the diagonal frame \eqref{companion} so that
\begin{equation}
\begin{split}\la{PTg}
& \pr^{\rm diag}_{1,i}(u) =(-1)^{N-i} \frac{T^{\rm good}_{i,N}(u)}{\chi_N},\quad i=1,2,\dots,N\,, \\
& \pr^{\rm diag}_{1,0}(u) = \sum_{i=1}^{N-1}\(
T^{\rm good}_{i,i}-(-1)^{N-i}\frac{\chi_i}{\chi_N}T^{\rm good}_{i,N}
\)\,.
\end{split}
\end{equation}
One can check that $\sum_{r=0}^N\pr^{\rm diag}_{1,r}(u)\chi_r = {\rm tr}(T^{\rm good})$. In particular, from \eq{PTg} we see that the form-factor of any $T^{\rm good}_{i,N}$ in the diagonal frame is a determinant.
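Indeed, the first statement is immediate: using \eq{PTg} and $\chi_0=1$,
\beq
\sum_{r=0}^N\chi_r\,\pr^{\rm diag}_{1,r}(u)
=\sum_{i=1}^{N-1}T^{\rm good}_{i,i}
-\sum_{i=1}^{N-1}(-1)^{N-i}\frac{\chi_i}{\chi_N}T^{\rm good}_{i,N}
+\sum_{i=1}^{N}(-1)^{N-i}\frac{\chi_i}{\chi_N}T^{\rm good}_{i,N}
=\sum_{i=1}^{N}T^{\rm good}_{i,i}\,,
\eeq
since the two middle sums cancel except for the $i=N$ term, which is precisely $T^{\rm good}_{N,N}$.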
For the particular case of $\sl(2)$ these operators generalise the well-known operators $T^{\rm good}_{11}$ and $T^{\rm good}_{22}$ which act as conjugate momenta of the separated variables encoded in $T^{\rm good}_{12}$, see \cite{Sklyanin:1984sb}.
\section{Outlook}
In this paper we used the functional separation of variables (FSoV) technique in combination with the novel character projection (CP) method to compute all matrix elements of the set of principal operators which in particular includes some individual monodromy matrix elements $T_{ij}$ and their combinations in a concise determinant form.
We also showed that they generate a complete basis of observables of the spin chain and contain the SoV $\bf{B}$ operator as a particular case.
Thus we gained access to the matrix elements of a set of operators which generates a complete set of observables in high-rank integrable $\sl(N)$ spin chains.
Let us note that determinant representations for form factors of some $T_{ij}$ have appeared in the literature before for the $\sl(3)$ case in the Nested Bethe Ansatz approach \cite{Belliard:2012av,pakuliak2014form,Pakuliak:2014ela}. However,
in addition to giving an alternative form for those objects, the results presented in this paper have a number of advantages and conceptual differences:
\begin{itemize}
\item Firstly, the form factors are expressed directly in terms of Baxter Q-functions instead of Bethe roots. From a direct calculational perspective Q-functions offer a significant advantage \cite{Marboe:2016yyn}.
\item Secondly, the FSoV approach, which we use and extend here, does not require the existence of a highest-weight state. As such our approach is applicable to models which do not have the highest-weight state, for example the conformal spin chain (fishchain)\footnote{The fishchain captures the operators with non-trivial dimension at finite coupling, but it misses a large class of protected operators, which in turn are governed by a different integrability construction known as the (hyper)-eclectic spin chain, see \cite{Ipsen:2018fmu,Ahn:2020zly,Ahn:2021emp,Garcia:2021mzb}.} \cite{Gromov:2019bsj} describing correlators with non-trivial coupling dependence in $4$D conformal fishnet theory.
\item Thirdly, as demonstrated, our approach is valid for any rank $\sl(N)$ with general formulas being almost equally simple to write down as for $\sl(3)$.
\item Fourthly, our formulas are applicable to the set-up where the transfer matrix eigenstates are constructed with two distinct twists, which have attracted attention recently \cite{belliard2021overlap}, or in fact any two arbitrary off-shell states (which we refer to as factorisable). In addition to being a new result, this is a very important technical advantage for example in non-highest-weight models where the scalar product between states built with the same twist is divergent \cite{Cavaglia:2021mft} and so deforming one set of twists serves as a natural regulator\footnote{See \cite{Cavaglia:2020hdb} for the explicit realisation of the twist in fishnet CFT.}.
\item Finally, using our approach we were able to compute the matrix elements of the principal operators in the SoV bases meaning one can compute the matrix elements of any number of insertions.
\end{itemize}
The FSoV approach was already worked out in detail in \cite{Cavaglia:2021mft} and thus the new methods we developed here can be applied immediately as the CP method is very general.
Furthermore, the operator SoV construction for any highest-weight representation was carried out in \cite{Ryan:2018fyo,Ryan:2020rfk} which can then be combined with the FSoV method to extract the SoV measure in the same way as in \cite{Gromov:2020fwh} and then the form factors as in this work.
We also believe that the formulas presented for form-factors extend immediately to the $q$-deformed high-rank XXZ case \cite{Maillet:2018rto} after simple modification as is already the case in the $\sl(2)$ setting \cite{Niccoli:2012ci}, and it would be interesting to check directly, allowing one to extend the recent rank $1$ results \cite{Pei:2020ljw} and to study high-rank correlators at zero temperature along the lines of \cite{Niccoli:2020zla}.
Finally, it would be very interesting to develop the FSoV formalism and the approach to correlators developed in this paper for spin chains based on different algebras and with different boundary conditions. The Q-system for models with orthogonal symmetry has attracted huge attention recently \cite{Ferrando:2020vzk,Ekhammar:2020enr,Ekhammar:2021myw,Tsuboi:2021xzl} and will likely play a large role in the SoV approach to correlators in conformal fishnet theories in $D\neq 4$ \cite{Kazakov:2018qbr,Basso:2019xay}. As well as this the SoV construction for models with open boundary conditions has recently been studied \cite{Maillet:2019hdq} with operatorial methods. Such a spin chain is known to describe a Wilson loop in $\lN=4$ SYM with insertions of local operators in the ladders limit \cite{Gromov:2021ahm}.
Another extension would be to study integrable boundary states within the SoV formalism. First steps were already taken in~\cite{Caetano:2020dyp,Cavaglia:2021mft} and recently this problem has received increased interest~\cite{Jiang:2020yk, Komatsu:2020yk, Gombor:2020cs,Caetano:2021dbh,Kristjansen:2021xno}.
\paragraph{Acknowledgements}
We are grateful to A.~Cavagli\`{a} for discussions. We are also grateful to F.~Levkovich-Maslyuk for carefully reading the manuscript. The work of N.G. and P.R was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme
(grant agreement No. 865075) EXACTC.
\appendix
\section{Alternative derivation}\la{app:alt}
In this appendix we present an alternative derivation of \eqref{eqIIpr} which avoids using Cramer's rule and hence avoids expressing the integral of motion eigenvalues $I_{b',\beta'}^A$ as a ratio with a potentially vanishing denominator.
Our starting point is the following trivial equality
\begin{equation}
[(b',\beta')\rightarrow \lO_A^\dagger]=0\,.
\end{equation}
We then expand out $\lO^\dagger_A$
\begin{equation}
\lO_A^\dagger = \displaystyle \sum_{b,\beta}(-1)^b I_{b,\beta}^A w^{\beta-1}D^{N-2b} + \sum_{r=0}^N \chi_r \lO^\dagger_{(r)}
\end{equation}
and notice a number of cancellations. Indeed, in the sum
\begin{equation}\label{appdetsum}
\displaystyle \sum_{b,\beta}(-1)^b [(b',\beta')\rightarrow w^{\beta-1} D^{N-2b}]
\end{equation}
only a single term will survive and it is precisely $(-1)^{b'} [(b',\beta')\rightarrow w^{\beta'-1} D^{N-2b'}]$. This is a result of the anti-symmetry of the determinant as all other terms in the sum \eqref{appdetsum} will produce two identical columns in the determinant and hence vanish.
As such we obtain the relation
\begin{equation}
(-1)^{b'} [(b',\beta')\rightarrow w^{\beta'-1} D^{N-2b'}]\,I_{b',\beta'} = - \displaystyle \sum_{r=0}^{N}\chi_r\,[(b',\beta')\rightarrow \lO^\dagger_{(r)}]
\end{equation}
and see that the coefficient of $I_{b',\beta'}$ is precisely $(-1)^{b'}\lN \langle\Psi_A|\tilde{\Psi}_B\rangle$. From here on the derivation from section \ref{sec:sl3disc} proceeds exactly as before: we replace
$\langle\Psi_A|\tilde{\Psi}_B\rangle I^A_{b',\beta'}$ with $\langle\Psi_A|\hat{I}_{b',\beta'}|\tilde{\Psi}_B\rangle$, expand $\hat{I}_{b',\beta'}$ into a sum over characters $\chi_r$, perform character projection and obtain the result
\begin{equation}
\langle\Psi_A|\hat{I}_{b',\beta'}^{(r)}|\tilde{\Psi}_B\rangle = \frac{(-1)^{b'+1}}{\lN} [(b',\beta')\rightarrow \lO^\dagger_{(r)}]\,.
\end{equation}
\section{Mapping $\langle\Psi_A|\hat{O}|\Psi_B\rangle$ to $\langle\svy|\hat{O}|\svx\rangle$}\label{dict}
Our goal in this section is to prove the relation \eqref{psixy} which we repeat here for convenience
\begin{equation}\label{genexpress}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_\Psi = \displaystyle\sum_{\svx\svy} \tilde{\Psi}_B(\svx)\Psi_A(\svy) \Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_{\svx\svy}
\end{equation}
where we use the notation
\beqa
&&\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_\Psi=\frac{1}{\cal N}\times\\
\nn&&
\Big[\Big\{
\frac{\Delta_{{\bf u}_0\cup w}}{
\Delta_{{\bf u}_0}
} w^{j} D^{3}\Big\}_{j=0}^{L_0-1},
\Big\{
\frac{\Delta_{{\bf u}_1\cup w}}{
\Delta_{{\bf u}_1}
} w^{j} D^{1}\Big\}_{j=0}^{L_1-1},
\Big\{
\frac{\Delta_{{\bf u}_2\cup w}}{
\Delta_{{\bf u}_2}
} w^{j} D^{-1}\Big\}_{j=0}^{L_2-1},
\Big\{
\frac{\Delta_{{\bf u}_3\cup w}}{
\Delta_{{\bf u}_3}
} w^{j} D^{-3}\Big\}_{j=0}^{L_3-1}
\Big]\;,
\eeqa
and
\begin{equation}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_{\svx\svy}:= \left.\frac{s_{\bf L}}{\Delta_{\theta}^2}\displaystyle \sum_{k} {\rm sign}(\sigma)\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \prod_b \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{\sigma_{a,\alpha} = k_{a,\alpha}-m_{\alpha,a}+a}\,.
\end{equation}
Our starting point is the l.h.s. of~\eqref{genexpress}. By explicitly writing each entry of the matrix we can pull out the measure factors $\mu_\alpha(\omega_{\alpha,a})$ and Q-functions $\tilde{Q}^B_1$ associated to the state $|\tilde{\Psi}_B\rangle$, as the finite-difference operators in the determinant do not act on them. Hence we obtain
\begin{equation}\label{integral3}
\displaystyle \int t(\{w_{\alpha,a}\}) \displaystyle \prod_{\alpha,a} \tilde{Q}_1^B(w_{\alpha,a})\mu_\alpha(w_{\alpha,a}){\rm d}w_{\alpha,a}
\end{equation}
where
\begin{equation}
t(\{w_{\alpha,a}\}) = \det_{(\alpha,a),(b,\beta)} f_b(w_{\alpha,a})w_{\alpha,a}^{\beta-1}Q_{1,1+a}\left( w_{\alpha,a}+\frac{i}{2}(3-2b)\right)\,,
\end{equation}
and
\begin{equation}\label{fbdef}
f_b(w)= \frac{\Delta_{{\bf u}_b\cup w}}{
\Delta_{{\bf u}_b}}\,.
\end{equation}
Let us note the range of indices in the above determinant formula. $(\alpha,a)$ takes values in the set
\begin{equation}\label{Aset}
\{(1,1),(1,2),(2,1),\dots,(L,2) \}
\end{equation}
whereas $(b,\beta)$ takes values in the set
\begin{equation}\label{Bset}
\{(0,1),\dots,(0,L_0),(1,1),\dots,(1,L_1),\dots, (3,L_3) \}\,.
\end{equation}
Note that this is in contrast to the main text, where the rows of the determinant were labelled by $(a,\alpha)$ instead of $(\alpha,a)$; the present ordering simplifies this derivation, and at the end we will convert back to the original one.
In \cite{Gromov:2020fwh} a determinant relation was used for the case of the measure where $L_0=L_3=0$ and $L_1=L_2=L$ to extract the SoV matrix elements. For the general case we have the following generalised determinant relation, valid for any two tensors $H_{a,\alpha,\beta}$ and $G_{a,\alpha,b}$, which reads
\beq\label{detrelation2}
\det_{(\alpha,a),(b,\beta)}
H_{a,\alpha,\beta} G_{a,\alpha,b}=
\sum_{\sigma}(-1)^{|\sigma|}
\left(\prod_{b}
\det_{(\alpha,a)\in \sigma^{-1}(b),\beta_b}
H_{a,\alpha,\beta_b}\right)
\prod_{a,\alpha}
G_{a,\alpha,\sigma_{a,\alpha}}
\eeq
which is easy to derive. Here, $\sigma$ is a permutation of
\begin{equation}
\{\underbrace{0,\dots,0}_{L_0},\dots,\underbrace{3,\dots,3}_{L_3}\}
\end{equation}
with $\sigma_{\alpha,a}$ denoting the number at position $a+(N-1)(\alpha-1)$ and
\begin{equation}
\sigma^{-1}(b) = \{(\alpha,a):\sigma_{a,\alpha}=b\}\,.
\end{equation}
We have $\beta_b\in \{1,\dots,L_b\}$ and finally $|\sigma|$ denotes the number of elementary permutations needed to bring the set $\bigcup_b \sigma^{-1}(b)$ to the canonical ordering \eqref{Aset}.
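The identity \eqref{detrelation2} is also straightforward to verify numerically for random data; a short Python sketch, flattening the double index $(\alpha,a)$ to a single row index $r$ and using hypothetical block sizes $L_b$:

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
Ls = (1, 2, 1)                          # hypothetical block sizes L_b
R = sum(Ls)                             # rows (alpha, a) flattened to one index r
H = rng.standard_normal((R, max(Ls)))   # H[r, beta]
G = rng.standard_normal((R, len(Ls)))   # G[r, b]

# l.h.s.: determinant of M[r, (b, beta)] = H[r, beta] * G[r, b]
cols = [(b, beta) for b, L in enumerate(Ls) for beta in range(L)]
M = np.array([[H[r, beta] * G[r, b] for (b, beta) in cols] for r in range(R)])
lhs = np.linalg.det(M)

def parity(seq):                        # sign of the permutation sorting seq
    seq, s = list(seq), 1
    for i in range(len(seq)):
        j = seq.index(min(seq[i:]), i)
        if j != i:
            seq[i], seq[j] = seq[j], seq[i]
            s = -s
    return s

# r.h.s.: sum over multiset permutations sigma assigning a block b to each row
rhs = 0.0
for sigma in set(itertools.permutations(sum(([b] * L for b, L in enumerate(Ls)), []))):
    groups = [[r for r in range(R) if sigma[r] == b] for b in range(len(Ls))]
    sign = parity(sum(groups, []))      # (-1)^{|sigma|}: restore the canonical row order
    dets = math.prod(np.linalg.det(H[np.ix_(g, range(len(g)))]) for g in groups if g)
    rhs += sign * dets * math.prod(G[r, sigma[r]] for r in range(R))

assert np.isclose(lhs, rhs)
```

The check works for any block sizes with $\sum_b L_b$ equal to the number of rows, which is exactly the situation in which \eqref{detrelation2} is applied below.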
We now apply \eqref{detrelation2} to \eqref{integral3} by identifying
\begin{equation}
H_{a,\alpha,\beta} = w_{\alpha,a}^{\beta-1}, \quad G_{a,\alpha,b} = f_b(w_{\alpha,a})Q_{1,1+a}\left(w_{\alpha,a}+\frac{i}{2}(3-2b) \right)\,.
\end{equation}
Notice that $\det_{(\alpha,a)\in \sigma^{-1}(b),\beta_b}
H_{a,\alpha,\beta_b}=(-1)^{\frac{L_b}{2}(L_b-1)}\Delta_b$
where $\Delta_b$ denotes the Vandermonde determinant built out of $w_{\alpha,a}$ for which $\sigma_{\alpha,a}=b$, that is
\begin{equation}
\Delta_b := \displaystyle\prod_{(\alpha,a)<(\alpha',a')}(w_{\alpha,a}-w_{\alpha',a'})
\end{equation}
where $<$ is to be understood in lexicographical ordering as explained above. The result then reads
\begin{equation}
t(\{w_{\alpha,a}\})= s'_{\bf L}\displaystyle \sum_{\sigma}(-1)^{|\sigma|} \prod_b \Delta_b \prod_{\alpha,a} f_{\sigma_{a,\alpha}}(w_{\alpha,a})Q_{1,1+a}(w_{\alpha,a}+\tfrac{i}{2}+i s_{a,\alpha})
\end{equation}
where we have defined $s_{\alpha,a}=1-\sigma_{\alpha,a}$ and
\begin{equation}
s'_{\bf L}:= \prod_b (-1)^{\frac{L_b}{2}(L_b-1)}
\end{equation}
Using the explicit form of $f_b(w)$, which is \eqref{fbdef}, it is easy to verify that
\begin{equation}
\displaystyle\prod_b \Delta_b\prod_{\alpha,a}f_{\sigma_{a,\alpha}}(w_{\alpha,a}) = \prod_b \frac{\Delta_{{\bf u}_b\cup w_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}
\end{equation}
and hence we obtain
\begin{equation}
t(\{w_{\alpha,a}\})= s'_{\bf L}\displaystyle \sum_{\sigma}(-1)^{|\sigma|} \prod_b (-1)^{\frac{L_b}{2}(L_b-1)}\frac{\Delta_{{\bf u}_b\cup w_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}} \prod_{\alpha,a} Q_{1,1+a}(w_{\alpha,a}+\tfrac{i}{2}+i s_{\alpha,a})\,.
\end{equation}
We now symmetrise over the integration variables $w_{\alpha,1}$ and $w_{\alpha,2}$. The only factor in \eqref{integral3} not invariant under this operation is $t(\{w_{\alpha,a}\})$, so symmetrising it gives
\begin{equation}
\underset{{w_{\alpha,1}\leftrightarrow w_{\alpha,2}}}{\rm sym}t(\{w_{\alpha,a}\}) =\frac{s'_{\bf L}}{2^L}\displaystyle \sum_{\sigma}(-1)^{|\sigma|} \prod_b \frac{\Delta_{{\bf u}_b\cup w_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}} \prod_{\alpha} F_{\alpha}^{s_{\alpha,1}s_{\alpha,2}}\,,
\end{equation}
where
\begin{equation}
F_{\alpha}^{s_{\alpha,1}s_{\alpha,2}}=\det_{1\leq a,a'\leq 2}Q_{1,1+a}(w_{\alpha,a'}+\tfrac{i}{2}+i s_{\alpha,a'})\,.
\end{equation}
We now put this under the integration \eqref{integral3} and compute the integral by residues, closing the contour in the upper-half plane. This produces a sum over poles at the locations $w_{\alpha,a}=\svx_{\alpha,a}=\theta_\alpha+i(\bs +n_{\alpha,a})$, with $n_{\alpha,a}$ ranging over all non-negative integers. If all $n_{\alpha,a}$ are distinct for a fixed $\alpha$ we can use the symmetry of the integrand to remove a factor of $2$ for each $\alpha$ and restrict the summation to $n_{\alpha,1}\geq n_{\alpha,2}$. If some $n_{\alpha,a}$ coincide for a fixed $\alpha$ then removing the $2^L$ factor will result in an overcounting which we must compensate for, by introducing the factor $M_\alpha$.
As a result, we obtain
\begin{equation}
\begin{split}
\sum_{n_{\alpha,1}\geq n_{\alpha,2}\geq 0}\displaystyle & \prod_{\alpha}\frac{1}{M_{\alpha}}\prod_{\alpha,a}\tilde{Q}_1^B(\svx_{\alpha,a})\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}}\\
& s'_{\bf L}\sum_{\sigma}(-1)^{|\sigma|} \prod_b \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}} \prod_{\alpha} \displaystyle \det_{1\leq a,a' \leq 2} Q_{1,a+1}(\svx_{\alpha,a'}+\tfrac{i}{2}+is_{\alpha,a})\,.
\end{split}
\end{equation}
We now compare with the general expression \eqref{genexpress}. We see that in order for a term
\begin{equation}\label{xymatelem}
\Big[
L_0;{\bf u}_0
\Big|
L_1;{\bf u}_1
\Big|
L_2;{\bf u}_2
\Big|
L_3;{\bf u}_3
\Big]_{\svx\svy}
\end{equation}
in the summand of the r.h.s. to be non-zero it must be possible to write each $\svy_{\alpha,a}=\theta_\alpha+i(\bs+m_{\alpha,a}+1-a)$, for each fixed $\alpha$, as
\begin{equation}
\svy_{\alpha,a}=\svx_{\alpha,\rho^\alpha_a}+i s_{\alpha,\rho^{\alpha}_a,},\quad a=1,2
\end{equation}
where $\rho^{\alpha}$ is a permutation of $\{1,2\}$ and $\rho^\alpha_a:=\rho^\alpha(a)$ and hence we require
\begin{equation}\label{mnrelation}
m_{\alpha,a} + 1-a =n_{\alpha,\rho^\alpha_a}+ s_{\alpha,\rho^{\alpha}_a}\,.
\end{equation}
Since each of the numbers $m_{\alpha,a} + 1-a$ must be distinct, as otherwise the determinant built from $Q_{1,1+a}$ will vanish, there is a unique permutation $\rho^\alpha$ (if such a permutation exists) for which \eqref{mnrelation} holds. If such a permutation does not exist then the matrix element \eqref{xymatelem} vanishes. The permutation $\rho^\alpha$ amounts to sorting the set $\{n_{\alpha,1}+s_{\alpha,1},n_{\alpha,2}+s_{\alpha,2}\}$ and so we should keep track of the sign of this permutation. Hence, for a fixed permutation $\sigma$ we read off the following contribution to \eqref{xymatelem}
\begin{equation}\label{fixedsigma}
s'_{\bf L}(-1)^{\frac{L}{2}(L-1)(N-1)}\left.\frac{(-1)^{|\sigma|}}{\Delta_\theta^2} \prod_{\alpha}\frac{(-1)^{|\rho^\alpha|}}{M_{\alpha}}\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}}\prod_b(-1)^{\frac{L_b}{2}(L_b-1)} \frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{m_{\alpha,a}=n_{\alpha,\rho^\alpha_a}-\sigma_{\alpha,\rho^{\alpha}_a}+a}
\end{equation}
where we have included the corresponding normalisation factor $\lN$. Finally, in order to determine \eqref{xymatelem} we note that for a given set of $\svx_{\alpha,a}$ and $\svy_{\alpha,a}$ there can be many different $\sigma$ for which the relation \eqref{mnrelation} holds and we must sum over all such $\sigma$ in \eqref{fixedsigma} in order to obtain \eqref{xymatelem}. When there is a degeneracy in $n_{\alpha,a}$ for fixed $\alpha$ there are multiple $\sigma$ that give the same result. Their number is exactly $M_\alpha$, so we can simplify the expression by only summing over inequivalent permutations $k$ of $n_{\alpha,a}$ within each $\alpha$. We denote such permutations ${\rm perm}_{\alpha}n$ and hence obtain
\begin{equation}
\langle \svy|\hat{O}|\svx\rangle =s_{\bf L}(-1)^{\frac{L}{2}(L-1)(N-1)} \left.\displaystyle \sum_{k} \frac{(-1)^{|\sigma|}}{\Delta_\theta^2}\prod_{\alpha,a}\frac{r_{\alpha,n_{\alpha,a}}}{r_{\alpha,0}} \prod_b (-1)^{\frac{L_b}{2}(L_b-1)}\frac{\Delta_{{\bf u}_b\cup \svx_{\sigma^{-1}(b)}}}{
\Delta_{{\bf u}_b}}\right|_{\sigma_{\alpha,a} = k_{\alpha,a}-m_{\alpha,a}+a}\,.
\end{equation}
Moving back to the original ordering of the rows of the determinant introduces a sign $(-1)^{\frac{L}{4}(N^2-3N+2)}$ which combines with $s'_{\bf L}$ to produce $s_{\bf L}$ given by
\begin{equation}
s_{\bf L}:=(-1)^{\frac{LN}{4}(L-1)(N-1)+\sum_{n=0}^N L_n}\,.
\end{equation}
Finally, the above argument is rigorous in the finite-dimensional setting. To pass to the infinite-dimensional case we notice that the matrix elements are block diagonal with each block having finite size. The spin $\bs$ enters each block as a universal polynomial pre-factor. Then, each block is fixed by analysing a finite number of finite-dimensional representations and the matrix elements can be analytically continued to values of $\bs$ which are not negative half-integers. So the matrix elements we found are valid in the infinite-dimensional case as well.
This completes the derivation. The $\sl(N)$ case is identical up to extending the range of indices $\{1,2\}$ to $\{1,\dots,N-1\}$ but can be carried out in exactly the same way as was demonstrated for the measure in \cite{Gromov:2020fwh}.
\section{SoV basis}\label{deriveB}
In this section we will demonstrate that knowledge of the structure of the SoV basis and the FSoV approach allows one to derive the form of Sklyanin's ${\bf B}$ operator.
We start by defining the SoV ground states $|0\rangle$ and $\langle 0|$ which correspond to the constant polynomial $1$. These states satisfy the following properties
\begin{equation}\label{SoVvacprop}
T_{j1}(\theta_\alpha+i\bs)|0\rangle = 0,\qquad \langle 0|T_{1j}(\theta_\alpha+i\bs) = 0,\quad j=1,\dots,N,\quad \alpha=1,\dots,L\,.
\end{equation}
We can now follow the logic of \cite{Maillet:2018bim} and build vectors by action of transfer matrices on $\langle 0|$ and $|0\rangle$. The key idea of \cite{Maillet:2018bim} is that if such vectors form a basis then it is automatically an SoV basis since the transfer matrix wave functions will factorise. We will choose the following set of transfer matrices
\begin{equation}
\bbT^*_{\mu}(u):=\det_{1\leq j,k\leq \mu_1}\bbT_{\mu'_j-j+k,1}\left(u-\frac{i}{2}\left(\mu'_1-\mu_1-\mu'_j+j+k-1\right)\right)
\end{equation}
where $\T_{a,1}(u)$ are the transfer matrices in anti-symmetric representations \eqref{highertransfer} and $\mu$ denotes an integer partition (Young diagram)
\begin{equation}
\mu=(\mu_1,\dots,\mu_{N-1},0)
\end{equation}
and $\mu'_j$ denotes the height of the $j$-th column of the Young diagram. The states $|\svy\rangle$ are then constructed as
\begin{equation}\label{cvecs}
|\svy\rangle\, \propto\, \prod_{\alpha=1}^L \bbT^*_{\mu^\alpha}\left(\theta_\alpha+i\bs+\frac{i}{2}\left(\mu_1^\alpha-\mu^{\alpha,\prime}_1\right)\right)|0\rangle
\end{equation}
and we label the constructed states by the $L$ Young diagrams $\mu^\alpha$, $\alpha=1,\dots,L$.
We also construct a set of left vectors
\begin{equation}\label{bvecs}
\langle\svx|\, \propto \, \bra{0}\prod_{\alpha=1}^L\prod_{j=1}^{N-1}\bbT_{N-1,s_j^\alpha}\left(\theta_\alpha+i \bs-\frac{i}{2}(N-s^\alpha_j-1)\right)
\end{equation}
where $(N-1,s)$ denotes the Young diagram of height $N-1$ and width $s$, that is $\mu=(\underbrace{s,\dots,s}_{N-1},0)$ and the corresponding transfer matrix is defined by the Cherednik-Bazhanov-Reshetikhin (CBR) \cite{Cherednik,Bazhanov:1989yk} formula
\begin{equation}
{\mathbb T}_{\mu}(u)=\det_{1\leq j,k\leq \mu_1}{\mathbb T}_{\mu'_j-j+k,1}\left(u-\frac{i}{2}\left(\mu'_1-\mu_1-\mu'_j+j+k-1\right)\right)\;.
\end{equation}
We now note two key properties of the constructed set of vectors. First, they are linearly independent. This was proven in \cite{Ryan:2020rfk} and the argument relies on the fact that the twist matrix \eqref{companion} can be deformed slightly with $N-1$ parameters $w_1,\dots,w_{N-1}$. Then, in the limit where all $w_i$ are sent sequentially to infinity the constructed set of vectors reduces to eigenvectors of the so-called Gelfand-Tsetlin algebra \cite{Ryan:2020rfk}, a key object in representation theory. Furthermore, the Gelfand-Tsetlin algebra has non-degenerate spectrum and it was shown in \cite{Ryan:2020rfk} that a basis of eigenvectors is given by \eqref{bvecs} and \eqref{cvecs} in the above-described limit. Hence, the vectors \eqref{bvecs} and \eqref{cvecs} form a basis, and the transfer matrix wave functions are guaranteed to factorise.
The next key property is that the constructed vectors are independent of the twist eigenvalues. This follows from the fact that
all transfer matrices in our chosen reference frame have the structure
\begin{equation}
\T^*_\mu(u) = \T_\mu^{*,0}(u) + \sum_{r=0}^N \dots\times\chi_r T_{r1}(u)
\end{equation}
and
\begin{equation}
\T_{N-1,s}(u) = \T_{N-1,s}^{0}(u) + \sum_{r=0}^N \chi_r T_{1,r}(u)\times\dots
\end{equation}
where $\T_\mu^{0}(u)$ denotes a part which is independent of the twist eigenvalues. The property \eqref{SoVvacprop} then ensures that the twist-dependent part of the transfer matrices never contributes, see \cite{Ryan:2018fyo,Ryan:2020rfk,Gromov:2020fwh}.
We now exploit known relations for transfer matrices in terms of Baxter Q-functions. The transfer matrix eigenvalues admit the form
\begin{equation}
\bra{\Psi}\bbT^*_{\mu^\alpha}\left(\theta_\alpha+i\bs+\frac{i}{2}\left(\mu_1^\alpha-\mu^{\alpha,\prime}_1\right)\right) \ \propto\ \frac{\displaystyle\det_{1\leq a,a'\leq N-1} Q^{a+1} \left(\svy_{\alpha,a'}+\frac{i}{2}\left(N-2\right)\right)}{\displaystyle \det_{1\leq a,a'\leq N-1} Q^{a+1}\left(\theta_\alpha+i\bs + \frac{i}{2}\left(N-2k\right)\right)}\bra{\Psi}
\end{equation}
with $\svy_{\alpha,a}= \theta_\alpha+i(\bs+\mu^\alpha_a+1-a)$ and
\begin{equation}
\prod_{\alpha=1}^L\prod_{a=1}^{N-1}\bbT_{N-1,s_a^\alpha}\left(\theta_\alpha+i \bs-\frac{i}{2}(N-s^\alpha_a-1)\right)|\Psi\rangle \, \propto\, \prod_{\alpha=1}^L \prod_{a=1}^{N-1} \frac{Q_1(\theta_\alpha+i\bs +i s_a^\alpha )}{Q_1(\svx_{\alpha,a})}|\Psi\rangle
\end{equation}
where $\svx_{\alpha,a}=\theta_\alpha+i(\bs+s^\alpha_a)$.
We can now write down the wave functions. By normalising $\langle\Psi|$ and $|\Psi\rangle$ appropriately we have
\begin{equation}
\langle\Psi|\svy\rangle = \prod_{\alpha=1}^L \displaystyle\det_{1\leq a,a'\leq N-1} Q^{a+1} \left(\svy_{\alpha,a'}+\frac{i}{2}\left(N-2\right)\right)\,.
\end{equation}
Similarly, we have
\begin{equation}
\langle\svx|\Psi\rangle = \prod_{\alpha=1}^L \prod_{a=1}^{N-1} Q_1(\svx_{\alpha,a})
\end{equation}
Since the proposed sets of vectors form a basis we can write the scalar product between two transfer matrix eigenstates as
\begin{equation}
\langle\Psi_A|\Psi_B\rangle = \sum_{\svx,\svy}
\Psi_B(\svx)\lM_{\svy,\svx} \Psi_A(\svy)\,.
\end{equation}
We now turn to the FSoV construction which allows us to extract the measure \eqref{measure} in the two SoV bases. This is just a special case of the formula \eqref{psixy}. Since the SoV bases are independent of twist, the character projection trick is valid and all of the techniques developed earlier in the paper in sections \ref{sec:sl3disc} and \ref{sec5} can be carried out. In particular, we can compute correlation functions of multi-insertions of principal operators. Following the logic of section \ref{secBC} we see that there is a distinguished operator diagonalised in the basis $|\svx\rangle$ which then must also be diagonalised in the basis $\langle \svx|$ defined in \eqref{bvecs}. Hence, we have obtained Sklyanin's ${\bf B}$ operator, and the basis diagonalising it, starting solely from the FSoV approach and the knowledge of the SoV basis. | {"config": "arxiv", "file": "2202.01591/paperver.tex"} |
TITLE: Simplifying nested radicals with higher-order radicals
QUESTION [1 upvotes]: I've seen that $$\sin1^{\circ}=\frac{1}{2i}\sqrt[3]{\frac{1}{4}\sqrt{8+\sqrt{3}+\sqrt{15}+\sqrt{10-2\sqrt{5}}}+\frac{i}{4}\sqrt{8-\sqrt{3}-\sqrt{15}-\sqrt{10-2\sqrt{5}}}}-\frac{1}{2i}\sqrt[3]{\frac{1}{4}\sqrt{8+\sqrt{3}+\sqrt{15}+\sqrt{10-2\sqrt{5}}}-\frac{i}{4}\sqrt{8-\sqrt{3}-\sqrt{15}-\sqrt{10-2\sqrt{5}}}}.$$
But then someone was able to simplify this neat, but long, expression with higher-order radicals, and they said they used De Moivre's theorem: $$\sin1^{\circ}=\frac{1}{2i}\sqrt[30]{\frac{\sqrt{3}}{2}+\frac{i}{2}}-\frac{1}{2i}\sqrt[30]{\frac{\sqrt{3}}{2}-\frac{i}{2}}.$$
I have been looking at this for a while now, and I cannot see how they were able to successfully do this. I am very impressed by the result and would like to use a similar technique to simplify nested radicals in the future.
Edit: It seems like the person who originally used De Moivre's theorem did not use it to directly simplify the longer radical expression, but rather found $\sin1^{\circ}$ by the method I figured out in my answer to this question. I do think there is limited value to writing the exact value of, say, $\sin1^{\circ}$ out, but which way do you think is better, the longer combination of square and cube roots, or the compact thirtieth-root?
REPLY [1 votes]: Just a comment. Also note that
$$\sqrt[^\pm3]{i}=\frac{\sqrt{3}}{2}\pm\frac{i}{2}$$
This was achieved using Euler's Identity or more specifically,
$$i^x=\cos\left(\frac{\pi x}{2}\right)+i\sin\left(\frac{\pi x}{2}\right)$$
We can even simplify that to,
$$\sin(1^°)=\frac{1}{2i}\left(\sqrt[^{90}]{i}-\sqrt[^{90}]{\frac{1}{i}}\right)$$
$$\sin(1^°)=\frac{1}{2i}\left(\sqrt[^{90}]{i}-\sqrt[^{90}]{-i}\right)$$
Simplifying this any further would just make this trivial.
Also note that in the above question,
$$\sin\left(\frac{29\pi}{60}\right)=\sin(87°)=\frac{1}{4}\sqrt{8+\sqrt{3}+\sqrt{15}+\sqrt{10-2\sqrt{5}}}$$
$$\cos\left(\frac{29\pi}{60}\right)=\cos(87°)=\frac{1}{4}\sqrt{8-\sqrt{3}-\sqrt{15}-\sqrt{10-2\sqrt{5}}}$$
You can get this by using the usual identity for $\sin(90°-3°)$
And as far as I can see,
$$\frac{1}{4}\sqrt{8-\sqrt{3}-\sqrt{15}-\sqrt{10-2\sqrt{5}}}=\frac{\left(1+\sqrt{3}\right)\sqrt{5+\sqrt{5}}}{8}+\frac{\left(\sqrt{5}-1\right)\left(\sqrt{3}-1\right)}{8\sqrt{2}}$$
I don't think we could denest it just like that, because there are simply too many surds to take care of.
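For anyone skeptical of the compact thirtieth-root form, here is a quick numerical sanity check in Python (an illustration only, not part of the derivation): the principal branch of the complex power is exactly the root that De Moivre's theorem picks out.

```python
import math

# sqrt(3)/2 + i/2 = e^{i*pi/6}; its principal 30th root is e^{i*pi/180}
z_plus = (math.sqrt(3) / 2 + 0.5j) ** (1 / 30)
z_minus = (math.sqrt(3) / 2 - 0.5j) ** (1 / 30)

# (e^{i*pi/180} - e^{-i*pi/180}) / (2i) = sin(pi/180) = sin(1 degree)
sin1 = (z_plus - z_minus) / 2j
print(sin1.real, math.sin(math.radians(1)))  # both ≈ 0.0174524
```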
TITLE: $ X^{25} = A $ in $ M_{2}(\mathbb{R}) $
QUESTION [3 upvotes]: Let $ A = \bigl(\begin{smallmatrix}
2& -1\\
4& -2
\end{smallmatrix}\bigr) $
Find the number of solutions from $ M_{2}(\mathbb{R}) $ of the equation $ X^{25} = A $
Since $ A^2=O_{2} $ and $ det(A) = 0 $ $ \Rightarrow $ $ det(X^{25}) = 0 $ $ \Rightarrow $ $ det(X) = 0 $
Using Cayley Hamilton's formula we get that $ X^2=Tr(X) \cdot X $
$ X^{25} = A $ $ \Rightarrow $ $ X^{50} = A^2=O_{2} $
What to do next?
REPLY [2 votes]: Since $X^2=tr(X) \cdot X$ you get
$$0=X^{50}= (tr(X))^{49} \cdot X$$
Deduce from here that $tr(X)=0$ or $X=0$ and hence
$$X^2=tr(X) \cdot X =0$$ | {"set_name": "stack_exchange", "score": 3, "question_id": 2747452} |
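A quick numerical sanity check of the two facts used above, namely $A^2=O_2$ and the Cayley–Hamilton consequence $X^2=\operatorname{tr}(X)\cdot X$ whenever $\det X=0$ (a Python sketch; the random rank-one matrices are just convenient test cases with zero determinant):

```python
import numpy as np

A = np.array([[2, -1], [4, -2]])
print(A @ A)  # the zero matrix, so A is nilpotent

# Cayley-Hamilton for a 2x2 matrix with det X = 0:  X^2 = tr(X) * X
rng = np.random.default_rng(0)
for _ in range(5):
    col = rng.integers(-3, 4, size=2)
    row = rng.integers(-3, 4, size=2)
    X = np.outer(col, row)  # rank <= 1, hence det X = 0
    assert np.array_equal(X @ X, np.trace(X) * X)

# Hence X^25 = tr(X)^24 * X; X^50 = 0 forces tr(X) = 0,
# so X^25 = 0, which can never equal A: zero solutions.
```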
TITLE: Extended integral in Spivak’s Calculus on Manifolds
QUESTION [4 upvotes]: On page 48 of Calculus on Manifolds Spivak defines (Riemann) integration over rectangles $[a_{1},b_{1}]\times\cdots\times[a_{n},b_{n}]\subset\mathbb{R}^{n}$. Then on page 55 he extends this integral to bounded subsets $C\subset\mathbb{R}^{n}$ via characteristic functions. This integral is the usual one.
Then, on page 63 he defines (smooth) partitions of unity and uses them later on page 65 to define and extended integral over open sets $A\subset\mathbb{R}^{n}$.
The usual and extended integrals are not always the same. However, Theorem 3-12 (3) gives us a precise relation between the extended integral and the usual one.
Now, mostly via problems, Spivak makes the reader verify all the familiar properties of the usual integral (linearity, comparison, monotonicity, etc). However, there is no mention in either the theory or the exercises of whether these properties hold for the extended integral. Moreover, when doing the problems I found myself making use of them, so it is natural to ask if the extended integral also verifies these properties. That is:
Let $A$ be an open subset of $\mathbb{R}^{n}$ and $f,g:A\rightarrow\mathbb{R}$ be continuous functions:
Linearity: If $f,g$ integrable over $A$, so is $af+bg$ and $\int_{A}af+bg=a\int_{A}f+b\int_{A}g$.
Comparison: $f,g$ integrable over $A$ and $f(x)\leq g(x)$ then $\int_{A}f\leq\int_{A}g$. In particular $\left|\int_{A}f\right|\leq\int_{A}|f|$.
Monotonicity: If $B \subset A$ is open and $f$ in non-negative on $A$ and integrable over $A$ then it is integrale over $B$ and $\int_{B}f\leq\int_{A}f$.
Additivity: If $A$ and $B$ are open and $f$ is continuous on $A\cup B$ and integrable over $A$ and $B$ then it is integrable over the union and the intersection and $\int_{A\cup B}f=\int_{A}f+\int_{B}f-\int_{A\cap B}f$.
Let $A$ be open and of measure $0$. If $f$ is integrable over $A$ then $\int_{A}f=0$.
If $f$ and $g$ agree except on a set of measure $0$ then $\int_{A}f=\int_{A}g$.
I have been verifying these properties and seem to have a proof for each. But I would appreciate it if someone with more experience could corroborate that these properties do indeed hold for extended integrals.
REPLY [2 votes]: Following the notation in the book (see page 65), if it exists, the extended integral is defined as $\int_{A}f:=\sum_{\varphi\in\Phi}\int_{A}\varphi\cdot f$, where each integral $\int_{A}\varphi\cdot f$ exists and, being of the usual type, it verifies all the corresponding properties for usual integrals. This and a bit of care, as we are dealing with series, is enough to show each and every one of the properties mentioned above for extended integrals.
TITLE: Intuition behind open set in topology
QUESTION [1 upvotes]: I am reading Munkres Topology Chapter 13, in which some examples of bases of topologies are given. One of the examples compares the two possible bases (a, b) and [a, b) on the real line.
I understand that both (a,b) and [a,b) satisfy the definition of basis (intersection between two element has the same form and all elements together form the whole real line).
However, in the definition of a topology (as a subset of the power set of the real line), every element is called an open set. Clearly this "open" definition is different from the traditional "open" definition as [a, b) cannot be open in the traditional definition.
Furthermore, I can see that even [a, b] can be "open" in the real line (in the topology generated by [a, b]) because any non empty intersection also has the form [a, b] and the whole union makes up the whole real line. But this makes no sense using the traditional definition where [a, b] is obviously closed.
My question is why every element of a topology is named "open"? What is the intuition behind this name?
REPLY [2 votes]: The key idea you need, I think, is that a topological space is not a set of points.
You're misled by the fact that certain sets of points you usually see laid out in a particular way, and think their relationships to each other are a property of the set of points... but that's not true. The relationship between the points comes from how they're laid out. e.g. consider the rational numbers. You usually imagine them as being laid out on the rational number line:
(note the rational number line is not a continuum: e.g. there is a "hole" where $\sqrt{2}$ should be) And in this arrangement you visualize things like the sequence $1, 1/2, 1/4, 1/8, 1/16, \ldots$ converging to zero.
However, I could instead pick an enumeration of the rationals and arrange them like this:
$$ \begin{matrix}
\ldots & \bullet^{-1/3} & \bullet^{-2/1} & \bullet^{-1/2} & \bullet^{-1} & \bullet^0 & \bullet^1 & \bullet^{1/2} & \bullet^{2/1} & \bullet^{1/3} & \bullet^{3/1} & \ldots
\end{matrix} $$
Now, the sequence $1, 1/2, 1/4, 1/8, 1/16, \ldots$ rapidly shoots off to the right and doesn't converge to anything. We could even make this a metric space, insisting that if two rational numbers are separated by $n$ places, then the distance between them is $n$. e.g. in this arrangement, $d(-2, 1/2) = 5$.
Our intuition suggests a topology here too: the discrete topology. The points are all widely separated, so each point is contained in a "small" open set that contains nothing else. In fact, using the metric defined above, we can even define the open sets in the usual way: the open ball around a point $x$ of radius $r$ is the set of points $y$ with $d(x,y) < r$.
The point to take away from this is that you need something additional to tell you how the points are "arranged". A topology is one sort of this additional information.
Note that topology doesn't tell you everything one might be interested in: e.g. it can't tell the difference between the curve | and the curve S; you need something else to distinguish between those. (e.g. the notion of a "curve in the Euclidean plane")
The intuition behind a topology is that a topological space is made up of open sets. In fact, some forms of topology (e.g. "pointless topology") go so far as to define the basic object (e.g. "locale") in a way that only talks about opens and makes no reference to the idea of a point at all. (the notion of point enters this theory in a much different way)
Now, even still, there are topological spaces where the open sets lack many of the nice properties we're used to from Euclidean geometry; but they still retain all of the properties essential to topology (e.g. finite intersections and arbitrary unions of opens are also open). | {"set_name": "stack_exchange", "score": 1, "question_id": 1430405} |
TITLE: Mistake in assuming every subring of $R \times S$ is equal to some $A \times B$ for some $A\subset R, B\subset S$
QUESTION [0 upvotes]: So I made a mistake in assuming every subring of $R \times S$ is equal to some $A \times B$ for some $A\subset R, B\subset S$ for one of my homeworks.
The grader mentioned this is a common mistake, hence I was wondering if someone could explain the mistake to me, or better, provide a counter-example to illustrate.
Any insight is deeply appreciated.
REPLY [1 votes]: You were probably confusing subrings with ideals. An ideal $K$ of $R\times S$ is of the form $K=I\times J$ with $I$ an ideal of $R$ and $J$ an ideal of $S$. Indeed, if we consider
$$
I=\{r\in R:(r,0)\in K\},
\qquad
J=\{s\in S:(0,s)\in K\},
$$
we can show $K=I\times J$.
Suppose $(r,s)\in K$; then $(r,0)=(r,s)(1,0)\in K$, so $r\in I$; similarly, $(0,s)=(r,s)(0,1)\in K$, so $s\in J$. Hence $K\subseteq I\times J$.
If $r\in I$ and $s\in J$, then $(r,s)=(r,0)+(0,s)\in K$; therefore $I\times J\subseteq K$.
If you look closely, you'll notice that actually right ideal has been used; it's easy to modify the proof for left ideals.
For subrings this trick doesn't work and, indeed, $\Delta_R=\{(r,r):r\in R\}$ is a subring of $R\times R$ which is not of the form $A\times B$.
More generally, the minimal subring $P$ of $R\times S$, that is, the subring generated by $(1,1)$, need not be of the form $A\times B$:
Suppose $A$ and $B$ are subrings of $R$ and $S$ respectively and $P=A\times B$. Then, for every integers $m$ and $n$, $(m1,n1)\in P$, so
$$
(m1,n1)=k(1,1)
$$
for some integer $k$. Just take $m$ and $n$ such that the greatest common divisor of the characteristics of $R$ and $S$ doesn't divide $m-n$ and you get a contradiction, provided the characteristics are not coprime.
We have proved that for any pair of rings (with nonzero identity) $R$ and $S$ whose characteristics are not coprime, there exists a subring of $R\times S$ which is not of the form $A\times B$, for any choice of subrings $A$ of $R$ and $B$ of $S$.
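A small finite sanity check of the counterexample (a Python sketch using $\mathbb{Z}_5\times\mathbb{Z}_5$ as a stand-in for $R\times R$): the diagonal is closed under the ring operations, yet it cannot be any product $A\times B$.

```python
# The diagonal {(r, r)} of Z5 x Z5, operations taken componentwise mod 5.
n = 5
diag = {(r, r) for r in range(n)}

# Closed under addition and multiplication:
for (a, b) in diag:
    for (c, d) in diag:
        assert ((a + c) % n, (b + d) % n) in diag
        assert ((a * c) % n, (b * d) % n) in diag

# If diag were A x B, the projections force A = B = {0,...,4},
# so A x B would contain (1, 0) -- but the diagonal does not.
A = {a for (a, _) in diag}
B = {b for (_, b) in diag}
print((1, 0) in diag, len(A) * len(B), len(diag))  # False 25 5
```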
TITLE: How many possible passwords are there for 8-100 characters?
QUESTION [0 upvotes]: Requirements/Restrictions: Minimum of 8. Maximum of 100. At least 1 letter from the latin alphabet (capitalisation doesn’t matter—g is same as G, 26 letters), at least 1 number (0-9, 10 numbers) and it may also include special characters (33 special characters/symbols).
Here’s what I did:
$26 \times 10 \times 69^6 \times 70^{92} = \\1574300283675196381393274771319731729003339411333808462\\
6274484198008813380319383474217713060000000000000000000\\
00000000000000000000000000000000000000000000000000000000000000000000000000$
One of the characters must be a letter (26), then another must be a number ($26 \times 10$), there are 6 spots left to reach the minimum, all the possible characters added together is $69 (= 33 + 26 + 10)$ so ($26 \times 10 \times 69^6$). Then there are 92 spots left, since they don’t have to be filled, the user has the option of leaving it blank, so now instead of 69 options, it’s 70. Hence, ($26 \times 10 \times 69^6 \times 70^{92}$).
I would appreciate it if someone confirmed whether I am correct or not.
Thank you :)
REPLY [3 votes]: 26 alphabet characters, 10 numeric characters, 33 special characters (including blank space, which can be placed anywhere) - that’s 69 total
For an N-length password:
If there were no “at least 1” conditions, you’d have $69^N$ possibilities.
Then, to satisfy “at least 1 letter”, we need to remove all options without a letter: $(69-26)^N$. And to satisfy “at least 1 number”, we need to remove all options without a number: $(69-10)^N$.
But then we need to add back in all options without a number OR a letter, i.e., with only special characters (since we’ve removed those twice): $33^N$
So, for an N-length password, the number of possibilities is $69^N - 43^N - 59^N + 33^N$
So for the total you want the sum of each different possible length, i.e., your answer is $$\sum_{N=8}^{100}(69^N - 43^N - 59^N + 33^N)$$
Stick this in some code to calculate it nice and quickly, and you get 7784830887958863955006123907413479732322179460547538436324402442938556231147248052155742398941820197907332216822738647248662040228937662527224075073302550548119136634116724084874159040 $\approx 7.78 \times 10^{183}$ options. | {"set_name": "stack_exchange", "score": 0, "question_id": 3693737} |
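Taking the “stick this in some code” suggestion literally, here is a short Python sketch of the inclusion-exclusion sum (Python integers are arbitrary precision, so there is no overflow to worry about):

```python
# Count passwords of length lo..hi over letters+digits+specials characters,
# requiring at least one letter and at least one digit.
def count_passwords(lo, hi, letters=26, digits=10, specials=33):
    total = letters + digits + specials
    return sum(
        total**n
        - (total - letters)**n   # strings with no letter
        - (total - digits)**n    # strings with no digit
        + specials**n            # neither (subtracted twice above)
        for n in range(lo, hi + 1)
    )

big = count_passwords(8, 100)
print(len(str(big)))  # 184 digits, i.e. about 7.78e183
```

The tiny case `count_passwords(2, 3, 2, 1, 1)` can be confirmed by hand: $(4^2-2^2-3^2+1)+(4^3-2^3-3^3+1)=4+30=34$.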
TITLE: Proof that $a^3 = b^4 + 6$ has no integer solutions
QUESTION [3 upvotes]: So as part of a former question I already proved that perfect cubes $\pmod 7$ can only have the remainders $0, 1$ and $6$, where the only cubes with remainder $6$ are cubes of the numbers of form $(7k + 3)$ or $(7k+6)$.
Do I now show that $(7k+3)^4 $ and $(7k+6)^4$ are $4 \pmod 7$ and $1\pmod 7$ respectively so $b^4$ can't fit this condition?
Sorry, I have read through a lot of similar questions but I'm still just not really getting it.
Also, why was modulo $7$ chosen? Modulo $13$ was also suggested (which leaves remainders of $1, 5, 8$ and $12$) but I'm wondering if it matters.
REPLY [3 votes]: Modulo 7 may be helpful because only comparatively few remainders are cubes. For example, modulo 5 any remainder is potentially a cube: $0^3\equiv 0$, $1^3\equiv 1$, $3^3\equiv 2$, $2^3\equiv 3$, $4^3\equiv 4\pmod 5$. The reason behind this again is that $7\equiv1\pmod 3$ and $5\not\equiv 1\pmod 3$. (Why is this a reason? The $p-1$ non-zero remainders modulo $p$ form a cyclic group of order $p-1$, and in such a cyclic group it is always possible to "take $n$th roots" of every element for any $n$ except when $n$ is a divisor of $p-1$.)
So as we are here dealing with both third and fourth powers, working modulo $13$ may be even more promising as $13\equiv 1$ both $\pmod 3$ and $\pmod 4$.
Now, possible remainders of cubes modulo 13 are: $0,1,5,8,12$.
And the possible remainders of fourth powers modulo 13 are: $0,1,3,9$. If we add $6$ to the latter we arrive at $6,7,9,2$ and observe that none of the values occurs among the list of possible cubes - so 13 was a lucky choice for our endeavor.
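The residue computations above take three lines to check in Python (the sets printed below are exactly the ones listed in the answer):

```python
# Cubes mod 13 versus (fourth powers + 6) mod 13: disjoint sets,
# so a^3 = b^4 + 6 has no integer solutions.
p = 13
cubes = {pow(x, 3, p) for x in range(p)}
fourths_plus_6 = {(pow(x, 4, p) + 6) % p for x in range(p)}
print(sorted(cubes))            # [0, 1, 5, 8, 12]
print(sorted(fourths_plus_6))   # [2, 6, 7, 9]
print(cubes & fourths_plus_6)   # set() -- empty
```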
REPLY [1 votes]: I thought I would leave this as a hint. You can take $b = 7k, 7k+1,7k+2,..., 7k+6$, and go through each case with your observation there. | {"set_name": "stack_exchange", "score": 3, "question_id": 1725758} |
TITLE: Covariance estimation and Graphical Modelling
QUESTION [2 upvotes]: I've started reading on Convariance Matrix estimation through Graphical model in high-dimensional situation. But I have several questions.
Suppose, $X_i \overset{iid}{\sim} N_p(\mu,\Sigma)$, $I=1, \dots, n$. Define $S=\frac{1}{n}\sum (X_i-\bar{X})(X_i-\bar{X})'$. Then almost in every literature, they claim $S$ is really a 'bad estimator' (or performs poorly) in case of large $p$ small $n$. However I understand $S$ is always non-negative definite and symmetric and will be positive definite (w.p. 1) if $n\geq p+1$. Also $\hat{\Sigma}_{mle}=S$. My question is
1) Why S performs poorly when $p>n$?
2) If $\Sigma$ is sparse why do we use different approach to estimate it (methodology involving Graphical representation of $Q=\Sigma^{-1}$) ?
BTW I am reading Graphical Models by S. Lauritzen, Gaussian Markov Random Fields by Rue and Held. Please feel free to suggest any other books, materials, etc., that you think will help me understand.
REPLY [0 votes]: First, I would like to mention that $S$ is always non-negative definite no matter what $p$ and $n$ are. To claim positive definiteness it's not enough to have $n \ge p + 1$.
Now, as for your questions:
When $p > n$ you basically have $\frac{p(p+1)}2$ parameters to estimate with $n$ observations only. Imagine you want to estimate a 3-dimensional random vector with only one observation. What would your estimate be? Would that be consistent in any sense?
There are many different methods for sparse covariance matrix estimation, but the reason people don't use estimator $S$ (mentioned in OP's question) is that the sparsity pattern is assumed unknown, in other words, you don't know which elements of matrix $\Sigma$ are exactly $0$ and which are not. One of the ways to handle this is to use Graphical models.
I tried to explain the intuition behind the answers but of course mathematically rigorous explanations also exist. | {"set_name": "stack_exchange", "score": 2, "question_id": 1444217} |
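To make point 1 concrete, here is a small numpy sketch (the dimensions $p=50$, $n=10$ are arbitrary choices for illustration): the sample covariance matrix has rank at most $n-1$, so whenever $p>n$ it is singular and cannot estimate a full-rank $\Sigma$ well.

```python
import numpy as np

# p = 50 variables, only n = 10 observations.
rng = np.random.default_rng(1)
p, n = 50, 10
X = rng.standard_normal((n, p))          # rows are observations

# Sample covariance S = (1/n) * sum (X_i - Xbar)(X_i - Xbar)^T.
S = np.cov(X, rowvar=False, bias=True)

# Centering kills one degree of freedom, so rank(S) <= n - 1 = 9 < p.
print(S.shape, np.linalg.matrix_rank(S))  # (50, 50) 9
```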
\section{Introduction}
In this paper, we address the detection problem as follows. A signal (two-sided sequence of reals) $x$ is observed on time horizon $0,1,...,N-1$ according to
$$
y=x_0^{N-1}+\xi,
$$
where $\xi\sim \cN(0,I_N)$ is the white Gaussian noise and $z_0^{N-1}=[z_0;...;z_{N-1}]$. Given $y$ we want to distinguish between two hypotheses:
\begin{itemize}
\item {\sl Nuisance hypothesis:} $x\in H_0$, where $H_0$ is comprised of all {\sl nuisances} -- linear combinations of $d_n$ harmonic oscillations of given frequencies;
\item {\sl Signal hypothesis:} $x\in H_1(\rho)$, where $H_1(\rho)$ is the set of all sequences $x$ representable as $s+u$ with the ``nuisance component'' $u$ belonging to $H_0$ and the ``signal component'' $s$ being a sum of at most $d_s$ harmonic oscillations (of whatever frequencies) such that the {\em uniform} distance, on the time horizon in question, from $x$ to all nuisance signals is at least $\rho$:
$$\min\limits_{z\in H_0}\|x_0^{N-1}-z_0^{N-1}\|_\infty\geq\rho.
$$
\end{itemize}
We are interested in a test which allows one to distinguish, with a given confidence $1-\alpha$, between the above two hypotheses for as small a ``resolution'' $\rho$ as possible.
\par
An approach to this problem which is generally advocated in the signal processing literature is based on frequency estimation. The spectrum of the signal is first estimated using noise subspace methods, such as multiple signal classification (MUSIC) \cite{Pis73,mus1}; then the nuisance spectrum is removed from this estimate, and a decision is taken on whether the remaining ``spectral content'' indicates the presence of a signal or is a noise artifact (for a detailed presentation of these techniques, see \cite{mus2,HanQui}). To the best of our knowledge, no theoretical bounds for the resolution of such tests are available. A different test for the case when no nuisance is present, based on the normalized periodogram, has been proposed in \cite{Fish29}. The properties of this test and of its various modifications were extensively studied in the statistical literature (see, e.g., \cite{Whittle,Han70,Chiu,HanQui}).
However, theoretical results on the power of this test are limited exclusively to the case where the sequence $x$ is a linear combination of Fourier harmonics $e^{2\pi\imath kt/N}$, $k=0,1,...,N-1$, under the signal hypothesis.
\par
In this paper we show that a good solution to the outlined problem is offered by an extremely simple test as follows.
\begin{quote}
{\sl Let $F_Nu=\left\{{1\over \sqrt{N}}\sum_{t=0}^{N-1}u_t\exp\{2\pi \imath kt/N\}\right\}_{k=0}^{N-1}:\C^N\to\C^N$ be the Discrete Fourier Transform. Given the observation $y$, we solve the convex optimization problem
$$
\Opt(y)=\min_z\left\{\|F_N(y-z_0^{N-1})\|_\infty:z\in H_0\right\}
$$
and compare the optimal value with a threshold $q_N(\alpha)$ which is a valid upper bound on the $1-\alpha$-quantile of $\|F_N\xi\|_\infty$, $\alpha\in(0,1)$ being a given tolerance:
$$
\Prob_{\xi\sim \cN(0,I_N)}\left\{\|F_N\xi\|_\infty>q_N(\alpha)\right\}\leq\alpha.
$$
If $\Opt(y)\leq q_N(\alpha)$, we accept the nuisance hypothesis, otherwise we claim that a signal is present.}
\end{quote}
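To make the test concrete, here is a hedged numerical sketch of the nuisance-free special case ($H_0=\{0\}$, so the optimization collapses to $\|F_Ny\|_\infty$), with the threshold $q_N(\alpha)$ estimated by Monte Carlo; the horizon, frequency, and amplitude values are illustrative only and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 256, 0.05

def dft_inf_norm(v):
    # ||F_N v||_inf with the unitary normalization 1/sqrt(N)
    return np.abs(np.fft.fft(v) / np.sqrt(N)).max()

# Monte Carlo estimate of q_N(alpha): the (1-alpha)-quantile of ||F_N xi||_inf
sims = np.array([dft_inf_norm(rng.standard_normal(N)) for _ in range(2000)])
q = np.quantile(sims, 1 - alpha)

def detect(y):
    # nuisance-free test: claim a signal is present iff ||F_N y||_inf > q
    return dft_inf_norm(y) > q

t = np.arange(N)
noise_only = rng.standard_normal(N)
with_signal = 1.5 * np.cos(2 * np.pi * 7 * t / N) + rng.standard_normal(N)
print(detect(noise_only), detect(with_signal))  # false alarms occur with probability <= alpha
```

In the general case, $H_0$ is a known linear subspace and $\Opt(y)$ is a small convex (Chebyshev-fit) program; the comparison with $q_N(\alpha)$ is unchanged.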
It is immediately seen that the outlined test rejects the nuisance hypothesis, when it is true, with probability at most $\alpha$ \footnote{This fact is completely independent of what the nuisance hypothesis is -- it remains true when $H_0$ is an arbitrary set in the space of signals.}. Our main result (Theorem \ref{themain1}) states that {\sl the probability of rejecting the signal hypothesis when it is true is $\leq\alpha$, provided that the resolution $\rho$ is not too small, specifically,
$$
\rho\geq C(d_n+d_s)\sqrt{\ln(N/\alpha)/N}\eqno{(!)}
$$ with an appropriately chosen universal function $C(\cdot)$}.
\par
Some comments are in order.
\begin{itemize}
\item We show that in our detection problem the power of our test is nearly as good as it can be: precisely, for every pair $d_n$, $d_s$ and properly selected $H_0$,
no test can distinguish $(1-\alpha)$-reliably between $H_0$ and $H_1(\rho)$ when $\rho<O(1)d_s\sqrt{\ln(1/\alpha)/N}$; from now on, $O(1)$'s denote appropriately chosen positive absolute constants.
\item We are measuring the resolution in the ``weakest'' of all natural scales, namely, via the {\sl uniform} distance from the signal to the set of nuisances.
When passing from the uniform norm to the normalized Euclidean norm $|x_0^{N-1}|_2=\|x_0^{N-1}\|_2/\sqrt{N}\leq\|x_0^{N-1}\|_\infty$, an immediate lower bound on the resolution
which allows for reliable detection becomes $O(1)\sqrt{\ln(1/\alpha)/N}$. In the case when, as in our setting, signals obeying $H_0$ and $H_1(\rho)$ admit parametric description involving $K$ parameters,
this lower bound, up to a factor logarithmic in $N$ and linear in $K$, is also an upper resolution bound, and the associated test is based on estimating the Euclidean distance from the signal underlying the observations to the nuisance set. Note that, in general, the $|\cdot|_2$-norm can be smaller than $\|\cdot\|_\infty$ by a factor as large as $\sqrt{N}$, and the fact that ``energy-based'' detection allows one to distinguish well between parametric hypotheses ``separated'' by $O(\sqrt{\ln(N/\alpha)/N})$ in the $|\cdot|_2$ norm does {\sl not} automatically imply the possibility of distinguishing between hypotheses separated by $O(\sqrt{\ln(N/\alpha)/N})$ in the uniform norm\footnote{Indeed, let $H_0$ state that the signal is 0, and $H_1(\rho)$ state that the signal is $\geq\rho$ at $t=0$ and is zero for all other $t$'s. These two hypotheses cannot be reliably distinguished unless $\rho\geq O(1)$, that is, the $\|\cdot\|_\infty$ resolution in this case is much larger than $O(\sqrt{\ln(N/\alpha)/N})$.}. The latter possibility exists in the situation we are interested in due to the particular structure of our specific nuisance and non-nuisance hypotheses;
this structure also allows for a dedicated {\sl non-energy-based} test.
\item For the sake of definiteness, throughout the paper we assume that the observation noise is the standard white Gaussian one. This assumption is by no means critical: {\sl whatever is the observation noise, with $q_N(\alpha)$ defined as (an upper bound on) the $(1-\alpha)$-quantile of $\|F_N\xi\|_\infty$,
the above test $(1-\alpha)$-reliably distinguishes between the hypotheses $H_0$ and $H_1(\rho)$, provided that $\rho\geq C(d_n+d_s)q_N(\alpha)/\sqrt{N}$.} For example, the results of Theorems \ref{themain1} and \ref{themain11} remain valid when the observation noise is of the form $\xi=\{\xi_t=\sum_{\tau=-\infty}^\infty\gamma_\tau \eta_{t-\tau}\}_{t=0}^{N-1}$ with deterministic $\gamma_\tau$, $\sum_\tau|\gamma_\tau|\leq1$, and independent $\eta_t\sim\cN(0,1)$.
\item The main observation underlying the results on the resolution of the above test is as follows: {\sl when $x$ is the sum of at most $d$ harmonic oscillations, $\|F_Nx\|_\infty\geq C(d)\sqrt{N}\|x_0^{N-1}\|_\infty$ with some universal positive function $C(d)$.} This observation originates from \cite{Nem81} and, along with its modifications and extensions, was utilized, so far in the denoising setting, in \cite{Nem92,GoNem97,JuNem1,JuNem2}. It is worth mentioning that it also allows one to extend, albeit with degraded constants, the results of Theorems \ref{themain1}
and \ref{themain11} to multi-dimensional setting.
\end{itemize}
\par
The rest of this paper is organized as follows. In section \ref{sect:Problem} we give a detailed description of the detection problems $(P_1)$, $(P_2)$, $(N_1)$ and $(N_2)$ we are interested in (where $(P_2)$ is the problem we have discussed so far).
Our test is presented in section \ref{sectTest} where we also provide associated resolution bounds for these problems. Next in section \ref{sectLower} we present lower bounds on ``good'' (allowing for $(1-\alpha)$-reliable hypotheses testing) resolutions, while in section \ref{sectNumres} we present some numerical illustrations. The proofs of results of sections \ref{sectTest} and \ref{sectLower} are put into section \ref{sec:proofs}.
\begin{document}
\begin{abstract} Let $L$ be a Lie algebra of dimension $n$ over a field $F$ of characteristic $p > 0$. I prove the existence of a faithful completely reducible $L$-module of dimension less than or equal to $p^{n^2-1}$.
\end{abstract}
\maketitle
\section{Introduction}
Let $L$ be a Lie algebra of dimension $n$ over the field $F$. The Ado-Iwasawa Theorem asserts that there exists a faithful finite-dimensional $L$-module $V$. There are several extensions of this result which assert the existence of such a module $V$ with various additional properties. See, for example, Hochschild \cite{Hoch}, Barnes \cite{extras}. Of importance for this paper is Jacobson's Theorem \cite[Theorem 5.5.2]{SF}, that every finite-dimensional Lie algebra $L$ over a field $F$ of characteristic $p>0$ has a finite-dimensional faithful completely reducible module $V$. None of these results sets a bound on the dimension of $V$, unlike the Leibniz algebra analogue \cite{faithful}, which asserts, for a Leibniz algebra of dimension $n$, the existence of a faithful Leibniz module of dimension less than or equal to $n+1$. This raises the question ``Is there an analogous strengthening of the Ado-Iwasawa Theorem?'' that is, ``For a field $F$, does there exist a function $f:\N \to \N$ such that every Lie algebra of dimension $n$ over $F$ has a faithful module of dimension less than or equal to $f(n)$?'' The main purpose of this paper is to prove the following strengthening of Jacobson's Theorem, thereby answering this question in the affirmative for fields $F$ of non-zero characteristic.
\begin{theorem}\label{main} Let $F$ be a field of characteristic $p>0$ and let $L$ be a Lie algebra of dimension $n$ over $F$. Then $L$ has a faithful completely reducible module $V$ with $\dim(V) \le p^{n^2-1}$.
\end{theorem}
In all that follows, $F$ is a field of characteristic $p>0$ and $L$ is a Lie algebra of dimension $n$ over $F$.
\section{Restricted Lie algebras.}
A restricted Lie algebra (see \cite[Chapter 2]{SF}) is a Lie algebra $L$ together with a $p$-operation, that is, a map $\p: L \to L$ such that for $a,b \in L$ and $\lambda \in F$, we have $\ad(a^\scp) = \ad(a)^p$, $(\lambda a)^\scp = \lambda^p a^\scp$ and
$$(a+b)^\scp = a^\scp + b^\scp + \sum_{i=1}^{p-1}s_i(a,b),$$
where the $s_i(a,b)$ are defined by
$$\bigl(\ad(a \otimes X+b \otimes 1)\bigr)^{p-1}(a \otimes 1) = \sum_{i=1}^{p-1}i s_i(a,b) \otimes X^{i-1}$$
in $L \otimes_F F[X]$.
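A standard example may help fix ideas here; it is classical (cf.\ \cite[Chapter 2]{SF}) and is included only as an illustration:

```latex
% Any associative algebra yields a restricted Lie algebra: if $A$ is an
% associative algebra over $F$ with $\operatorname{char} F = p$, then $A^{-}$,
% i.e. $A$ equipped with the bracket $[a,b] = ab - ba$, together with the
% ordinary $p$-th power map $a \mapsto a^{p}$, is restricted. Indeed, the left
% and right multiplication operators $L_a$, $R_a$ commute, so in
% characteristic $p$
\[
  \ad(a)^{p} = (L_a - R_a)^{p} = L_a^{p} - R_a^{p}
             = L_{a^{p}} - R_{a^{p}} = \ad(a^{p}).
\]
% In particular, $\mathfrak{gl}(V)$ for a finite-dimensional space $V$ is
% a restricted Lie algebra.
```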
For convenience of reference, we list here some properties of $p$-operations.
\begin{lemma} \label{siab} Let $(L,\p)$ be restricted Lie algebra. Then
\begin{enumerate}
\item If $[a,b]=0$, then $[a^{\scp^r},b^{\scp^s}]=0$. In particular, $[a^{\scp^r}, a^{\scp^s}]=0$.
\item If $[a,b]=0$, then $(a+b)^\scp = a^\scp + b^\scp$.
\item For all $a,b \in L$, we have $s_i(a,b) \in L'$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Since $\ad(a)b=0$, $\ad(a)^{p^r}b=0$, that is, $[a^{\scp^r}, b]=0$. But $[b,a^{\scp^r}] = 0$ implies that $[b^{\scp^s}, a^{\scp^r} ]=0$.
(2) Since $\ad(a \otimes X+b \otimes 1)(a \otimes 1) = 0$, we have $s_i(a,b)=0$ for all $i$.
(3) Follows immediately from the definition. (In fact, by \cite[Lemma 2.1.2]{SF}, $s_i(a,b) \in L^p$.)
\end{proof}
\begin{lemma} \label{abid} Let $(L,\p)$ be a restricted Lie algebra and let $A$ be an abelian ideal of $L$. Then there exists a $p$-operation $\pd$ on $L$ such that $a^\scpd = 0$ for all $a \in A$.
\end{lemma}
\begin{proof} Take a basis $\{a_1, \dots, a_r\}$ of $A$ and extend this to a basis $\{a_1, \dots, a_n\}$ of $L$. Put $b_i = a_i^\scp$. For $i=1, \dots, r$, we have $\ad(a_i)^2=0$. We replace these $b_i$ with $0$. By Jacobson's Theorem \cite[Theorem 2.2.3]{SF}, there exists a $p$-operation $\pd$ on $L$ such that $a_i^\scpd = 0$ for $i = 1, \dots, r$ (and $a_j^\scpd = b_j$ for $j > r$). From Lemma \ref{siab}(2), it then follows that $a^\scpd = 0$ for all $a \in A$.
\end{proof}
\begin{cor} \label{pid} Let $(L,\p)$ be a restricted Lie algebra. Then there exists a $p$-operation $\pd$ on $L$ such that every abelian minimal ideal of $L$ is a \pd-ideal.
\end{cor}
\begin{proof} The abelian socle $\asoc(L)$ is the sum of all the abelian minimal ideals of $L$. It is an abelian ideal. By Lemma \ref{abid}, there exists a $p$-operation $\pd$ which is zero on $\asoc(L)$, and so, on every abelian minimal ideal. Thus every abelian minimal ideal is a \pd-ideal.
\end{proof}
\begin{theorem} \label{res} Let $(L, \p)$ be a restricted Lie algebra of dimension $n$ over the field $F$ of characteristic $p$. Then $L$ has a faithful completely reducible module of dimension less than or equal to $p^{n-1}$.
\end{theorem}
\begin{proof} By Corollary \ref{pid}, we may suppose that every abelian minimal ideal is a \p-ideal. The result holds for $n=1$. We use induction over $n$.
Suppose that $A_1, A_2$ are distinct minimal \p-ideals of $L$. Then $L/A_i$ is a restricted Lie algebra. Since $\dim(L/A_i) \le n-1$, $L/A_i$ has a faithful completely reducible module $V_i$ with $\dim(V_i) \le p^{n-2}$. But $V_1 \oplus V_2$ is a faithful completely reducible $L$-module and $\dim(V_1 \oplus V_2) \le 2p^{n-2} \le p^{n-1}$.
Suppose that $A$ is the only minimal \p-ideal of $L$. Let $B \subseteq A$ be a minimal ideal of $L$. The representation of $L$ on $B$ is a \p-representation and its kernel $K$ is a \p-ideal. Either $K=0$ or $K \supseteq A$. If $K=0$, then $B$ is a faithful completely reducible $L$-module and the result holds. So we may suppose that $K \supseteq A$. But this implies that $B$ is abelian. By our choice of \p, this implies that $B$ is a \p-ideal and so, that $B=A$. Hence we may assume that our only minimal \p-ideal $A$ is also a minimal ideal and is abelian.
We can take a linear map $c : L \to F$ such that $c(A) \ne 0$. Let $W = \langle w \rangle$ be the $1$-dimensional $A$-module with the action $aw = c(a)w$ for all $a \in A$. Then $W$ has character $c|A$. We form the $c$-induced module $V = u(L,c) \otimes_{u(A,c|A)} W$. See \cite[Chapter 5]{SF}. By \cite[Proposition 5.6.2]{SF}, $\dim(V) = p^{\dim(L/A)} \le p^{n-1}$. Let $V_0$ be the direct sum of the composition factors of $V$. Then $\dim(V_0) \le p^{n-1}$. Note that $A$ acts non-trivially on $V_0$ since it acts non-trivially on the irreducible $A$-submodule $1 \otimes W$ of $V$. If $V_0$ is faithful, the result holds.
Let $\{e_1, \dots, e_k\}$ be a co-basis of $A$ in $L$. Then by \cite[Proposition 5.6.2]{SF}, the $e_1^{r_1}e_2^{r_2} \dots e_k^{r_k} \otimes w$ with the $r_i <p$ form a basis of $V$. For $x = \sum \lambda_i e_i +a$ with $a \in A$, $x(1 \otimes w) = \sum \lambda_i e_i \otimes w + 1 \otimes aw$. If $x(1 \otimes w)=0$ then we must have $\lambda_i = 0$ for all $i$, that is, $x \in A$. Thus the representation of $L$ on $V$ has kernel $\ker(V) \subseteq A$. As $A$ is a minimal ideal and acts non-trivially, we have $\ker(V) = 0$. Thus $V$ is faithful.
So suppose that $V_0$ is not faithful. Then there exists a minimal ideal $B$ whose action on every composition factor of $V$ is trivial. Then $B$ is represented on $V$ by nilpotent linear transformations. But $V$ is faithful, so by Engel's Theorem for algebras of linear transformations, $B$ is nilpotent. But $B'$ is an ideal of $L$, so we must have $B'=0$. By our choice of \p, $B$ is a \p-ideal of $L$, contrary to $A$ being the only minimal \p-ideal of $L$. Therefore $V_0$ is faithful.
\end{proof}
\section{Minimal $p$-envelopes.}
Let $(L^e,\p)$ be a minimal $p$-envelope of $L$. We investigate $\dim(L^e)$. Note that, by \cite[Theorem 2.5.8(1)]{SF}, $\dim(L^e)$ is independent of the choice of minimal $p$-envelope. Let $Z$ be the centre of $L^e$. By \cite[Theorem 2.5.8(3)]{SF}, $Z \subseteq L$. By \cite[Proposition 2.1.3(2)]{SF}, $(L^e)' \subseteq L$.
\begin{lemma} \label{idL} Let $A$ be an ideal of $L$. Then $A$ is an ideal of $L^e$.
\end{lemma}
\begin{proof} The set $\{x\in L^e \mid \ad(x)A \subseteq A\}$ is a \p-subalgebra of $L^e$ and contains $L$.
\end{proof}
\begin{lemma} \label{sums} Let $a_1, \dots, a_r \in L^e$ and $\lambda_1, \dots , \lambda_r \in F$. Then
$$ (\sum_{i=1}^r \lambda_i a_i)^\scp = \sum_{i=1}^r \lambda_i^p a_i^\scp + k$$
for some $k \in L$.
\end{lemma}
\begin{proof} From the definition of a $p$-operation, we have $(\lambda_i a_i)^\scp = \lambda_i^p a_i^\scp$. The result holds for $r=2$ by Lemma \ref{siab}(3) since $(L^e)' \subseteq L$. So $(\lambda_1 a_1 + \dots + \lambda_r a_r)^\scp = (\lambda_1 a_1 + \dots \lambda_{r-1} a_{r-1})^\scp + \lambda_r^p a_r^\scp +k_1$ for some $k_1 \in L$. But by induction, $(\lambda_1 a_1 + \dots + \lambda_{r-1} a_{r-1})^\scp = \lambda_1^p a_1^\scp+ \dots + \lambda_{r-1}^p a_{r-1}^\scp +k_2$ for some $k_2 \in L$. The result follows.
\end{proof}
\begin{lemma} \label{powers} Let $x \in L$ and let $V= \langle x^{\scp^i} \mid i=1,2, \dots \rangle$ be the space spanned by the $x^{\scp^i}$. Then $\dim((V+L)/L) \le n$.
\end{lemma}
\begin{proof}
We have $\ad(x)L^e \subseteq L$. The maps $\ad(x^{\scp^i})| L \to L$ are powers of $\ad(x)|L$. So they span a subspace of $\Hom(L,L)$ of dimension at most $n$. For some $r \le n-1$, the maps $\ad(x)|L, \ad(x^\scp)|L, \dots, \ad(x^{\scp^r})|L$ are linearly independent with
$$\ad(x^{\scp^{r+1}})|L = \sum_{i=0}^r \lambda_i \ad(x^{\scp^i})|L$$
for some $\lambda_i \in F$. Put $y = x^{\scp^{r+1}} - \sum_{i=0}^r \lambda_i x^{\scp^i}$. Then $\ad(y)L^e \subseteq L$ and $\ad(y)L = 0$. Thus $\ad(y)^pL^e = 0$ and it follows that $y^\scp \in Z \subseteq L$.
By Lemma \ref{siab}(1) and (2), $y^\scp = x^{\scp^{r+2}} - \sum_{i=0}^r \lambda_i^p x^{\scp^{i+1}}$. Thus $x^{\scp^{r+2}} \in \langle x^\scp, \dots, x^{\scp^{r+1}} \rangle + Z$. Suppose that $x^{\scp^{r+s}} \in \langle x^\scp, \dots, x^{\scp^{r+1}} \rangle + Z$. Then $x^{\scp^{r+s}} = \mu_1x^\scp + \dots + \mu_{r+1}x^{\scp^{r+1}} +z$ for some $\mu_i \in F$ and $z \in Z$. By Lemma \ref{siab}(1) and (2), $x^{\scp^{r+s+1}} = \mu_1^p x^{\scp^2} + \dots + \mu_{r+1}^px^{\scp^{r+2}} +z^\scp$. Since $x^{\scp^{r+2}} \in \langle x^\scp, \dots, x^{\scp^{r+1}} \rangle + Z$ and $z^\scp \in Z$, we have $x^{\scp^{r+s+1}} \in \langle x^\scp, \dots, x^{\scp^{r+1}} \rangle + Z$. It follows by induction over $s$, that $\langle x^\scp, \dots, x^{\scp^{r+1}} \rangle + Z = V+Z$ and so, that $\dim((V+L)/L) \le r+1 \le n$.
\end{proof}
\begin{theorem} \label{env} Let $L$ be a Lie algebra of dimension $n$ over the field $F$ of characteristic $p>0$ and let $A$ be an abelian ideal of $L$ with $\dim(A)=d$. Let $(L^e, \p)$ be a minimal $p$-envelope of $L$. Then $\dim(L^e) \le n(n-d+1)$.
\end{theorem}
\begin{proof} We choose a basis $\{e_1, \dots, e_n\}$ of $L$ with $e_{n-d+1}, \dots, e_n \in A$. By Lemma \ref{abid}, we may suppose that $a^\scp = 0$ for all $a \in A$. Then the $e_i^{\scp^j} = 0$ for $i > n-d$ and $j > 0$. For each $i$, let $V_i$ be the subspace of $L^e$ spanned by the $e_i^{\scp^j}$ (including $j=0$) and let $V = \sum_i V_i$. Then $V \supseteq L$ and $V/L = \sum_{i=1}^{n-d} (V_i+L)/L$. By Lemma \ref{powers}, $\dim((V_i+L)/L) \le n$, so $\dim(V/L) \le n(n-d)$ giving $\dim(V) \le n(n-d+1)$. But $(L^e)' \subseteq L$, so $V$ is a subalgebra of $L^e$. By Lemma \ref{sums}, $v^\scp \in V$ for all $v \in V$. Thus $V$ is a $p$-envelope of $L$, so $V = L^e$. \end{proof}
\section{The main result.}
\begin{proof}[Proof of Theorem \ref{main}] We use induction over $n$. Suppose that $A_1,A_2$ are distinct minimal ideals of $L$. Then $L/A_i$ has a faithful completely reducible module $V_i$ with $\dim(V_i) \le p^{(n-1)^2 -1}$ and $V_1 \oplus V_2$ is a module satisfying all the requirements. So suppose that $A$ is the only minimal ideal of $L$. If $A$ is non-abelian, then $A$ is an $L$-module satisfying the requirements, so suppose that $A$ is abelian.
We take a minimal $p$-envelope $(L^e,\p)$ of $L$. As $\dim(A) \ge 1$, by Theorem \ref{env}, we have $\dim(L^e) \le n^2$. By Theorem \ref{res}, $L^e$ has a faithful completely reducible module $V$ with $\dim(V) \le p^{n^2-1}$. There is some irreducible summand $V_0$ of $V$ on which $A$ acts non-trivially. By Lemma \ref{idL}, $A$ is an ideal of $L^e$ and it follows that $V_0^A := \{v \in V_0 \mid Av=0\}$ is an $L^e$-submodule of $V_0$. Therefore $V_0^A =0$. Let $V_1$ be an irreducible $L$-submodule of $V_0$. Since $V_1^A \subseteq V_0^A$, we have $V_1^A = 0$. But $A$ is the only minimal ideal of $L$. As it is not in the kernel of the representation of $L$ on $V_1$, $V_1$ is a faithful $L$-module.
\end{proof}
\begin{remark} We have a function $f: \N \to \N$, namely $f(n)=p^{n^2-1}$, such that every Lie algebra of dimension $n$ over a field of characteristic $p$ has a faithful completely reducible module of dimension less than or equal to $f(n)$. We cannot replace this with a function independent of $p$, for suppose that $f:\N \to \N$ were claimed to be such a function. The smallest faithful completely reducible module for the non-abelian algebra of dimension $2$ has dimension $p$, so this algebra over a field of characteristic $p > f(2)$ is a counterexample. This does not rule out the possibility, if we drop the requirement of complete reducibility, of there being a function $f$ independent of $p$ such that every Lie algebra of dimension $n$ over a field of non-zero characteristic has a faithful module of dimension less than or equal to $f(n)$.
\end{remark}
It is not claimed that any of the bounds given in this paper are best possible.
\bibliographystyle{amsplain}
\begin{document}
\begin{frontmatter}
\title{Asymptotic normality for eigenvalue statistics of a general sample covariance matrix
when $p/n \to \infty$ and applications}
\runtitle{CLT for LSS}
\begin{aug}
\author{\fnms{Jiaxin} \snm{Qiu}\ead[label=e1]{qiujx@connect.hku.hk}},
\author{\fnms{Zeng} \snm{Li}\ead[label=e2]{liz9@sustech.edu.cn}}
\and
\author{\fnms{Jian-feng} \snm{Yao}\ead[label=e3]{jeffyao@hku.hk}}
\address{Department of Statistics and Actuarial Science,\\
The University of Hong Kong}
\address{Department of Statistics and Data Science,\\
Southern University of Science and Technology}
\end{aug}
\begin{abstract}
The asymptotic normality for a large family of eigenvalue
statistics of a general sample covariance matrix is derived
under the ultra-high dimensional setting, that is, when the dimension to
sample size ratio $p/n \to \infty$.
Based on this CLT result, we first adapt the
covariance matrix test problem to the new ultra-high dimensional context.
Then as a second application, we develop a new test
for the separable covariance structure of a matrix-valued white
noise.
Simulation experiments are conducted
for the investigation of finite-sample properties of the general
asymptotic normality of eigenvalue statistics, as well as the second
test for separable covariance structure of matrix-valued white noise.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{62H10}
\kwd[; secondary ] {62H15}
\kwd{Asymptotic normality, linear spectral statistics, general
sample covariance matrix, ultra-dimension, matrix white noise,
separable covariance}
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec:intro}
Let $\by\in\mathbb{R}^p$ be a population of the form $\by=\bSigma_p^{\nicefrac{1}{2}}\bx$ where $\bSigma_p$ is a $p\times p$ positive definite matrix, $\bx\in\mathbb{R}^p$ a $p$-dimensional random vector with independent and identically distributed (i.i.d.) components with zero mean and unit variance. Given an i.i.d. sample $\left\{\by_j=\bSigma_p^{\nicefrac{1}{2}}\bx_j,~1\leq j\leq n\right\}$ of $\by$, the sample covariance matrix is $\bS_n=\frac{1}{n}\sum_{j=1}^n\by_j\by_j'=\frac{1}{n}\bSigma_p^{\nicefrac{1}{2}}\bX\bX'\bSigma_p^{\nicefrac{1}{2}}$, where $\bX=(\bx_1, \bx_2, \ldots, \bx_n)$. We consider the ultra-high dimensional setting where $n\to \infty, p=p(n)\to\infty$ such that $p/n\to \infty$. The $p\times p$ matrix $\bS_n$ has only a small number of non-zero eigenvalues which are the same as those of its $n\times n$ companion matrix $\underline{\bS}_n=\frac{1}{n}\bX'\bSigma_p\bX$. The limiting distribution of these non-zero eigenvalues is known (see \citet{bai1988convergence}, \citet{wang2014limiting}). Precisely, consider the re-normalized sample covariance matrix
\begin{equation}\label{eq:A_def}
\bA_n=\cfrac{1}{\sqrt{npb_p}}\lb \bX'\bSigma_p \bX-pa_p \bI_n\rb,
\end{equation}
where $\bI_n$ is the identity matrix of order $n$, $a_p = \frac{1}{p}\tr (\bSigma_p)$, $b_p = \frac{1}{p}\tr (\bSigma_p^2)$. Denote the eigenvalues of $\bA_n$ as $\lambda_1,\cdots,\lambda_n$. According to \citet{wang2014limiting}, under the condition that $\sup_p \|\bSigma_p\|<\infty$, the eigenvalue distribution of $\bA_n$, i.e. $F^{\bA_n}=\frac{1}{n}\sum_{i=1}^n\delta_{\lambda_i}$ converges to the celebrated semi-circle law. In this paper, we focus on the so-called linear spectral statistics (LSS) of $\bA_n$, i.e. $\frac{1}{n}\sum_{i=1}^n f(\lambda_i)$ where $f(\cdot)$ is any smooth function we are interested in.
The main contribution of this paper is to establish the central limit theorem (CLT) for LSS of $\bA_n$ under the ultra-high dimensional setting. The study of fluctuations of LSS for different types of random matrix models has received extensive attention in the past decades; see the monographs \citet{BSbook,couillet2011random,yao2015sample}. It plays a very important role in high dimensional data analysis because many well-established statistics can be represented as LSS of sample covariance or correlation matrices. In facing the curse of dimensionality, most asymptotic results are discussed under the \emph{Marchenko-Pastur} asymptotic regime, where $p/n\rightarrow c\in (0,\infty)$. However, this does not fit the case of ultra-high dimension when $p\gg n$. Hence in this paper we re-examine the asymptotic behavior of LSS of $\bA_n$ when $n\to \infty, p=p(n)\to\infty$ such that $p/n\to \infty$.
A special version of $\bA_n$ for the case where $\bSigma_p=\bI_p$ has already been studied in the literature. The matrix becomes
\begin{equation}\label{eq:A_def_iden}
\bA_n^{\mathsf{iden}}=\cfrac{1}{\sqrt{np}}\lb \bX' \bX- p\bI_n\rb.
\end{equation}
\citet{bai1988convergence} is the first to study this matrix. They proved that the ultra-high dimensional limiting eigenvalue distribution of $\bA_n^{\mathsf{iden}}$ is the semi-circle law.
\citet{chen2012convergence} studied the behavior of the largest eigenvalue of $\bA_n^{\mathsf{iden}}$.
\citet{chen2015clt} and \citet{bao2015asymptotic} independently established the CLT for LSS of $\bA_n^{\mathsf{iden}}$, the limiting variance function of which coincides with that of a Wigner matrix given in \citet{BaiYao2005}. From these results, we know that some spectral properties of $\bA_n^{\mathsf{iden}}$ are similar to those of a $n\times n$ Wigner matrix.
Indeed, the general matrix $\bA_n$ also has some spectral properties similar to those of a Wigner matrix. In particular, the eigenvalue distribution of $\bA_n$, $F^{\bA_n}$, also converges to the semi-circle law. However, the second order fluctuations for LSS of $\bA_n$ are quite different and worth further investigation.
In this paper, we establish the CLT for LSS of $\bA_n$. The general strategy of the proof follows that of \citet{BaiYao2005} for the CLT for LSS of a large Wigner matrix. However, the calculations are more involved here as the matrix $\bA_n$ is a quadratic function of the independent entries $(X_{ij})$ while a Wigner matrix is a linear function of its entries. Similar to \citet{chen2015clt}, a key step is to establish the CLT for some smooth integral of the Stieltjes transform $M_n(z)$ of $\bA_n$, see Proposition~\ref{prop:Mn_CLT}.
To derive the limiting mean and covariance functions, we divide $M_n(z)$ into two parts: a non-random part and a random part. Our approaches to handle these two parts are technically different. For the random part, we follow a method in \citet{chen2015clt} which depends heavily on an explicit expression for $\tr\bigl(\bM_k^{(1)}\bigr)/(npb_p)$ (see Section~\ref{sec:conv_Mn1_proof} for more details). This explicit expression does not exist in our matrix model, so we need to provide a first-order approximation for it, which is given in Lemma~\ref{lem:Mk_limit}. For the non-random part, we
utilize the generalized Stein's equation to find the asymptotic expansion of the expectation of the Stieltjes transform, which sheds some new light on conventional procedures.
To demonstrate the potential of our newly established CLT, we further study two hypothesis testing problems about population covariance matrices. First, we examine the identity hypothesis $H_0:~\bSigma_p=\bI_p$ under the ultra-high dimensional setting and compare it with cases of relatively low dimensions. Next, we consider the hypothesis that a matrix-valued noise has a separable covariance matrix.
For a sequence of i.i.d. $p_1 \times p_2 $ matrices $\{\bE_t\}_{1\leqslant t\leqslant T}$, we adopt a Frobenius-norm-type statistic to test whether the covariance matrix of $\mathsf{vec}(\bE_t)$ is separable, i.e. $\Cov(\mathsf{vec}(\bE_t))=\bSigma_1\otimes \bSigma_2$, where $\bSigma_1$ and $\bSigma_2$ are two given $p_1\times p_1$ and $p_2\times p_2$ nonnegative definite matrices. Here $p_1, p_2$ and $T$ are of comparable magnitude. Our test statistic can be represented as a LSS of the sample covariance matrix with dimension $p_1p_2$ much larger than the sample size $T$. Therefore our CLT can then be employed to derive the null distribution and perform power analysis of the test. Good numerical performance lends full support to the correctness of our CLT results.
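The paper's test statistic is defined later; purely for intuition, here is a hedged sketch of one natural Frobenius-norm-type discrepancy under a known separable null. All matrix choices and sizes below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(4)
p1, p2, T = 4, 5, 50
Sigma1 = np.diag(np.arange(1.0, p1 + 1))   # illustrative null matrices
Sigma2 = np.eye(p2)

# simulate matrix white noise with separable covariance Sigma1 (x) Sigma2:
# E_t = A Z_t B', Z_t iid standard Gaussian, A A' = Sigma1, B B' = Sigma2
A = np.linalg.cholesky(Sigma1)
B = np.linalg.cholesky(Sigma2)
E = np.array([A @ rng.standard_normal((p1, p2)) @ B.T for _ in range(T)])

V = E.reshape(T, p1 * p2)                  # rows are (row-major) vec(E_t)
S = V.T @ V / T                            # sample covariance, dimension p1*p2 >> possible T
stat = np.linalg.norm(S - np.kron(Sigma1, Sigma2)) ** 2   # Frobenius discrepancy
print(stat)
```

With the row-major vectorization used here, $\Cov(\mathsf{vec}(\bE_t))=\bSigma_1\otimes\bSigma_2$ corresponds to `np.kron(Sigma1, Sigma2)`; since the statistic is a quadratic functional of a sample covariance matrix with $p_1p_2$ possibly much larger than $T$, its null fluctuations fall under the CLT regime studied in this paper.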
The paper is organized as follows. Section~\ref{sec:pre_notation} provides preliminary knowledge of some technical tools. Section~\ref{sec:main_result} establishes our main CLT for LSS of $\bA_n$. Section~\ref{sec:application_hypothesis_testing} contains two hypothesis testing applications. Section~\ref{sec:simulation} reports numerical studies. Technical proofs and lemmas are relegated to Section~\ref{sec:proof_CLT_1} and Appendices.
Throughout the paper, we reserve boldfaced symbols for vectors and matrices. For any matrix $\bA$, we let $A_{ij}$, $\lambda_j^{\bA}$, $\bA'$, $\tr(\bA)$ and $\|\bA\|$ represent, respectively, its $(i, j)$-th element, its $j$-th largest eigenvalue, its transpose, its trace, and its spectral norm (i.e., the largest singular value of $\bA$). $\indicator_{\{\cdot\}}$ stands for the indicator function. For the random variable $X_{11}$, we denote the $a$-th moment of $X_{11}$ by $\nu_{a}$ and the $a$-th cumulant of $X_{11}$ by $\kappa_a$.
We use $K$ to denote constants which may vary from line to line. For simplicity, we sometimes omit the variable $z$ when representing some matrices and functions (e.g. Stieltjes transforms) of $z$, provided that it does not lead to confusion.
\section{Preliminaries}\label{sec:pre_notation}
In this section, we introduce some useful preliminary results.
For any $n\times n$ Hermitian matrix $\bB_n$, its empirical spectral distribution (ESD) is defined by
\[
F^{\bB_n}(x) = \frac{1}{n}\sum_{i=1}^n \indicator_{\{\lambda_i^{\bB_n} \leqslant x\}}.
\]
If $F^{\bB_n}(x)$ converges to a non-random limit $F(x)$ as $n\to\infty$, we call $F(x)$ the limiting spectral distribution (LSD) of $\bB_n$.
As for the LSD of $\bA_n$ defined in \eqref{eq:A_def},
\cite{wang2014limiting} derived the LSD of re-normalized sample covariance matrices of the more general form
\begin{equation}\label{eq:Cp_wang2014limiting}
\bC_n = \sqrt{\frac{p}{n}} \biggl( \frac{1}{p} \bT_{n}^{1/2}\bX_n^*\bSigma_{p}\bX_n\bT_{n}^{1/2} - \frac{1}{p} \tr(\bSigma_{p})\bT_{n} \biggr),
\end{equation}
where $\bX_n$ and $\bSigma_p$ are the same as those in \eqref{eq:A_def}. $\bT_{n}$ is an $n\times n$ nonnegative definite Hermitian matrix, whose ESD, $F^{\bT_{n}}$, converges weakly to $H$, a nonrandom distribution function on $\mathbb{R}^{+}$ which does not degenerate to zero.
The LSD of $\bC_n$ is described in terms of its Stieltjes transform. The Stieltjes transform of any cumulative distribution function $G$ is defined by
\[
m_G(z) = \int \frac{1}{\lambda-z} \dif G(\lambda),\qquad z\in\mathbb{C}^{+}:=\{u+iv,\, u\in\mathbb{R}, v>0\}.
\]
\cite{wang2014limiting} proved that, when $p\wedge n\to\infty$ and $p/n\to\infty$, $F^{\bC_n}$ almost surely converges to a nonrandom distribution, whose Stieltjes transform $m_{\bC}(z)$ satisfies the following system of equations:
\begin{equation}\label{eq:wang2014limiting_LSD}
\begin{cases}
m_{\bC}(z)= -\int\frac{\dif H(x)}{z+x\theta g(z)},\\[0.5em]
g(z)= -\int\frac{x \dif H(x)}{z+x\theta g(z)},
\end{cases}
\end{equation}
for any $z\in\mathbb{C}^{+}$, where $\theta=\lim\limits_{p\to\infty}(1/p)\tr(\bSigma_{p}^2)$.
Note that $\bA_n$ is a special case of $\bC_n$ with $\bT_n=\bI_n$. By \eqref{eq:wang2014limiting_LSD} we can easily show that the Stieltjes transform $m_{\bA}(z)$ of LSD of $\bA_n$ satisfies
\begin{equation}\label{eq:SCLaw_equation}
m_{\bA}(z)=-\frac{1}{z+m_{\bA}(z)},
\end{equation}
which is exactly the Stieltjes transform of the semi-circle law with density function given by
\begin{equation}\label{eq:semi-circle-law}
F'(x) =\frac{1}{2\pi}\sqrt{4-x^2}\,\indicator_{\{ |x|\leqslant 2\}}.
\end{equation}
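Solving the quadratic $m^2+zm+1=0$ implied by \eqref{eq:SCLaw_equation} gives the closed form $m(z)=\bigl(-z+\sqrt{z^2-4}\bigr)/2$, with the square-root branch chosen so that $m(z)\in\mathbb{C}^{+}$ for $z\in\mathbb{C}^{+}$. As a quick numerical sanity check (our own sketch, not part of the derivation), one can verify both the fixed-point equation and the integral definition against the density \eqref{eq:semi-circle-law} at a test point:

```python
import numpy as np

def m_semicircle(z):
    # Stieltjes transform of the semicircle law on [-2, 2]:
    # m(z) = (-z + sqrt(z^2 - 4)) / 2, taking the square-root branch
    # whose imaginary part has the same sign as Im z.
    s = np.sqrt(z * z - 4 + 0j)
    if s.imag * z.imag < 0:
        s = -s
    return (-z + s) / 2

z = 0.3 + 0.5j
m = m_semicircle(z)

# Fixed-point equation m(z) = -1 / (z + m(z)).
assert abs(m + 1.0 / (z + m)) < 1e-12

# Integral definition against the semicircle density F'(x).
lam = np.linspace(-2.0, 2.0, 200001)
dens = np.sqrt(np.maximum(4.0 - lam**2, 0.0)) / (2.0 * np.pi)
integrand = dens / (lam - z)
m_num = ((integrand[:-1] + integrand[1:]) / 2 * np.diff(lam)).sum()
assert abs(m_num - m) < 1e-4
```

Both checks hold to numerical precision: the first confirms \eqref{eq:SCLaw_equation}, the second the definition of the Stieltjes transform.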
Hereafter we use $m(z)$ to represent $m_{\bA}(z)$ for ease of presentation.
\section{Main Results}\label{sec:main_result}
Let $\mathscr{U}$ denote any open region on the complex plane including $[-2,2]$ and $\mathscr{M}$ be the set of analytic functions defined on $\mathscr{U}$.
For any $f\in \mathscr{M}$, we consider a LSS of $\bA_n$ of the form:
\[
\int f(x) \dif F^{\bA_n}(x) = \frac{1}{n} \sum_{i=1}^n f(\lambda_i^{\bA_n}).
\]
Since $F^{\bA_n}$ converges to $F$ almost surely, we have
\[
\int f(x) \dif F^{\bA_n}(x) \to \int f(x) \dif F(x).
\]
A question naturally arises: how fast does
$\int f(x) \dif\left\{F^{\bA_n}(x) -F(x)\right\} $ converge to zero?
To answer this question, we consider a re-normalized functional:
\begin{equation}\label{eq:LSS}
G_n(f)= n\int_{-\infty}^{+\infty}f(x) \mathrm{d} \left\{ F^{\bA_n}(x)-F(x)\right\}-\frac{n}{2\pi i}\oint_{|m|=\rho}f(-m-m^{-1})\calX_n(m)\frac{1-m^2}{m^2}\dif m,
\end{equation}
where
\begin{equation}\label{eq:calX}
\begin{split}
\mathcal{X}_n(m) &= \frac{-\calB+\sqrt{\calB^2-4\calA\calC}}{2\calA},\qquad \calA = m-\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\bigl(1+m^2\bigr), \\[0.5em]
\calC &= \frac{m^3}{n}\biggl\{ \frac{1}{1-m^2}+\frac{(\nu_4-3)\tilde{b}_p}{b_p} \biggr\} - \sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} m^4 +\frac{n}{p}\biggl(-\frac{c_p^2}{b_p^3} +\frac{d_p}{b_p^2}\biggr)m^5,\\[0.5em]
\calB &=m^2-1-\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}m(1+2m^2),~a_p = \frac{1}{p}\tr (\bSigma_p)\\[0.5em]
b_p &= \frac{1}{p}\tr (\bSigma_p^2),~\tilde{b}_p = \frac{1}{p}\sum_{i=1}^p \sigma_{ii}^2,~c_p =\frac{1}{p}\tr (\bSigma_p^3), ~ d_p = \frac{1}{p}\tr (\bSigma_p^4),
\end{split}
\end{equation}
where $\rho < 1$ and $\sqrt{\calB^2-4\calA\calC}$ is the complex square root whose imaginary part has the same sign as that of $\calB$.
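The branch condition on $\sqrt{\calB^2-4\calA\calC}$ matters when evaluating $\calX_n(m)$ numerically. A minimal helper (our own sketch; the coefficient names follow \eqref{eq:calX}, and the inputs `b`, `b_tilde`, `c`, `d` stand for $b_p$, $\tilde b_p$, $c_p$, $d_p$) that enforces the branch and returns a genuine root of $\calA\calX^2+\calB\calX+\calC=0$:

```python
import numpy as np

def cal_X(m, n, p, nu4, b, b_tilde, c, d):
    """Evaluate X_n(m): the root (-B + sqrt(B^2 - 4AC)) / (2A) of
    A X^2 + B X + C = 0, with Im sqrt(.) matching the sign of Im B."""
    r = np.sqrt(n / p) * c / (b * np.sqrt(b))
    A = m - r * (1.0 + m**2)
    B = m**2 - 1.0 - r * m * (1.0 + 2.0 * m**2)
    C = ((m**3 / n) * (1.0 / (1.0 - m**2) + (nu4 - 3.0) * b_tilde / b)
         - r * m**4 + (n / p) * (-c**2 / b**3 + d / b**2) * m**5)
    s = np.sqrt(B**2 - 4.0 * A * C + 0j)
    if s.imag * B.imag < 0:        # enforce sign(Im sqrt) = sign(Im B)
        s = -s
    return (-B + s) / (2.0 * A), (A, B, C)

# Example: a point on the contour |m| = 0.4 < 1, with Sigma_p = I_p.
m = 0.4 * np.exp(0.7j)
X, (A, B, C) = cal_X(m, n=100, p=100**2, nu4=3.0,
                     b=1.0, b_tilde=1.0, c=1.0, d=1.0)
assert abs(A * X**2 + B * X + C) < 1e-12   # X is indeed a root
```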
We now establish the asymptotic normality of $G_n(f)$; the main result is formulated in the theorem below.
\begin{theorem}\label{thm:CLT}
Suppose that
\begin{enumerate}
\item[(A)]\label{itm:asum1} ${\bf X}=(X_{ij})_{p\times n}$ where $\{X_{ij},~1\leqslant i \leqslant p,~ 1\leqslant j\leqslant n\}$ are i.i.d. real random variables with $\Expe X_{ij}=0$, $\Expe X_{ij}^2=1$, $\Expe X_{ij}^4=\nu_4$ and $\Expe |X_{ij}|^{6+\varepsilon_0}<\infty$ for some small positive $\varepsilon_0$;
\item[(B)]\label{itm:asum2} $\{\bSigma_p,~p\geq 1\}$ is a sequence of non-negative definite matrices, bounded in spectral norm, such that the following limits exist:
\begin{itemize}
\item $\gamma=\lim_{p\to \infty} \frac{1}{p}\tr (\bSigma_p)$,
\item $\theta=\lim_{p\to \infty} \frac{1}{p}\tr (\bSigma_p^2)$,
\item $\omega=\lim_{p\to \infty} \frac{1}{p}\sum_{i=1}^p \sigma_{ii}^2$;
\end{itemize}
\item[(C1)] \label{item:np0} $p\wedge n\to\infty$ and $n^2/p=O(1)$.
\end{enumerate}
Then, for any $f_1,\cdots,f_k\in \mathscr{M}$, the finite dimensional random vector $\lb G_n(f_1),\cdots,G_n(f_k)\rb$ converges weakly to a Gaussian vector $\lb Y(f_1),\cdots, Y(f_k)\rb$ with mean function $\Expe Y(f) = 0$
and covariance function
\begin{align}
\Cov\lb Y(f_1), Y(f_2)\rb&=\frac{\omega}{\theta}(\nu_4-3)\Psi_1(f_1)\Psi_1(f_2)+2\sum_{k=1}^{\infty}k\Psi_k(f_1)\Psi_k(f_2)\label{eq:cov_1}\\
&=\frac{1}{4\pi^2}\int_{-2}^2\int_{-2}^2f_1'(x)f_2'(y)H(x,y)\dif x \dif y,\label{eq:cov_2}
\end{align}
where
\begin{equation}\label{eq:Phi_k}
\Psi_k(f)=\cfrac{1}{2\pi}\int_{-\pi}^{\pi}f(2\cos\theta)e^{ik\theta}\dif \theta=\cfrac{1}{2\pi}\int_{-\pi}^{\pi}f(2\cos\theta)\cos k\theta \dif \theta,
\end{equation}
\begin{equation}\label{eq:Hxy}
H(x,y)=\frac{\omega}{\theta}(\nu_4-3)\sqrt{4-x^2}\sqrt{4-y^2}+2\log\lb \cfrac{4-xy+\sqrt{(4-x^2)(4-y^2)}}{4-xy-\sqrt{(4-x^2)(4-y^2)}}\rb.
\end{equation}
\end{theorem}
The proof of Theorem \ref{thm:CLT} is postponed to Section \ref{sec:proof_CLT_1}.
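As a quick sanity check of \eqref{eq:cov_1} (our own numerical sketch): for $f(x)=x^2$ one has $f(2\cos\theta)=2+2\cos 2\theta$, so $\Psi_2(f)=1$ while all other $\Psi_k(f)$ vanish, and \eqref{eq:cov_1} gives $\Var Y(f)=2\cdot 2\cdot 1^2=4$ (the $\nu_4$ term drops since $\Psi_1(f)=0$). Numerical quadrature of \eqref{eq:Phi_k} confirms this:

```python
import numpy as np

def Psi(f, k, m=200001):
    # Psi_k(f) = (1/2pi) * int_{-pi}^{pi} f(2 cos t) cos(k t) dt,
    # evaluated by the trapezoid rule (spectrally accurate for this
    # smooth periodic integrand).
    t = np.linspace(-np.pi, np.pi, m)
    g = f(2.0 * np.cos(t)) * np.cos(k * t)
    return ((g[:-1] + g[1:]) / 2 * np.diff(t)).sum() / (2.0 * np.pi)

f = lambda x: x**2
assert abs(Psi(f, 2) - 1.0) < 1e-8         # Psi_2(f) = 1
assert abs(Psi(f, 1)) < 1e-8               # Psi_1(f) = 0

# Var Y(f) from eq. (cov_1); the (nu_4 - 3) term vanishes with Psi_1 = 0.
var = 2.0 * sum(k * Psi(f, k)**2 for k in range(1, 9))
assert abs(var - 4.0) < 1e-6               # matches Var Y(f) = 4
```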
\begin{remark}
Note that we require $p\geqslant Kn^2$ asymptotically in Assumption (C1), while the situation of $n\ll p \ll n^2$ remains unknown.
\end{remark}
\begin{remark}
If $\bSigma_p = \bI_p$, we have $a_p=b_p=\tilde{b}_p=c_p=d_p=1$ and $\gamma=\theta=\omega=1$. Our Theorem~\ref{thm:CLT} reduces to the CLT derived in \citet{chen2015clt}.
\end{remark}
Applying Theorem~\ref{thm:CLT} to three polynomial functions, we obtain the following corollary.
\begin{corollary}\label{coro:CLT-x-x2-x3}
Under the same notation and assumptions as in Theorem \ref{thm:CLT}, for the three analytic functions $f_1(x)=x$, $f_2(x)=x^2$, $f_3(x)=x^3$, we have
\begin{align*}
G_n(f_1) &= \tr(\bA_n) \convd \calN\Bigl(0, \frac{\omega}{\theta}(\nu_4-3)+2\Bigr);\\[0.5em]
G_n(f_2) &= \tr(\bA_n^2) - n - \Bigl\{\frac{\tilde{b}_p}{b_p}(\nu_4-3)+1\Bigr\} \convd \calN(0, 4);\\[0.5em]
G_n(f_3) &= \tr(\bA_n^3) - \frac{c_p}{b_p\sqrt{b_p}}\sqrt{\frac{n}{p}}\Bigl\{ n+1+\frac{\tilde{b}_p}{b_p}(\nu_4-3) \Bigr\} \convd \calN\Bigl(0, \frac{9\omega}{\theta}(\nu_4-3)+24\Bigr).
\end{align*}
\end{corollary}
The calculations in these applications are elementary and thus omitted. Note that the mean correction terms for $G_n(f_1)$, $G_n(f_2)$, and $G_n(f_3)$ are $0$, $\frac{\tilde{b}_p}{b_p}(\nu_4-3)+1$, and $\frac{c_p}{b_p\sqrt{b_p}}\sqrt{\frac{n}{p}}\bigl\{ n+1+\frac{\tilde{b}_p}{b_p}(\nu_4-3) \bigr\}$, respectively.
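For instance, the first limit can be checked by simulation. With $\bSigma_p=\bI_p$ and $\bT_n=\bI_n$, \eqref{eq:Cp_wang2014limiting} reduces to $\bA_n=\sqrt{p/n}\,(p^{-1}\bX'\bX-\bI_n)$, so $\tr(\bA_n)=\sqrt{p/n}\,(p^{-1}\|\bX\|_F^2-n)$ and no eigendecomposition is needed. A small Monte Carlo sketch (our own check) for Gaussian data, where $\nu_4=3$ and the limit is $\calN(0,2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 50, 50**2, 1000       # ultra-high dimension: p = n^2

# tr(A_n) = sqrt(p/n) * (||X||_F^2 / p - n) when Sigma_p = I_p.
stats = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((p, n))
    stats[r] = np.sqrt(p / n) * ((X**2).sum() / p - n)

# Corollary: tr(A_n) -> N(0, 2) for Gaussian data.
assert abs(stats.mean()) < 0.25
assert abs(stats.var() - 2.0) < 0.5
```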
\subsection{Case of $p\geqslant Kn^3$}
When $p\geqslant Kn^3$, the mean correction term in \eqref{eq:LSS} can be further simplified, i.e.
\begin{align}
&\;-\frac{n}{2\pi i}\oint_{|m|=\rho}f(-m-m^{-1})\calX_n(m)\frac{1-m^2}{m^2}\dif m \nonumber\\
= &\; -\biggl[\frac{1}{4}\bigl( f(2)+f(-2)\bigr)-\frac{1}{2}\Psi_0(f)+\frac{\tilde{b}_p}{b_p}(\nu_4-3)\Psi_2(f)\biggr] - \sqrt{\frac{n^3}{p}}\frac{c_p}{b_p\sqrt{b_p}}\Psi_3(f) + o(1).\label{eq:mean-correction-simplify}
\end{align}
For any function $f\in\mathscr{M}$, we define a new normalization of the LSS:
\begin{equation}\label{eq:LSS_n3}
Q_{n}(f)= n\int_{-\infty}^{+\infty}f(x) \mathrm{d} \left\{ F^{\bA_n}(x)-F(x)\right\}-\sqrt{\frac{n^3}{p}}\frac{c_p}{b_p\sqrt{b_p}}\Psi_3(f).
\end{equation}
Note that the last term in \eqref{eq:LSS_n3} makes no contribution if the function $f$ is even (so that $\Psi_3(f)=0$) or if $n^3/p = o(1)$. Substituting \eqref{eq:mean-correction-simplify} into Theorem~\ref{thm:CLT}, we obtain the following CLT for $Q_n(f)$.
\begin{corollary}\label{coro:CLT_pn3}
Under assumptions $(A)$ and $(B)$ of Theorem~\ref{thm:CLT} and
\begin{enumerate}
\item[(C2)] $p\wedge n\to\infty$ and $n^3/p = O(1)$.
\end{enumerate}
then, for any $f_1,\cdots,f_k\in \mathscr{M}$, the finite-dimensional random vector $\lb Q_n(f_1),\cdots,Q_n(f_k)\rb$ converges weakly to a Gaussian vector $\lb Y(f_1),\cdots, Y(f_k)\rb$ with mean function
\begin{equation}\label{eq:CLT-mean}
\Expe Y(f_k)=\frac{1}{4}\bigl( f_k(2)+f_k(-2)\bigr)-\frac{1}{2}\Psi_0(f_k)+\cfrac{\omega}{\theta}(\nu_4-3)\Psi_2(f_k)
\end{equation}
and covariance function given in \eqref{eq:cov_1}.
\end{corollary}
\begin{remark}
As a special case of Theorem~\ref{thm:CLT}, Corollary \ref{coro:CLT_pn3} is used in \citet{li2016testing} to derive the asymptotic power of two sphericity tests, John's invariant test and Quasi-likelihood ratio test (QLRT), when the dimension $p$ is much larger than sample size $n$.
Specifically, let $\bX=(\bx_1,\ldots,\bx_n)$ be a $p\times n$ data matrix with $n$ i.i.d. $p$-dimensional random vectors $\{\bx_i\}_{1\leqslant i \leqslant n}$ with covariance matrix $\bSigma=\Var(\bx_i)$. The goal is to test
\[
H_0: \bSigma = \sigma^2\bI_p,\qquad \text{vs.} \qquad H_1: \bSigma \neq \sigma^2\bI_p,
\]
where $\sigma^2$ is an unknown positive constant. John's test statistic is defined by
\[
U = \frac{1}{p}\tr \biggl[\biggl(\frac{\bS_n}{\tr(\bS_n)/p}-\bI_p\biggr)^2\biggr] = \frac{p^{-1}\sum_{i=1}^p (l_i-\bar{l})^2}{\bar{l}^2},
\]
where $\{l_i\}_{1\leqslant i \leqslant p}$ are eigenvalues of $p$-dimensional sample covariance matrix $\bS_n=\frac{1}{n}\sum_{i=1}^n \bx_i\bx_i'=\frac{1}{n}\bX\bX'$ and $\bar{l}=\frac{1}{p}\sum_{i=1}^p l_i$.
The QLRT statistic is defined by
\[
\mathcal{L}_n = \frac{p}{n}\log \frac{(n^{-1}\sum_{i=1}^n \tilde{l}_i )^n}{\prod_{i=1}^n \tilde{l}_i},
\]
where $\{\tilde{l}_i\}_{1\leqslant i \leqslant n}$ are the eigenvalues of the $n\times n$ matrix $\frac{1}{p}\bX'\bX$. The main idea is that both $U$ and $\mathcal{L}_n$ can be expressed as functions of eigenvalues of $\bA_n$ in \eqref{eq:A_def}. Thus, asymptotic distributions of John's statistic and QLRT statistic can be derived either using Theorem~\ref{thm:CLT} or Corollary~\ref{coro:CLT_pn3}. \citet{li2016testing} used Corollary \ref{coro:CLT_pn3} to derive the limiting distributions of $U$ and $\mathcal{L}_n$ under the alternative hypothesis. Their power functions are proven to converge to $1$ under the assumption $n^3/p=O(1)$. More details can be found in \citet{li2016testing}.
\end{remark}
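Both statistics can be computed from an $n\times n$ matrix even when $p\gg n$, since the nonzero eigenvalues of $\bS_n$ coincide with those of $n^{-1}\bX'\bX$. A minimal sketch of how $U$ and $\calL_n$ could be evaluated (our own code, not from \citet{li2016testing}):

```python
import numpy as np

def john_U(X):
    # John's statistic U via the n x n Gram matrix: the nonzero
    # eigenvalues of S_n = XX'/n are those of G = X'X/n, and
    # sum_i (l_i - lbar)^2 = tr(S_n^2) - p * lbar^2.
    p, n = X.shape
    G = X.T @ X / n
    tr_S, tr_S2 = np.trace(G), np.trace(G @ G)
    lbar = tr_S / p
    return (tr_S2 / p - lbar**2) / lbar**2

def qlrt_L(X):
    # Quasi-LRT statistic L_n from the eigenvalues of p^{-1} X'X.
    p, n = X.shape
    lt = np.linalg.eigvalsh(X.T @ X / p)
    return (p / n) * (n * np.log(lt.mean()) - np.log(lt).sum())

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 40))    # p = 2000 >> n = 40
U, L = john_U(X), qlrt_L(X)            # L >= 0 by the AM-GM inequality
```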
\section{Applications to Hypothesis Testing about Large Covariance Matrices}\label{sec:application_hypothesis_testing}
\subsection{The Identity Hypothesis ``$\bSigma_p = \bI_p$''}\label{sec:identity_test}
Let $\bY=(\by_1,\ldots,\by_n)$ be a $p\times n$ data matrix with $n$ i.i.d. $p$-dimensional random vectors $\{\by_i=\bSigma_p^{1/2}\bx_i\}_{1\leqslant i \leqslant n}$ with covariance matrix $\bSigma_p=\Var(\by_i)$ and $\bx_i$ has $p$ i.i.d. components $\{X_{ij},~1\leq j\leq p\}$ satisfying $\Expe X_{ij}=0$, $\Expe X_{ij}^2=1$, $\Expe X_{ij}^4=\nu_4$. We explore the identity testing problem
\begin{equation}\label{eq:identity_test}
H_0: \bSigma_p = \bI_p,\qquad \text{vs.} \qquad H_1: \bSigma_p \ne \bI_p,
\end{equation}
under two different asymptotic regimes: high-dimensional regime, ``$p\wedge n\to\infty,~p/n\to c \in(0,\infty)$'' and ultra-high dimensional regime, ``$p\wedge n\to\infty,~p/n\to\infty$''.
We will consider two well-known test statistics and discuss their limiting distributions under both regimes.
For the identity testing problem \eqref{eq:identity_test}, \citet{Nagao1973On} proposed a statistic based on the Frobenius norm:
\[
V = \frac{1}{p} \tr\bigl[ (\bS_n-\bI)^2 \bigr],
\]
where $\bS_n=\tfrac{1}{n}\bY\bY'$ is the sample covariance matrix.
Nagao's test based on $V$ performs well when $n$ tends to infinity while $p$ remains fixed. However, \citet{LedoitWolf2002} showed that Nagao's test has poor properties when $p$ is large, and proposed the modified statistic
\begin{equation}\label{eq:W_def}
W = \frac{1}{p} \tr\Bigl[(\bS_n-\bI_p)^2\Bigr] - \frac{p}{n}\biggl[\frac{1}{p}\tr(\bS_n)\biggr]^2 +\frac{p}{n}.
\end{equation}
When $p\wedge n\to\infty, p/n=c_n\to c\in(0, \infty)$, under normality assumption, \citet{LedoitWolf2002} proved that the limiting distribution of $W$ under $H_0$ is
\[
nW-p-1 \convd \calN(0,4).
\]
\citet{wang2013sphericity} further removed the normality assumption and showed that under $H_0$, when $p\wedge n\to\infty$, $p/n=c_n\to c\in(0, \infty)$,
\begin{equation}\label{eq:W_null_dist_high}
nW-p-(\nu_4-2) \convd \calN(0,4).
\end{equation}
We now derive the limiting distribution of $W$ under both $H_0$ and $H_1$ when $p/n\rightarrow \infty$, and show that the test based on $W$ is consistent in the ultra-high dimensional setting. The main results are as follows.
\begin{theorem}\label{thm:W_limit_dist_H0}
Assume that $\bY=(\by_1,\ldots,\by_n)$ is a $p\times n$ data matrix with $n$ i.i.d. $p$-dimensional random vectors $\{\by_i=\bSigma_p^{1/2}\bx_i\}_{1\leqslant i \leqslant n}$ with covariance matrix $\bSigma_p=\Var(\by_i)$ and $\bx_i$ has $p$ i.i.d. components $\{X_{ij},~1\leq j\leq p\}$ satisfying $\Expe X_{ij}=0$, $\Expe X_{ij}^2=1$, $\Expe X_{ij}^4=\nu_4$ and $\Expe |X_{ij}|^{6+\varepsilon_0}<\infty$ for some small positive $\varepsilon_0$. $W$ is defined as \eqref{eq:W_def}. Then under $H_0$, when $p\wedge n\to\infty$ and $n^2/p=O(1)$,
\begin{equation}\label{eq:W_null_dist_ultra}
nW-p-(\nu_4-2) \convd \calN(0,4).
\end{equation}
\end{theorem}
Note that the asymptotic distribution \eqref{eq:W_null_dist_ultra} coincides with \eqref{eq:W_null_dist_high}, which means $W$ has the same limiting null distribution in both the high dimensional and ultra-high dimensional settings. Therefore $W$ can be used to test \eqref{eq:identity_test} under the ultra-high dimensional setting. For nominal level $\alpha$, the corresponding rejection rule is
\begin{equation}\label{eq:W_reject_rule}
\frac{1}{2}\Bigl\{ nW-p-(\nu_4-2) \Bigr\} \geqslant z_{\alpha},
\end{equation}
where $z_{\alpha}$ is the $\alpha$ upper quantile of standard normal distribution.
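In practice $\bS_n$ need not be formed: since $\tr(\bS_n^k)=\tr\{(\bY'\bY/n)^k\}$, the statistic $W$ and the rule \eqref{eq:W_reject_rule} can be evaluated through the $n\times n$ Gram matrix. A sketch of the resulting test (our own implementation):

```python
import numpy as np

def identity_test_W(Y, nu4, z_alpha=1.645):
    """Test H0: Sigma_p = I_p via W; Y is p x n, z_alpha is the upper
    alpha-quantile of N(0, 1) (1.645 for alpha = 0.05)."""
    p, n = Y.shape
    G = Y.T @ Y / n                      # n x n; shares tr(S_n^k) with S_n
    tr_S, tr_S2 = np.trace(G), np.trace(G @ G)
    # W = tr[(S_n - I)^2]/p - (p/n)(tr S_n / p)^2 + p/n, expanded:
    W = tr_S2 / p - 2.0 * tr_S / p + 1.0 - (p / n) * (tr_S / p) ** 2 + p / n
    Z = (n * W - p - (nu4 - 2.0)) / 2.0  # -> N(0, 1) under H0
    return Z, Z >= z_alpha

rng = np.random.default_rng(2)
p, n = 10000, 100                        # ultra-high dimension: p = n^2
Y = rng.standard_normal((p, n))          # data from H0 with nu4 = 3
Z, reject = identity_test_W(Y, nu4=3.0)
```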
As for the case of $H_1$ when $\bSigma_p\neq \bI_p$, we have
\begin{theorem}\label{thm:W_limit_dist_H1}
Under the same assumptions as in Theorem~\ref{thm:W_limit_dist_H0}, further assume that $\{\bSigma_p,~p\geq 1\}$ is a sequence of non-negative definite matrices, bounded in spectral norm such that the following limits exist:
\begin{gather*}
\gamma=\lim_{p\to \infty} \frac{1}{p}\tr(\bSigma_p),
\qquad \theta=\lim_{p\to \infty} \frac{1}{p}\tr(\bSigma_p^2),
\qquad \omega=\lim_{p\to \infty} \frac{1}{p}\sum_{i=1}^p (\bSigma_p)_{ii}^2,
\end{gather*}
then when $p\wedge n\to\infty$ and $n^2/p=O(1)$,
\[
nW-p-\theta\Bigl[\frac{\omega}{\theta}(\nu_4-3)+1\Bigr] +n(2\gamma-1-\theta) \convd \calN(0,4\theta^2).
\]
\end{theorem}
Note that Theorem \ref{thm:W_limit_dist_H1} gives the limiting distribution of $W$ under the alternative. Setting $\bSigma_p=\bI_p$ yields
$\gamma=\theta=\omega=1$, and Theorem \ref{thm:W_limit_dist_H1} reduces to Theorem \ref{thm:W_limit_dist_H0}, which states the limiting null
distribution of $W$. Combining Theorems \ref{thm:W_limit_dist_H0} and \ref{thm:W_limit_dist_H1}, the asymptotic power of $W$ can be derived.
\begin{proposition}\label{prop:power_theo_W}
With the same assumptions as in Theorem~\ref{thm:W_limit_dist_H1}, when $p \wedge n\to\infty$ and $n^2/p=O(1)$, the testing power of $W$ for \eqref{eq:identity_test} satisfies
\[
\beta(H_1) \rightarrow 1-\Phi\biggl( \frac{1}{2\theta} \Bigl\{ 2z_{\alpha} - \omega(\nu_4-3) -\theta +n(2\gamma-1-\theta) + (\nu_4-2) \Bigr\} \biggr).
\]
If $\gamma=\theta=1$, then $\beta(H_1) \to 1-\Phi\bigl(z_{\alpha}-\tfrac{\omega-1}{2}(\nu_4-3)\bigr)$; otherwise, $\beta(H_1)\to 1$.
\end{proposition}
The second test statistic for \eqref{eq:identity_test} that we consider is the likelihood ratio test (LRT) statistic studied in \citet{Bai2009Correction}, where it is assumed that $\nu_4=3$. The LRT statistic is defined as
\begin{equation}\label{eq:Bai_identity_test_L0}
\calL_0 = \tr(\bS_n) - \log |\bS_n| - p.
\end{equation}
\citet{Bai2009Correction} derived the limiting null distribution of $\calL_0$ when $p\wedge n\to\infty,~p/n\to c\in (0,1)$. However, this LRT statistic is degenerate and not applicable when $p> n$ because $|\bS_n|=0$. Thus for $p>n$ we introduce a quasi-LRT test statistic
\[
\calL = \tr(\widehat{\bS}_n) - \log |\widehat{\bS}_n| - n,
\]
where $\widehat{\bS}_n = \frac{1}{p}\bY'\bY$. When $p\wedge n\to\infty$, $p/n=c_n\to c \in(1, \infty)$, the limiting null distribution of $\calL$ is
\begin{equation}\label{eq:Bai_identity_test_stat_normalized}
\calL^*:=\frac{\calL - n F_1(c_n) - \mu_1}{\sigma_1} \convd \calN(0,1),
\end{equation}
where
\[
F_1(c_n) = 1 - (1-c_n) \log\Bigl(1-\frac{1}{c_n}\Bigr),
~ \mu_1 = -\frac{1}{2}\log\Bigl(1-\frac{1}{c_n}\Bigr),
~\sigma_1^2 = -2\log\Bigl(1-\frac{1}{c_n}\Bigr)-\frac{2}{c_n}.
\]
We now show that the asymptotic distribution \eqref{eq:Bai_identity_test_stat_normalized} still holds in the ultra-high dimensional setting.
Note that
\[
\sigma_1 = \sqrt{-2\log\Bigl(1-\frac{1}{c_n}\Bigr)-\frac{2}{c_n}} = \sqrt{\frac{1}{c_n^2}+\frac{2}{3c_n^3}+o\biggl(\frac{1}{c_n^3}\biggr)} = \frac{1}{c_n} +\frac{1}{3c_n^2} + o\biggl(\frac{1}{c_n^2}\biggr),
\]
which implies that
\begin{equation}\label{eq:frac_sigma_1}
\frac{1}{\sigma_1} = c_n-\frac{1}{3} + o(1).
\end{equation}
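The expansion \eqref{eq:frac_sigma_1} can be verified symbolically; writing $x=1/c_n$, a one-line computer algebra check (our own sketch) gives $1/\sigma_1 = 1/x - 1/3 + O(x)$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)            # x = 1/c_n, small when p >> n

sigma1 = sp.sqrt(-2 * sp.log(1 - x) - 2 * x)  # sigma_1 in terms of x = 1/c_n
expansion = sp.series(1 / sigma1, x, 0, 1).removeO()

# Matches eq. (frac_sigma_1): 1/sigma_1 = c_n - 1/3 + o(1).
assert sp.simplify(expansion - (1 / x - sp.Rational(1, 3))) == 0
```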
Firstly, we consider the random part of $\calL^*$. Let $\widehat{\lambda}_1\geqslant \cdots \geqslant \widehat{\lambda}_n$ be the eigenvalues of $\widehat{\bS}_n$ and $\widetilde{\lambda}_1\geqslant \cdots \geqslant \widetilde{\lambda}_n$ be the eigenvalues of $\widetilde{\bS}_n=\sqrt{\tfrac{n}{p}}(\tfrac{1}{n}\bX'\bX-\tfrac{p}{n}\bI_n)$. By using the basic identity $\widehat{\lambda}_i=\tfrac{\widetilde{\lambda}_i}{\sqrt{c_n}}+1$, we have
\begin{align}
\calL & = \sum_{i=1}^n \widehat{\lambda}_i - n - \sum_{i=1}^n\log(\widehat{\lambda}_i) = \sum_{i=1}^n \frac{\widetilde{\lambda}_i}{\sqrt{c_n}} - \sum_{i=1}^n\log \Bigl( 1+ \frac{\widetilde{\lambda}_i}{\sqrt{c_n}} \Bigr) \nonumber\\[0.5em]
& = \sum_{i=1}^n \frac{\widetilde{\lambda}_i}{\sqrt{c_n}} - \sum_{i=1}^n\Bigl(\frac{\widetilde{\lambda}_i}{\sqrt{c_n}}-\frac{1}{2}\frac{\widetilde{\lambda}_i^2}{c_n} +\frac{1}{3}\frac{\widetilde{\lambda}_i^3}{c_n\sqrt{c_n}}-\frac{1}{4}\frac{\widetilde{\lambda}_i^4}{c_n^2} + o\Bigl(\frac{1}{c_n^2}\Bigr)\Bigr)\nonumber\\[0.5em]
& = \frac{1}{2c_n}\tr(\widetilde{\bS}_n^2) - \frac{1}{3c_n\sqrt{c_n}}\tr(\widetilde{\bS}_n^3) +\frac{1}{4c_n^2}\tr(\widetilde{\bS}_n^4) + o\Bigl(\frac{n}{c_n^2}\Bigr).\label{eq:calL_expan}
\end{align}
Taking $\nu_4=3$ (the assumption in \citet{Bai2009Correction}) and $\bSigma_p=\bI_p$ in Corollary~\ref{coro:CLT-x-x2-x3}, we have, under $H_0$,
\begin{equation}\label{eq:tr_S2_S3_CLT}
\tr(\widetilde{\bS}_n^2)-n-1\convd \calN(0,4),\qquad
\tr(\widetilde{\bS}_n^3) - \frac{n+1}{\sqrt{c_n}} \convd \calN(0,24).
\end{equation}
\begin{equation}\label{eq:tr_S4_CLT}
\tr(\widetilde{\bS}_n^4) - 2n - \Bigl(\frac{n}{c_n}+\frac{1}{c_n}+5\Bigr)\convd \calN(0,72).
\end{equation}
Combining \eqref{eq:frac_sigma_1}--\eqref{eq:tr_S4_CLT} gives
\begin{equation}\label{eq:random_calL}
\frac{\calL}{\sigma_1} = \frac{1}{2}\tr(\widetilde{\bS}_n^2)+o\Bigl(\frac{n^2}{p}\Bigr).
\end{equation}
Secondly, we consider the deterministic part of $\calL^*$. Note that
\begin{align*}
nF_1(c_n)+\mu_1 & = n - \Bigl[n(1-c_n)+\frac{1}{2}\Bigr]\log\Bigl(1-\frac{1}{c_n}\Bigr)\\
& = n - \Bigl[n(1-c_n)+\frac{1}{2}\Bigr]\cdot \Bigl[-\frac{1}{c_n}-\frac{1}{2c_n^2}-\frac{1}{3c_n^3}+o\Bigl(\frac{1}{c_n^3}\Bigr)\Bigr]\\
& = \frac{n}{2c_n} +\frac{1}{2c_n} +\frac{n}{6c_n^2} +o\Bigl(\frac{n}{c_n^2}\Bigr),
\end{align*}
which, together with \eqref{eq:frac_sigma_1}, implies that
\begin{equation}\label{eq:determinist_calL}
\frac{nF_1(c_n)+\mu_1}{\sigma_1} = \frac{n+1}{2} + o\Bigl(\frac{n^2}{p}\Bigr).
\end{equation}
Therefore, from \eqref{eq:tr_S2_S3_CLT}, \eqref{eq:random_calL} and \eqref{eq:determinist_calL}, we conclude that, under $H_0$, as $p\wedge n\to\infty$, $n^2/p=O(1)$,
\[
\calL^* = \frac{\calL}{\sigma_1} - \frac{n F_1(c_n) + \mu_1}{\sigma_1} = \frac{1}{2}\Bigl(\tr(\widetilde{\bS}_n^2) - n-1\Bigr) +o(1) \convd \calN(0, 1),
\]
which is the same as the limiting distribution \eqref{eq:Bai_identity_test_stat_normalized} when $p\wedge n\to\infty$, $p/n=c_n\to c \in(1, \infty)$. Finally, we summarize the discussion above in the following proposition.
\begin{proposition}
\begin{enumerate}
\item[(1)] (\citet{Bai2009Correction}) Assume that $\bY=(\by_1,\ldots,\by_n)$ is a $p\times n$ data matrix with $n$ i.i.d. $p$-dimensional random vectors $\{\by_i=\bSigma_p^{1/2}\bx_i\}_{1\leqslant i \leqslant n}$ with covariance matrix $\bSigma_p=\Var(\by_i)$ and $\bx_i$ has $p$ i.i.d. components $\{X_{ij},~1\leq j\leq p\}$ satisfying $\Expe X_{ij}=0$, $\Expe X_{ij}^2=1$, $\Expe X_{ij}^4=\nu_4=3$. $\calL_0$ is defined as \eqref{eq:Bai_identity_test_L0}. Then under $H_0$, when $p \wedge n\to\infty$, $p/n\to c \in (0,1)$, we have
\[
\frac{\calL_0 - n F_0(c_n) - \mu_0}{\sigma_0} \convd \calN(0,1),
\]
where $c_n=p/n$ and
\[
F_0(c_n) = 1 - \frac{c_n-1}{c_n} \log(1-c_n),
~ \mu_0 = -\frac{\log(1-c_n)}{2},
~ \sigma_0^2 = -2\log(1-c_n)-2c_n.
\]
\item[(2)] Under the same assumptions as in (1), let the normalized quasi-LRT statistic $\calL^*$ be defined as in \eqref{eq:Bai_identity_test_stat_normalized}. Then under $H_0$, when $p\wedge n\to\infty$, $p/n\to c \in (1,\infty)$, we have
\[
\calL^* \convd \calN(0,1).
\]
\item[(3)] Under the same assumptions as in (1), let the normalized quasi-LRT statistic $\calL^*$ be defined as in \eqref{eq:Bai_identity_test_stat_normalized}. Then under $H_0$, when $p\wedge n\to\infty$ and $n^2/p=O(1)$, we have
\[
\calL^* \convd \calN(0,1).
\]
\end{enumerate}
\end{proposition}
Note that the results (2) and (3) in this proposition are newly derived.
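Since $\calL^*$ only involves the eigenvalues of the $n\times n$ matrix $p^{-1}\bY'\bY$, it is cheap to compute even for very large $p$. A sketch (our own implementation of \eqref{eq:Bai_identity_test_stat_normalized}):

```python
import numpy as np

def quasi_lrt_star(Y):
    """Normalized quasi-LRT statistic L* for H0: Sigma_p = I_p;
    Y is p x n with p > n."""
    p, n = Y.shape
    c = p / n                                   # c_n
    lam = np.linalg.eigvalsh(Y.T @ Y / p)       # eigenvalues of p^{-1} Y'Y
    L = lam.sum() - np.log(lam).sum() - n
    F1 = 1.0 - (1.0 - c) * np.log(1.0 - 1.0 / c)
    mu1 = -0.5 * np.log(1.0 - 1.0 / c)
    sigma1 = np.sqrt(-2.0 * np.log(1.0 - 1.0 / c) - 2.0 / c)
    return (L - n * F1 - mu1) / sigma1

rng = np.random.default_rng(3)
Y = rng.standard_normal((20000, 80))            # n^2/p = 0.32, within (C1)
Lstar = quasi_lrt_star(Y)                       # approx N(0, 1) under H0
```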
\subsection{Separable Covariance Structure for Matrix-valued Noise}\label{sec:separable_structure_matrix_noise}
In this section, we develop a test for the structure of the covariance matrix of a matrix-valued white noise. \citet{chen2021autoregressive} proposed a matrix autoregressive model of the form
\[
\bX_t = \bA\bX_{t-1}\bB'+\bE_t, ~t=1,\cdots, T,
\]
where $\bX_t$ is a $p_1\times p_2$ random matrix observed at time $t$, $\bA$ and $\bB$ are $p_1\times p_1$ and $p_2\times p_2$ deterministic autoregressive coefficient matrices, and $\bE_t=(e_{t,ij})$ is a $p_1\times p_2$ matrix-valued white noise. It is assumed that the white noise matrix $\bE_t$ has the covariance structure
\[
\Cov\bigl(\mathsf{vec}(\bE_t)\bigr)=\bSigma_1\otimes\bSigma_2,
\]
where $\mathsf{vec}(\cdot)$ denotes the vectorization, $\bSigma_1$ and $\bSigma_2$ are $p_1\times p_1$ and $p_2\times p_2$ non-negative definite matrices. In other words, the noise $\bE_t$ has a separable covariance matrix.
Now, for any observed matrix-valued time sequence, we aim to test whether it has a separable covariance matrix. Specifically, suppose that $\{\bE_t\}_{1\leqslant t\leqslant T}$ is an observed i.i.d. sequence of $p_1 \times p_2$ matrices, where $p_1$, $p_2$, and $T$ are of comparable magnitude. We aim to test
\begin{equation}\label{eq:separable_test}
H_0: \Cov\bigl(\mathsf{vec}(\bE_t)\bigr)=\bSigma_1\otimes\bSigma_2,\qquad \text{vs.} \qquad H_1: \Cov\bigl(\mathsf{vec}(\bE_t)\bigr) \ne \bSigma_1\otimes\bSigma_2,
\end{equation}
where $\bSigma_1$ and $\bSigma_2$ are two prespecified $p_1\times p_1$ and $p_2\times p_2$ non-negative definite matrices. Testing $H_0: \Cov\bigl(\mathsf{vec}(\bE_t)\bigr)=\bSigma_1\otimes\bSigma_2$ is equivalent to testing
\[
H_0': \Cov\Bigl(\bigl(\bSigma_1\otimes\bSigma_2\bigr)^{-\nicefrac{1}{2}}\mathsf{vec}(\bE_t)\Bigr)=\bI_{p_1p_2}.
\]
To this end, we define a test statistic
\begin{equation}\label{eq:W_star_def_1}
W^* = \frac{1}{p_1p_2} \tr \Bigl[ \bigl(\bB_T-\bI_{p_1p_2}\bigr)^2 \Bigr] - \frac{p_1p_2}{T} \Bigl[\frac{1}{p_1p_2}\tr(\bB_T)\Bigr]^2+\frac{p_1p_2}{T}
\end{equation}
where
\begin{equation}\label{eq:W_star_def_2}
\bB_T = \frac{1}{T} \bY_T\bY_T', \qquad \bY_T = \bigl(\bSigma_1\otimes\bSigma_2\bigr)^{-\nicefrac{1}{2}}\bigl(\mathsf{vec}(\bE_1),\ldots,\mathsf{vec}(\bE_T)\bigr):=(Y_{ij})_{p_1p_2\times T}.
\end{equation}
Note that $W^*$ measures the distance between the sample covariance matrix of $\mathsf{vec}(\bE_t)$ and $\bSigma_1\otimes\bSigma_2$. Naturally, we reject $H_0$ when $W^*$ is too large, with the critical value determined by the limiting null distribution of $W^*$.
Since $p_1,p_2,T$ are of the same order, we examine the asymptotic behavior of $W^*$ under the high dimensional regime
\begin{equation}\label{eq:three_dimension_infinity}
T\to\infty, \qquad \frac{p_1}{T}=\frac{p_1(T)}{T}\to d_1 \in (0, \infty),\qquad \frac{p_2}{T}=\frac{p_2(T)}{T}\to d_2 \in (0, \infty).
\end{equation}
The asymptotic null distribution of the test statistic $W^*$ is given in the following theorem, which is a direct consequence of Theorem \ref{thm:W_limit_dist_H0}.
\begin{theorem}\label{prop:separable_test_H0_limit_dist}
Assume that
\begin{enumerate}
\item[(1)] $\{\bE_t=(e_{t,ij})_{p_1\times p_2}\}_{1\leqslant t\leqslant T}$ is a sequence of i.i.d. sample matrices satisfying $\mathsf{vec}(\bE_t) = (\bSigma_1\otimes \bSigma_2)^{\nicefrac{1}{2}} \mathsf{vec}(\bZ_t)$, where $\bZ_t=(Z_{t,ij})_{p_1\times p_2}$ is a $p_1\times p_2$ matrix with i.i.d. real entries $Z_{t,ij}$ satisfying $\Expe Z_{t,ij} = 0$, $\Expe Z_{t,ij}^2 = 1$, $\Expe Z_{t,ij}^4 = \nu_4$ and $\Expe |Z_{t,ij}|^{6+\varepsilon_0}<\infty$ for some small positive $\varepsilon_0$;
\item[(2)] $p_1,p_2,T$ tend to infinity as in \eqref{eq:three_dimension_infinity}.
\end{enumerate}
Then under the null hypothesis $H_0: \Cov\bigl(\mathsf{vec}(\bE_t)\bigr) = \bSigma_1\otimes\bSigma_2$, with $W^*$ defined as in \eqref{eq:W_star_def_1}, we have
\[
TW^*-p_1p_2-(\nu_4-2) \convd \calN(0,4).
\]
\end{theorem}
According to the asymptotic normality of $W^*$ presented in Theorem~\ref{prop:separable_test_H0_limit_dist}, we reject $H_0$ at nominal level $\alpha$ if
\[
\frac{1}{2}\Bigl\{TW^*-p_1p_2-(\nu_4-2)\Bigr\}\geqslant z_{\alpha}.
\]
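A sketch of the full procedure (our own implementation): generate $\bE_t$ with a separable covariance, whiten, and apply the rule above. With diagonal $\bSigma_1$ and $\bSigma_2$ (chosen for simplicity), whitening reduces to elementwise rescaling, and $W^*$ only needs the $T\times T$ Gram matrix of the whitened vectors, since $\tr(\bB_T^k)=\tr\{(\bY_T'\bY_T/T)^k\}$:

```python
import numpy as np

rng = np.random.default_rng(4)
p1, p2, T = 40, 40, 40
nu4, z_alpha = 3.0, 1.645            # Gaussian noise; upper 5% normal quantile

# Prespecified separable covariance; diagonal factors keep the sketch simple.
s1 = rng.uniform(0.5, 2.0, p1)       # diagonal of Sigma_1
s2 = rng.uniform(0.5, 2.0, p2)       # diagonal of Sigma_2

# Under H0: E_t is an i.i.d. matrix Z_t with rows scaled by Sigma_1^{1/2}
# and columns by Sigma_2^{1/2}; whitening inverts the scaling.
Y = np.empty((p1 * p2, T))
for t in range(T):
    Z = rng.standard_normal((p1, p2))
    E = np.sqrt(s1)[:, None] * Z * np.sqrt(s2)[None, :]
    Wt = E / np.sqrt(s1)[:, None] / np.sqrt(s2)[None, :]   # whitened E_t
    Y[:, t] = Wt.ravel()             # any fixed vec(.) ordering works here

# W* via the T x T Gram matrix: tr(B_T^k) = tr((Y' Y / T)^k).
P = p1 * p2
G = Y.T @ Y / T
tr_B, tr_B2 = np.trace(G), np.trace(G @ G)
W_star = tr_B2 / P - 2.0 * tr_B / P + 1.0 - (P / T) * (tr_B / P) ** 2 + P / T

Z_stat = (T * W_star - P - (nu4 - 2.0)) / 2.0   # approx N(0, 1) under H0
reject = Z_stat >= z_alpha
```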
Moreover, the asymptotic power of the proposed test for \eqref{eq:separable_test} can be derived as follows.
\begin{proposition}\label{prop:separable_test_power_theo} Suppose that assumptions (1) and (2) in Theorem \ref{prop:separable_test_H0_limit_dist} hold, and
\begin{enumerate}
\item[(3)] $\widetilde{\bSigma}_1$ and $\widetilde{\bSigma}_2$ are two $p_1\times p_1$ and $p_2\times p_2$ non-negative definite matrices with bounded spectral norm,
$\widetilde{\bSigma} := (\widetilde{\bSigma}_1\otimes \widetilde{\bSigma}_2)^{\nicefrac{1}{2}} (\bSigma_1\otimes \bSigma_2)^{-1} (\widetilde{\bSigma}_1\otimes \widetilde{\bSigma}_2)^{\nicefrac{1}{2}}$ and the following limits exist:
\begin{gather*}
\gamma=\lim\limits_{T\to\infty}\frac{1}{p_1 p_2}\tr(\widetilde{\bSigma}),
\qquad \theta =\lim\limits_{T\to\infty}\frac{1}{p_1 p_2}\tr(\widetilde{\bSigma}^2),
\qquad \omega = \lim\limits_{T\to\infty} \frac{1}{p_1 p_2}\sum_{i=1}^{p_1p_2}\bigl(\widetilde{\bSigma}\bigr)_{ii}^2.
\end{gather*}
\end{enumerate}
Then, when $p_1,p_2,T$ tend to infinity as in \eqref{eq:three_dimension_infinity}, the testing power of $W^*$ for \eqref{eq:separable_test} satisfies
\[
\beta(H_1) \rightarrow 1-\Phi\biggl( \frac{1}{2\theta}\Bigl[ 2z_{\alpha}-\omega(\nu_4-3)-\theta+T(2\gamma-1-\theta)+(\nu_4-2) \Bigr] \biggr).
\]
If $\gamma=\theta=1$, then $\beta(H_1) \rightarrow 1-\Phi\bigl(z_{\alpha}-\frac{\omega-1}{2}(\nu_4-3)\bigr)$; otherwise, $\beta(H_1)\to 1$.
\end{proposition}
\section{Simulation results}\label{sec:simulation}
In this section, we conduct simulation studies to examine
\begin{enumerate}
\item[(1)] finite-sample properties of some LSS for $\bA_n$ by comparing their empirical means and variances with theoretical limiting values;
\item[(2)] finite-sample performance of the separable covariance structure test in Section~\ref{sec:separable_structure_matrix_noise}.
\end{enumerate}
\subsection{LSS of $\bA_n$}\label{sec:simu-CLT}
Firstly we compare the empirical mean and variance of normalized $\left\{G_n(f_i)=\tr(\bA_n^i),~i=1, 2, 3\right\}$ with their theoretical limits in Corollary~\ref{coro:CLT-x-x2-x3}. Define
\begin{align*}
\overline{G}_n(f_1) &:= \frac{G_n(f_1)}{\sqrt{\Var(Y(f_1))}} =\frac{\tr(\bA_n)}{\sqrt{\frac{\omega}{\theta}(\nu_4-3)+2}},\\[0.5em]
\overline{G}_n(f_2) &:= \frac{G_n(f_2)}{\sqrt{\Var(Y(f_2))}} =\frac{1}{2}\biggl\{\tr(\bA_n^2) - n - \Bigl[\frac{\tilde{b}_p}{b_p}(\nu_4-3)+1\Bigr]\biggr\},\\[0.5em]
\overline{G}_n(f_3) &:= \frac{G_n(f_3)}{\sqrt{\Var(Y(f_3))}} =\frac{\tr(\bA_n^3) - \tfrac{c_p}{b_p\sqrt{b_p}}\sqrt{\tfrac{n}{p}}\Bigl[ n+1+\tfrac{\tilde{b}_p}{b_p}(\nu_4-3) \Bigr] }{\sqrt{\tfrac{9\omega}{\theta}(\nu_4-3)+24}}.
\end{align*}
According to Corollary~\ref{coro:CLT-x-x2-x3}, $\{\overline{G}_n(f_i)\}\xrightarrow{d} \calN(0,1)$, $i=1,2,3$. Hence we directly compare the empirical distribution of $\{\overline{G}_n(f_i)\}$ with $\calN(0,1)$ under different scenarios. Specifically, we consider two data distributions of $\{X_{ij}\}$ and three types of covariance matrix $\bSigma_p$, i.e.
\begin{itemize}
\item[(1)] \textbf{Gaussian data:} $\{X_{ij},~1\leqslant i\leqslant p,~1\leqslant j\leqslant n\}$ i.i.d. $\calN(0, 1)$, with $\Expe X_{ij}^4=\nu_4=3$.
\item[(2)] \textbf{Non-Gaussian data:} $\{X_{ij},~1\leqslant i\leqslant p,~1\leqslant j\leqslant n\}$ i.i.d. $\mathsf{Gamma}(4,2)-2$, with $\Expe X_{ij}=0$, $\Expe X^2_{ij}=1$, $\Expe X^4_{ij}=4.5$.
\end{itemize}
As for $\bSigma_p$,
\begin{itemize}
\item[(A)] $\bSigma_A=\bI_p$;
\item [(B)] $\bSigma_B$ is diagonal, with $1/4$ of its diagonal elements equal to $0.5$ and the remaining $3/4$ equal to $1$;
\item [(C)] $\bSigma_C$ is diagonal, with one half of its diagonal elements equal to $0.5$ and the other half equal to $1$.
\end{itemize}
Empirical mean and variance of $\{\overline{G}_n(f_i)\}$ are calculated for various combinations of $(p,n)$ under different model settings. For each pair of $(p,n)$, 5000 independent replications are used to obtain the empirical mean and variance.
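As an illustration, the $\overline{G}_n(f_2)$ entries for Gaussian data with $\bSigma_p=\bSigma_A=\bI_p$ can be reproduced as follows (our own reduced sketch, with fewer replications than the tables); here $\tr(\bA_n^2)=\frac{p}{n}\tr\{(p^{-1}\bX'\bX-\bI_n)^2\}$ and, with $\nu_4=3$, $\overline{G}_n(f_2)=\frac{1}{2}\{\tr(\bA_n^2)-n-1\}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 50, 50**2, 500          # p = n^2; the tables use 5000 replications

stats = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((p, n))
    M = X.T @ X / p - np.eye(n)              # A_n = sqrt(p/n) * M
    tr_A2 = (p / n) * (M * M).sum()          # tr(A_n^2), since M is symmetric
    stats[r] = 0.5 * (tr_A2 - n - 1.0)       # G_bar_n(f_2), Gaussian case

# The empirical mean and variance should be near 0 and 1, respectively.
```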
Table \ref{tab:Qn_f_Simulation_n2} reports the empirical values of $\{\overline{G}_n(f_i)\}$ when $p=n^2$.
Table \ref{tab:Qn_f_Simulation_n2p5} reports the case of $p=n^{2.5}$.
As shown in Tables~\ref{tab:Qn_f_Simulation_n2} and \ref{tab:Qn_f_Simulation_n2p5}, the empirical means and variances of $\{\overline{G}_n(f_i)\}$ closely match their theoretical limits $0$ and $1$ under all scenarios, for all three types of $\bSigma_p$ and for both Gaussian and non-Gaussian data.
\begin{table}[!h]
\centering
\caption{Empirical mean and variance of $\overline{G}_n(f_i),\; i=1, 2, 3$ from $5000$ replications. Theoretical mean and variance are $0$ and $1$, respectively. Dimension $p=n^{2}$.}
\label{tab:Qn_f_Simulation_n2}
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{$\bSigma_p=\bSigma_A$} & & \multicolumn{2}{c}{$\bSigma_p=\bSigma_B$} & & \multicolumn{2}{c}{$\bSigma_p=\bSigma_C$} \\
\cmidrule(r){3-4} \cmidrule(lr){6-7} \cmidrule(l){9-10}
$n$ & & mean & var & & mean & var & & mean & var \\ \toprule
50 && 0.0050 & 1.0092 && 0.0038 & 1.0074 && -0.0157 & 1.0292 \\
100 && -0.0103 & 0.9962 && 0.0148 & 1.0073 && -0.0048 & 1.0252 \\
150 && -0.0075 & 1.0293 && -0.0054 & 1.0372 && -0.0113 & 0.9915 \\
200 && -0.0052 & 0.9989 && 0.0206 & 1.0140 && -0.0008 & 1.0012 \\
\multicolumn{6}{l}{$\overline{G}_n(f_1)$\qquad \textbf{Gaussian}} & \\ \midrule
50 && 0.0048 & 1.0265 && -0.0079 & 1.0065 && 0.0179 & 1.0119 \\
100 && -0.0034 & 1.0041 && 0.0011 & 0.9983 && 0.0066 & 1.0305 \\
150 && 0.0009 & 0.9841 && 0.0064 & 1.0159 && -0.0199 & 1.0273 \\
200 && -0.0091 & 1.0093 && 0.0070 & 0.9929 && 0.0087 & 0.9751 \\
\multicolumn{6}{l}{$\overline{G}_n(f_1)$\qquad \textbf{Non-Gaussian}} & \\ \toprule
50 && -0.0068 & 1.0848 && -0.0012 & 1.0922 && -0.0194 & 1.0871 \\
100 && -0.0052 & 1.0678 && -0.0078 & 1.0266 && -0.0139 & 1.0289 \\
150 && 0.0163 & 1.0209 && -0.0262 & 1.0291 && -0.0057 & 1.0250 \\
200 && 0.0196 & 1.0223 && -0.0047 & 0.9972 && -0.0008 & 0.9930 \\
\multicolumn{6}{l}{$\overline{G}_n(f_2)$\qquad \textbf{Gaussian}} & \\ \midrule
50 && 0.0049 & 1.1533 && -0.0184 & 1.1588 && -0.0195 & 1.2185 \\
100 && -0.0163 & 1.0927 && 0.0071 & 1.0896 && -0.0167 & 1.0924 \\
150 && 0.0017 & 1.0513 && 0.0232 & 1.0574 && -0.0106 & 1.0655 \\
200 && -0.0020 & 1.0568 && 0.0173 & 1.0361 && -0.0131 & 1.0568 \\
\multicolumn{6}{l}{$\overline{G}_n(f_2)$\qquad \textbf{Non-Gaussian}}& \\ \toprule
50 && 0.0734 & 1.1134 && 0.0579 & 1.1145 && 0.0480 & 1.1727 \\
100 && 0.0307 & 1.0642 && 0.0392 & 1.0720 && 0.0537 & 1.0805 \\
150 && 0.0230 & 1.0919 && 0.0421 & 1.0489 && 0.0361 & 1.0502 \\
200 && 0.0198 & 1.0131 && 0.0412 & 1.0372 && 0.0329 & 1.0457 \\
\multicolumn{6}{l}{$\overline{G}_n(f_3)$\qquad \textbf{Gaussian}}& \\ \toprule
50 && 0.1500 & 1.1976 && 0.1284 & 1.1964 && 0.1688 & 1.2197 \\
100 && 0.0895 & 1.1090 && 0.0922 & 1.0841 && 0.0885 & 1.0988 \\
150 && 0.0736 & 1.0491 && 0.0701 & 1.0494 && 0.0760 & 1.0851 \\
200 && 0.0698 & 1.0447 && 0.0690 & 1.0834 && 0.0693 & 1.0300 \\
\multicolumn{6}{l}{$\overline{G}_n(f_3)$\qquad \textbf{Non-Gaussian}}& \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!h]
\centering
\caption{Empirical mean and variance of $\overline{G}_n(f_i),\; i=1, 2, 3$ from $5000$ replications. Theoretical mean and variance are $0$ and $1$, respectively. Dimension $p=n^{2.5}$.}
\label{tab:Qn_f_Simulation_n2p5}
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{$\bSigma_p=\bSigma_A$} & & \multicolumn{2}{c}{$\bSigma_p=\bSigma_B$} & & \multicolumn{2}{c}{$\bSigma_p=\bSigma_C$} \\
\cmidrule(r){3-4} \cmidrule(lr){6-7} \cmidrule(l){9-10}
$n$ & & mean & var & & mean & var & & mean & var \\ \toprule
50 && -0.0095 & 0.9870 && -0.0023 & 1.0067 && 0.0092 & 1.0233 \\
100 && -0.0067 & 1.0274 && 0.0009 & 0.9991 && 0.0115 & 1.0150 \\
150 && -0.0056 & 1.0164 && 0.0109 & 0.9772 && -0.0086 & 0.9730 \\
200 && 0.0139 & 0.9949 && 0.0120 & 0.9907 && -0.0179 & 1.0002 \\
\multicolumn{6}{l}{$\overline{G}_n(f_1)$\qquad \textbf{Gaussian}} & \\ \midrule
50 && 0.0087 & 1.0332 && -0.0011 & 0.9972 && -0.0056 & 0.9920 \\
100 && 0.0016 & 0.9859 && -0.0148 & 0.9899 && -0.0054 & 1.0226 \\
150 && 0.0093 & 1.0325 && 0.0088 & 1.0284 && 0.0380 & 0.9894 \\
200 && 0.0109 & 0.9947 && -0.0199 & 1.0085 && 0.0038 & 0.9948 \\
\multicolumn{6}{l}{$\overline{G}_n(f_1)$\qquad \textbf{Non-Gaussian}} & \\ \toprule
50 && 0.0044 & 1.0243 && -0.0045 & 1.0124 && -0.0152 & 1.0265 \\
100 && 0.0191 & 0.9982 && 0.0022 & 1.0169 && -0.0173 & 1.0314 \\
150 && 0.0010 & 1.0353 && 0.0086 & 1.0120 && 0.0065 & 1.0105 \\
200 && 0.0039 & 1.0111 && -0.0178 & 1.0089 && 0.0167 & 1.0124 \\
\multicolumn{6}{l}{$\overline{G}_n(f_2)$\qquad \textbf{Gaussian}} & \\ \midrule
50 && 0.0049 & 1.0585 && -0.0030 & 1.0967 && -0.0015 & 1.1071 \\
100 && -0.0170 & 1.0400 && 0.0070 & 1.0426 && 0.0085 & 1.0805 \\
150 && 0.0113 & 1.0449 && -0.0019 & 1.0396 && 0.0033 & 1.0244 \\
200 && -0.0178 & 1.0492 && -0.0100 & 1.0410 && -0.0049 & 1.0336 \\
\multicolumn{6}{l}{$\overline{G}_n(f_2)$\qquad \textbf{Non-Gaussian}}& \\ \toprule
50 && 0.0045 & 1.0491 && 0.0298 & 1.0607 && 0.0406 & 1.0826 \\
100 && -0.0051 & 1.0387 && 0.0021 & 1.0235 && 0.0255 & 1.0371 \\
150 && 0.0023 & 0.9959 && 0.0115 & 1.0224 && 0.0097 & 0.9771 \\
200 && 0.0323 & 1.0186 && 0.0045 & 1.0030 && 0.0084 & 1.0037 \\
\multicolumn{6}{l}{$\overline{G}_n(f_3)$\qquad \textbf{Gaussian}}& \\ \toprule
50 && 0.0551 & 1.1447 && 0.0539 & 1.1059 && 0.0604 & 1.1238 \\
100 && 0.0342 & 1.0608 && 0.0273 & 1.0318 && 0.0322 & 1.0817 \\
150 && 0.0281 & 1.0671 && 0.0290 & 1.0528 && 0.0642 & 1.0259 \\
200 && 0.0347 & 1.0355 && -0.0017 & 1.0086 && 0.0266 & 1.0289 \\
\multicolumn{6}{l}{$\overline{G}_n(f_3)$\qquad \textbf{Non-Gaussian}}& \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Test for the Separable Covariance Structure}
Empirical size and power of the separable structure test in Section~\ref{sec:separable_structure_matrix_noise} are examined to verify the asymptotic power of $W^*$ given in Proposition~\ref{prop:separable_test_power_theo}. We compare the empirical power of $W^*$ with its theoretical limits under various model settings. Specifically, the vectorization of the data matrix $\bE_t$ is $\mathsf{vec}(\bE_t) = (\bSigma_1\otimes \bSigma_2)^{\nicefrac{1}{2}} \mathsf{vec}(\bZ_t)$. We consider two data distributions of $\bZ_t=\{Z_{t,ij}\}$.
\begin{itemize}
\item[(1)] \textbf{Gaussian matrix white noise:} $\{Z_{t, ij},~1\leqslant i\leqslant p,~1\leqslant j\leqslant n\}$ i.i.d. $\calN(0, 1)$, with $\nu_4=\Expe Z_{t, ij}^4=3$.
\item[(2)] \textbf{Non-Gaussian matrix white noise:} $\{Z_{t, ij},~1\leqslant i\leqslant p,~1\leqslant j\leqslant n\}$ i.i.d. $\mathsf{Gamma}(4,2)-2$, with $\Expe Z_{t, ij}=0$, $\Expe Z^2_{t, ij}=1$, $\nu_4=\Expe Z^4_{t, ij}=4.5$.
\end{itemize}
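The two moment specifications above can be sanity-checked by simulation. The following sketch (NumPy; the seed and the Monte Carlo sample size are arbitrary illustrative choices) draws $\mathsf{Gamma}(4,2)-2$ variates and checks $\Expe Z_{t,ij}=0$, $\Expe Z^2_{t,ij}=1$ and $\nu_4=4.5$ empirically:

```python
import numpy as np

rng = np.random.default_rng(2024)
N = 1_000_000                     # Monte Carlo sample size (illustrative)

# Gamma(shape=4, rate=2): NumPy parameterizes by scale = 1/rate = 0.5,
# so the mean is 4 * 0.5 = 2 and subtracting 2 centres the variates.
Z = rng.gamma(shape=4.0, scale=0.5, size=N) - 2.0

mean_z = Z.mean()                 # should be close to 0
var_z = (Z ** 2).mean()           # should be close to 1
nu4 = (Z ** 4).mean()             # fourth moment, close to 4.5
```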
As for the covariance matrix $\bSigma_1\otimes\bSigma_2$, we set $\bSigma_1$ as a $p_1\times p_1$ tri-diagonal matrix and $\bSigma_2$ as a $p_2\times p_2$ symmetric Toeplitz matrix. More specifically,
\[
\bSigma_1 = \begin{pmatrix}
2&1&&&\\
1&2&1&\\
&1&\ddots&\ddots&\\
&&\ddots&\ddots&1\\
&&&1&2\\
\end{pmatrix}_{p_1\times p_1},
\]
and $\bSigma_2 = \bigl(\rho^{|i-j|}\bigr)_{p_2\times p_2}$ with $|\rho|<1$. We set $\rho=0.45$, $p_1=p_2=T$ and $p_1 = 40, 60, 80, 100, 120$. The nominal level of the test is $\alpha = 0.05$.
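For concreteness, the two covariance factors and their Kronecker product can be assembled as follows (a minimal NumPy sketch; the small dimensions are chosen only for display):

```python
import numpy as np

def sigma1(p1):
    # tri-diagonal: 2 on the diagonal, 1 on the first off-diagonals
    return 2 * np.eye(p1) + np.eye(p1, k=1) + np.eye(p1, k=-1)

def sigma2(p2, rho=0.45):
    # symmetric Toeplitz: entries rho^{|i-j|}
    idx = np.arange(p2)
    return rho ** np.abs(idx[:, None] - idx[None, :])

S1, S2 = sigma1(4), sigma2(3)
cov = np.kron(S1, S2)   # covariance of vec(E_t) under the separable structure
```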
To obtain the empirical power, we keep $\bSigma_1$ unchanged and replace $\rho$ in $\bSigma_2$ with $\rho(1+\lambda)$, where $|\rho(1+\lambda)|<1$. We vary $\lambda = 0, 0.2, 0.3, 0.4, 0.5$ to obtain different levels of testing power. For each combination of $(p_1, p_2, T)$, $5000$ independent replications are used to obtain the empirical size and power. Empirical values and theoretical limits are compared in Table~\ref{tab:separable_test_power}.
As shown in Table~\ref{tab:separable_test_power}, the empirical power tends to $1$ when either $p_1, p_2, T$ or $\lambda$ increases. Most importantly, the empirical power value is consistent with its theoretical limit under all scenarios.
\begin{table}[!h]
\centering
\caption{Empirical (Emp) and Theoretical (Theo) Size ($\lambda=0$) and Power of the Separable Structure Test with $5000$ replications.}
\label{tab:separable_test_power}
\begin{tabular}{@{}ccccccccccccc@{}}
\toprule
\multicolumn{3}{c}{} & \multicolumn{2}{c}{$\lambda=0$} & \multicolumn{2}{c}{$\lambda=0.2$} & \multicolumn{2}{c}{$\lambda=0.3$} & \multicolumn{2}{c}{$\lambda=0.4$} & \multicolumn{2}{c}{$\lambda=0.5$} \\
\cmidrule(r){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(l){12-13}
$p_1$ & $p_2$ & $T$ & Emp & Theo & Emp & Theo & Emp & Theo & Emp & Theo & Emp & Theo
\\ \toprule
40 & 40 & 40 & 0.0490 & 0.05 & 0.0950 & 0.0880 & 0.2856 & 0.3087 & 0.8230 & 0.8354 & 0.9992 & 0.9992 \\
60 & 60 & 60 & 0.0554 & 0.05 & 0.1650 & 0.1625 & 0.6484 & 0.6606 & 0.9974 & 0.9969 & 1 & 1 \\
80 & 80 & 80 & 0.0520 & 0.05 & 0.2600 & 0.2699 & 0.8994 & 0.9084 & 1 & 1 & 1 & 1 \\
100 & 100 & 100 & 0.0526 & 0.05 & 0.3916 & 0.4049 & 0.9864 & 0.9878 & 1 & 1 & 1 & 1 \\
120 & 120 & 120 & 0.0542 & 0.05 & 0.5356 & 0.5524 & 0.9986 & 0.9992 & 1 & 1 & 1 & 1 \\
\multicolumn{13}{l}{\textbf{Gaussian}} \\ \toprule
40 & 40 & 40 & 0.0568 & 0.05 & 0.0716 & 0.0662 & 0.2214 & 0.2353 & 0.7008 & 0.7568 & 0.9942 & 0.9977 \\
60 & 60 & 60 & 0.0610 & 0.05 & 0.1298 & 0.1277 & 0.5462 & 0.5752 & 0.9878 & 0.9930 & 1 & 1 \\
80 & 80 & 80 & 0.0580 & 0.05 & 0.2202 & 0.2216 & 0.8356 & 0.8655 & 1 & 1 & 1 & 1 \\
100 & 100 & 100 & 0.0530 & 0.05 & 0.3312 & 0.3464 & 0.9694 & 0.9785 & 1 & 1 & 1 & 1 \\
120 & 120 & 120 & 0.0562 & 0.05 & 0.4886 & 0.4910 & 0.9974 & 0.9984 & 1 & 1 & 1 & 1 \\
\multicolumn{13}{l}{\textbf{Non-Gaussian}} \\ \bottomrule
\end{tabular}
\end{table}
\section{Proof of Theorem \ref{thm:CLT}}\label{sec:proof_CLT_1}
In Section~\ref{sec:truncation} we first present the preliminary step of data truncation. The general strategy of the main proof of Theorem~\ref{thm:CLT} is explained in Section~\ref{sec:strategy_of_proof}. Three major steps of the general strategy are presented in Sections~\ref{sec:conv_Mn1_proof}, \ref{sec:tight_Mn1} and \ref{sec:conv_Mn2}, respectively.
\subsection{Truncation, Centralization and Rescaling}\label{sec:truncation}
We first truncate the elements of $\bX$ without changing the weak limit of $G_n(f)$. We choose a positive sequence $\{\delta_n\}$ such that
\begin{equation}\label{eq:condi-trun}
\delta_n^{-4} \Expe |X_{11}|^4 \indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}} \to 0,\qquad \delta_n\downarrow 0,\quad \delta_n\sqrt[4]{np}\uparrow \infty,
\end{equation}
as $n\to\infty$. Define
\begin{align*}
\widehat{X}_{ij}& = X_{ij}\indicator_{\{|X_{ij}|\leqslant\delta_n \sqrt[4]{np}\}},\qquad \sigma^2 = \Expe |\widehat{X}_{ij}-\Expe \widehat{X}_{ij}|^2, \qquad \widehat{\bX} = (\widehat{X}_{ij})_{p\times n },\\
\widetilde{X}_{ij} & = (\widehat{X}_{ij}-\Expe \widehat{X}_{ij})/\sigma, \qquad \widetilde{\bX} = (\widetilde{X}_{ij})_{p\times n},\\
\widehat{\bA}_n& = \lb \widehat{\bX}'\bSigma_p \widehat{\bX} -pa_p \bI_n\rb/\sqrt{npb_p},\qquad \widetilde{\bA}_n=\lb \widetilde{\bX}'\bSigma_p \widetilde{\bX} -pa_p \bI_n\rb/\sqrt{npb_p}.
\end{align*}
Define $\widehat{G}_n(f)$ and $\widetilde{G}_n(f)$ similarly by means of \eqref{eq:LSS} with the matrix $\bA_n$ replaced by $\widehat{\bA}_n$ and $\widetilde{\bA}_n$, respectively.
First, observe that
\[
\Prob\bigl(G_n(f)\ne\widehat{G}_n(f)\bigr)\leqslant \Prob(\bA_n\ne\widehat{\bA}_n)=o(1).
\]
Indeed,
\[
\Prob(\bA_n\ne\widehat{\bA}_n)
\leqslant np\Prob(|X_{11}|\geqslant\delta_n \sqrt[4]{np})\leqslant K\delta_n^{-4} \Expe |X_{11}|^4\indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}}=o(1).
\]
Now we consider the difference between $\widehat{G}_n(f)$ and $\widetilde{G}_n(f)$. For any analytic function $f$ on $\mathscr{U}$, we have
\begin{align*}
\Expe \Bigl|\widehat{G}_n(f) - \widetilde{G}_n(f)\Bigr| & \leqslant \Expe\sum_{j=1}^n \Bigl|f\bigl(\lambda_j^{\widehat{\bA}_n}\bigr)-f\bigl(\lambda_j^{\widetilde{\bA}_n}\bigr)\Bigr|\leqslant \frac{K_f}{\sqrt{npb_p}}\Expe\sum_{j=1}^n \Bigl|\lambda_j^{\widehat{\bX}'\bSigma_p \widehat{\bX}}-\lambda_j^{\widetilde{\bX}'\bSigma_p \widetilde{\bX}}\Bigr|\\[0.5em]
& \leqslant \frac{K_f}{\sqrt{npb_p}} \Expe \Bigl| \tr(\widehat{\bX}-\widetilde{\bX})'\bSigma_p (\widehat{\bX}-\widetilde{\bX})\cdot 2 \bigl(\tr(\widehat{\bB}_n)+\tr(\widetilde{\bB}_n)\bigr) \Bigr|^{\nicefrac{1}{2}}\\[0.5em]
&\leqslant \frac{2K_f}{\sqrt{npb_p}}\Bigl|\Expe \tr(\widehat{\bX}-\widetilde{\bX})'\bSigma_p (\widehat{\bX}-\widetilde{\bX})\Bigr|^{\nicefrac{1}{2}} \cdot \Bigl|\Expe \tr(\widehat{\bB}_n)+\Expe \tr(\widetilde{\bB}_n)\Bigr|^{\nicefrac{1}{2}},
\end{align*}
where $K_f$ is a bound on $|f'(x)|$.
It follows from \eqref{eq:condi-trun} that
\begin{align*}
|\sigma^2-1|&\leqslant 2 \Expe X_{11}^2 \indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}}\\
&\leqslant \frac{2}{\delta_n^2 \sqrt{np}} \Expe |X_{11}|^4 \indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}}=o\bigl((np)^{-\nicefrac{1}{2}}\bigr),
\end{align*}
and
\begin{align*}
\bigl|\Expe \widehat{X}_{11}\bigr| &= \bigl| \Expe X_{11} \indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}} \bigr|\leqslant \Expe |X_{11}| \indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}}\\
&\leqslant \frac{1}{\delta_n^3 (np)^{3/4}} \Expe |X_{11}|^4 \indicator_{\{|X_{11}|\geqslant\delta_n \sqrt[4]{np}\}} =o\bigl((np)^{-3/4}\bigr).
\end{align*}
These give us
\begin{align*}
\Expe \tr(\widehat{\bX}-\widetilde{\bX})'\bSigma_p (\widehat{\bX}-\widetilde{\bX})
& \leqslant \|\bSigma_p\|\sum_{i, j} \Expe |\widehat{X}_{ij}-\widetilde{X}_{ij}|^2 = \|\bSigma_p\|\sum_{i, j} \Expe \biggl|\frac{\sigma-1}{\sigma}\widehat{X}_{ij}+\frac{\Expe \widehat{X}_{ij}}{\sigma}\biggr|^2\\
& \leqslant Kpn\biggl(\frac{(1-\sigma)^2}{\sigma^2}\Expe|\widehat{X}_{11}|^2 + \frac{|\Expe \widehat{X}_{11}|^2}{\sigma^2}\biggr) = o(1),
\end{align*}
and
\[
\Expe \tr\bigl(\widehat{\bX}'\widehat{\bX}\bigr) \leqslant \sum_{i,j} \Expe |\widehat{X}_{ij}|^2 \leqslant Knp,\qquad
\Expe \tr\bigl(\widetilde{\bX}'\widetilde{\bX}\bigr) \leqslant \sum_{i,j} \Expe |\widetilde{X}_{ij}|^2 \leqslant Knp.
\]
From the above estimates, we obtain
\[
G_n(f)=\widetilde{G}_n(f)+o_p(1).
\]
Thus, we only need to find the limiting distribution of $\{\widetilde{G}_n(f_j), j=1,\ldots,k\}$. Hence, in what follows, we assume that the underlying variables are truncated at $\delta_n \sqrt[4]{np}$, centralized, and rescaled. For convenience, we shall suppress the superscripts on the variables and assume that, for any $1\leqslant i\leqslant p$ and $1\leqslant j \leqslant n$,
\begin{align*}
|X_{ij}|&\leqslant \delta_n \sqrt[4]{np},\qquad \Expe X_{ij}=0, \qquad \Expe X_{ij}^2 = 1,\\[0.5em]
\Expe X_{ij}^{a} &= \nu_{a} + o(1),\quad a=4,5,\qquad\qquad \Expe |X_{ij}|^{6+\varepsilon_0}< \infty,
\end{align*}
where $\delta_n$ satisfies the condition \eqref{eq:condi-trun}.
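Numerically, the truncation--centralization--rescaling step can be mimicked as in the sketch below (Python; empirical moments stand in for the population quantities $\Expe\widehat{X}_{11}$ and $\sigma^2$, and the value of $\delta_n$ is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 200, 50
delta_n = 0.2                                # illustrative truncation constant
X = rng.standard_normal((p, n))

level = delta_n * (n * p) ** 0.25            # truncation level delta_n * (np)^{1/4}
X_hat = np.where(np.abs(X) <= level, X, 0.0)          # X_ij * 1{|X_ij| <= level}
sigma = np.sqrt(np.mean((X_hat - X_hat.mean()) ** 2))
X_tilde = (X_hat - X_hat.mean()) / sigma              # centred and rescaled
```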
\subsection{Strategy of the proof}\label{sec:strategy_of_proof}
The general strategy of the proof follows the method established in \citet{bai2004clt} and \citet{BaiYao2005}.
Let $\scrC$ be the closed contour formed by the boundary of the rectangle with vertices $\pm u_1 \pm i v_1$, where $u_1>2$ and $0<v_1\leqslant 1$. Assume that $u_1$ and $v_1$ are fixed and chosen so that $\scrC\subset \scrU$. By Cauchy's integral formula, with probability one, we have
\[
G_n(f) = -\frac{1}{2\pi i} \oint_{\mathscr{C}} f(z) n\Bigl[m_n(z)-m(z)-\mathcal{X}_n(m(z))\Bigr] \mathrm{d} z,
\]
where $m_n(z)$ and $m(z)$ are the Stieltjes transforms of $F^{\bA_n}$ and $F$, respectively.
The representation reduces our problem to finding the limiting process of
\[
M_n(z) = n\Bigl[m_n(z)-m(z)-\mathcal{X}_n(m(z))\Bigr],\quad z\in\scrC.
\]
For $z\in \scrC$, we decompose $M_n(z)$ into a random part $M_n^{(1)}(z)$ and a deterministic part $M_n^{(2)}(z)$, where
\[
M_n^{(1)}(z) = n\bigl[m_n(z)-\Expe m_n(z)\bigr], \qquad M_n^{(2)}(z) = n\bigl[\Expe m_n(z) -m(z)-\mathcal{X}_n(m(z))\bigr].
\]
Throughout the paper, we set
$
\mathbb{C}_1 = \{z: z=u+iv, u\in [-u_1, u_1],~|v|\geqslant v_1\}.
$
The limiting process of $M_n(z)$ on $\mathbb{C}_1$ is stated in the following proposition.
\begin{proposition}\label{prop:Mn_CLT}
Under the assumption $p\wedge n\to \infty, n^2/p=O(1)$ and after truncation of the data, the empirical process $\{M_n(z), z\in\mathbb{C}_1\}$ converges weakly to a centred Gaussian process $\{M(z), z\in\mathbb{C}_1\}$
with the covariance function
\begin{equation}\label{eq:Mn_cov}
\Lambda(z_1, z_2) = m'(z_1)m'(z_2)\Bigl[\frac{\omega}{\theta}(\nu_4-3)+2\bigl(1-m(z_1)m(z_2)\bigr)^{-2}\Bigr].
\end{equation}
\end{proposition}
Write the contour $\scrC$ as $\scrC=\scrC_{\ell}\cup \scrC_{r}\cup \scrC_{u}\cup \scrC_0$, where
\begin{align*}
\scrC_{\ell}&= \{ z=-u_1+iv, \xi_n/n < |v| < v_1 \},\\
\scrC_{r}&= \{ z=u_1+iv, \xi_n/n < |v| < v_1 \},\\
\scrC_{0}&= \{ z=\pm u_1+iv, |v| \leqslant \xi_n/n \},\\
\scrC_{u}&= \{ z=u \pm iv_1, |u| \leqslant u_1 \}
\end{align*}
and $\xi_n$ is a slowly varying sequence of positive constants and $v_1$ is a positive constant independent of $n$. Note that $\scrC_{\ell}\cup\scrC_{0}\cup \scrC_{r}=\scrC\setminus \mathbb{C}_1$.
To prove Theorem~\ref{thm:CLT}, we need to show that for $j=\ell, r, 0$ and some event $U_n$ with $\mathsf{P}(U_n)\to 1$,
\begin{equation}\label{eq:Mn_integral_zero}
\lim_{v_1\downarrow 0} \limsup_{n\to\infty} \int_{\scrC_j} \Expe \Bigl|M_n(z)\indicator_{U_n}\Bigr|^2\mathrm{d}z=0
\end{equation}
and
\begin{equation}\label{eq:M_integral_zero}
\lim_{v_1\downarrow 0} \int_{\scrC_j} \Expe \bigl|M(z)\bigr|^2\mathrm{d}z=0.
\end{equation}
The verification of \eqref{eq:Mn_integral_zero} and \eqref{eq:M_integral_zero} follows procedures similar to those developed in Sections 2.3, 3.1 and 4.3 of \citet{BaiYao2005}, and the details are omitted here.
The calculation of the limiting covariance function of $Y(f)$ (see \eqref{eq:cov_1} \& \eqref{eq:cov_2}) is quite similar to that given in Section 5 of \citet{BaiYao2005} and is therefore also omitted.
Next, we can prove Proposition~\ref{prop:Mn_CLT} by the following three steps:
\begin{itemize}
\item Finite-dimensional convergence of the random part $M_n^{(1)}(z)$ in distribution on $\mathbb{C}_1$;
\item Tightness of the random part $M_n^{(1)}(z)$;
\item Convergence of the non-random part $M_n^{(2)}(z)$ to the mean function on $\mathbb{C}_1$.
\end{itemize}
Details of the three steps are presented in Sections~\ref{sec:conv_Mn1_proof}, \ref{sec:tight_Mn1} and \ref{sec:conv_Mn2}, respectively.
\subsection{Finite dimensional convergence of $M_n^{(1)}(z)$ in distribution}\label{sec:conv_Mn1_proof}
We first decompose the random part $M_n^{(1)}(z)$ as a sum of martingale difference sequences, which is given in \eqref{eq:Mn_MDS}. Then, we apply the martingale CLT (Lemma~\ref{lem:CLT_MDS}) to obtain the asymptotic distribution of $M_n^{(1)}(z)$. Note that we prove the finite dimensional convergence of $M_n^{(1)}(z)$ under the assumption $p/n\to\infty$, which is weaker than $n^2/p=O(1)$.
First, we introduce some notation.
Define
\begin{align*}
\bX_k&=(\bx_1,\ldots,\bx_{k-1},\bx_{k+1},\ldots,\bx_n),\qquad \bA_k=\tfrac{1}{\sqrt{npb_p}}\Bigl( \bX_k'\bSigma_p \bX_k-pa_p \bI_{n-1}\Bigr),\\[0.5em]
\bD&=(\bA-z\bI_n)^{-1},\qquad \bD_k = (\bA_k-z\bI_{n-1})^{-1},\qquad \bM_k^{(s)} =\bSigma_p\bX_k\bD_k^{s}\bX_k'\bSigma_p, ~ s=1, 2,\\[0.5em]
a_{kk}^{\mathsf{diag}}&=A_{kk}-z=\tfrac{1}{\sqrt{npb_p}}\bigl(\bx_k'\bSigma_p\bx_k-pa_p\bigr)-z,\qquad \bq_k'=\tfrac{1}{\sqrt{npb_p}}\bigl(\bx_k'\bSigma_p\bX_k\bigr)\\[0.5em]
\beta_k & =\frac{1}{-a_{kk}^{\mathsf{diag}}+\bq_k'\bD_k\bq_k},\qquad \beta^{\mathsf{tr}}_k = \frac{1}{z+(npb_p)^{-1}\tr \bM_k^{(1)}},\\[0.5em]
\gamma_{ks} & = -\frac{1}{npb_p}\tr\bM_k^{(s)}+\bq_k'\bD_k^{s}\bq_k, ~ s=1, 2,\qquad \eta_k = \tfrac{1}{\sqrt{npb_p}}\bigl(\bx_k'\bSigma_p\bx_k-pa_p\bigr)-\gamma_{k1},\\[0.5em]
\ell_k&=-\beta_k\beta^{\mathsf{tr}}_k\eta_k\bigl(1+\bq_k'\bD_k^{2}\bq_k\bigr).
\end{align*}
Note that $a_{kk}^{\mathsf{diag}}$ is the $k$-th diagonal element of $\bD^{-1}$ and $\bq_k'$ is the vector obtained from the $k$-th row of $\bD^{-1}$ by deleting its $k$-th element.
By applying Theorem A.5 in \citet{bai2010spectral}, we obtain the equality
\begin{equation}\label{eq:trace_inv_diff}
\tr \bD-\tr \bD_k=-\frac{1+\bq_k'\bD_k^{2}\bq_k}{-a_{kk}^{\mathsf{diag}}+\bq_k'\bD_k\bq_k}=-\beta_k(1+\bq_k'\bD_k^{2}\bq_k).
\end{equation}
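Outside the formal argument, \eqref{eq:trace_inv_diff} is an instance of a generic identity for symmetric matrices and can be sanity-checked numerically (a Python sketch; the dimension, the deleted index and the spectral parameter are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                    # symmetric matrix playing the role of A_n
z = 0.3 + 1.5j                       # spectral parameter off the real axis

idx = [i for i in range(n) if i != k]
D = np.linalg.inv(A - z * np.eye(n))            # resolvent of A
A_k = A[np.ix_(idx, idx)]                       # delete k-th row and column
D_k = np.linalg.inv(A_k - z * np.eye(n - 1))
q = A[idx, k]                                   # k-th column of A, k-th entry deleted

beta_k = 1.0 / (-(A[k, k] - z) + q @ D_k @ q)
lhs = np.trace(D) - np.trace(D_k)
rhs = -beta_k * (1.0 + q @ (D_k @ D_k) @ q)
```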
Straightforward calculation gives:
\begin{equation}\label{eq:betak_decom}
\beta_k-\beta^{\mathsf{tr}}_k=\beta_k\beta^{\mathsf{tr}}_k\eta_k
\end{equation}
and
\begin{equation}\label{eq:Expe_beta_eta_kappa}
(\Expe_k-\Expe_{k-1}) \beta^{\mathsf{tr}}_k(1+\bq_k'\bD_k^{2}\bq_k)=\Expe_k \bigl(\beta_k^{\tr}\gamma_{k2}\bigr),\qquad \Expe_{k-1}\bigl(\beta_k^{\tr}\gamma_{k2}\bigr)= 0,
\end{equation}
where $\Expe_k(\cdot)$ is the expectation with respect to the $\sigma$-field generated by the first $k$ columns of $\bX$.
By the definition of $\bD$ and $\bD_k$, we obtain two basic identities:
\begin{align}
\bD\bX'\bSigma_p\bX&=p a_p \bD +\sqrt{npb_p}(\bI_n+z\bD),\label{eq:DXSX}\\[0.5em]
\bD_k\bX_k'\bSigma_p\bX_k&=p a_p \bD_k +\sqrt{npb_p}(\bI_{n-1}+z\bD_k).\label{eq:DXSX_k}
\end{align}
If $\bSigma_p=\bI_p$, it is straightforward to derive that the limit of $\tr\bigl(\bM_{k}^{(1)}(z)\bigr)/(npb_p)$ is $m(z)$ by using \eqref{eq:DXSX_k}. However, when $\bSigma_p\neq \bI_p$, a more detailed estimate is needed.
\begin{lemma}\label{lem:Mk_limit}
Under the assumption that $p\wedge n\to\infty$ and $p/n\to\infty$, we have, for $z\in\mathbb{C}_1$,
\begin{equation}
\Expe \biggl| \frac{1}{npb_p} \tr\bigl(\bM_{k}^{(1)}(z)\bigr) - m(z) \biggr|^2
\leqslant \frac{K n}{p} + \frac{K}{n^2}.\label{eq:Diff-trMk-m}
\end{equation}
\end{lemma}
\begin{proof}[Proof]
Using Lemma \ref{lem:quad-trace}, we have
\begin{equation}\label{eq:quad-trace-upper-bound}
\Expe (\bx_i'\bSigma_p^2\bx_i - pb_p)^2 \leqslant K \nu_4\tr(\bSigma_p^4)\leqslant K \cdot p\|\bSigma_p^4\| \leqslant K p .
\end{equation}
Note that $\tr(\bA^*\bB)$ is the inner product of $\mathsf{vec}(\bA)$ and $\mathsf{vec}(\bB)$ for any $n\times m$ matrices $\bA$ and $\bB$.
It follows from the Cauchy-Schwarz inequality that
\begin{equation}\label{eq:CS-trace}
\bigl|\tr(\bA^*\bB)\bigr|^2 \leqslant \tr(\bA^*\bA)\cdot \tr(\bB^*\bB).
\end{equation}
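A quick numerical illustration of \eqref{eq:CS-trace} (NumPy sketch with arbitrary complex rectangular matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
B = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))

# |tr(A* B)|^2 <= tr(A* A) tr(B* B): Cauchy-Schwarz for vec(A) and vec(B)
lhs = abs(np.trace(A.conj().T @ B)) ** 2
rhs = np.trace(A.conj().T @ A).real * np.trace(B.conj().T @ B).real
```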
By using \eqref{eq:CS-trace}, we have
\begin{align*}
\Expe \biggl| \frac{1}{npb_p} &\tr \bM_{k}^{(1)}(z) - \frac{1}{n}\tr \bD_k(z)\biggr|^2 \\
&=\frac{1}{(npb_p)^2} \Expe \; \Bigl|\tr\Bigl(\bD_k(z)\bigl(\bX_k'\bSigma_p^2\bX_k-pb_p\bI_{n-1}\bigr)\Bigr)\Bigr|^2\\
&\leqslant \frac{1}{(npb_p)^2} \Expe \; \biggl[\tr\bigl(\bD_k(\bar{z})\bD_k(z)\bigr) \cdot \tr\bigl(\bX_k'\bSigma_p^2\bX_k-pb_p\bI_{n-1}\bigr)^2\biggr]\\
&\leqslant \frac{1}{(npb_p)^2} \Expe \; \biggl[n\bigl\|\bD_k(\bar{z})\bD_k(z)\bigr\| \cdot \tr\bigl(\bX_k'\bSigma_p^2\bX_k-pb_p\bI_{n-1}\bigr)^2\biggr]\\
&\leqslant \frac{1}{n(pb_p v_1)^2} \Expe \; \biggl[\tr\bigl(\bX_k'\bSigma_p^2\bX_k-pb_p\bI_{n-1}\bigr)^2\biggr].
\end{align*}
Moreover, by using \eqref{eq:quad-trace-upper-bound} and the fact that $\Expe (\bx_i'\bSigma_p^2\bx_j)^2 = \tr(\bSigma_p^4)$ for $i\neq j$, we have
\begin{align*}
\Expe \biggl[\tr\bigl(\bX_k'\bSigma_p^2\bX_k-pb_p\bI_{n-1}\bigr)^2\biggr] & = \sum_{i\neq k} \Expe (\bx_i'\bSigma_p^2\bx_i - pb_p)^2 + \sum_{i\neq j, i\neq k, j\neq k} \Expe (\bx_i'\bSigma_p^2\bx_j)^2 \\
&\leqslant (n-1) \cdot p K + (n-1)(n-2) \cdot p K.
\end{align*}
Thus we have
\begin{equation}\label{eq:Diff-trMk-trDk}
\Expe \biggl| \frac{1}{npb_p} \tr \bM_{k}^{(1)}(z) - \frac{1}{n}\tr \bD_k(z) \biggr|^2
\leqslant \frac{K n}{p}.
\end{equation}
Moreover, by \eqref{eq:trace_inv_diff} and \eqref{eq:beta_qDq_upper_bdd}, we have
\begin{equation}
\biggl|\frac{1}{n}\tr\bD (z)-\frac{1}{n}\tr\bD_k(z)\biggr| \overset{\eqref{eq:trace_inv_diff}}{=} \frac{1}{n}\Bigl|\beta_k(1+\bq_k'\bD_k^{2}\bq_k)\Bigr|\overset{\eqref{eq:beta_qDq_upper_bdd}}{\leqslant} \frac{1}{n v_1}, \label{eq:D_Dk_ESD_diff}
\end{equation}
which, together with \eqref{eq:Diff-trMk-trDk} and the fact that $m_n(z)\convas m$, implies \eqref{eq:Diff-trMk-m}.
\end{proof}
Applying \eqref{eq:trace_inv_diff} $\sim$ \eqref{eq:Expe_beta_eta_kappa}, we have the following decomposition:
\begin{align}
M_n^{(1)}(z)&=\tr\bD-\Expe\tr\bD=\sum_{k=1}^{n} (\Expe_k-\Expe_{k-1})\bigl(\tr \bD-\tr \bD_k\bigr)\nonumber\\
&=-\sum_{k=1}^{n} (\Expe_k-\Expe_{k-1})\beta_k\Bigl(1+\bq_k'\bD_k^{2}\bq_k\Bigr)\label{eq:Mn1_decom_0}\\
&\overset{\eqref{eq:betak_decom}}{=}~\sum_{k=1}^{n} (\Expe_k-\Expe_{k-1})(-\beta_k\beta^{\mathsf{tr}}_k\eta_k)\Bigl(1+\bq_k'\bD_k^{2}\bq_k\Bigr) - \sum_{k=1}^{n} (\Expe_k-\Expe_{k-1})\beta^{\mathsf{tr}}_k\Bigl(1+\bq_k'\bD_k^{2}\bq_k\Bigr) \nonumber\\
&=\sum_{k=1}^{n}\Bigl[(\Expe_k-\Expe_{k-1}) \ell_k - \Expe_k \bigl(\beta_k^{\tr}\gamma_{k2}\bigr)\Bigr].\label{eq:Mn1_decom_1}
\end{align}
\noindent By using \eqref{eq:betak_decom}, we can split $\ell_k$ as
\begin{align}
\ell_k&=-\bigl[(\beta^{\mathsf{tr}}_k)^2\eta_k+\beta_k(\beta^{\mathsf{tr}}_{k})^2\eta_k^2\bigr]\Bigl(1+\bq_k'\bD_k^{2}\bq_k\Bigr)\nonumber\\[0.5em]
&=-(\beta^{\mathsf{tr}}_k)^2\eta_k \Bigl(1+\frac{1}{npb_p}\tr\bM_k^{(2)}\Bigr)-\beta^{\mathsf{tr}}_k\eta_k \gamma_{k2}- \beta_k(\beta^{\mathsf{tr}}_{k})^2\eta_k^2\Bigl(1+\bq_k'\bD_k^{2}\bq_k\Bigr)\nonumber\\
&=:\ell_{k1}+\ell_{k2}+\ell_{k3}.\label{eq:ellk_decom}
\end{align}
By Lemma~\ref{lem:betak_upper_bounds} and Lemma~\ref{lem:gamma_moment_upper_bound}, it is not difficult to verify that
\begin{equation}
\Expe \biggl| \sum_{k=1}^n (\Expe_k-\Expe_{k-1}) \ell_{k2}\biggr|^2 = o(1),\qquad \Expe \biggl| \sum_{k=1}^n (\Expe_k-\Expe_{k-1}) \ell_{k3}\biggr|^2 = o(1).
\end{equation}
These estimates, together with \eqref{eq:Mn1_decom_1} and \eqref{eq:ellk_decom}, imply that
\begin{align}
M_n^{(1)}(z) &= \sum_{k=1}^n \Expe_k \biggl[-(\beta^{\mathsf{tr}}_k)^2\eta_k \Bigl(1+\frac{1}{npb_p}\tr\bM_k^{(2)}\Bigr)-\beta_k^{\tr}\gamma_{k2}\biggr]+o_{L_2}(1)\nonumber\\
&=:\sum_{k=1}^n Y_k(z)+o_{L_2}(1),\label{eq:Mn_MDS}
\end{align}
where $\{Y_k(z)\}$ is a martingale difference sequence. Thus, to prove the finite-dimensional convergence of $M_n^{(1)}(z), z\in\mathbb{C}_1$, we only need to consider the limit of the following term:
\[
\sum_{j=1}^r a_j M_n^{(1)}(z_j) = \sum_{j=1}^{r} a_j\sum_{k=1}^n Y_k(z_j) +o(1) = \sum_{k=1}^n\Biggl(\sum_{j=1}^{r} a_j Y_k(z_j)\Biggr) + o(1),
\]
where $\{a_j\}$ are complex numbers and $r$ is any positive integer.
By Lemma~\ref{lem:betak_upper_bounds} and Lemma~\ref{lem:gamma_moment_upper_bound}, we have
\begin{equation}\label{eq:MDS_4th_moment}
\Expe|Y_k(z)|^4 \leqslant K\frac{\delta_n^4}{n} + K\biggl(\frac{1}{n^2}+\frac{n}{p^2}\biggr),
\end{equation}
which implies that, for each $\varepsilon >0$,
\[
\sum_{k=1}^n \Expe\Biggl( \biggl|\sum_{j=1}^{r} a_j Y_k(z_j)\biggr|^2 \indicator_{\{|\sum_{j=1}^{r} a_j Y_k(z_j)|\geqslant \varepsilon\}} \Biggr) \leqslant \frac{1}{\varepsilon^2} \sum_{k=1}^n \Expe \biggl|\sum_{j=1}^{r} a_j Y_k(z_j)\biggr|^4 = o(1),
\]
which verifies the second condition \eqref{eq:CLT_MDS_condition2} of the martingale CLT (see Lemma~\ref{lem:CLT_MDS}). Thus, to apply the martingale CLT, it is sufficient to verify that, for $z_1, z_2 \in\mathbb{C}^{+}$, the sum
\begin{equation}\label{eq:Lambda_1}
{\Lambda}_n(z_1,z_2):=\sum_{k=1}^n \Expe_{k-1} \bigl(Y_k(z_1)Y_k(z_2)\bigr)
\end{equation}
converges in probability to a constant (and to determine this constant).
Note that
\[
-\bigl(\beta^{\mathsf{tr}}_k\bigr)^2\eta_k\Bigl(1+\frac{1}{npb_p}\tr\bM_k^{(2)}\Bigr) - \beta^{\mathsf{tr}}_k\gamma_{k2}=\frac{\partial}{\partial z} \Bigl[\beta^{\mathsf{tr}}_k(z)\eta_k(z)\Bigr],
\]
thus, we have
\begin{equation}\label{eq:Lambda_2}
{\Lambda}_n(z_1,z_2)=\frac{\partial^2}{\partial z_2\partial z_1} \sum_{k=1}^{n} \Expe_{k-1}\Bigl[\Expe_k \bigl[\beta^{\mathsf{tr}}_k(z_1)\eta_k(z_1)\bigr] \cdot \Expe_k \bigl[\beta^{\mathsf{tr}}_k(z_2)\eta_k(z_2)\bigr]\Bigr].
\end{equation}
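For completeness, the derivative identity used above follows by term-by-term differentiation: since $\partial_z \bD_k = \bD_k^{2}$, one has $\partial_z \tr \bM_k^{(1)} = \tr \bM_k^{(2)}$ and $\partial_z \gamma_{k1}(z)=\gamma_{k2}(z)$, whence
\[
\frac{\partial}{\partial z}\beta^{\mathsf{tr}}_k(z) = -\bigl(\beta^{\mathsf{tr}}_k\bigr)^2\Bigl(1+\frac{1}{npb_p}\tr\bM_k^{(2)}\Bigr),\qquad
\frac{\partial}{\partial z}\eta_k(z) = -\gamma_{k2}(z),
\]
and the product rule gives the identity preceding \eqref{eq:Lambda_2}.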
It is enough to consider the limit of
\begin{equation}\label{eq:Lambda_tilde_1}
\sum_{k=1}^{n} \Expe_{k-1}\Bigl[\Expe_k \bigl[\beta^{\mathsf{tr}}_k(z_1)\eta_k(z_1)\bigr] \cdot \Expe_k \bigl[\beta^{\mathsf{tr}}_k(z_2)\eta_k(z_2)\bigr]\Bigr].
\end{equation}
By \eqref{eq:SCLaw_equation}, Lemma~\ref{lem:Mk_limit} and the dominated convergence theorem, we conclude that
\begin{equation}\label{eq:betak_tr_limit}
\Expe \Bigl| \beta_k^{\tr}(z)+m(z) \Bigr|^2 = o(1).
\end{equation}
Substituting \eqref{eq:betak_tr_limit} into \eqref{eq:Lambda_tilde_1} yields
\begin{align}
\;&\sum_{k=1}^{n} \Expe_{k-1}\Bigl[\Expe_k \bigl[\beta^{\mathsf{tr}}_k(z_1)\eta_k(z_1)\bigr] \cdot \Expe_k \bigl[\beta^{\mathsf{tr}}_k(z_2)\eta_k(z_2)\bigr]\Bigr]\nonumber\\
=\;& m(z_1)m(z_2)\sum_{k=1}^{n} \Expe_{k-1}\Bigl(\Expe_k\eta_k(z_1)\cdot \Expe_k \eta_k(z_2)\Bigr)+o_p(1)\nonumber\\
=:\;& m(z_1)m(z_2)\widetilde{\Lambda}_n(z_1,z_2)+o_p(1).\label{eq:Lambda_tilde_2}
\end{align}
In view of \eqref{eq:Lambda_1} $\sim$ \eqref{eq:Lambda_tilde_2}, it suffices to derive the limit of $\widetilde{\Lambda}_n(z_1,z_2)$, which further gives the limit of \eqref{eq:Lambda_1}.
Since $\Expe_k \bigl[\eta_k(z)\bigr] =(1/\sqrt{npb_p})\bigl(\bx_k'\bSigma_p\bx_k-pa_p\bigr)-\Expe_k\bigl[\gamma_{k1}(z)\bigr]$, we have
\begin{equation}\label{eq:Expe_eta1_eta2}
\Expe_{k-1}\Bigl[\Expe_k \eta_k(z_1)\cdot \Expe_k \eta_k(z_2)\Bigr] = \frac{1}{n}\biggl[\frac{\tilde{b}_p}{b_p}(\nu_4-3)+2\biggr] + A_{1}^{(k)} +A_{2}^{(k)} +A_{3}^{(k)},
\end{equation}
where
\begin{align*}
A_1^{(k)} & = \Expe_{k-1}\Bigl[\Expe_k\gamma_{k1}(z_1)\cdot\Expe_k\gamma_{k1}(z_2)\Bigr],\qquad A_2^{(k)} = -\Expe_{k-1}\biggl[\tfrac{1}{\sqrt{npb_p}}\bigl(\bx_k'\bSigma_p\bx_k-pa_p\bigr) \cdot\Expe_k\gamma_{k1}(z_1)\biggr],\\[0.5em]
A_3^{(k)} & = -\Expe_{k-1}\biggl[\tfrac{1}{\sqrt{npb_p}}\bigl(\bx_k'\bSigma_p\bx_k-pa_p\bigr)\cdot \Expe_k\gamma_{k1}(z_2)\biggr] .
\end{align*}
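The leading term in \eqref{eq:Expe_eta1_eta2} comes from the classical variance formula for quadratic forms: for a vector $\bx$ with i.i.d. standardized real entries having fourth moment $\nu_4$,
\[
\Var\bigl(\bx'\bSigma_p\bx\bigr) = (\nu_4-3)\sum_{i=1}^p\sigma_{ii}^2 + 2\tr\bigl(\bSigma_p^2\bigr),
\]
so that, identifying $\tilde{b}_p$ with $p^{-1}\sum_{i=1}^p\sigma_{ii}^2$, dividing by $npb_p = n\tr(\bSigma_p^2)$ yields the leading term $\frac{1}{n}\bigl[\frac{\tilde{b}_p}{b_p}(\nu_4-3)+2\bigr]$ of \eqref{eq:Expe_eta1_eta2}.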
\noindent First, we show that $A_2^{(k)}$ and $A_3^{(k)}$ are negligible. Denote $\bM_k^{(1)}(z) = \bigl(a_{ij}^{(1)}(z)\bigr)_{p\times p}$. Using the independence between $\bx_k$ and $\bM_k^{(1)}$, we have
\begin{align}
A_2^{(k)} & = - \frac{1}{npb_p\sqrt{npb_p}}\Expe_{k-1}\biggl[\biggl(\sum_{i,j}\sigma_{ij} X_{ik}X_{jk}-pa_p\biggr)\biggl( \sum_{i\neq j} X_{ik}X_{jk}\Expe_{k}a_{ij}^{(1)}(z_1) + \sum_{i=1}^p (X_{ik}^2-1)\Expe_{k}a_{ii}^{(1)}(z_1) \biggr) \biggr]\nonumber\\
& = - \frac{1}{npb_p\sqrt{npb_p}}\Expe_{k-1}\biggl[ 2\sum_{i\neq j} \sigma_{ij}X_{ik}^2X_{jk}^2\Expe_{k}a_{ij}^{(1)}(z_1) + \sum_{i=1}^p \sigma_{ii}X_{ik}^2(X_{ik}^2-1)\Expe_{k}a_{ii}^{(1)}(z_1) \biggr]\nonumber\\
& = - \frac{1}{npb_p\sqrt{npb_p}}\Bigl(2\sum_{i\neq j}\sigma_{ij} \Expe_{k}a_{ij}^{(1)}(z_1) + (\nu_4-1)\sum_{i=1}^p \sigma_{ii}\Expe_{k}a_{ii}^{(1)}(z_1)\Bigr)\nonumber \\
& = - \frac{1}{\sqrt{npb_p}}\Expe_k\biggl[\frac{2}{npb_p} \tr\Bigl(\bSigma_p\bM_k^{(1)}\Bigr)+ \frac{\nu_4-3}{npb_p}\sum_{i=1}^p \sigma_{ii}a_{ii}^{(1)}(z_1)\biggr]. \label{eq:square_bracket}
\end{align}
As for the first term in the brackets of \eqref{eq:square_bracket}, we can estimate it by an argument similar to that in the proof of Lemma~\ref{lem:Mk_limit}.
Replacing $pb_p$ and $\bM_k^{(1)}$ in the proof of Lemma~\ref{lem:Mk_limit} with $\tr(\bSigma_p^3)$ and $\bSigma_p\bM_k^{(1)}$, respectively, we can prove that
\[
\Expe \biggl| \frac{1}{n\tr(\bSigma_p^3)} \tr\Bigl(\bSigma_p\bM_{k}^{(1)}\Bigr) - \frac{1}{n}\tr \bD_k \biggr|^2
\leqslant \frac{K n}{p}.
\]
Moreover, by the fact that $\tfrac{pb_p^2}{a_p}\leqslant \tr(\bSigma_p^3)\leqslant Kp$, the first inequality of which follows from \eqref{eq:CS-trace}, we conclude that
\[
\frac{1}{npb_p} \tr\Bigl(\bSigma_p\bM_{k}^{(1)}\Bigr) = \frac{\tr(\bSigma_p^3)}{pb_p}\cdot \frac{1}{n\tr(\bSigma_p^3)} \tr\Bigl(\bSigma_p\bM_{k}^{(1)}\Bigr) = O_p(1).
\]
As for the second term in the brackets of \eqref{eq:square_bracket}, we have
\[
\frac{1}{npb_p}\sum_{i=1}^p \sigma_{ii}a_{ii}^{(1)} \leqslant \frac{\|\bSigma_p\|}{npb_p}\sum_{i=1}^p a_{ii}^{(1)} = \frac{\|\bSigma_p\|}{npb_p}\tr \bM_k^{(1)}=O_p(1).
\]
Thus, the term in the square brackets of \eqref{eq:square_bracket} is bounded in probability, which implies $\bigl|\sum_{k=1}^n A_2^{(k)}\bigr|\convp 0$.
Similarly, we can show that $\bigl|\sum_{k=1}^n A_3^{(k)}\bigr|\convp 0$.
Now we consider $A_1^{(k)}$. Keeping the notation $\bM_k^{(1)}(z) = \bigl(a_{ij}^{(1)}(z)\bigr)_{p\times p}$, we have
\begin{align*}
A_1^{(k)}&= \frac{1}{(npb_p)^2}\Expe_{k-1} \biggl[\biggl( \sum_{i\neq j} X_{ik}X_{jk}\Expe_{k}a_{ij}^{(1)}(z_1) + \sum_{i=1}^p (X_{ik}^2-1)\Expe_{k}a_{ii}^{(1)}(z_1) \biggr) \\
&\qquad\qquad\qquad\qquad\qquad \times \biggl( \sum_{i\neq j} X_{ik}X_{jk}\Expe_{k}a_{ij}^{(1)}(z_2) + \sum_{i=1}^p (X_{ik}^2-1)\Expe_{k}a_{ii}^{(1)}(z_2) \biggr)\biggr]\\
=&\; \frac{1}{(npb_p)^2}\Expe_{k-1} \biggl[ 2\sum_{i\neq j}X_{ik}^2 X_{jk}^2\Expe_{k}a_{ij}^{(1)}(z_1)\Expe_{k}a_{ij}^{(1)}(z_2) + \sum_{i=1}^p (X_{ik}^2-1)^2\Expe_{k}a_{ii}^{(1)}(z_1)\Expe_{k}a_{ii}^{(1)}(z_2) \biggr]\\
=&\; \frac{1}{(npb_p)^2} \biggl[ 2\sum_{i\neq j}\Expe_{k}a_{ij}^{(1)}(z_1)\Expe_{k}a_{ij}^{(1)}(z_2) + (\nu_4-1)\sum_{i=1}^p \Expe_{k}a_{ii}^{(1)}(z_1)\Expe_{k}a_{ii}^{(1)}(z_2) \biggr]\\
=&\; \frac{1}{(npb_p)^2} \biggl[ 2\sum_{i, j}\Expe_{k}a_{ij}^{(1)}(z_1)\Expe_{k}a_{ij}^{(1)}(z_2) + (\nu_4-3)\sum_{i=1}^p \Expe_{k}a_{ii}^{(1)}(z_1)\Expe_{k}a_{ii}^{(1)}(z_2) \biggr]\\
=&\;\frac{2}{(npb_p)^2} \biggl[\tr \Bigl(\Expe_k\bM_k^{(1)}(z_1)\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr)\biggr]+o_{L_1}(1),
\end{align*}
where the last step follows from
\begin{align*}
\Expe\biggl|\sum_{i=1}^p \Expe_{k}a_{ii}^{(1)}(z_1)\cdot\Expe_{k}a_{ii}^{(1)}(z_2)\biggr|^2
&\leqslant p\cdot \sum_{i=1}^p\Expe \biggl|\Expe_{k}a_{ii}^{(1)}(z_1)\cdot\Expe_{k}a_{ii}^{(1)}(z_2)\biggr|^2\\
&\leqslant p\cdot \sum_{i=1}^p\biggl( \Expe \Bigl|\Expe_{k}a_{ii}^{(1)}(z_1)\Bigr|^4\biggr)^{\nicefrac{1}{2}}\cdot \biggl(\Expe\Bigl|\Expe_{k}a_{ii}^{(1)}(z_2)\Bigr|^4\biggr)^{\nicefrac{1}{2}}\\
&\leqslant p\cdot \sum_{i=1}^p\biggl( \Expe \Bigl|a_{ii}^{(1)}(z_1)\Bigr|^4\biggr)^{\nicefrac{1}{2}}\cdot \biggl(\Expe\Bigl|a_{ii}^{(1)}(z_2)\Bigr|^4\biggr)^{\nicefrac{1}{2}}\\
&\overset{\eqref{eq:ajj_upper_bound}}{\leqslant} K(n^4p^2+n^2p^3).
\end{align*}
By the above estimates, we obtain
\begin{align}
\widetilde{\Lambda}_n(z_1,z_2) &= \frac{2}{(npb_p)^2}\sum_{k=1}^n\tr \Bigl(\Expe_k\bM_k^{(1)}(z_1)\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr)+\Bigl[\frac{\tilde{b}_p}{b_p}(\nu_4-3)+2 \Bigr]+o_p(1),\nonumber\\
&=\frac{2}{n}\sum_{k=1}^{n}\bbZ_k + \frac{\tilde{b}_p}{b_p}(\nu_4-3)+2 + o_p(1),\label{eq:Gamma_tilde}
\end{align}
where \begin{equation}\label{eq:Zk_defi}
\bbZ_k=\frac{1}{n(pb_p)^2} \tr \Bigl(\Expe_k\bM_k^{(1)}(z_1)\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr).
\end{equation}
In Lemma \ref{lem:Zk_asym}, we derive an asymptotic expression for $\bbZ_k$, which ensures that
\begin{equation}\label{eq:Zk_limit}
\frac{1}{n}\sum_{k=1}^n\bbZ_k
\to \int_0^1 \frac{t m(z_1)m(z_2)}{1-tm(z_1)m(z_2)}\mathrm{d} t = -1-\frac{\log\bigl(1-m(z_1)m(z_2)\bigr)}{m(z_1)m(z_2)}.
\end{equation}
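For the reader's convenience, the closed form in \eqref{eq:Zk_limit} can be checked directly: writing $a=m(z_1)m(z_2)$,
\[
\int_0^1 \frac{ta}{1-ta}\,\mathrm{d} t
= \int_0^1 \biggl(\frac{1}{1-ta}-1\biggr)\mathrm{d} t
= -\frac{\log(1-a)}{a}-1 .
\]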
By \eqref{eq:Lambda_2}, \eqref{eq:Lambda_tilde_2}, \eqref{eq:Gamma_tilde} and \eqref{eq:Zk_limit}, we have
\[
\widetilde{\Lambda}_n(z_1,z_2) \convp \frac{\omega}{\theta}(\nu_4-3)-\frac{2\log\bigl(1-m(z_1)m(z_2)\bigr)}{m(z_1)m(z_2)}.
\]
Therefore,
\begin{align*}
\Lambda_n(z_1,z_2) & \convp \frac{\partial^2}{\partial z_1\partial z_2} \biggl\{\frac{\omega}{\theta}(\nu_4-3)m(z_1)m(z_2) - 2\log\Bigl( 1-m(z_1)m(z_2)\Bigr)\biggr\} \nonumber\\[0.5em]
&=m'(z_1)m'(z_2) \biggl[ \frac{\omega}{\theta}(\nu_4-3)+2\bigl( 1-m(z_1)m(z_2) \bigr)^{-2} \biggr].
\end{align*}
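The final expression can be verified by a direct computation: writing $m_i=m(z_i)$ for short,
\[
\frac{\partial^2}{\partial z_1\partial z_2}\Bigl[-2\log\bigl(1-m_1m_2\bigr)\Bigr]
= \frac{\partial}{\partial z_1}\frac{2m_1m'(z_2)}{1-m_1m_2}
= \frac{2m'(z_1)m'(z_2)}{\bigl(1-m_1m_2\bigr)^2},
\]
which, together with $\partial^2 (m_1m_2)/\partial z_1\partial z_2 = m'(z_1)m'(z_2)$, gives the stated limit.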
\subsection{Tightness of $M_n^{(1)}(z)$}\label{sec:tight_Mn1}
This subsection verifies the tightness of $M_n^{(1)}(z)$ for $z\in\mathbb{C}_1$ using Lemma~\ref{lem:tightness}. Applying the Cauchy--Schwarz inequality, Lemma~\ref{lem:betak_upper_bounds} and Lemma~\ref{lem:gamma_moment_upper_bound}, we have
\[
\Expe\biggl|\sum_{k=1}^n\sum_{j=1}^{r} a_j Y_k(z_j)\biggr|^2 = O(1),
\]
which shows that condition (i) of Lemma~\ref{lem:tightness} holds. Condition (ii) of Lemma~\ref{lem:tightness} will be verified by showing
\begin{equation}\label{eq:Mn1_Mn2_diff_bounded}
\frac{\Expe\bigl|M_n^{(1)}(z_1)-M_n^{(1)}(z_2)\bigr|^2}{|z_1-z_2|^2} \leqslant K,\qquad z_1,z_2\in\mathbb{C}_1.
\end{equation}
The proof of \eqref{eq:Mn1_Mn2_diff_bounded} exactly follows that of \citet{chen2015clt} and is therefore omitted.
\subsection{Convergence of $M_n^{(2)}(z)$}\label{sec:conv_Mn2}
In this section, we obtain the asymptotic expansion of $n\bigl(\Expe\, m_n(z) - m(z)\bigr)$ for $z\in\mathbb{C}_1$ (see the definition of $\mathbb{C}_1$ in Section~\ref{sec:strategy_of_proof}); the result is stated in Lemma~\ref{lem:Mn2_mean}.
This lemma, together with the finite-dimensional convergence (see Section~\ref{sec:conv_Mn1_proof}) and the tightness of $M_n^{(1)}(z)$ (see Section~\ref{sec:tight_Mn1}), implies Proposition~\ref{prop:Mn_CLT}. To prove Lemma~\ref{lem:Mn2_mean}, we follow the strategy of \citet{khorunzhy1996asymptotic} and \citet{bao2015asymptotic}; the main tool is the generalized Stein equation (see Lemma~\ref{lem:general-stein}).
\begin{lemma}\label{lem:Mn2_mean}
With the same notation as in the previous sections, for $z\in\mathbb{C}_1$,
\begin{enumerate}
\item[(1)] if $p\wedge n\to\infty$ and $n^2/p=O(1)$, we have
\begin{equation}\label{eq:n2p_mean_expan}
M_n^{(2)}=n\Bigl[\Expe m_n(z) -m(z)-\calX_n\bigl(m(z)\bigr)\Bigr] = o(1),
\end{equation}
where $\calX_n(m)$ is defined by \eqref{eq:calX};
\item[(2)] if $p\wedge n\to\infty$ and $n^3/p = O(1)$, we have
\begin{equation}\label{eq:n3p_mean_expan}
n\Biggl[\Expe m_n(z) -m(z) +\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\frac{m^4}{1-m^2} \Biggr]= \frac{m^3}{1-m^2}\biggl(\frac{m^2}{1-m^2}+\frac{\tilde{b}_p}{b_p}(\nu_4-3)+1\biggr) +o(1).
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\bY=(npb_p)^{-1/4}\bX$, then
\[
\bA = \bY'\bSigma_p\bY-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\bI_n.
\]
To simplify notations, we let
\[
\bE:= \bSigma_p\bY\bD\bY'\bSigma_p = (E_{ij})_{p\times p}, \qquad\bF:= \bSigma_p\bY\bD = (F_{ij})_{p\times n}.
\]
By the basic identity
\[
\bD = -\frac{1}{z}\bI_n + \frac{1}{z} \bD\bA
= -\frac{1}{z}\bI_n + \frac{1}{z} \biggl(\bD\bY'\bSigma_p\bY - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\bD\biggr),
\]
we have
\begin{align}
\Expe m_n(z) & = -\frac{1}{z} + \frac{1}{z}\cdot \frac{1}{n} \Expe \tr\bigl(\bD\bA\bigr)\nonumber\\
& = -\frac{1}{z} - \frac{1}{z} \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \Expe \biggl(\frac{1}{n}\tr \bD\biggr) + \frac{1}{zn}\Expe \;\tr\Bigl(\bY'\bSigma_p\bY\bD\Bigr)\nonumber \\
& = -\frac{1}{z} - \frac{1}{z} \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \Expe m_n(z) + \frac{1}{zn}\sum_{j,k}\Expe \bigl( Y_{jk}F_{jk} \bigr).\label{eq:mn-decom}
\end{align}
The basic idea of the following derivation is to regard $F_{jk}:=(\bSigma_p\bY\bD)_{jk}$ as an analytic function of $Y_{jk}$ and then apply the generalized Stein equation (Lemma~\ref{lem:general-stein} below) to expand $\Expe \bigl( Y_{jk}F_{jk} \bigr)$.
\begin{lemma}[Generalized Stein's Equation, \citet{khorunzhy1996asymptotic}]\label{lem:general-stein}
For any real-valued random variable $\xi$ with $\Expe |\xi|^{p+2}<\infty$ and complex-valued function $g(t)$ with continuous and bounded $p+1$ derivatives, we have
\[
\Expe \bigl[\xi g(\xi)\bigr] = \sum_{a=0}^{p} \frac{\kappa_{a+1}}{a!} \Expe \Bigl(g^{(a)}(\xi)\Bigr)+\varepsilon,
\]
where $\kappa_{a}$ is the $a$-th cumulant of $\xi$, and
\[
|\varepsilon|\leqslant C \sup_t \bigl| g^{(p+1)}(t) \bigr| \; \Expe\bigl( |\xi|^{p+2}\bigr),
\]
where the positive constant $C$ depends on $p$.
\end{lemma}
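As a quick numerical sanity check of Lemma~\ref{lem:general-stein} (illustrative only, not part of the proof), the following sketch verifies the expansion for a standard Gaussian $\xi$, whose cumulants satisfy $\kappa_1=0$, $\kappa_2=1$ and $\kappa_a=0$ for $a\geqslant 3$, so that the expansion collapses to Stein's lemma $\Expe[\xi g(\xi)]=\Expe[g'(\xi)]$ with vanishing remainder. The test function $g=\sin$ and the quadrature order are arbitrary choices.

```python
import numpy as np

# Stein's lemma check for xi ~ N(0, 1): E[xi * g(xi)] = E[g'(xi)].
# Gauss-Hermite_e quadrature integrates against the weight exp(-x^2/2),
# so dividing the weights by sqrt(2*pi) yields standard-normal expectations.
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

g, g_prime = np.sin, np.cos      # test function with bounded derivatives

lhs = np.sum(weights * nodes * g(nodes))   # E[xi * g(xi)]
rhs = np.sum(weights * g_prime(nodes))     # E[g'(xi)] = E[cos(xi)]
```

Both quadrature sums agree with $\Expe[\cos\xi]=e^{-1/2}\approx 0.6065$ to high precision.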
Applying Lemma~\ref{lem:general-stein} to the last term in \eqref{eq:mn-decom}, we obtain the following expansion:
\begin{equation}\label{eq:Emn_expan_0}
\Expe m_n(z)
= -\frac{1}{z} - \frac{1}{z} \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \Expe m_n(z) + \frac{1}{zn}\sum_{a=0}^4\frac{1}{(npb_p)^{(a+1)/4}}\sum_{j,k} \frac{\kappa_{a+1}}{a!}\Expe\biggl(\frac{\partial^a F_{jk}}{\partial Y_{jk}^a}\biggr)+\varepsilon_n,
\end{equation}
where $\kappa_{a}$ is the $a$-th cumulant of $Y_{jk}$, $\tfrac{\partial^a F_{jk}}{\partial Y_{jk}^a}$ denotes the $a$-th order derivative of $F_{jk}$ w.r.t. $Y_{jk}$, and
\begin{equation}\label{eq:stein_remainder_up_bdd}
\varepsilon_n \leqslant \frac{K}{n} \frac{1}{(npb_p)^{6/4}}\sum_{j,k}\sup_{j,k} \Expe_{jk} \biggl|\frac{\partial^5 F_{jk}}{\partial Y_{jk}^5}\biggr|.
\end{equation}
The explicit formulas for the derivatives of $F_{jk}$ are provided in Lemma~\ref{lem:Fjk_derivative}. These derivatives can be obtained by applying the chain rule and Lemma~\ref{lem:derivative} repeatedly; the details are omitted here.
\begin{lemma}[Derivatives of $F_{jk}$]\label{lem:Fjk_derivative}
\begin{align*}
\frac{\partial F_{jk}}{\partial Y_{jk}} \;&= \sigma_{j j} D_{kk} - E_{jj}D_{kk}-F_{jk}^2;\\[0.5em]
\frac{\partial^2 F_{jk}}{\partial Y_{jk}^2} \;&= -6\sigma_{jj}F_{jk}D_{kk}+6E_{jj}F_{jk}D_{kk}+2F_{jk}^3;\\[0.5em]
\frac{\partial^3 F_{jk}}{\partial Y_{jk}^3} \;&= -6\sigma_{jj}^2D_{kk}^2+36\sigma_{jj}F_{jk}^2D_{kk}+12\sigma_{jj} E_{jj}D_{kk}^2 - 36E_{jj}F_{jk}^2D_{kk}-6E_{jj}^2D_{kk}^2-6F_{jk}^4;\\[0.5em]
\frac{\partial^4 F_{jk}}{\partial Y_{jk}^4} \;&= 120\sigma_{jj}^2F_{jk}D_{kk}^2 -240\sigma_{jj}F_{jk}^3D_{kk}-240\sigma_{jj}E_{jj}F_{jk}D_{kk}^2+240E_{jj}F_{jk}^3D_{kk}\\[0.5em]
&\qquad\qquad+120E_{jj}^2F_{jk}D_{kk}^2+24F_{jk}^5;\\
\frac{\partial^5 F_{jk}}{\partial Y_{jk}^5} \;&= -120 F_{jk}^6 - 1800 E_{jj} F_{jk}^4 D_{kk} - 1800 E_{jj}^2 F_{jk}^2 D_{kk}^2 - 120 E_{jj}^3 D_{kk}^3 +
1800 \sigma_{jj}F_{jk}^4 D_{kk} \\[0.5em]
&\qquad\qquad + 3600 \sigma_{jj} E_{jj} F_{jk}^2 D_{kk}^2 + 360 \sigma_{jj} E_{jj}^2 D_{kk}^3 -
1800 \sigma_{jj}^2 F_{jk}^2 D_{kk}^2\\[0.5em]
&\qquad \qquad - 360 \sigma_{jj}^2 E_{jj} D_{kk}^3 + 120 \sigma_{jj}^3 D_{kk}^3.
\end{align*}
\end{lemma}
From \eqref{eq:DXSX} and Lemma~\ref{lem:G11_tilde_upper_bbd}, it is not difficult to obtain the following estimates:
\begin{align}
D_{kk}^{a_1}F_{jk}^{a_2}E_{jj}^{a_3}& \leqslant Kn^{a_3/2}\biggl(\sum_{\alpha}\bigl[(\bSigma \bY)_{j\alpha}\bigr]^2\biggr)^{(a_2+2a_3)/2},\quad (a_1, a_2, a_3\geqslant 0)\label{eq:stein_esti_EFG}\\[0.5em]
\Expe\biggl|\Bigl(\bSigma_p^{-\nicefrac{1}{2}}\bE\bSigma_p^{-\nicefrac{1}{2}}\Bigr)_{jj}&-\frac{\Expe m_n}{a_p\sqrt{p/(nb_p)}+z+\Expe m_n}\biggr|=O\biggl(\biggl(\frac{n}{p}\biggr)^2\biggr) + O\biggl(\frac{1}{p}\biggr),\label{eq:stein_esti_E}\\[0.5em]
\biggl|\sum_{j,k}F_{jk}\biggr| & = O\Bigl((np)^{3/4}\Bigr),\label{eq:stein_esti_F_1}\\[0.5em]
\biggl|\sum_{j,k}F_{jk}^{a_2}\biggr| & = O\Bigl(p^{a_2/4}n^{1-a_2/4}\Bigr),\quad (a_2\geqslant 2).\label{eq:stein_esti_F_2}
\end{align}
By \eqref{eq:stein_remainder_up_bdd} and \eqref{eq:stein_esti_EFG}, we have the following estimate for $\varepsilon_n$ defined in \eqref{eq:stein_remainder_up_bdd}:
\begin{equation}\label{eq:stein_remainder_esti}
|\varepsilon_n| = o\biggl(\frac{1}{n}\biggr).
\end{equation}
By the fact \begin{equation}
\Expe \biggl|D_{\ell\ell}+\frac{1}{z+\Expe\, m_n}\biggr|^2 = O\biggl(\frac{1}{n}\biggr) + O\biggl(\frac{n}{p}\biggr),\qquad 1\leqslant \ell \leqslant n,\label{eq:Gkk_esti}
\end{equation}
which is verified in Lemma~\ref{lem:G11_upper_bbd}, and the estimates above, we can extract the leading order terms in \eqref{eq:Emn_expan_0} to obtain
\begin{align}
\Expe m_n(z)
& = -\frac{1}{z} - \frac{1}{z} \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \Expe m_n(z) + \frac{1}{zn}\frac{1}{\sqrt{npb_p}}\sum_{j,k}\Expe\Bigl(\sigma_{jj}D_{kk}-E_{jj}D_{kk}-F_{jk}^2\Bigr)\nonumber\\
&\qquad\qquad - \frac{1}{zn}\frac{\nu_4-3}{npb_p}\sum_{j,k}\Expe\Bigl(\sigma_{jj}^2 D_{kk}^2\Bigr)+o\biggl(\frac{1}{n}\biggr)\nonumber\\
& = -\frac{1}{z} - \frac{1}{zn}\frac{1}{\sqrt{npb_p}} \Expe\Bigl[\tr(\bE)\tr(\bD)\Bigr] - \frac{1}{zn}\frac{1}{\sqrt{npb_p}} \Expe\Bigl[\tr(\bF\bF')\Bigr]\nonumber\\
&\qquad\qquad - \frac{(\nu_4-3)\tilde{b}_p}{zn^2b_p}\Expe\Bigl( \sum_{k} D_{kk}^2\Bigr)+o\biggl(\frac{1}{n}\biggr).\label{eq:Emn_expan_1}
\end{align}
Using the same argument as in the proof of Lemma~\ref{lem:Mk_limit}, if $n^2/p=O(1)$, we can show that
\begin{equation}
\Expe \biggl| \frac{1}{\sqrt{npb_p}} \tr \bE - m(z) \biggr|^2
=O\biggl(\frac{1}{n}\biggr).\label{eq:Diff-trE-m}
\end{equation}
This, together with the $c_r$-inequality, implies that
\begin{equation}\label{eq:Diff-trE-Expe}
\Expe\, \biggl| \frac{1}{\sqrt{npb_p}} \tr \bE - \Expe\,\frac{1}{\sqrt{npb_p}} \tr \bE \biggr|^2 = o(1).
\end{equation}
Together with the fact that $\Var(m_n)=O(n^{-2})$ (see Lemma~\ref{lem:var_mn_estimate} for more details), we obtain
\begin{equation}
\Cov\biggl(\frac{1}{\sqrt{npb_p}} \tr \bE, \frac{1}{n}\tr \bD \biggr) \leqslant \sqrt{\eqref{eq:Diff-trE-Expe}}\cdot \sqrt{\Var(m_n)} = o\biggl(\frac{1}{n}\biggr).\label{eq:trE_trG_Cov}
\end{equation}
Note that
\begin{equation}
\tr(\bF\bF') = \tr\bigl(\bSigma_p\bY\bD^{2}\bY'\bSigma_p\bigr) = \frac{\partial }{\partial z} \tr\bigl(\bSigma_p\bY\bD\bY'\bSigma_p\bigr) = \frac{\partial }{\partial z} \tr \bE .\label{eq:tr_FFt_dif}
\end{equation}
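The identity \eqref{eq:tr_FFt_dif} rests on the resolvent derivative formula: since $\bD(z)=(\bA-z\bI_n)^{-1}$ with $\bA$ symmetric, we have $\bD'=\bD$ and $\partial_z\bD(z)=\bD(z)^2$, hence
\[
\bF\bF' = \bSigma_p\bY\bD\cdot\bD\bY'\bSigma_p
= \bSigma_p\bY\bD^2\bY'\bSigma_p
= \frac{\partial}{\partial z}\bigl(\bSigma_p\bY\bD\bY'\bSigma_p\bigr).
\]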
Applying \eqref{eq:Gkk_esti}, \eqref{eq:trE_trG_Cov}, and \eqref{eq:tr_FFt_dif} to \eqref{eq:Emn_expan_1}, we have
\begin{align}
\Expe m_n(z)
& = -\frac{1}{z} - \frac{1}{z}\cdot\frac{1}{\sqrt{npb_p}} \Expe\bigl(\tr \bE \bigr)\cdot\frac{1}{n}\Expe\,\bigl(\tr \bD \bigr) - \frac{1}{zn}\frac{1}{\sqrt{npb_p}} \Expe\,\biggl(\frac{\partial }{\partial z} \tr \bE \biggr)\nonumber\\
&\qquad\qquad -\frac{\nu_4-3}{zn}\frac{\tilde{b}_p}{b_p}\bigl(\Expe m_n(z)\bigr)^2+o\biggl(\frac{1}{n}\biggr).\label{eq:Emn_expan_2}
\end{align}
The problem reduces to estimating $(1/\sqrt{npb_p})\Expe\bigl(\tr \bE \bigr)$. To this end, we apply Lemma~\ref{lem:general-stein} again to this term to find its expansion.
Denote
\[
\widetilde{\bE}:= \bSigma_p^2\bY\bD\bY'\bSigma_p^2, \qquad \widehat{\bE}:=\bSigma_p\bY\bD\bY'\bSigma_p^2,\qquad\widetilde{\bF}:= \bSigma_p^2\bY\bD,
\]
and write
\begin{equation}\label{eq:Expe_trE_expan}
\frac{1}{\sqrt{npb_p}}\Expe\,\bigl(\tr \bE\bigr) =\frac{1}{\sqrt{npb_p}}\sum_{j,k} \Expe\bigl(Y_{jk}\widetilde{F}_{jk}\bigr).
\end{equation}
The first four derivatives of $\widetilde{F}_{jk}$ w.r.t. $Y_{jk}$ are presented in the following lemma.
\begin{lemma}[Derivatives of $\widetilde{F}_{jk}$]\label{lem:Fjk_tilde_derivative}
\begin{align*}
\frac{\partial \widetilde{F}_{jk}}{\partial Y_{jk}} \;& = \widetilde{\sigma}_{j j} D_{kk} - \widehat{E}_{jj}D_{k k}-F_{jk}\widetilde{F}_{jk};\\[0.5em]
\frac{\partial^2 \widetilde{F}_{jk}}{\partial Y_{jk}^2} \;& = -2\sigma_{jj}\widetilde{F}_{jk}D_{kk}-4\widetilde{\sigma}_{jj}F_{jk}D_{kk} + 2F_{jk}^2\widetilde{F}_{jk}+4\widehat{E}_{jj}F_{jk}D_{kk}+2E_{jj}\widetilde{F}_{jk}D_{kk};\\[0.5em]
\frac{\partial^3 \widetilde{F}_{jk}}{\partial Y_{jk}^3} \;& = -6\widetilde{\sigma}_{jj}\sigma_{jj}D_{kk}^2-6F_{jk}^3\widetilde{F}_{jk}-18\widehat{E}_{jj}F_{jk}^2D_{kk}-18E_{jj}F_{jk}\widetilde{F}_{jk}D_{kk}-6E_{jj}\widehat{E}_{jj}D_{kk}^2\\[0.5em]
\;&\qquad +18\sigma_{jj}F_{jk}\widetilde{F}_{jk}D_{kk} + 6\sigma_{jj}\widehat{E}_{jj}D_{kk}^2+18\widetilde{\sigma}_{jj}F_{jk}^2D_{kk}+6\widetilde{\sigma}_{jj}E_{jj}D_{kk}^2;\\[0.5em]
\frac{\partial^4 \widetilde{F}_{jk}}{\partial Y_{jk}^4}
\;& = 24 F_{jk}^4 \widetilde{F}_{jk} + 96 \widehat{E}_{jj}F_{jk}^3D_{kk} +144 E_{jj} F_{jk}^2 \widetilde{F}_{jk} D_{kk} + 96 E_{jj}\widehat{E}_{jj} F_{jk} D_{kk}^2 + 24 E_{jj}^2 \widetilde{F}_{jk} D_{kk}^2 \\[0.5em]
\; & \qquad - 144\sigma_{jj} F_{jk}^2 \widetilde{F}_{jk} D_{kk} - 96 \sigma_{jj}\widehat{E}_{jj}F_{jk}D_{kk}^2 - 48 \sigma_{jj}E_{jj} \widetilde{F}_{jk} D_{kk}^2
+24 \sigma_{jj}^2 \widetilde{F}_{jk} D_{kk}^2 \\[0.5em]
\;&\qquad - 96 \widetilde{\sigma}_{jj}F_{jk}^3 D_{kk}- 96\widetilde{\sigma}_{jj}E_{jj} F_{jk} D_{kk}^2 + 96 \sigma_{jj} \widetilde{\sigma}_{jj} F_{jk} D_{kk}^2.
\end{align*}
\end{lemma}
Applying the generalized Stein equation with the derivatives of $\widetilde{F}_{jk}$ (see Lemma~\ref{lem:Fjk_tilde_derivative}) to the last term in \eqref{eq:Expe_trE_expan}, and using estimates similar to those above, gives
\begin{align}
\;&\frac{1}{\sqrt{npb_p}}\Expe\,\bigl(\tr \bE\bigr) \nonumber\\
=\;&\frac{1}{\sqrt{npb_p}}\sum_{a=0}^3\frac{1}{(npb_p)^{(a+1)/4}}\sum_{j,k} \frac{\kappa_{a+1}}{a!}\Expe\biggl(\frac{\partial^a \widetilde{F}_{jk}}{\partial Y_{jk}^a}\biggr) + \widetilde{\varepsilon}_n\nonumber\\
=\;&\frac{1}{npb_p}\sum_{j,k}\Expe\Bigl( \widetilde{\sigma}_{j j} D_{kk} - \widehat{E}_{jj}D_{k k}-F_{jk}\widetilde{F}_{jk}\Bigr)+\frac{1}{\sqrt{npb_p}}\frac{\nu_4-3}{npb_p}\sum_{j,k}\Expe\Bigl(-\widetilde{\sigma}_{jj}\sigma_{jj}D_{kk}^2\Bigr)+o\biggl(\frac{1}{n}\biggr)\nonumber\\
=\;&\Expe m_n- \frac{1}{npb_p} \Expe\Bigl[\tr(\widehat{\bE})\tr(\bD)\Bigr] - \frac{1}{npb_p} \Expe\Bigl[\tr(\bF\widetilde{\bF}')\Bigr]\nonumber \\
\;&\qquad\qquad\qquad\qquad- \frac{1}{\sqrt{npb_p}}\frac{\nu_4-3}{npb_p}\Expe\Bigl(\sum_{j}\widetilde{\sigma}_{jj}\sigma_{jj}\Bigr)\Bigl(\sum_{k}D_{kk}^2\Bigr)+o\biggl(\frac{1}{n}\biggr)\nonumber\\
=\;&\Expe m_n- \sqrt\frac{n}{p}\cdot \frac{1}{\sqrt{np}b_p} \Expe\bigl(\tr \widehat{\bE}\bigr)\cdot\frac{1}{n}\Expe\bigl( \tr \bD\bigr) +o\biggl(\frac{1}{n}\biggr)\label{eq:Expe-trE-hat-trD}\\
=\;& \Expe m_n -\sqrt{\frac{n}{p}}\biggl(\frac{c_p}{b_p\sqrt{b_p}}\Expe\, m_n + O\Bigl(\sqrt{\tfrac{n}{p}}\Bigr)\biggr)\cdot \Expe\, m_n+o\biggl(\frac{1}{n}\biggr)\label{eq:Expe-trace-E-hat-esti}\\
=\;& \Expe m_n -\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\bigl(\Expe\, m_n\bigr)^2+o\biggl(\frac{1}{n}\biggr)+o\biggl(\sqrt{\frac{n}{p}}\biggr).\label{eq:Expe-trace-E}
\end{align}
\noindent Plugging \eqref{eq:Expe-trace-E} into \eqref{eq:Emn_expan_2}, we have
\begin{align*}
\Expe m_n
& = -\frac{1}{z} - \frac{1}{z}\biggl[\Expe m_n -\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\bigl(\Expe m_n\bigr)^2 \biggr]\Expe m_n\\[0.5em]
& \qquad\qquad-\frac{1}{zn}\cdot \frac{\partial}{\partial z} \biggl[ \Expe m_n-\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\cdot \bigl(\Expe m_n\bigr)^2 \biggr]\\[0.5em]
& \qquad\qquad-\frac{\nu_4-3}{zn}\frac{\tilde{b}_p}{b_p}\bigl(\Expe m_n\bigr)^2 + o\biggl(\frac{1}{n}\biggr)+o\biggl(\sqrt{\frac{n}{p}}\biggr),\\[0.5em]
& = -\frac{1}{z} -\frac{1}{z} \bigl(\Expe\, m_n\bigr)^2 + \frac{1}{z}\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} m^3 \\[0.5em]
&\qquad\qquad- \frac{1}{zn}\biggl[ \frac{m^2}{1-m^2}+\frac{(\nu_4-3)\tilde{b}_p}{b_p}m^2 \biggr]+ o\biggl(\frac{1}{n}\biggr)+o\biggl(\sqrt{\frac{n}{p}}\biggr).
\end{align*}
This implies \eqref{eq:n3p_mean_expan} under the assumption $n^3/p=O(1)$.
Moreover, to obtain \eqref{eq:n2p_mean_expan} under the assumption $n^2/p = O(1)$, we need to track the remainder term $o(\sqrt{n/p})$ in \eqref{eq:Expe-trace-E} more carefully. This remainder comes from the estimate of $\Expe\bigl(\tr \widehat{\bE}\bigr)/(b_p\sqrt{np})$ in \eqref{eq:Expe-trE-hat-trD}. Repeating the argument above yields the more precise asymptotic expansion
\begin{equation}\label{eq:Expe-trace-E-hat-expand}
\frac{1}{\sqrt{np}b_p} \Expe\bigl(\tr \widehat{\bE}\bigr) = \Expe m_n - \sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} (\Expe m_n)^2 + \frac{n}{p}\frac{d_p}{b_p^2} (\Expe m_n)^3 + o\biggl(\frac{1}{n}\biggr) + o\biggl(\frac{n}{p}\biggr).
\end{equation}
Plugging \eqref{eq:Expe-trace-E-hat-expand} into \eqref{eq:Emn_expan_2}, we have
\begin{align*}
\Expe m_n
& = -\frac{1}{z} - \frac{1}{z}\biggl[\Expe m_n -\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\bigl(\Expe m_n\bigr)^2 + \frac{n}{p}\frac{d_p}{b_p^2} (\Expe m_n)^3\biggr]\Expe m_n\\[0.5em]
& \qquad\qquad-\frac{1}{zn}\cdot \frac{\partial}{\partial z} \biggl[ \Expe m_n-\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\cdot \bigl(\Expe m_n\bigr)^2 \biggr]-\frac{\nu_4-3}{zn}\frac{\tilde{b}_p}{b_p}\bigl(\Expe m_n\bigr)^2 + o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr),\\[0.5em]
& = -\frac{1}{z} -\frac{1}{z} \bigl(\Expe\, m_n\bigr)^2 + \frac{1}{z}\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} \bigl(\Expe m_n\bigr)^3 - \frac{1}{z}\frac{n}{p}\frac{d_p}{b_p^2}m^4 \\[0.5em]
&\qquad\qquad - \frac{1}{zn}\biggl[ \frac{m^2}{1-m^2}+\frac{(\nu_4-3)\tilde{b}_p}{b_p}m^2 \biggr]+ o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr).
\end{align*}
Multiplying both sides by $-z$, we have
\begin{align}
-z\Expe m_n
&=1 + \bigl(\Expe\, m_n\bigr)^2 - \sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} \bigl(\Expe m_n\bigr)^2\cdot\Expe m_n
+\frac{n}{p}\frac{d_p}{b_p^2}m^4 \nonumber \\[0.5em]
&\qquad \qquad + \frac{1}{n}\biggl[ \frac{m^2}{1-m^2}+\frac{(\nu_4-3)\tilde{b}_p}{b_p}m^2 \biggr]+ o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr).\label{eq:zEmn}
\end{align}
This implies that
\begin{equation}\label{eq:Emn2}
\bigl(\Expe\, m_n\bigr)^2
=-1 - z\Expe m_n + \sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} \bigl(\Expe m_n\bigr)^3 + O\biggl(\frac{1}{n}\biggr)+O\biggl(\frac{n}{p}\biggr).
\end{equation}
Plugging \eqref{eq:Emn2} into \eqref{eq:zEmn} yields that
\begin{align*}
-z\Expe m_n
& =1 + \bigl(\Expe\, m_n\bigr)^2 + \sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}} \bigl(1 + z\Expe m_n\bigr)\Expe m_n - \frac{n}{p}\frac{c_p^2}{b_p^3} m^4 \\[0.5em]
& \qquad \qquad + \frac{n}{p}\frac{d_p}{b_p^2}m^4 + \frac{1}{n}\biggl[ \frac{m^2}{1-m^2}+\frac{(\nu_4-3)\tilde{b}_p}{b_p}m^2 \biggr]+ o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr).
\end{align*}
This equation can be written as a quadratic equation in $\Expe\, m_n - m$:
\begin{align*}
0 &= \biggl(m-\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\bigl(1+m^2\bigr)\biggr)\bigl(\Expe\, m_n - m\bigr)^2 \\[0.5em]
&\qquad \qquad+ \biggl[m^2-1-\biggl(\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\biggr)m(1+2m^2)\biggr] \bigl(\Expe\, m_n - m \bigr) \\[0.5em]
&\qquad \qquad + \frac{m^3}{n}\biggl[ \frac{1}{1-m^2}+\frac{(\nu_4-3)\tilde{b}_p}{b_p} \biggr] - \biggl(\sqrt{\frac{n}{p}}\frac{c_p}{b_p\sqrt{b_p}}\biggr) m^4 + \frac{n}{p}\biggl(-\frac{c_p^2}{b_p^3} +\frac{d_p}{b_p^2}\biggr)m^5 \\[0.5em]
&\qquad\qquad + o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr)\\[0.5em]
& =: \calA \bigl(\Expe\, m_n - m\bigr)^2 + \calB \bigl(\Expe\, m_n - m\bigr) + \calC + o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr).
\end{align*}
Solving the equation yields two solutions:
\[
x_1 = \frac{-\calB+\sqrt{\calB^2-4\calA\calC}}{2\calA} + o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr), \qquad x_2 = \frac{-\calB-\sqrt{\calB^2-4\calA\calC}}{2\calA} + o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr).
\]
When $n^2/p=O(1)$, from the definitions of $\calA, \calB, \calC$, we can verify that $x_1 = o(1)$ while $x_2=(1-m^2)/m+o(1)$.
Since $\Expe\, m_n - m=o(1)$, we take $x_1$ as the expression for $\Expe\, m_n - m$, that is,
\[
\Expe\, m_n - m = \frac{-\calB+\sqrt{\calB^2-4\calA\calC}}{2\calA} + o\biggl(\frac{1}{n}\biggr)+o\biggl(\frac{n}{p}\biggr).
\]
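Concretely, choosing the branch of the square root for which $\sqrt{\calB^2}=\calB$ and expanding,
\[
x_1 = \frac{-\calB+\calB\sqrt{1-4\calA\calC/\calB^2}}{2\calA}
= -\frac{\calC}{\calB} + O\bigl(\calC^2\bigr),
\]
so the leading term of $\Expe\, m_n - m$ is $-\calC/\calB$.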
This implies \eqref{eq:n2p_mean_expan} under the assumption $n^2/p=O(1)$.
\end{proof}
\newpage
\appendix
\section{Some Technical Lemmas}
\begin{lemma}[\citet{bai2010spectral}, Lemma B.26]\label{lem:quad-trace}
Let $\bA = (a_{ij})$ be an $n\times n$ nonrandom matrix and $\bx=(X_1, \ldots, X_n)'$ be a random vector of independent entries. Assume that $\Expe X_i = 0$, $\Expe |X_i|^2=1$ and $\Expe |X_i|^{\ell}\leqslant \nu_{\ell}$. Then, for any $k\geqslant 1$,
\[
\Expe |\bx^*\bA\bx - \tr \bA|^k \leqslant C_k \Bigl( \bigl(\nu_4 \tr(\bA\bA^*)\bigr)^{k/2} + \nu_{2k} \tr(\bA\bA^*)^{k/2} \Bigr),
\]
where $C_k$ is a constant depending on $k$ only.
\end{lemma}
\begin{lemma}[\citet{pan2011central}, Lemma 5]\label{lem:PanZhou-trace}
Let $\bA$ be a $p\times p$ deterministic complex matrix with zero diagonal elements. Let $\bx=(X_1, \ldots, X_p)'$ be a random vector with i.i.d. real entries. Assume that $\Expe X_i = 0$, $\Expe |X_i|^2=1$. Then, for any $k\geqslant 2$,
\begin{equation}
\Expe |\bx'\bA\bx|^k \leqslant C_k \Bigl(\Expe |X_1|^k\Bigr)^2 \bigl( \tr \bA\bA^* \bigr)^{k/2},
\end{equation}
where $C_k$ is a constant depending on $k$ only.
\end{lemma}
\begin{lemma}[Burkholder's inequality, \citet{Burkholder1973Distribution}]\label{lem:Burkholder_ineq}
Let $\{X_i\}$ be a complex martingale difference sequence with respect to the increasing $\sigma$-fields $\{\mathcal{F}_i\}$. Then for $k\geqslant 2$, the following inequality
\[
\Expe\biggl|\sum_{i} X_i\biggr|^k \leqslant C_k \Expe \biggl(\sum_{i}\Expe \Bigl(|X_i|^2\;\big|\;\mathcal{F}_{i-1}\Bigr)\biggr)^{k/2} + C_k \Expe\sum_i |X_i|^k
\]
holds, where $C_k$ is a constant depending on $k$ only.
\end{lemma}
\begin{lemma}[Martingale CLT, \cite{billingsley2008probability}]\label{lem:CLT_MDS}
Suppose for each $n$, $Y_{n1}, Y_{n2}, \ldots, Y_{nr_n}$ is a real martingale difference sequence with respect to the $\sigma$-field $\{\mathcal{F}_{nj}\}$ having second moments. If as $n\to\infty$,
\begin{equation}\label{eq:CLT_MDS_condition1}
\sum_{j=1}^{r_n}\Expe(Y_{nj}^2\mid \mathcal{F}_{n,j-1}) \convp \sigma^2,
\end{equation}
where $\sigma^2$ is a positive constant, and for each $\varepsilon > 0$,
\begin{equation}\label{eq:CLT_MDS_condition2}
\sum_{j=1}^{r_n} \Expe \Bigl(Y_{nj}^2\indicator_{\{|Y_{nj}|\geqslant \varepsilon\}}\Bigr) \to 0,
\end{equation}
then
\[
\sum_{j=1}^{r_n}Y_{nj} \convd \mathcal{N}(0,\sigma^2).
\]
\end{lemma}
\begin{lemma}[\citet{billingsley1968convergence}, Theorem 12.3]\label{lem:tightness}
The sequence $\{X_n\}$ is tight if it satisfies these two conditions:
\begin{itemize}
\item[(i)] The sequence $\{X_n(0)\}$ is tight.
\item[(ii)] There exist constants $\gamma\geqslant 0$ and $\alpha > 1$ and a non-decreasing, continuous function $F$ on $[0,1]$ such that
\[
\Prob\Bigl( \bigl|X_n(t_2)-X_n(t_1)\bigr|\geqslant \lambda \Bigr) \leqslant \frac{1}{\lambda^{\gamma}}\bigl|F(t_2)-F(t_1)\bigr|^{\alpha}
\]
holds for all $t_1, t_2$ and $n$ and all positive $\lambda$.
\end{itemize}
\end{lemma}
\begin{lemma}\label{lem:Zk_asym} For $z_1, z_2 \in \mathbb{C}^{+}$,
\[
\bbZ_k:=\frac{1}{n(pb_p)^2} \tr \Bigl(\Expe_k\bM_k^{(1)}(z_1)\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr)=\frac{\frac{k}{n}m(z_1)m(z_2)}{1-\frac{k}{n}m(z_1)m(z_2)}+o_{L_1}(1).
\]
\end{lemma}
This lemma is used in Section~\ref{sec:conv_Mn1_proof} to derive the finite dimensional convergence of $M_n^{(1)}(z)$.
\begin{proof}
Let $\{ \be_i,\; i=1,\ldots,k-1,k+1,\ldots,n \}$ be the $(n-1)$-dimensional unit vectors whose $i$-th element (when $i<k$) or $(i-1)$-th element (when $i>k$) equals $1$, with all other elements equal to $0$. Write $\bX_k=\bX_{ki}+\bx_i\be_i'$. Let
\begin{align*}
\bD_{ki,r}^{-1}&=\bD_k^{-1}-\be_i\bh_i'=\frac{1}{\sqrt{npb_p}}\Bigl( \bX_{ki}'\bSigma_p\bX_k-pa_p \bI_{(i)}\Bigr)-z\bI_{n-1},\\[0.5em]
\bD_{ki}^{-1}&=\bD_k^{-1}-\be_i\bh_i'-\br_i\be_i'=\frac{1}{\sqrt{npb_p}}\Bigl(\bX_{ki}'\bSigma_p\bX_{ki}-pa_p\bI_{(i)} \Bigr)-z\bI_{n-1},\\[0.5em]
\bh_i'&=\frac{1}{\sqrt{npb_p}}\bx_i'\bSigma_p\bX_{ki}+\frac{1}{\sqrt{npb_p}}\Bigl(\bx_i'\bSigma_p\bx_i-pa_p\Bigr)\be_i',\qquad \br_i=\frac{1}{\sqrt{npb_p}} \bX_{ki}'\bSigma_p\bx_i,\\[0.5em]
\zeta_i&=\frac{1}{1+\vartheta_i},\qquad \vartheta_i=\bh_i'\bD_{ki,r}(z)\be_i,\qquad \bM_{ki}=\bSigma_p\bX_{ki}\bD_{ki}(z)\bX_{ki}'\bSigma_p.
\end{align*}
We have the following crucial identities:
\begin{equation}\label{eq:Xe_0}
\bX_{ki}\be_i=\boldsymbol{0},\qquad \be_i'\bD_{ki,r}=\be_{i}'\bD_{ki}=-\frac{\be_{i}'}{z},
\end{equation}
where $\boldsymbol{0}$ is a $p$-dimensional vector with all the elements equal to $0$.
By using \eqref{eq:Xe_0} and some frequently used formulas for matrix inverses, we obtain two useful identities,
\begin{equation}\label{eq:diff_inv_1}
\begin{aligned}
\bD_{k}-\bD_{ki,r}&=-\bD_{ki,r}(\bD_k^{-1}-\bD_{ki,r}^{-1})\bD_k=-\bD_{ki,r}(\be_i\bh_i')\bD_{k}\\[0.5em]
&=-\bD_{ki,r}(\be_i\bh_i')(\zeta_i\bD_{ki,r})=-\zeta_i\bD_{ki,r}(\be_i\bh_i')\bD_{ki,r}
\end{aligned}
\end{equation}
and
\begin{equation}\label{eq:diff_inv_2}
\begin{aligned}
\bD_{ki,r}-\bD_{ki}&=-\bD_{ki}(\bD_{ki,r}^{-1}-\bD_{ki}^{-1})\bD_{ki,r}=-\bD_{ki}(\br_i\be_i')\bD_{ki,r}\\[0.5em]
&=-\bD_{ki}\Bigl(\frac{1}{\sqrt{npb_p}} \bX_{ki}'\bSigma_p\bx_i\be_i'\Bigr)\bD_{ki}=\frac{1}{z\sqrt{npb_p}}\bD_{ki}\bX_{ki}'\bSigma_p\bx_i\be_i'.
\end{aligned}
\end{equation}
Using \eqref{eq:diff_inv_1} and \eqref{eq:diff_inv_2}, for $i<k$, we obtain the following decomposition of $\Expe_k\bM_k^{(1)}(z)$:
\begin{align}
\Expe_k\bM_k^{(1)}(z) & = \Expe_k \Bigl(\bSigma_p(\bX_{ki}+\bx_i\be_i')\bD_k(\bX_{ki}+\bx_i\be_i')'\bSigma_p\Bigr)\nonumber\\
&=\Expe_k\biggl( \bSigma_p\bX_{ki}\bD_k\bX_{ki}'\bSigma_p + \bSigma_p\bX_{ki}\bD_k\be_i\bx_i'\bSigma_p+\bSigma_p\bx_i\be_i'\bD_k\bX_{ki}'\bSigma_p +\bSigma_p\bx_i\be_i'\bD_k\be_i\bx_i'\bSigma_p \biggr)\nonumber\\
&=\Expe_k \bM_{ki}-\Expe_{k}\biggl( \frac{\zeta_i(z)}{znpb_p} \bM_{ki}\bx_i\bx_i'\bM_{ki}\biggr)+\Expe_k \biggl(\frac{\zeta_i(z)}{z\sqrt{npb_p}}\bM_{ki}\biggr)\bx_i\bx_i'\bSigma_p\nonumber\\
&\qquad\qquad +\bSigma_p\bx_i\bx_i'\Expe_k \biggl(\frac{\zeta_i(z)}{z\sqrt{npb_p}}\bM_{ki}\biggr)-\Expe_k\biggl(\frac{\zeta_i(z)}{z}\biggr)\bSigma_p\bx_i\bx_i'\bSigma_p\label{eq:E_Mk_decom}\\
&:=\bB_1(z)+\bB_2(z)+\bB_3(z)+\bB_4(z)+\bB_5(z).\nonumber
\end{align}
Write
\[
\bD_k^{-1}=\sum_{i=1 (\neq k)}^n \be_i\bh_i'-z\bI_{n-1}.
\]
Multiplying by $\bD_k$ on the right, we have
\[
z\bD_k=-\bI_{n-1}+\sum_{i=1(\neq k)}^n \be_i\bh_i'\bD_k.
\]
Multiplying by $\bSigma_p\bX_k$ on the left and by $\bX_k'\bSigma_p$ on the right, we get
\[
z\bM_k^{(1)}(z)=-\bSigma_p\bX_k\bX_k'\bSigma_p+\sum_{i=1(\neq k)}^n \bSigma_p\bX_k\be_i\bh_i'\bD_k\bX_k'\bSigma_p.
\]
Thus,
\begin{align}
z\Expe_k \bigl(\bM_k^{(1)}(z)\bigr)&=-\Expe_k\bigl(\bSigma_p\bX_k\bX_k'\bSigma_p\bigr)+\sum_{i=1(\neq k)}^n \Expe_k(\bSigma_p\bX_k\be_i\bh_i'\bD_k\bX_k'\bSigma_p)\nonumber\\
&=-\bSigma_p\Expe_k\biggl(\sum_{i=1(\neq k)}^n\bx_i\bx_i'\biggr)\bSigma_p+\sum_{i=1(\neq k)}^n \Expe_k\Bigl(\zeta_i\bSigma_p\bx_i\bh_i'\bD_{ki,r}(\bX_{ki}'+\be_i\bx_i')\bSigma_p\Bigr)\nonumber\\
&=-(n-k)\bSigma_p^2 - \sum_{i<k} \Bigl(\bSigma_p\bx_i\bx_i'\bSigma_p\Bigr)+
\sum_{i=1(\neq k)}^n \Expe_{k}\biggl(\frac{\zeta_i}{\sqrt{npb_p}}\bSigma_p\bx_i\bx_i'\bSigma_p\bX_{ki}\bD_{ki,r}\bX_{ki}'\bSigma_p\biggr)\nonumber\\
&\qquad\qquad+\sum_{i=1(\neq k)}^n \Expe_k \Bigl(\zeta_i\bSigma_p\bx_i\bh_i'\bD_{ki,r}\be_i\bx_i'\bSigma_p\Bigr)\nonumber\\
&=-(n-k)\bSigma_p^2 - \sum_{i<k} \Bigl(\bSigma_p\bx_i\bx_i'\bSigma_p\Bigr)+\sum_{i=1(\neq k)}^n \Expe_{k}\biggl(\frac{\zeta_i}{\sqrt{npb_p}}\bSigma_p\bx_i\bx_i'\bM_{ki}\biggr)\nonumber\\
&\qquad\qquad+\sum_{i=1(\neq k)}^n \Expe_k \Bigl(\zeta_i\vartheta_i\bSigma_p\bx_i\bx_i'\bSigma_p\Bigr).\label{eq:z_E_Mk_decom}
\end{align}
Applying \eqref{eq:E_Mk_decom} and \eqref{eq:z_E_Mk_decom} to $\Expe_k \bM_k^{(1)}(z_2)$ (for $i<k$) and $z_1\Expe_k \bM_k^{(1)}(z_1)$, we get the following decomposition:
\begin{align}
z_1\bbZ_k&=\frac{z_1}{n(pb_p)^2} \tr \Bigl(\Expe_k\bM_k^{(1)}(z_1)\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr)\nonumber\\
&=\frac{1}{n(pb_p)^2} \tr \biggl\{\biggl[-(n-k)\bSigma_p^2 - \sum_{i<k} \Bigl(\bSigma_p\bx_i\bx_i'\bSigma_p\Bigr)+\sum_{i=1(\neq k)}^n \Expe_{k}\biggl(\frac{\zeta_i}{\sqrt{npb_p}}\bSigma_p\bx_i\bx_i'\bM_{ki}\biggr)\nonumber\\
&\qquad\qquad\qquad\qquad+\sum_{i=1(\neq k)}^n \Expe_k \Bigl(\zeta_i\vartheta_i\bSigma_p\bx_i\bx_i'\bSigma_p\Bigr)\biggr]\times \Expe_k\bM_k^{(1)}(z_2) \biggr\}\nonumber\\
&=C_1(z_1,z_2)+C_2(z_1,z_2)+C_3(z_1,z_2)+C_4(z_1,z_2),\label{eq:zZ_decom}
\end{align}
where
\begingroup
\allowdisplaybreaks
\begin{align}
C_1(z_1,z_2)&=-\frac{n-k}{n(pb_p)^2} \tr \Bigl(\bSigma_p^2\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr),\nonumber\\
C_2(z_1,z_2)&=-\frac{1}{n(pb_p)^2}\sum_{i<k} \bx_i'\bSigma_p\biggl(\sum_{j=1}^5 \bB_j(z_2)\biggr)\bSigma_p\bx_i=\sum_{j=1}^5 C_{2j},\label{eq:C2}\\
C_3(z_1,z_2)&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe_k\biggl[ \frac{\zeta_i(z_1)}{\sqrt{npb_p}}\bx_i'\bM_{ki}(z_1)\biggl(\sum_{j=1}^5 \bB_j(z_2)\biggr)\bSigma_p\bx_i \biggr] \nonumber\\
&\qquad +\frac{1}{n(pb_p)^2}\sum_{i>k} \Expe_k\biggl[ \frac{\zeta_i(z_1)}{\sqrt{npb_p}}\bx_i'\bM_{ki}(z_1)\Bigl(\Expe_k\bM_k^{(1)}(z_2)\Bigr)\bSigma_p\bx_i \biggr]=\sum_{j=1}^6 C_{3j},\label{eq:C3}\\
C_4(z_1,z_2)&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe_k\biggl[ \zeta_i(z_1)\vartheta_i(z_1)\bx_i'\bSigma_p\biggl(\sum_{j=1}^5 \bB_j(z_2)\biggr)\bSigma_p\bx_i \biggr]\nonumber\\
&\qquad +\frac{1}{n(pb_p)^2}\sum_{i>k} \Expe_k\biggl[ \zeta_i(z_1)\vartheta_i(z_1)\bx_i'\bSigma_p\Bigl(\Expe_k\bM_k^{(1)}(z_2)\Bigr)\bSigma_p\bx_i \biggr]=\sum_{j=1}^6 C_{4j}.\label{eq:C4}
\end{align}
\endgroup
Now we estimate all the terms in \eqref{eq:zZ_decom}. We will show that, as $n\to\infty$, all of these terms are negligible except $C_{25}, C_{33}, C_{45}$ defined in \eqref{eq:C2}--\eqref{eq:C4}.
For $C_1(z_1,z_2)$, we have
\[
\Expe|C_1(z_1,z_2)|=\frac{n-k}{n(pb_p)^2} \biggl|\tr \Bigl(\bSigma_p^2\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr)\biggr| = O\biggl(\frac{1}{p^2}\biggr) \cdot O(np) = O\biggl(\frac{n}{p}\biggr),
\]
where the second equality follows from the fact that $\Bigl|\tr \Bigl(\bSigma_p^2\cdot \Expe_k\bM_k^{(1)}(z_2)\Bigr)\Bigr|=O(np)$, which can be verified by an argument similar to that in the proof of Lemma~\ref{lem:Mk_limit}.
Applying Lemma~\ref{lem:vartheta_zeta} and inequality \eqref{eq:sMBs_bound_2} with $\bB=\bI_p$, we have
\begin{align*}
\Expe|C_{21}|& \leqslant \frac{1}{n(pb_p)^2}\sum_{i<k} \Expe\bigl| \bx_i'\bSigma_p\cdot \Expe_k\bM_{ki}(z_2)\cdot\bSigma_p\bx_i \bigr|\\
&\leqslant \frac{1}{n(pb_p)^2}\sum_{i<k} \Bigl(\Expe\bigl| \bx_i'\bSigma_p\cdot \Expe_k\bM_{ki}(z_2)\cdot\bSigma_p\bx_i \bigr|^2\Bigr)^{\nicefrac{1}{2}}\leqslant \frac{Kn}{p}.
\end{align*}
Applying Lemma~\ref{lem:vartheta_zeta} and inequality \eqref{eq:sMBs_bound_2} with $\bB=\bSigma_p$, we have
\begin{align*}
\Expe |C_{22}| & \leqslant \frac{1}{n(pb_p)^2}\sum_{i<k} \Expe \biggl|\bx_i'\bSigma_p\cdot \Expe_{k}\biggl( \frac{\zeta_i(z_2)}{z_2npb_p} \bM_{ki}(z_2)\bx_i\bx_i'\bM_{ki}(z_2)\biggr)\cdot\bSigma_p\bx_i\biggr|\\
& = \frac{K}{n^2(pb_p)^3}\sum_{i<k} \Expe\Bigl| \bx_i'\bM_{ki}(z_2)\bSigma_p\bx_i\Bigr|^2\leqslant \frac{Kn}{p}.
\end{align*}
Similarly, we obtain
\begin{align*}
\Expe|C_{23}| =\Expe|C_{24}|& \leqslant \frac{1}{n(pb_p)^2}\sum_{i<k} \Expe\Bigl|\bx_i'\bSigma_p\cdot \Expe_k \biggl(\frac{\zeta_i(z_2)}{z_2\sqrt{npb_p}}\bM_{ki}(z_2)\biggr)\bx_i\bx_i'\bSigma_p\cdot\bSigma_p\bx_i\Bigr|\\
&\leqslant \frac{K}{np^2\sqrt{np}}\sum_{i<k} \Expe\Bigl|\bx_i'\bSigma_p \bM_{ki}(z_2)\bx_i\cdot\bx_i'\bSigma_p^2\bx_i\Bigr|\\
&\leqslant \frac{K}{np^2\sqrt{np}}\sum_{i<k} \biggl(\Expe\Bigl|\bx_i'\bSigma_p \bM_{ki}(z_2)\bx_i\Bigr|^2\biggr)^{\nicefrac{1}{2}}\cdot\biggl(\Expe\Bigl|\bx_i'\bSigma_p^2\bx_i\Bigr|^2\biggr)^{\nicefrac{1}{2}}
\leqslant K\sqrt{\frac{n}{p}}.
\end{align*}
Applying Lemma~\ref{lem:vartheta_zeta} and inequality \eqref{eq:sMBs_bound_1} with $\bB=\Expe_k\bM_{ki}(z_2) \bSigma_p$, we have
\begin{align*}
\Expe |C_{31}|&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe\biggl|\Expe_k\biggl[ \frac{\zeta_i(z_1)}{\sqrt{npb_p}}\bx_i'\bM_{ki}(z_1)\cdot \Expe_k\bM_{ki}(z_2) \cdot\bSigma_p\bx_i \biggr]\biggr|\\
&\leqslant \frac{K}{np^2\sqrt{np}}\sum_{i<k} \Expe\biggl| \bx_i'\bM_{ki}(z_1)\cdot \Expe_k\bM_{ki}(z_2) \cdot\bSigma_p\bx_i \biggr|
\leqslant K\sqrt{\frac{n}{p}}.
\end{align*}
Define $\tilde{\zeta}_i$ and $\widetilde{\bM}_{ki}$, the analogues of $\zeta_i(z)$ and $\bM_{ki}(z)$ respectively, constructed from $(\bx_1,\ldots,\bx_k, \tilde{\bx}_{k+1},\ldots,\tilde{\bx}_n)$, where $\tilde{\bx}_{k+1},\ldots,\tilde{\bx}_n$ are i.i.d. copies of $\bx_{k+1}, \ldots, \bx_n$, independent of $\bx_1, \ldots, \bx_n$. Then,
\begin{align*}
\Expe|C_{32}|&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe\biggl|\Expe_k\biggl[ \frac{\zeta_i(z_1)}{\sqrt{npb_p}}\bx_i'\bM_{ki}(z_1)\cdot \Expe_{k}\biggl( \frac{\zeta_i(z_2)}{z_2npb_p} \bM_{ki}(z_2)\bx_i\bx_i'\bM_{ki}(z_2)\biggr)\cdot\bSigma_p\bx_i \biggr]\biggr|\\
&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe\biggl|\Expe_k\biggl[ \frac{\zeta_i(z_1)}{\sqrt{npb_p}}\bx_i'\bM_{ki}(z_1)\cdot \Expe_{k}\biggl( \frac{\tilde{\zeta}_i(z_2)}{z_2npb_p} \widetilde{\bM}_{ki}(z_2)\bx_i\bx_i'\widetilde{\bM}_{ki}(z_2)\biggr)\cdot\bSigma_p\bx_i \biggr]\biggr|\\
&\leqslant \frac{K}{n^2p^3\sqrt{np}}\sum_{i<k} \Expe\biggl|\biggl[ \bx_i'\bM_{ki}(z_1) \widetilde{\bM}_{ki}(z_2)\bx_i\cdot\bx_i'\widetilde{\bM}_{ki}(z_2)\bSigma_p\bx_i \biggr]\biggr|\\
&\leqslant \frac{K}{n^2p^3\sqrt{np}}\sum_{i<k} \biggl(\Expe\Bigl| \bx_i'\bM_{ki}(z_1) \widetilde{\bM}_{ki}(z_2)\bx_i\Bigr|^2\biggr)^{\nicefrac{1}{2}} \biggl(\Expe\Bigl|\bx_i'\widetilde{\bM}_{ki}(z_2)\bSigma_p\bx_i \Bigr|^2\biggr)^{\nicefrac{1}{2}}
\overset{\eqref{eq:sMBs_bound_1}}{\leqslant} K\sqrt{\frac{n}{p}}.
\end{align*}
Similarly, we have
\[
\Expe |C_{3j}| \leqslant K\frac{n}{p},\qquad j=4,5,6.
\]
Applying Lemma~\ref{lem:vartheta_zeta} and inequality \eqref{eq:sMBs_bound_2} with $\bB=\bI_{n-1}$, we obtain
\[
\Expe |C_{4j}| \leqslant K\frac{n}{p},\qquad j=1,2,3,4,6.
\]
Moreover, by using Lemma~\ref{lem:Mk_limit}, Lemma~\ref{lem:vartheta_zeta} and Lemma~\ref{lem:quad_upper_bound}, we obtain the following limits:
\begin{align*}
C_{25}&=-\frac{1}{n(pb_p)^2}\sum_{i<k}\biggl\{ \bx_i'\bSigma_p\biggl[-\Expe_k\biggl(\frac{\zeta_i(z_2)}{z_2}\biggr)\bSigma_p\bx_i\bx_i'\bSigma_p\biggr]\bSigma_p\bx_i\biggr\}\\
&=-\frac{1}{n(pb_p)^2}m(z_2)\sum_{i<k}\Bigl(\bx_i'\bSigma_p^2\bx_i\Bigr)^2\\
&=-\frac{k}{n}m(z_2)+o_{L_1}(1),
\end{align*}
\begin{align*}
C_{45}&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe_k\biggl\{ \zeta_i(z_1)\vartheta_i(z_1)\bx_i'\bSigma_p \biggl[-\Expe_k\biggl(\frac{\zeta_i(z_2)}{z_2}\biggr)\bSigma_p\bx_i\bx_i'\bSigma_p\biggr]\bSigma_p\bx_i \biggr\}\\
&=\frac{1}{n(pb_p)^2}\sum_{i<k}\Expe_k \biggl[-m^2(z_1)m(z_2)\Bigl(\bx_i'\bSigma_p^2\bx_i\Bigr)^2\biggr] +o_{L_1}(1)\\
&=-\frac{k}{n}m^2(z_1)m(z_2)+o_{L_1}(1),
\end{align*}
and
\begin{align*}
C_{33}&=\frac{1}{n(pb_p)^2}\sum_{i<k} \Expe_k\biggl[ \frac{\zeta_i(z_1)}{\sqrt{npb_p}}\bx_i'\bM_{ki}(z_1)\biggl(\Expe_k \frac{\zeta_i(z_2)}{z_2\sqrt{npb_p}}\bM_{ki}(z_2)\biggr)\bx_i\bx_i'\bSigma_p^2 \bx_i\biggr]\\
&=\frac{1}{n^2p^2b_p^2}z_1m(z_1)m(z_2)\biggl[ \sum_{i<k}\bx_i'\Expe_k \bM_{ki}(z_1)\Expe_{k}\bM_{ki}(z_2)\bx_i \biggr]+o_{L_4}(1)\\
&=\frac{k}{n}m(z_1)m(z_2)z_1\bbZ_k+o_{L_1}(1).
\end{align*}
From the above estimates, we have
\begin{align*}
z_1\bbZ_k&=-\frac{k}{n}m(z_2)-\frac{k}{n}m^2(z_1)m(z_2)+\frac{k}{n}m(z_1)m(z_2)z_1\bbZ_k+o_{L_1}(1)\\
&=\frac{k}{n}z_1m(z_1)m(z_2)+\frac{k}{n}z_1m(z_1)m(z_2)\bbZ_k+o_{L_1}(1),
\end{align*}
which is equivalent to
\[
\bbZ_k=\frac{\frac{k}{n}m(z_1)m(z_2)}{1-\frac{k}{n}m(z_1)m(z_2)}+o_{L_1}(1).\qedhere
\]
\end{proof}
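The last two displays rely on the semicircle-law relation $m^2(z)+zm(z)+1=0$, equivalently $m(z)=-1/(z+m(z))$. The following one-line numerical sanity check of this relation is a sketch only (it assumes the principal branch of the complex square root, which selects the Stieltjes-transform root for $\operatorname{Im} z>0$) and is of course not part of the proof:

```python
import numpy as np

# Check m(z)^2 + z*m(z) + 1 = 0 for the semicircle Stieltjes transform.
# The quadratic-formula root below, with numpy's principal-branch sqrt,
# has positive imaginary part when Im z > 0 (branch assumption).
z = 0.3 + 1.2j
m = (-z + np.sqrt(z * z - 4)) / 2
assert abs(m * m + z * m + 1) < 1e-12  # exact root of w^2 + z*w + 1
assert m.imag > 0                      # Stieltjes-transform branch
```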
\begin{lemma}\label{lem:vartheta_zeta}
For $\vartheta_i(z)$ and $\zeta_i(z)$ defined in Lemma \ref{lem:Zk_asym}, we have
\begin{equation*}
\Expe \biggl|\vartheta_i(z)-\frac{m(z)}{z}\biggr|^4 \to 0,\qquad \Expe \Bigl|\zeta_i(z)+zm(z)\Bigr|^4 \to 0,\qquad \text{as }\; n\to \infty.
\end{equation*}
\end{lemma}
\begin{proof}
This lemma can be proved by arguments similar to those in Section 5.2.2 of \citet{chen2015clt}.
\end{proof}
\begin{lemma}\label{lem:quad_upper_bound}
Let $\bB$ be any matrix independent of $\bx_i$. Then
\begin{align}
\Expe \bigl|\bx_i'\bM_{ki}\bB\bx_i\bigr|^2 &\leqslant Kp^2n^2\Expe \|\bB\|^2,\label{eq:sMBs_bound_1}\\
\Expe \bigl|\bx_i'\bSigma_p\bM_{ki}\bB\bSigma_p\bx_i\bigr|^2 &\leqslant Kp^2n^2\Expe \|\bB\|^2. \label{eq:sMBs_bound_2}
\end{align}
\end{lemma}
\begin{proof}
Note that $\bM_{ki}$ and $\bx_i$ are independent. By using Lemma~\ref{lem:quad-trace}, we have
\begin{equation}
\Expe \bigl|\bx_i'\bM_{ki}\bB\bx_i-\tr\bM_{ki}\bB\bigr|^2 \leqslant K \Bigl( \nu_4 \Expe\tr\bigl(\bM_{ki}\bB\overline{\bB}\overline{\bM}_{ki}\bigr) \Bigr)\leqslant K np^2\,\Expe\|\bB\|^2,\label{eq:sMBs_trMB_bound}
\end{equation}
where we use the fact that
\begin{align}
\bigl|\tr\bigl(\bM_{ki}\bB\overline{\bB}\overline{\bM}_{ki}\bigr)\bigr|&= \bigl| \tr\bigl(\bSigma_p\bX_{ki}\bD_{ki}\bX_{ki}'\bSigma_p \bB \overline{\bB}\bSigma_p\bX_{ki}\overline{\bD}_{ki}\bX_{ki}'\bSigma_p \bigr) \bigr| \nonumber\\
& = \bigl| \tr\bigl(\bD_{ki}^{\nicefrac{1}{2}}\bX_{ki}'\bSigma_p \bB \overline{\bB}\bSigma_p\bX_{ki}\overline{\bD}_{ki}\bX_{ki}'\bSigma_p^2\bX_{ki}\bD_{ki}^{\nicefrac{1}{2}} \bigr) \bigr| \nonumber\\
&\leqslant n \cdot\|\bD_{ki}^{\nicefrac{1}{2}}\bX_{ki}'\bSigma_p^{\nicefrac{1}{2}}\|\cdot \|\bSigma_p^{\nicefrac{1}{2}}\|\cdot\|\bB \overline{\bB}\| \cdot \|\bSigma_p^{\nicefrac{1}{2}}\|\nonumber\\
&\qquad\qquad\times \|\bSigma_p^{\nicefrac{1}{2}}\bX_{ki}\overline{\bD}_{ki}\bX_{ki}'\bSigma_p^{\nicefrac{1}{2}}\|\cdot \|\bSigma_p\|\cdot\|\bSigma_p^{\nicefrac{1}{2}}\bX_{ki}\bD_{ki}^{\nicefrac{1}{2}} \|\nonumber\\
& = n\cdot\|\bSigma_p\|^2\cdot\|\bB\|^2\cdot \|\bSigma_p^{\nicefrac{1}{2}}\bX_{ki}\bD_{ki}\bX_{ki}'\bSigma_p^{\nicefrac{1}{2}}\|^2\nonumber\\
& = n\cdot\|\bSigma_p\|^2\cdot\|\bB\|^2\cdot \|\bD_{ki}\bX_{ki}'\bSigma_p\bX_{ki}\|^2\nonumber\\
& = n\cdot\|\bSigma_p\|^2\cdot\|\bB\|^2\cdot \| \sqrt{npb_p}(\bI_{n-1}+z\bD_{ki}) + pa_p\bI_{(i)}\bD_{ki} \|^2 \nonumber\\
& \leqslant Knp^2\|\bB\|^2.\label{eq:trace_2M}
\end{align}
By \eqref{eq:sMBs_trMB_bound} and the $c_r$-inequality, we have
\[
\Expe \bigl|\bx_i'\bM_{ki}\bB\bx_i\bigr|^2
\leqslant K\Bigl( \Expe \bigl|\bx_i'\bM_{ki}\bB\bx_i-\tr\bM_{ki}\bB\bigr|^2 + \Expe \bigl|\tr \bM_{ki}\bB\bigr|^2\Bigr)\leqslant Kp^2n^2\Expe \|\bB\|^2,
\]
which completes the proof of \eqref{eq:sMBs_bound_1}. By using the same argument, we get \eqref{eq:sMBs_bound_2}.
\end{proof}
Lemmas \ref{lem:vartheta_zeta} and \ref{lem:quad_upper_bound} are used in the proof of Lemma \ref{lem:Zk_asym}.
Recall that $\mathbb{C}_1 = \{z: z=u+iv, u\in [-u_1, u_1],~|v|\geqslant v_1\}$, where $u_1>2$ and $0<v_1\leqslant 1$.
\begin{lemma}\label{lem:betak_upper_bounds} For $z\in\mathbb{C}_1$, we have
\begin{align}
|\beta_k(z)|\leqslant 1/v_1 ,\qquad |\beta_k^{\mathsf{tr}}(z)|\leqslant 1/v_1,\nonumber\\
\biggl|1+\frac{1}{npb_p}\tr \bM_k^{(s)}(z)\biggr|\leqslant 1+\frac{1}{v_1^s},\qquad s=1, 2,\nonumber\\
\Bigl|\beta_k\Bigl(1+\bq_k'\bD_k^{2}(z)\bq_k\Bigr)\Bigr|\leqslant \frac{1}{v_1}. \label{eq:beta_qDq_upper_bdd}
\end{align}
\end{lemma}
\begin{proof}
The proof follows \citet{chen2015clt} exactly and is therefore omitted.
\end{proof}
\begin{lemma}\label{lem:gamma_moment_upper_bound}
Under the assumption $p\wedge n\to\infty,~p/n\to\infty$ and truncation, for $z\in\mathbb{C}_1$,
\begin{align}
\Expe &|\gamma_{ks}|^2\leqslant \frac{K}{n}, \\
\Expe&|\gamma_{ks}|^4 \leqslant K\biggl(\frac{1}{n^2}+\frac{n}{p^2}\biggr),\\
\Expe & |\eta_k|^2 \leqslant \frac{K}{n},\\
\Expe&|\eta_k|^4 \leqslant K\frac{\delta_n^4}{n} + K\biggl(\frac{1}{n^2}+\frac{n}{p^2}\biggr).
\end{align}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:quad-trace} and taking $\bB=\bI_p$ in the inequality~\eqref{eq:trace_2M}, we have
\[
\Expe |\gamma_{ks}|^2 \leqslant \frac{K}{n^2p^2} \Expe\tr \bigl(\bM_k^{(s)}\overline{\bM}_k^{(s)}\bigr)\leqslant \frac{K}{n}.
\]
Similarly, we can prove that $\Expe |\eta_k|^2 \leqslant K/n$.
Now, we prove the bounds for the fourth moments of $\gamma_{ks}$ and $\eta_k$.
Let $\bH$ be $\bM_k^{(s)}$ with all diagonal elements replaced by zeros; then we have
\begin{equation}\label{eq:sHs_1}
\Expe |\bx_k'\bH\bx_k|^4\leqslant K(\Expe X_{11}^4)^2 \Expe \bigl(\tr \bH\bH^*\bigr)^2 \leqslant K\Expe\bigl(\tr \bM_k^{(s)}\overline{\bM}_k^{(s)}\bigr)^2 \leqslant Kn^2p^4.
\end{equation}
The first inequality follows from Lemma~\ref{lem:PanZhou-trace}, and the last inequality follows from \eqref{eq:trace_2M}.
Let $\Expe_j(\cdot)$ denote the conditional expectation given $(X_{1k}, X_{2k}, \ldots, X_{jk})$, where $j=1,2,\ldots,p$. Since $\Expe_{j-1}\bigl[(X_{jk}^2-1)a_{jj}^{(s)}\bigr]=0$,
the quantity $(X_{jk}^2-1)a_{jj}^{(s)}$ can be expressed as a martingale difference
\begin{equation}\label{eq:Xjk_MDS}
(X_{jk}^2-1)a_{jj}^{(s)} = (\Expe_{j}-\Expe_{j-1})\Bigl[(X_{jk}^2-1)a_{jj}^{(s)}\Bigr].
\end{equation}
Applying Burkholder's inequality (Lemma~\ref{lem:Burkholder_ineq}) to \eqref{eq:Xjk_MDS} yields
\begin{align}
\Expe \biggl|\sum_{j=1}^p (X_{jk}^2-1)a_{jj}^{(s)}\biggr|^4 & \leqslant K \Expe \biggl( \sum_{j=1}^p\Expe_{j-1}\Bigl|(X_{jk}^2-1)a_{jj}^{(s)}\Bigr|^2 \biggr)^2 + K\sum_{j=1}^p\Expe\Bigl|(X_{jk}^2-1)a_{jj}^{(s)}\Bigr|^4\nonumber\\
&\leqslant K\Expe\biggl(\sum_{j=1}^p \Expe |X_{11}|^4 \bigl|a_{jj}^{(s)}\bigr|^2\biggr)^2 + K \sum_{j=1}^p \Expe |X_{11}|^8 \Expe\bigl|a_{jj}^{(s)}\bigr|^4\nonumber\\
&\leqslant Kn^5p^2+Kn^3p^3,\label{eq:sHs_2}
\end{align}
where we use the fact that, with $\be_j$ denoting the $j$-th $p$-dimensional standard basis vector and $\by$ denoting an $(n-1)$-dimensional random vector with $\Expe y_i=0$ and $\Expe y_i^2=1$,
\begin{align}
\Expe\bigl|a_{jj}^{(s)}\bigr|^4 & = \Expe \Bigl| \be_j'\bSigma_p\bX_k\bD_k^{s}\bX_k'\bSigma_p\be_j \Bigr|^4\nonumber\\
&\leqslant v_1^{-4s}\Expe \bigl\|\be_j'\bSigma_p\bX_k\bigr\|^8
=v_1^{-4s}\widetilde{\sigma}_{jj}^4\Expe \bigl\|\by\bigr\|^8\leqslant Kn^4 + K n^2p,\label{eq:ajj_upper_bound}
\end{align}
where $\widetilde{\sigma}_{jj} = \sum_{\ell} \sigma_{j\ell}^2$ is the $j$-th diagonal element of $\bSigma_p^2$. By the Rayleigh--Ritz theorem, we know that $\widetilde{\sigma}_{jj}\leqslant \lambda_{\max}(\bSigma_p^2)\leqslant K$. Combining \eqref{eq:sHs_1} and \eqref{eq:sHs_2} yields that
\begin{align*}
\Expe |\gamma_{ks}|^4 & \leqslant \frac{1}{(npb_p)^4}\Expe \biggl| \sum_{j=1}^p(X_{jk}^2-1)a_{jj}^{(s)} + \bx_k'\bH\bx_k \biggr|^4\\
& \leqslant \frac{K}{n^4p^4} \Expe \biggl|\sum_{j=1}^p (X_{jk}^2-1)a_{jj}^{(s)}\biggr|^4 + \frac{K}{n^4p^4}\Expe |\bx_k'\bH\bx_k|^4\\
&\leqslant K\biggl(\frac{1}{n^2}+\frac{n}{p^2}\biggr).
\end{align*}
Moreover, by Lemma~\ref{lem:quad-trace}, we have
\[
\Expe |\eta_k|^4 \leqslant \frac{K}{n^2p^2} \Expe \bigl|\bx_k'\bSigma_p\bx_k-pa_p\bigr|^4 + K \Expe \bigl|\gamma_{k1}\bigr|^4\leqslant \frac{K\delta_n^4}{n}+K\biggl(\frac{1}{n^2}+\frac{n}{p^2}\biggr).\qedhere
\]
\end{proof}
Lemmas \ref{lem:var_mn_estimate}, \ref{lem:G11_upper_bbd} and \ref{lem:G11_tilde_upper_bbd} below are used in Section~\ref{sec:conv_Mn2} to derive the convergence of the non-random part $M_n^{2}(z)$. We prove them following the strategy in \citet{bao2015asymptotic}.
\begin{lemma}\label{lem:var_mn_estimate}
Under the assumption $p\wedge n\to\infty,~p/n\to\infty$, for $z\in\mathbb{C}_1$, we have
\begin{equation}\label{eq:var_mn_estimate}
\Var(m_n) = O\biggl(\frac{1}{n^2}\biggr).
\end{equation}
\end{lemma}
\begin{proof}
By the identity $m_n-\Expe m_n = -\sum_{k=1}^n \Bigl(\Expe_{k-1}\,m_n-\Expe_k\, m_n\Bigr)$, we have
\[
\Var(m_n)
= \sum_{k=1}^n\Expe\,\Bigl|\Expe_{k-1}\,m_n-\Expe_k\, m_n\Bigr|^2 + 2\sum_{1\leqslant s<t\leqslant n} \Expe\, \Bigl(\Expe_{s-1}\,m_n-\Expe_s\, m_n\Bigr)\Bigl(\Expe_{t-1}\,m_n-\Expe_t\, m_n\Bigr).
\]
Since each term in the second sum on the RHS of the above identity is zero, we write
\begin{align*}
\Var(m_n) & = \sum_{k=1}^n\Expe\,\Bigl|\Expe_{k-1}\,m_n-\Expe_k\, m_n\Bigr|^2 \\
& = \sum_{k=1}^n\Expe\,\Bigl|\Expe_{k-1}\,\Bigl(m_n-\Expe_{(k)}\, m_n\Bigr)\Bigr|^2 \\
& \leqslant \sum_{k=1}^n\Expe\,\Bigl|m_n-\Expe_{(k)}\, m_n\Bigr|^2 ,
\end{align*}
where $\Expe_{(k)}(\cdot)$ denotes the expectation taken with respect to $\bx_k$ only. To prove \eqref{eq:var_mn_estimate}, it suffices to show
\begin{equation}\label{eq:mn_var_martigale_element}
\Expe\,\Bigl|m_n-\Expe_{(k)}\, m_n\Bigr|^2 = O\biggl(\frac{1}{n^3}\biggr),\qquad 1\leqslant k\leqslant n.
\end{equation}
Now we deal with the case $k=1$, and the remaining cases are analogous and omitted.
Denote $\widetilde{\bY}=(\widetilde{Y}_{ij})_{p\times n} := \bSigma_p^{\nicefrac{1}{2}}\bY$ where $\bY=(npb_p)^{-1/4}\bX$, and let $\widetilde{\by}_k$ be the $k$-th column of $\widetilde{\bY}$. Let $\widetilde{\bY}_k$ be the $p\times(n-1)$ matrix extracted from $\widetilde{\bY}$ by removing $\widetilde{\by}_k$, then the matrix model \eqref{eq:A_def} can be written as
\[
\bA_n = \begin{pmatrix}
\widetilde{\by}_1'\widetilde{\by}_1 -\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} & (\widetilde{\bY}_1'\widetilde{\by}_1)'\\
\widetilde{\bY}_1'\widetilde{\by}_1 & \widetilde{\bY}_1'\widetilde{\bY}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{n-1}
\end{pmatrix}.
\]
With notations
$
\bA_k = \widetilde{\bY}_k'\widetilde{\bY}_k - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{n-1}
$
and $\bD_k= (\bA_k-z\bI_{n-1})^{-1}$, we have
\begin{align*}
\;&\tr \bD- \tr \bD_1 \\
=\;& \frac{1+(\widetilde{\bY}_1'\widetilde{\by}_1)'\Bigl(\widetilde{\bY}_1'\widetilde{\bY}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{n-1} - z\bI_{n-1}\Bigr)^{-2}(\widetilde{\bY}_1'\widetilde{\by}_1)}{\Bigl(\widetilde{\by}_1'\widetilde{\by}_1 -\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}-z\Bigr) - (\widetilde{\bY}_1'\widetilde{\by}_1)'\Bigl(\widetilde{\bY}_1'\widetilde{\bY}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{n-1} - z\bI_{n-1}\Bigr)^{-1}(\widetilde{\bY}_1'\widetilde{\by}_1)}\\[0.5em]
=\;& \frac{1+\widetilde{\by}_1'\Bigl[\widetilde{\bY}_1\widetilde{\bY}_1' \Bigl(\widetilde{\bY}_1\widetilde{\bY}_1'- \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{p} - z\bI_{p}\Bigr)^{-2}\Bigr]\widetilde{\by}_1}{\Bigl(\widetilde{\by}_1'\widetilde{\by}_1 -\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}-z\Bigr) - \widetilde{\by}_1'\Bigl[\widetilde{\bY}_1\widetilde{\bY}_1' \Bigl(\widetilde{\bY}_1\widetilde{\bY}_1'- \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{p} - z\bI_{p}\Bigr)^{-1}\Bigr]\widetilde{\by}_1}\\[0.5em]
=:\;& \frac{1+U}{V},
\end{align*}
where the second ``='' comes from the identity
\[
\bB(\bA\bB-\alpha \bI)^{-\ell}\bA = \bB\bA(\bB\bA-\alpha\bI)^{-\ell},\qquad \ell = 1, 2.
\]
Moreover, with the notations $U$ and $V$ above, we can write $D_{11}=1/V$ and
\begin{align*}
\Expe\,\Bigl|m_n-\Expe_{(1)}\, m_n\Bigr|^2 & = \frac{1}{n^2} \Expe\,\biggl| (\tr \bD -\tr\bD_1) - \Expe_{(1)}(\tr \bD -\tr\bD_1)\biggr|^2\qquad \bigl(\because \Expe_{(1)}\tr\bD_1 = \tr\bD_1\bigr)\\
&= \frac{1}{n^2}\Expe\,\biggl|\frac{1+U}{V} - \Expe_{(1)} \biggl(\frac{1+U}{V}\biggr)\biggr|^2\\
& \leqslant \frac{2}{n^2} \biggl\{\Expe\,\biggl|\frac{1}{V} - \Expe_{(1)} \biggl(\frac{1}{V}\biggr)\biggr|^2 + \Expe\,\biggl|\frac{U}{V} - \Expe_{(1)} \biggl(\frac{U}{V}\biggr)\biggr|^2\biggr\}.
\end{align*}
By the same arguments as those on Page 196 of \citet{bao2015asymptotic}, it is sufficient to prove that
\begin{equation}\label{eq:U_V_esti}
\Expe_{(1)}\,|U-\Expe_{(1)}\, U|^2 = O\biggl(\frac{1}{n}\biggr),\qquad \Expe_{(1)}\,|V-\Expe_{(1)}\, V|^2 = O\biggl(\frac{1}{n}\biggr).
\end{equation}
For simplicity of presentation, we define
\[
\bH^{[\ell]} = \Bigl(H_{jk}^{[\ell]}\Bigr)_{p\times p}:= \widetilde{\bY}_1\widetilde{\bY}_1' \Bigl(\widetilde{\bY}_1\widetilde{\bY}_1'- \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} \bI_{p} - z\bI_{p}\Bigr)^{-\ell},\qquad \ell=1, 2.
\]
Then, we write
\begin{align}
U-\Expe_{(1)}\, U & = \sum_{i\neq j} H_{ij}^{[2]} \widetilde{Y}_{i1}\widetilde{Y}_{j1} + \sum_{i=1}^p H_{ii}^{[2]} \Bigl( \widetilde{Y}_{i1}^2 - \Expe\, \widetilde{Y}_{i1}^2\Bigr),\label{eq:U_exp}\\
V-\Expe_{(1)}\, V & = \widetilde{\by}_1'\widetilde{\by}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}} - \sum_{i\neq j} H_{ij}^{[1]} \widetilde{Y}_{i1}\widetilde{Y}_{j1} -\sum_{i=1}^p H_{ii}^{[1]} \Bigl( \widetilde{Y}_{i1}^2 - \Expe\, \widetilde{Y}_{i1}^2\Bigr).\label{eq:V_exp}
\end{align}
Now we proceed to prove \eqref{eq:U_V_esti}. From \eqref{eq:V_exp}, we have
\begin{align}
&\;\Expe_{(1)}|V-\Expe_{(1)}\, V|^2\nonumber\\
\leqslant&\; K\biggl\{\Expe_{(1)} \biggl|\widetilde{\by}_1'\widetilde{\by}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\biggr|^2
+ \Expe_{(1)}\,\biggl| \sum_{i\neq j} H_{ij}^{[1]} \widetilde{Y}_{i1}\widetilde{Y}_{j1}\biggr|^2
+ \Expe_{(1)}\,\biggl|\sum_{i=1}^p H_{ii}^{[1]} \Bigl( \widetilde{Y}_{i1}^2 - \Expe\, \widetilde{Y}_{i1}^2\Bigr)\biggr|^2\biggr\}\nonumber\\
=&\; K\biggl\{\Expe\,\biggl|\widetilde{\by}_1'\widetilde{\by}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\biggr|^2
+ \sum_{i\neq j} \Bigl|H_{ij}^{[1]}\Bigr|^2 \Expe\,\Bigl(\widetilde{Y}_{i1}^2\widetilde{Y}_{j1}^2\Bigr)
+ \sum_{i=1}^p \Bigl|H_{ii}^{[1]}\Bigr|^2 \Expe\,\Bigl( \widetilde{Y}_{i1}^2 - \Expe\, \widetilde{Y}_{i1}^2\Bigr)^2\biggr\}.\label{eq:U_esti_decom}
\end{align}
After some straightforward calculations, we obtain the following estimates:
\begin{equation}\label{eq:ytilde_moment_esti}
\Expe\,\widetilde{Y}_{i1}^2 = O\biggl(\frac{1}{\sqrt{np}}\biggr),
\qquad
\Expe\,\widetilde{Y}_{i1}^4 =O\biggl(\frac{1}{np}\biggr),
\qquad
\Expe\,\biggl(\widetilde{\by}_1'\widetilde{\by}_1 - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\biggr)^2 = O\biggl(\frac{1}{n}\biggr).
\end{equation}
Combining \eqref{eq:ytilde_moment_esti} and \eqref{eq:U_esti_decom}, we obtain
\begin{equation}
\Expe_{(1)}\,\bigl|V-\Expe_{(1)}\, V\bigr|^2 \leqslant \frac{K}{n}+\frac{K}{np} \tr\Bigl|\bH^{[1]}\Bigr|^2.
\end{equation}
Similarly, we can show that
\begin{equation}
\Expe_{(1)}\,\bigl|U-\Expe_{(1)}\, U\bigr|^2 \leqslant \frac{K}{np} \tr\Bigl|\bH^{[2]}\Bigr|^2.
\end{equation}
To get \eqref{eq:U_V_esti}, it suffices to show that
\[
\tr\bigl|\bH^{[\ell]}\bigr|^2 = O(p),\qquad \ell=1,2.
\]
Let $\{\mu_i^{(k)},\, i=1,2,\ldots,n-1\}$ be the eigenvalues of $\bA_k$; then the eigenvalues of $\bH^{[\ell]}\bigl(\bH^{[\ell]}\bigr)^{*}\, (\ell=1, 2)$ are
\[
\frac{\Bigl(\mu_{i}^{(1)}+a_p\sqrt{p/(nb_p)}\Bigr)^2}{\bigl|\mu_i^{(1)}-z\bigr|^{2\ell}}, \qquad i = 1,2,\ldots,n-1,
\]
and a zero eigenvalue with algebraic multiplicity $(p-n+1)$. Using the fact $\mu_i^{(1)} \geqslant -a_p\sqrt{p/(nb_p)}$, we conclude that
\[
\tr\bigl|\bH^{[\ell]}\bigr|^2 = \sum_{i=1}^{n-1} \frac{\Bigl(\mu_{i}^{(1)}+a_p\sqrt{p/(nb_p)}\Bigr)^2}{\bigl|\mu_i^{(1)}-z\bigr|^{2\ell}}= O(p),\qquad \ell=1,2. \qedhere
\]
\end{proof}
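The chain of equalities in the proof above combines the Schur-complement formula for $\tr \bD-\tr\bD_1$ with the push-through identity $\bB(\bA\bB-\alpha\bI)^{-\ell}\bA=\bB\bA(\bB\bA-\alpha\bI)^{-\ell}$. As a numerical sanity check (a sketch only, not part of the argument), the following folds the shift $\sqrt{p/n}\,a_p/\sqrt{b_p}$ into a generic constant $c$ and verifies $\tr \bD-\tr\bD_1=(1+U)/V$ on a small random instance:

```python
import numpy as np

# Check tr D - tr D_1 = (1+U)/V, with U, V written through the
# p x p companion matrix G = Ytilde_1 Ytilde_1' as in the proof.
rng = np.random.default_rng(1)
p, n, c, z = 8, 5, 2.0, 0.5 + 1.5j
Yt = rng.standard_normal((p, n))           # stand-in for Sigma^{1/2} Y
y1, Y1 = Yt[:, 0], Yt[:, 1:]

D  = np.linalg.inv(Yt.T @ Yt - (c + z) * np.eye(n))      # full resolvent
D1 = np.linalg.inv(Y1.T @ Y1 - (c + z) * np.eye(n - 1))  # column removed

G = Y1 @ Y1.T                               # p x p companion matrix
R = np.linalg.inv(G - (c + z) * np.eye(p))  # its resolvent
U = y1 @ (G @ R @ R) @ y1
V = (y1 @ y1 - c - z) - y1 @ (G @ R) @ y1

lhs = np.trace(D) - np.trace(D1)
assert abs(lhs - (1 + U) / V) < 1e-8
```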
\begin{lemma}\label{lem:G11_upper_bbd}
Under the assumption $p\wedge n\to\infty,~p/n\to\infty$, for $z\in\mathbb{C}_1$ and $1\leqslant \ell \leqslant n$,
\begin{equation}
\Expe \biggl|D_{\ell\ell}+\frac{1}{z+\Expe\, m_n}\biggr|^2 = O\biggl(\frac{1}{n}\biggr) + O\biggl(\frac{n}{p}\biggr).
\end{equation}
\end{lemma}
\begin{proof}
We only provide the estimation of $D_{11}$, since the others are analogous. Note that
\[
D_{11}=V^{-1}=\Bigl(\widetilde{\by}_1'\widetilde{\by}_1-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}-z-\widetilde{\by}_1'\bH^{[1]}\widetilde{\by}_1\Bigr)^{-1}.
\]
Let $\bv_i^{(1)}=\bigl(v_{i1}^{(1)},\ldots,v_{ip}^{(1)}\bigr)'\, (i=1,2,\ldots,n-1)$ be the unit eigenvector of $\widetilde{\bY}_1\widetilde{\bY}_1'$ corresponding to the eigenvalue $\mu_i^{(1)}+\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}$, and let
\[
w_i^{(1)} = \frac{\sqrt{np}a_p}{\sqrt{b_p}} \bigl|\widetilde{\by}_1'\bv_i^{(1)}\bigr|^2.
\]
Applying spectral decomposition to $\bH^{[1]}$ yields
\begin{align}
D_{11} & = \Biggl[\widetilde{\by}_1'\widetilde{\by}_1-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}-z-\sum_{i=1}^{n-1}\biggl(\frac{\mu_i^{(1)}+\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}}{\mu_i^{(1)}-z}\biggr)\bigl|\widetilde{\by}_1'\bv_i^{(1)}\bigr|^2\Biggr]^{-1} \nonumber\\[0.5em]
& = \Biggl[\widetilde{\by}_1'\widetilde{\by}_1-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}-z-\frac{1}{\sqrt{np}}\frac{\sqrt{b_p}}{a_p}\sum_{i=1}^{n-1}\frac{\Bigl(\mu_i^{(1)}+\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}\Bigr)w_i^{(1)}}{\mu_i^{(1)}-z}\Biggr]^{-1}\nonumber\\
&=: \Bigl(-z-m_n(z)+h_1\Bigr)^{-1},\label{eq:G11_inv}
\end{align}
where
\[
h_1 = \Biggl[m_n-\frac{1}{n}\sum_{i=1}^{n-1}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\mu_i^{(1)}+1}{\mu_i^{(1)}-z}\Biggr)\Biggr]
+ \Biggl[\widetilde{\by}_1'\widetilde{\by}_1-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}-\frac{1}{n}\sum_{i=1}^{n-1}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\mu_i^{(1)}+1}{\mu_i^{(1)}-z}\Biggr)\bigl(w_i^{(1)}-1\bigr)\Biggr].
\]
By \eqref{eq:G11_inv}, we obtain
\[
\biggl|D_{11} + \frac{1}{z+\Expe\, m_n}\biggr| = \biggl|\frac{\Expe\, m_n -m_n + h_1}{\bigl(-z-m_n+h_1\bigr)\bigl(z+\Expe\, m_n\bigr)} \biggr|\leqslant K\Bigl| (\Expe\, m_n -m_n) + h_1\Bigr|,
\]
which implies that
\begin{align}
\Expe\, \biggl|&D_{11} + \frac{1}{z+\Expe\, m_n}\biggr|^2 \nonumber\\
& \leqslant K \Biggl\{ \Expe\, \bigl|\Expe\, m_n-m_n\bigr|^2 + \Expe\, \Biggl|m_n - \Biggl(\frac{1}{n}\sum_{i=1}^{n-1}\frac{1}{\mu_i^{(1)}-z}\Biggr)-\sqrt{\frac{n}{p}}\frac{\sqrt{b_p}}{a_p}\Biggl(\frac{1}{n}\sum_{i=1}^{n-1}\frac{\mu_i^{(1)}}{\mu_i^{(1)}-z}\Biggr)\Biggr|^2\nonumber \\[0.5em]
&\qquad\qquad + \Expe\, \Biggl|\frac{1}{n}\sum_{i=1}^{n-1}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\mu_i^{(1)}+1}{\mu_i^{(1)}-z}\Biggr)\bigl(w_i^{(1)}-1\bigr)\Biggr|^2 + \Expe\, \biggl|\widetilde{\by}_1'\widetilde{\by}_1-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\biggr|^2\Biggr\}\nonumber\\
& =: K (\mathrm{I+II+III+IV})\nonumber\\
& = O\biggl(\frac{1}{n^2}\biggr) + \biggl[O\biggl(\frac{1}{n^2}\biggr) + O\biggl(\frac{n}{p}\biggr)\biggr] + O\biggl(\frac{1}{n}\biggr) + O\biggl(\frac{1}{n}\biggr)\label{eq:G11_upper_bbd_0}\\
& = O\biggl(\frac{1}{n}\biggr) + O\biggl(\frac{n}{p}\biggr).\label{eq:G11_upper_bbd}
\end{align}
Below we explain \eqref{eq:G11_upper_bbd_0} in more detail:
\begin{enumerate}
\item[(I)] Follows from Lemma~\ref{lem:var_mn_estimate}.
\item[(II)] Use the fact
\[
\sqrt{\frac{n}{p}}\frac{\sqrt{b_p}}{a_p}\Biggl|\frac{1}{n}\sum_{i=1}^{n-1}\frac{\mu_i^{(1)}}{\mu_i^{(1)}-z}\Biggr| = O\biggl(\sqrt{\frac{n}{p}}\biggr)
\]
and
\[
\Biggl|m_n - \frac{1}{n}\sum_{i=1}^{n-1}\frac{1}{\mu_i^{(1)}-z}\Biggr| = \biggl|\frac{1}{n}\tr \bD -\frac{1}{n}\tr \bD_k\biggr| \overset{\eqref{eq:D_Dk_ESD_diff}}{=} O\biggl(\frac{1}{n}\biggr).
\]
\item[(III)] Use \eqref{eq:ytilde_moment_esti}.
\item[(IV)] Analogous to the estimation of $\Expe\,\bigl|V-\Expe_{(1)} V\bigr|^2$.\qedhere
\end{enumerate}
\end{proof}
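The spectral representation of $D_{11}$ used at the start of the proof above can likewise be checked numerically on a toy instance. The sketch below uses a generic constant $c$ standing in for $\sqrt{p/n}\,a_p/\sqrt{b_p}$ and works with $\bigl|\widetilde{\by}_1'\bv_i^{(1)}\bigr|^2$ directly, since the prefactor $\sqrt{np}a_p/\sqrt{b_p}$ in $w_i^{(1)}$ cancels out of the representation; it is not part of the proof:

```python
import numpy as np

# Check D_{11} = [y1'y1 - c - z - sum_i ((mu_i + c)/(mu_i - z)) |y1'v_i|^2]^{-1},
# where mu_i + c are the eigenvalues of G = Ytilde_1 Ytilde_1' and v_i its
# eigenvectors; zero eigenvalues of G contribute (numerically) nothing.
rng = np.random.default_rng(2)
p, n, c, z = 8, 5, 2.0, 0.5 + 1.5j
Yt = rng.standard_normal((p, n))
y1, Y1 = Yt[:, 0], Yt[:, 1:]

G = Y1 @ Y1.T
lam, vecs = np.linalg.eigh(G)        # eigenpairs of Y1 Y1' (lam = mu + c)
mu = lam - c
terms = (lam / (mu - z)) * np.abs(vecs.T @ y1) ** 2
D11_spec = 1 / (y1 @ y1 - c - z - terms.sum())

D = np.linalg.inv(Yt.T @ Yt - (c + z) * np.eye(n))
assert abs(D[0, 0] - D11_spec) < 1e-8
```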
The following lemma is used to prove \eqref{eq:stein_esti_E}.
Define
\[
\widetilde{\bD}=(\widetilde{D}_{ij})_{p\times p} = \biggl( \bSigma_p^{\nicefrac{1}{2}}\bY\bY'\bSigma_p^{\nicefrac{1}{2}}-\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\bI_p-z\bI_p \biggr)^{-1}.
\]
\begin{lemma}\label{lem:G11_tilde_upper_bbd}
Under the assumption $p\wedge n\to\infty,~p/n\to\infty$, for $z\in\mathbb{C}_1$ and $1\leqslant \ell \leqslant p$,
\begin{equation}\label{eq:G11_tilde_upper_bbd}
\Expe\,\biggl| \widetilde{D}_{\ell\ell}+\frac{1}{a_p\sqrt{p/(nb_p)}+z+\Expe\, m_n} \biggr|^2 = O\biggl(\biggl(\frac{n}{p}\biggr)^3\biggr) + O\biggl(\frac{n}{p^2}\biggr).
\end{equation}
\end{lemma}
\begin{proof}
We only provide the estimation of $\widetilde{D}_{11}$, since the others are analogous.
Let $\widetilde{\br}_k'$ be the $k$-th row of $\widetilde{\bY}$ and let $\bB_k$ be the $(p-1)\times n$ matrix extracted from $\widetilde{\bY}$ by deleting $\widetilde{\br}_k'$.
With notations defined above, we can write
\[
\widetilde{\bA}=\begin{pmatrix}
\widetilde{\br}_1'\widetilde{\br}_1-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}} & \widetilde{\br}_1'\bB_1'\\
\bB_1\widetilde{\br}_1 & \bB_1\bB_1'-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}\bI_{p-1}
\end{pmatrix}.
\]
Denote \[
\widetilde{\bA}_k = \bB_k'\bB_k - \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}\bI_n,
\]
and \[
W= \widetilde{\br}_1'\bB_1'\bB_1\biggl(\bB_1'\bB_1-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}\bI_{n}-z\bI_n\biggr)^{-1}\widetilde{\br}_1.
\]
Let $\{\widetilde{\mu}_i^{(k)},\, i=1,2,\ldots,n\}$ be the eigenvalues of $\widetilde{\bA}_k$, let $\widetilde{\bv}_i^{(1)}=\bigl(\widetilde{v}_{i1}^{(1)},\ldots,\widetilde{v}_{in}^{(1)}\bigr)'\, (i=1,2,\ldots,n)$ be the unit eigenvector of $\widetilde{\bA}_1$ corresponding to the eigenvalue $\widetilde{\mu}_i^{(1)}$, and set
\[
\widetilde{w}_i^{(1)} = \frac{\sqrt{np}a_p}{\sqrt{b_p}} \bigl|\widetilde{\br}_1'\widetilde{\bv}_i^{(1)}\bigr|^2,
\]
then we have
\[
W=\frac{1}{n}\sum_{i=1}^{n}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\widetilde{\mu}_i^{(1)}+1}{\widetilde{\mu}_i^{(1)}-z}\Biggr)\widetilde{w}_i^{(1)},
\]
and
\[
\widetilde{D}_{11} = \biggl(\widetilde{\br}_1'\widetilde{\br}_1-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}-z-W\biggr)^{-1}
=: \biggl(-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}-z-m_n+\widetilde{h}_1\biggr)^{-1},
\]
where
\begin{align}
\widetilde{h}_1 & = \widetilde{\br}_1'\widetilde{\br}_1+m_n-W \nonumber\\
& = \widetilde{\br}_1'\widetilde{\br}_1+m_n- \frac{1}{n}\sum_{i=1}^{n}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\widetilde{\mu}_i^{(1)}+1}{\widetilde{\mu}_i^{(1)}-z}\Biggr) - \frac{1}{n}\sum_{i=1}^{n}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\widetilde{\mu}_i^{(1)}+1}{\widetilde{\mu}_i^{(1)}-z}\Biggr)\bigl(\widetilde{w}_i^{(1)}-1\bigr).\label{eq:h_tilde}
\end{align}
We define the event
\[
\Omega_0 = \biggl\{ \bigl|\Expe\, m_n-m_n+\widetilde{h}_1\bigr| \geqslant \frac{1}{2}\sqrt{\frac{p}{n}} \biggr\},
\]
then the inequality
\[
\biggl|\biggl(\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z+\Expe\, m_n\biggr)\biggl(\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z+m_n-\widetilde{h}_1\biggr)\biggr| \geqslant K\frac{p}{n}
\]
holds on $\Omega_0^{\mathsf{c}}$. Thus we obtain
\begin{align*}
\Expe\,\biggl|& \widetilde{D}_{11}+\frac{1}{a_p\sqrt{p/(nb_p)}+z+\Expe\, m_n} \biggr|^2 \\
&\leqslant \Expe\, \biggl| \frac{\Expe\, m_n-m_n+\widetilde{h}_1}{\bigl(a_p\sqrt{p/(nb_p)}+z+\Expe\, m_n\bigr)\bigl(a_p\sqrt{p/(nb_p)}+z+m_n-\widetilde{h}_1\bigr)} \biggr|^2\\
&\leqslant K\biggl[\biggl(\frac{n}{p}\biggr)^2\cdot\Prob(\Omega_0^{\mathsf{c}})+\frac{n}{p}\cdot\Prob(\Omega_0)\biggr]\cdot \Expe\,\bigl|\Expe\, m_n-m_n+\widetilde{h}_1\bigr|^2,
\end{align*}
where we use the inequality
\begin{equation}\label{eq:denominator_lower_bbd}
\biggl|\biggl(\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z+\Expe\, m_n\biggr)\biggl(\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z+m_n-\widetilde{h}_1\biggr)\biggr| \geqslant K\sqrt{\frac{p}{n}},
\end{equation}
that holds on the full set $\Omega$. The inequality \eqref{eq:denominator_lower_bbd} follows from the facts
\begin{align*}
\;&\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z+m_n-\widetilde{h}_1 \\
= \;& \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z-\widetilde{\br}_1'\widetilde{\br}_1+\widetilde{\br}_1'\bB_1'\bB_1\biggl(\bB_1'\bB_1-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}\bI_{n}-z\bI_n\biggr)^{-1}\widetilde{\br}_1\\
= \;& \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z+\widetilde{\br}_1'\biggl[\bB_1'\bB_1\biggl(\bB_1'\bB_1-\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}\bI_{n}-z\bI_n\biggr)^{-1}-\bI_n\biggr]\widetilde{\br}_1\\
= \; & \sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z + \frac{1}{\sqrt{np}}\frac{\sqrt{b_p}}{a_p}\sum_{i=1}^{n}\Biggl(\frac{\widetilde{\mu}_i^{(1)}+\sqrt{\tfrac{p}{n}}\tfrac{a_p}{\sqrt{b_p}}}{\widetilde{\mu}_i^{(1)}-z}-1\Biggr)\widetilde{w}_i^{(1)}\\
= \; & \biggl(\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z\biggr) \biggl[1+\frac{1}{\sqrt{np}}\frac{\sqrt{b_p}}{a_p}\sum_{i=1}^{n}\frac{\widetilde{w}_i^{(1)}}{\widetilde{\mu}_i^{(1)}-z}\biggr]\\
=:\;& \biggl(\sqrt{\frac{p}{n}}\frac{a_p}{\sqrt{b_p}}+z\biggr)(1+S),
\end{align*}
and \[
|1+S|\geqslant K\sqrt{\frac{n}{p}}.
\]
We now proceed to complete the proof of \eqref{eq:G11_tilde_upper_bbd}. Note that we have
\[
\Prob(\Omega_0)\leqslant \frac{4n}{p}\Expe\,\bigl|\Expe\, m_n-m_n+\widetilde{h}_1\bigr|^2,
\]
thus it is sufficient to prove that
\begin{equation}\label{eq:G11_tilde_upper_bbd_esti_1}
\Expe\,\bigl|\Expe\, m_n-m_n+\widetilde{h}_1\bigr|^2 = O\biggl(\frac{1}{n}\biggr) + O\biggl(\frac{n}{p}\biggr).
\end{equation}
Applying \eqref{eq:h_tilde} gives us
\begin{align}
\Expe\,\bigr| &\Expe\, m_n-m_n+\widetilde{h}_1\bigr|^2\nonumber \\
& \leqslant K \Biggl\{ \Expe\, \bigl|\Expe\, m_n-m_n\bigr|^2 + \Expe\, \Biggl|m_n - \Biggl(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{\widetilde{\mu}_i^{(1)}-z}\Biggr)-\sqrt{\frac{n}{p}}\frac{\sqrt{b_p}}{a_p}\Biggl(\frac{1}{n}\sum_{i=1}^{n}\frac{\widetilde{\mu}_i^{(1)}}{\widetilde{\mu}_i^{(1)}-z}\Biggr)\Biggr|^2\nonumber\\
&\qquad\qquad + \Expe\, \Biggl|\frac{1}{n}\sum_{i=1}^{n}\Biggl(\frac{\sqrt{\tfrac{n}{p}}\tfrac{\sqrt{b_p}}{a_p}\widetilde{\mu}_i^{(1)}+1}{\widetilde{\mu}_i^{(1)}-z}\Biggr)\bigl(\widetilde{w}_i^{(1)}-1\bigr)\Biggr|^2 + \Expe\, \bigl(\widetilde{\br}_1'\widetilde{\br}_1\bigr)^2\Biggr\}.\label{eq:G11_tilde_upper_bbd_esti}
\end{align}
Combining the method used for \eqref{eq:G11_upper_bbd} with \eqref{eq:G11_tilde_upper_bbd_esti} and the fact that, with $\hat{\sigma}_{1i}$ denoting the $(1,i)$ entry of $\bSigma_p^{\nicefrac{1}{2}}$,
\begin{align*}
\Expe\,\bigl(\widetilde{\br}_1' \widetilde{\br}_1\bigr)^2 & = \Expe\,\biggl[\sum_{j=1}^n\Bigl(\sum_{i=1}^p\hat{\sigma}_{1i}Y_{ij}\Bigr)^2\biggr]^2\\
&=\Expe\,\biggl[\sum_{j=1}^n\Bigl(\sum_{i=1}^p\hat{\sigma}_{1i}Y_{ij}\Bigr)^4 + \sum_{j_1\neq j_2}\Bigl(\sum_{i=1}^p\hat{\sigma}_{1i}Y_{ij_1}\Bigr)^2\Bigl(\sum_{i=1}^p\hat{\sigma}_{1i}Y_{ij_2}\Bigr)^2 \biggr] = O\biggl(\frac{n}{p}\biggr),
\end{align*}
we obtain \eqref{eq:G11_tilde_upper_bbd_esti_1}.
\end{proof}
The following two lemmas compute the derivatives of some quantities with respect to $Y_{jk}$, which are used to obtain the derivatives of $F_{jk}$ (Lemma~\ref{lem:Fjk_derivative}) and $\widetilde{F}_{jk}$ (Lemma~\ref{lem:Fjk_tilde_derivative}).
Recall that
\[
\bE:= \bSigma_p\bY\bD\bY'\bSigma_p = (E_{ij})_{p\times p}, \qquad\bF:= \bSigma_p\bY\bD = (F_{ij})_{p\times n}.
\]
\begin{lemma}\label{lem:derivative}
For any $\alpha, j\in\{1,2,\ldots,p\}$ and $\beta, k\in \{1,2,\ldots,n\}$, we have
\[
\begin{aligned}
\frac{\partial D_{\alpha\beta}}{\partial Y_{jk}} &= -F_{j\alpha}D_{\beta k}-F_{j\beta}D_{\alpha k};\\[0.5em]
\frac{\partial F_{\alpha\beta}}{\partial Y_{jk}} &= \sigma_{\alpha j} D_{k\beta} - E_{j\alpha}D_{\beta k}-F_{j\beta}F_{\alpha k};\\[0.5em]
\frac{\partial (E_{jj}D_{kk})}{\partial Y_{jk}} &= 2\sigma_{j j} F_{jk}D_{kk} - 4 E_{jj}F_{j k} D_{kk}.
\end{aligned}
\]
\end{lemma}
\begin{proof}
\begin{table}[!htbp]
\centering
\caption{Derivatives of $(Y_{rs}Y_{\ell t})$ w.r.t. $Y_{jk}$}
\label{tab:derivative}
\begin{tabular}{@{}ccccc@{}}
\toprule
$\frac{\partial \bigl(Y_{rs}Y_{\ell t}\bigr)}{\partial Y_{jk}}$ & $r=\ell=j$ & $r\neq j, \ell \neq j$ & $r=j, \ell\neq j$ & $r\neq j, \ell = j$ \\ \midrule
$s=t=k$ & $2Y_{jk}$ & $0$ & $Y_{\ell k}$ & $Y_{rk}$ \\
$s\neq k, t\neq k$ & $0$ & $0$ & $0$ & $0$ \\
$s=k, t\neq k$ & $Y_{jt}$ & $0$ & $Y_{\ell t}$ & $0$ \\
$s\neq k, t=k$ & $Y_{js}$ & $0$ & $0$ & $Y_{rs}$ \\ \bottomrule
\end{tabular}
\end{table}
\begin{enumerate}
\item[(1)] By using the chain rule and the results in Table~\ref{tab:derivative}, we have
\begin{align*}
\frac{\partial D_{\alpha\beta}}{\partial Y_{jk}} & = \sum_{1\leqslant s\leqslant t\leqslant p}\frac{\partial D_{\alpha\beta}}{\partial A_{st}}\cdot \frac{\partial A_{st}}{\partial Y_{jk}}\qquad \biggl[ \frac{\partial A_{st}}{\partial Y_{jk}} := \frac{\partial (\bY'\bSigma_p\bY)_{st}}{\partial Y_{jk}} \biggr]\\
& = \sum_{s =1}^p\frac{\partial D_{\alpha\beta}}{\partial A_{ss}}\cdot \frac{\partial A_{ss}}{\partial Y_{jk}} + \sum_{1\leqslant s < t \leqslant p}\frac{\partial D_{\alpha\beta}}{\partial A_{st}}\cdot \frac{\partial A_{st}}{\partial Y_{jk}}\\
& = \sum_{s=1}^p\Bigl( -D_{\alpha s}D_{t\beta} \Bigr)\cdot \sum_{r,\ell } \biggl(\sigma_{r\ell}\frac{\partial \bigl(Y_{rs}Y_{\ell s}\bigr)}{\partial Y_{jk}} \biggr) + \sum_{s<t}\Bigl( -D_{\alpha s}D_{t\beta}-D_{\alpha t}D_{s\beta} \Bigr)\cdot \sum_{r,\ell } \biggl(\sigma_{r\ell}\frac{\partial \bigl(Y_{rs}Y_{\ell t}\bigr)}{\partial Y_{jk}}\biggr)\\
& = \Bigl( -D_{\alpha k}D_{k\beta} \Bigr)\cdot \biggl(2\sigma_{jj}Y_{jk} + \sum_{\ell \neq j} \sigma_{j\ell}Y_{\ell k} +\sum_{r\neq j} \sigma_{rj}Y_{rk} \biggr)\\
&\qquad + \sum_{k< t}\Bigl( -D_{\alpha k}D_{t\beta}-D_{\alpha t}D_{k\beta} \Bigr)\cdot \biggl(\sigma_{jj} Y_{jt} +\sum_{\ell\neq j}\sigma_{j\ell} Y_{\ell t}\biggr)\\
&\qquad + \sum_{s <k}\Bigl( -D_{\alpha s}D_{k\beta}-D_{\alpha k}D_{s\beta} \Bigr)\cdot \biggl(\sigma_{jj} Y_{js} +\sum_{r \neq j}\sigma_{r j} Y_{rs}\biggr)\\
& = \sum_{s=1}^n \Bigl( -D_{\alpha s}D_{k\beta}-D_{\alpha k}D_{s\beta} \Bigr) \Bigl(\sum_{r=1}^p \sigma_{rj} Y_{rs}\Bigr)\\
& = \sum_{s, r} \biggl[-\Bigl(\sigma_{jr}Y_{rs}D_{s\alpha}\Bigr)D_{\beta k}-\Bigl(\sigma_{jr}Y_{rs}D_{s\beta}\Bigr)D_{\alpha k}\biggr]\\
& = -F_{j\alpha}D_{\beta k}-F_{j\beta}D_{\alpha k},
\end{align*}
where the third equality follows from formula (II.18) in \citet{khorunzhy1996asymptotic};
\item[(2)]
\begin{align*}
\frac{\partial F_{\alpha\beta}}{\partial Y_{jk}} & = \frac{\partial }{\partial Y_{jk}} \sum_{s,t}\Bigl(\sigma_{\alpha s} Y_{st}D_{t\beta}\Bigr)
= \sum_{s, t } \sigma_{\alpha s} \biggl( \frac{\partial Y_{st}}{\partial Y_{jk}}\cdot D_{t\beta} + Y_{st} \cdot \frac{\partial D_{t\beta}}{\partial Y_{jk}} \biggr) \\
& = \sigma_{\alpha j} D_{k\beta } -\sum_{s, t} \sigma_{\alpha s}Y_{st} \biggl(F_{jt}D_{\beta k}+F_{j\beta}D_{t k}\biggr)\\
& = \sigma_{\alpha j} D_{k\beta} - E_{j\alpha}D_{\beta k}-F_{j\beta}F_{\alpha k};
\end{align*}
\item[(3)]
\begin{align*}
\frac{\partial E_{jj} }{\partial Y_{jk}}
& = \frac{\partial }{\partial Y_{jk}} \sum_{r} \bigl(\bSigma_p\bY\bD\bigr)_{jr}\bigl(\bY'\bSigma_p\bigr)_{rj}\\
& = \sum_{r} \frac{\partial F_{jr}}{\partial Y_{jk}}\cdot \bigl(\bY'\bSigma_p\bigr)_{rj} + \sum_{r}F_{jr}\cdot \frac{\partial \bigl(\bY'\bSigma_p\bigr)_{rj}}{\partial Y_{jk}}\\
& = \sum_{r}\Bigl( \sigma_{jj}D_{kr} - E_{jj} D_{rk} - F_{jr}F_{jk} \Bigr) \cdot \bigl(\bY'\bSigma_p\bigr)_{rj} + \sigma_{jj}F_{jk}\\
& = 2\sigma_{j j} F_{jk} -2 E_{jj}F_{j k},
\end{align*}
\begin{align*}
\frac{\partial (E_{jj}D_{kk})}{\partial Y_{jk}} & = \frac{\partial E_{jj}}{\partial Y_{jk}} \cdot D_{kk} + \frac{\partial D_{kk}}{\partial Y_{jk}} \cdot E_{jj} \\
& = \Bigl(2\sigma_{j j} F_{jk} -2 E_{jj}F_{j k}\Bigr)\cdot D_{kk} - 2F_{jk}D_{kk}\cdot E_{jj}\\
& = 2\sigma_{j j} F_{jk}D_{kk} - 4 E_{jj}F_{j k} D_{kk}.\qedhere
\end{align*}
\end{enumerate}
\end{proof}
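The identities in parts (1) and (2) of the lemma can be spot-checked by central finite differences. The matrix $\bD$ is defined earlier in the paper; the sketch below assumes the resolvent form $\bD=(\bY'\bSigma_p\bY-z\bI)^{-1}$, which is consistent with the derivative identity $\partial D_{\alpha\beta}/\partial A_{st}$ quoted from \citet{khorunzhy1996asymptotic}. The dimensions, the random seed, and the choice $z=-1$ are arbitrary test values, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, z = 4, 6, -1.0                       # small test sizes; z < 0 keeps A - zI invertible

B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)            # a positive-definite stand-in for Sigma_p
Y = rng.standard_normal((p, n))

def D_of(Y):
    A = Y.T @ Sigma @ Y                    # A = Y' Sigma_p Y, symmetric n x n
    return np.linalg.inv(A - z * np.eye(n))

D = D_of(Y)
F = Sigma @ Y @ D                          # F = Sigma_p Y D
E = F @ Y.T @ Sigma                        # E = Sigma_p Y D Y' Sigma_p

j, k, h = 1, 2, 1e-6                       # perturb the single entry Y_{jk}
Yp, Ym = Y.copy(), Y.copy()
Yp[j, k] += h
Ym[j, k] -= h

# Part (1): dD_{ab}/dY_{jk} = -F_{ja} D_{bk} - F_{jb} D_{ak}
a, b = 0, 3
num = (D_of(Yp)[a, b] - D_of(Ym)[a, b]) / (2 * h)
ana = -F[j, a] * D[b, k] - F[j, b] * D[a, k]

# Part (2): dF_{cd}/dY_{jk} = sigma_{cj} D_{kd} - E_{jc} D_{dk} - F_{jd} F_{ck}
c, d = 2, 1
F_of = lambda Y: Sigma @ Y @ D_of(Y)
numF = (F_of(Yp)[c, d] - F_of(Ym)[c, d]) / (2 * h)
anaF = Sigma[c, j] * D[k, d] - E[j, c] * D[d, k] - F[j, d] * F[c, k]

print(abs(num - ana), abs(numF - anaF))    # both tiny: the formulas agree
```

The same finite-difference scheme verifies the remaining formulas (for $E_{jj}$ and $\widetilde{F}_{\alpha\beta}$) with the obvious index changes.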
\noindent Recall that $\bSigma_p^2 = (\widetilde{\sigma}_{ij})$, and
\[
\widetilde{\bE}:= \bSigma_p^2\bY\bD\bY'\bSigma_p^2, \qquad \widehat{\bE}=\bSigma_p\bY\bD\bY'\bSigma_p^2,\qquad\widetilde{\bF}:= \bSigma_p^2\bY\bD.
\]
\begin{lemma}\label{lem:derivative-Ftilde}
For any $\alpha, j\in\{1,2,\ldots,p\}$ and $\beta, k\in \{1,2,\ldots,n\}$, we have
\[
\begin{aligned}
\frac{\partial \widetilde{F}_{\alpha\beta}}{\partial Y_{jk}} & = \widetilde{\sigma}_{\alpha j} D_{k\beta} - \widehat{E}_{j \alpha}D_{\beta k}-F_{j\beta}\widetilde{F}_{\alpha k};\\[0.5em]
\frac{\partial (\widehat{E}_{jj}D_{kk})}{\partial Y_{jk}} & = \sigma_{jj}\widetilde{F}_{jk}D_{kk} + \widetilde{\sigma}_{jj}F_{jk}D_{kk} - E_{jj}\widetilde{F}_{jk}D_{kk}-3\widehat{E}_{jj}F_{jk}D_{kk}.
\end{aligned}
\]
\end{lemma}
\begin{proof}
\begin{align*}
\frac{\partial \widetilde{F}_{\alpha\beta}}{\partial Y_{jk}} & = \frac{\partial }{\partial Y_{jk}} \sum_{s,t}\Bigl(\widetilde{\sigma}_{\alpha s} Y_{st}D_{t\beta}\Bigr)
= \sum_{s, t } \widetilde{\sigma}_{\alpha s} \biggl( \frac{\partial Y_{st}}{\partial Y_{jk}}\cdot D_{t\beta} + Y_{st} \cdot \frac{\partial D_{t\beta}}{\partial Y_{jk}} \biggr) \\
& = \widetilde{\sigma}_{\alpha j} D_{k\beta } -\sum_{s, t} \widetilde{\sigma}_{\alpha s}Y_{st} \biggl(F_{jt}D_{\beta k}+F_{j\beta}D_{t k}\biggr)\\
& = \widetilde{\sigma}_{\alpha j} D_{k\beta} - \widehat{E}_{j \alpha}D_{\beta k}-F_{j\beta}\widetilde{F}_{\alpha k};
\end{align*}
\begin{align*}
\frac{\partial E_{jr}}{\partial Y_{jk}} & = \frac{\partial }{\partial Y_{jk}}\sum_{\ell} F_{j\ell}\Bigl(\bY' \bSigma_p\Bigr)_{\ell r} = \sum_{\ell} \frac{\partial F_{j\ell}}{\partial Y_{jk}}\cdot\Bigl(\bY' \bSigma_p\Bigr)_{\ell r} + \sum_{\ell} F_{j\ell} \cdot \frac{\partial \bigl(\bY' \bSigma_p\bigr)_{\ell r}}{\partial Y_{jk}} \\
& = \sum_{\ell} \Bigl( \sigma_{jj}D_{k\ell} -E_{jj}D_{\ell k} -F_{j\ell}F_{jk} \Bigr)\cdot\Bigl(\bY' \bSigma_p\Bigr)_{\ell r} + \sigma_{jr}F_{jk}\\
& = \sigma_{jj}F_{rk} + \sigma_{jr}F_{jk} -E_{jj}F_{rk} - F_{jk}E_{jr},
\end{align*}
\begin{align*}
\frac{\partial (\widehat{E}_{jj}D_{kk})}{\partial Y_{jk}} & = \biggl(\frac{\partial }{\partial Y_{jk}} \sum_r E_{jr}\sigma_{rj}\biggr)\cdot D_{kk} + \widehat{E}_{jj}\cdot \Bigl(-2F_{jk}D_{kk}\Bigr)\\
& = D_{kk}\sum_r \sigma_{rj}\biggl(\sigma_{jj}F_{rk} + \sigma_{jr}F_{jk} -E_{jj}F_{rk} - F_{jk}E_{jr}\biggr) -2\widehat{E}_{jj}F_{jk}D_{kk}\\
& = \sigma_{jj}\widetilde{F}_{jk}D_{kk} + \widetilde{\sigma}_{jj}F_{jk}D_{kk} - E_{jj}\widetilde{F}_{jk}D_{kk}-3\widehat{E}_{jj}F_{jk}D_{kk}.\qedhere
\end{align*}
\end{proof}
\section{Proofs in applications}
\noindent
\textbf{Proof of Theorem~\ref{thm:W_limit_dist_H1}.}
\begin{proof}
For notational simplicity, we denote $a_p=\tr(\bSigma_p)/p$ and $b_p=\tr(\bSigma_p^2)/p$. Let
$
\widetilde{\bA}_n = \frac{1}{\sqrt{npb_p}}(\bY'\bY-pa_p\bI_n)= \frac{1}{\sqrt{npb_p}}(\bX'\bSigma_p\bX-pa_p\bI_n).
$
By some elementary calculations, we obtain two identities:
\begin{gather*}
\tr(\bS_n) = \sqrt{\frac{pb_p}{n}}\tr(\widetilde{\bA}_n)+pa_p,
\qquad
\tr(\bS_n^2) = \frac{pb_p}{n}\tr(\widetilde{\bA}_n^2) + \frac{2pa_p}{n}\sqrt{\frac{pb_p}{n}}\tr(\widetilde{\bA}_n) + \frac{(pa_p)^2}{n}.
\end{gather*}
Then $W$ can be written as
\[
W = \frac{b_p}{n}\tr(\widetilde{\bA}_n^2) - \frac{2}{p}\sqrt{\frac{pb_p}{n}}\tr(\widetilde{\bA}_n) -\frac{b_p}{n^2} \bigl[\tr(\widetilde{\bA}_n)\bigr]^2 +\frac{p}{n}-2a_p+1.
\]
\citet{li2016testing} derived the limiting joint distribution of $\bigl(\tr(\widetilde{\bA}_n^2)/n, \tr(\widetilde{\bA}_n)/n\bigr)$ (see their Lemma 3.1) as follows:
\begin{equation}\label{eq:joint_dist_tr_S}
n\Biggl(\begin{matrix}
\frac{1}{n}\tr(\widetilde{\bA}_n^2) - 1 - \frac{1}{n}\bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \bigr)\\[0.5em]
\frac{1}{n}\tr(\widetilde{\bA}_n)
\end{matrix}\Biggr)
\convd \calN \Biggl(\Biggl(\begin{matrix}
0\\[0.5em]0
\end{matrix}\Biggr), \Biggl(\begin{matrix}
4 & 0\\[0.5em]
0 & \frac{\omega}{\theta}(\nu_4-3)+2
\end{matrix}\Biggr)\Biggr).
\end{equation}
Define the function
\[
g(x,y) = b_px - \frac{2n}{p}\sqrt{\frac{pb_p}{n}}y-b_py^2+\frac{p}{n}-2a_p+1,
\]
so that $W = g\bigl(\tr(\widetilde{\bA}_n^2)/n, \tr(\widetilde{\bA}_n)/n \bigr)$. We have
\begin{align*}
& \frac{\partial g}{\partial x} \Bigl( 1 + \frac{1}{n}\Bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \Bigr), 0 \Bigr) = b_p,\\
& \frac{\partial g}{\partial y} \Bigl( 1 + \frac{1}{n}\Bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \Bigr), 0 \Bigr) =-\frac{2n}{p}\sqrt{\frac{pb_p}{n}},\\
& g\Bigl( 1 + \frac{1}{n}\Bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \Bigr), 0 \Bigr) = b_p+\frac{b_p}{n}\Bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \Bigr) +\frac{p}{n} - 2a_p+1.
\end{align*}
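These three evaluations are routine calculus; as a quick sanity check they can be confirmed numerically, with arbitrary positive test values standing in for $b_p$, $a_p$, $n$, $p$ and for the constant $c=\frac{\omega}{\theta}(\nu_4-3)+1$ (the concrete numbers below are illustrative choices only):

```python
import math

# Arbitrary positive test values for b_p, a_p, n, p and c = (omega/theta)(nu_4-3)+1
b_p, a_p, n, p, c = 2.0, 1.5, 50, 10, 0.7

def g(x, y):
    return b_p * x - (2 * n / p) * math.sqrt(p * b_p / n) * y - b_p * y**2 \
           + p / n - 2 * a_p + 1

x0, y0, h = 1 + c / n, 0.0, 1e-6

gx = (g(x0 + h, y0) - g(x0 - h, y0)) / (2 * h)   # should equal b_p
gy = (g(x0, y0 + h) - g(x0, y0 - h)) / (2 * h)   # should equal -(2n/p) sqrt(p b_p / n)
val = g(x0, y0)                                  # should equal b_p + b_p c/n + p/n - 2 a_p + 1

print(abs(gx - b_p))
print(abs(gy + (2 * n / p) * math.sqrt(p * b_p / n)))
print(abs(val - (b_p + b_p * c / n + p / n - 2 * a_p + 1)))
```

The central differences are exact up to rounding here because $g$ is linear in $x$ and quadratic in $y$.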
By \eqref{eq:joint_dist_tr_S}, we have
\[
n\biggl(W-g\Bigl( 1 + \frac{1}{n}\Bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \Bigr), 0 \Bigr)\biggr) \convd\calN(0,\lim A),
\]
where
\[
A = \Biggl(
\begin{matrix}
\frac{\partial g}{\partial x} ( 1 + \frac{1}{n}( \frac{\omega}{\theta}(\nu_4-3)+1 ), 0 )\\
\frac{\partial g}{\partial y} ( 1 + \frac{1}{n}( \frac{\omega}{\theta}(\nu_4-3)+1 ), 0 )
\end{matrix}
\Biggr)'
\Biggl(\begin{matrix}
4 & 0\\
0 & \frac{\omega}{\theta}(\nu_4-3)+2
\end{matrix}\Biggr)
\Biggl(
\begin{matrix}
\frac{\partial g}{\partial x} ( 1 + \frac{1}{n}( \frac{\omega}{\theta}(\nu_4-3)+1 ), 0 )\\
\frac{\partial g}{\partial y} ( 1 + \frac{1}{n}( \frac{\omega}{\theta}(\nu_4-3)+1 ), 0 )
\end{matrix}
\Biggr) \to 4\theta^2.
\]
Thus,
\[
n\Bigl(W-b_p-\frac{b_p}{n}\Bigl( \frac{\omega}{\theta}(\nu_4-3)+1 \Bigr) + 2a_p-1-\frac{p}{n} \Bigr) \convd \calN(0, 4\theta^2),
\]
that is,
\[
nW - p - \theta\Bigl( \frac{\omega}{\theta}(\nu_4-3) +1 \Bigr) +n(2\gamma-1-\theta) \convd \calN(0,4\theta^2).\qedhere
\]
\end{proof}
\noindent \textbf{Proof of Proposition~\ref{prop:power_theo_W}.}
\begin{proof}
For the test based on the statistic $W$, by Theorems \ref{thm:W_limit_dist_H0} and \ref{thm:W_limit_dist_H1}, we have
\begin{align*}
\beta(H_1) & = \Prob\biggl( \frac{1}{2}\Bigl(nW-p-(\nu_4-2)\Bigr) \geqslant z_{\alpha} \;\Big|\; H_1 \biggr) \\
& = \Prob\biggl( nW-p - \theta\Bigl( \frac{\omega}{\theta}(\nu_4-3) +1 \Bigr) +n(2\gamma-1-\theta) \\
&\qquad\qquad \geqslant 2z_{\alpha} - \theta\Bigl( \frac{\omega}{\theta}(\nu_4-3) +1 \Bigr) +n(2\gamma-1-\theta) + (\nu_4-2) \;\Big|\; H_1 \biggr)\\
& = 1-\Phi\biggl( \frac{1}{2\theta} \Bigl\{ 2z_{\alpha} - \omega(\nu_4-3) -\theta +n(2\gamma-1-\theta) + (\nu_4-2) \Bigr\} \biggr).
\end{align*}
Since $2\gamma-1\leqslant \gamma^2\leqslant \theta$, Proposition \ref{prop:power_theo_W} follows.
\end{proof}
\bibliography{reference}
\end{document} | {"config": "arxiv", "file": "2109.06701/clt.tex"} |
TITLE: Calculate the remainder of 5 to the power of 120
QUESTION [0 upvotes]: I was going through this thread
And the first answer made me think.
Fermat's Little Theorem tells us that $5^{18} = 1$ mod $19$.
Observe next that $5^{120} = (5^{18})^6 \cdot 5^{12}$.
Reducing modulo $19$, we have $5^{120} = 1^{6} \cdot 5^{12} = 5^{12}$
mod $19$.
All that's left now is to calculate $5^{12}$ mod $19$, which can be
done quickly by brute force.
For example, $5^4 = 625 = 608 + 17 = 32\cdot19 + 17 = -2$ mod $19$.
Then $5^{12} = (5^4)^3 = (-2)^3 = -8$ mod $19$, which is the same as
$11$ mod $19$.
And there you have it: the remainder is $11$.
How does one get the first line out of nothing? I mean, what is the trick? How do you come up with the number $18$ in the first line?
I didn't understand these two lines, I mean how they are calculated:
For example, $5^4 = 625 = 608 + 17 = 32\cdot19 + 17 = -2$ mod $19$.
Then $5^{12} = (5^4)^3 = (-2)^3 = -8$ mod $19$, which is the same as
$11$ mod $19$.
REPLY [1 votes]: See my comment for why you use $18$ and how it relates to your previous questions.
So we want to know the remainder of $5^{120}$ when divided by $19$ and we write this as $5^{120} \mod 19$.
We know that because $(5,19)=1$ (they have no common factor greater than $1$), we have $5^{18}\equiv 1 \mod 19$ - this is because of Fermat's Little Theorem applied with $p=19$.
Since $120=18\cdot 6 + 12$, we have $5^{120}=(5^{18})^65^{12} \equiv 5^{12}\mod 19$
In a comment on one of your previous questions I noted that we could do arithmetic modulo $19$ without having to keep track of multiples of $19$ (in fact my comment was for any $p$ - and it works with a little care for any integer - division can go wrong for non-primes) - we are only interested in the remainders. We are now looking for the remainder for $5^{12}$ which we have shown is the same as that for $5^{120}$ by application of little Fermat.
Now we note that $5^{12}=(5^4)^3$. Noticing that $5^4=625$ we can get rid of some extra multiples of $19$ because $625= 32\cdot 19+ 17=33\cdot 19 -2 \equiv -2 \mod 19$. Since we want small numbers to simplify the arithmetic as much as possible we choose $-2$.
So the remainder for $5^4$ is the same as that for $-2$, and the remainder for $(5^4)^3$ is the same as for $(-2)^3=-8$.
Now there is a near convention that we choose the smallest possible positive remainder (there are occasions where another choice is useful*). So we note that $-8=11-1\cdot 19\equiv 11 \mod 19$ to finish off.
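Every step of the argument can be checked in a few lines of Python, since the built-in `pow` accepts a modulus argument and does fast modular exponentiation:

```python
# Check each step of the mod-19 computation directly.
assert pow(5, 18, 19) == 1        # Fermat's little theorem: 5^18 = 1 (mod 19)
assert 120 == 18 * 6 + 12         # the exponent splits as 18*6 + 12
assert pow(5, 4, 19) == 17        # 5^4 = 625 = -2 (mod 19), and -2 = 17 (mod 19)
assert (-2) ** 3 % 19 == 11       # (5^4)^3 = (-2)^3 = -8 = 11 (mod 19)
print(pow(5, 120, 19))            # 11, the final remainder
```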
I suggest that now you have asked a few questions about this, you try some examples for yourself. You need to get used to the language a bit - you will have noticed how naturally it comes to the people who have been posting answers and comments - and how much longer I had to make this answer to avoid using it.
*eg some proofs of quadratic reciprocity | {"set_name": "stack_exchange", "score": 0, "question_id": 735289} |
TITLE: Equivalence for upper semi-continuity in a metric space
QUESTION [0 upvotes]: I believe this question is not hard, but I simply can't properly understand how to work with the definition of upper semi-continuity of $\phi$ at $x$ in a metric space $X$, i.e. $\forall \epsilon>0, \exists \delta > 0 , \forall y \in X, d(x,y) < \delta
\Rightarrow \phi(y) < \phi(x) + \epsilon$.
Basically, I must show the equivalence
$\phi$ upper semi-continuous at $x \Leftrightarrow \phi(x) = \lim_{n\rightarrow \infty} [ \sup \phi(B_X(x,1/n))]$.
I've tried working with sequences in a sequence of balls with center $x$ and decreasing radius $1/n$, but I believe I may lack some structure to advance further.
REPLY [0 votes]: Let $\epsilon>0$, wlog assume that $\phi$ is bounded on $B(x,1),$ and choose an integer $N$ such that $1/N<\delta$ where $\delta$ satisfies the definition of upper-semicontinuity. Then if $n>N$, we have $\phi(B_X(x,1/n))\subseteq (-\infty ,\phi(x)+\epsilon)\Rightarrow a_n:=\sup\left \{ \phi(y):y\in B(x,1/n) \right \}\in (-\infty, \phi(x)+\epsilon].$ Now observe that $(a_n)$ is decreasing and bounded, so $\lim a_n=a$ exists and $a\le \phi(x)\ $ (because $\epsilon>0$ is arbitrary). But $x\in B(x,1/n)$ for each integer $n$ so $a_n\ge \phi(x)\Rightarrow a\ge \phi(x);\ $ that is, $a=\phi(x).$
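The limit characterization can be made concrete with a toy example on $X=\mathbb{R}$ at $x=0$. The two functions and the finite sampling grid below are my own illustrative choices; the sup over each ball $B(0,1/n)$ is approximated by a maximum over grid points.

```python
def phi(t):   # phi(0) = 1 and phi(t) = 0 otherwise: upper semi-continuous at 0
    return 1.0 if t == 0 else 0.0

def psi(t):   # psi(0) = 0 and psi(t) = 1 otherwise: NOT upper semi-continuous at 0
    return 0.0 if t == 0 else 1.0

def sup_on_ball(f, n):
    # crude sup of f over B(0, 1/n): the centre plus a few symmetric grid points
    pts = [0.0] + [s * r / n for s in (-1, 1) for r in (0.1, 0.3, 0.5, 0.7, 0.9)]
    return max(f(t) for t in pts)

sup_phi = [sup_on_ball(phi, n) for n in range(1, 50)]
sup_psi = [sup_on_ball(psi, n) for n in range(1, 50)]

print(sup_phi[-1], phi(0))   # 1.0 1.0 -- sup phi(B(0,1/n)) converges to phi(0)
print(sup_psi[-1], psi(0))   # 1.0 0.0 -- the limit 1 strictly exceeds psi(0)
```

For $\phi$ the suprema equal $\phi(0)$ for every $n$, while for $\psi$ they stay at $1>\psi(0)$, matching the two directions of the equivalence.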
The other way is easier: just note that if $\phi(x) = \lim_{n\rightarrow \infty} [ \sup \phi(B_X(x,1/n))],$ then $(a_n)$ where $a_n:=\sup \phi(B_X(x,1/n))$ is a decreasing sequence that converges to $\phi(x).$ With this fact, and the definition, the result follows. | {"set_name": "stack_exchange", "score": 0, "question_id": 2147890} |
TITLE: Probability books useful for Information Theory?
QUESTION [5 upvotes]: Can you recommend a list of good probability books for self-study, with good explanations and introductions to Information Theory rather than the typical statistical subjects?
REPLY [2 votes]: Most of the references above are basic texts on Information Theory and not necessarily probability-theory based. I'm guessing that you're looking for probability theory texts with some emphasis on information theory in preparation for delving more deeply into the subject. For that I'd recommend taking a look at Paul Pfeiffer's Concepts of Probability Theory or either of Alfred Renyi's two books Probability Theory or Foundations of Probability. All three either mention information theory specifically or have presentations influenced by, or working toward, the subject in general. All three are reprints that can be had fairly cheaply from Dover Publications.
Alternately, for a bit more money, you might consider Alfredo Leon-Garcia's fairly standard text Probability, Statistics, and Random Processes for Electrical Engineers. As he's also subsequently written texts for communication engineering, he certainly has a very information theory/comm theory flavored presentation. The most recent edition is the 3rd, but the 2nd edition is substantially the same for less money on the used market.
I think all four above are relatively good for self-study, though Renyi's presentations are a tad more sophisticated mathematically and may seem more dense to the beginner. | {"set_name": "stack_exchange", "score": 5, "question_id": 243677} |
TITLE: Does there exist a nonzero linear transformation $T$ such that ${\alpha}^{T} T \alpha = 0$ for all $\alpha \in \mathbb{R}^n$?
QUESTION [1 upvotes]: Does there exist a nonzero linear transformation $T$ such that
$$\alpha^t T \alpha = 0\text{ for all $\alpha\in\mathbb{R}^n$,}$$
Where ${\alpha}^{t}$ denotes the transpose of the matrix.
Thanks! =D
REPLY [12 votes]: The property holds if and only if $T$ is skew-symmetric (meaning that $T^t=-T$). (The other answers so far are special cases of this.)
Proof: Let $x=\alpha^t T \alpha$. Since $x$ is a $1\times 1$ matrix, it equals its own transpose. Thus $2x=x+x^t=\alpha^t T \alpha+(\alpha^t T \alpha)^t =\alpha^t (T+T^t)\alpha$, which is a quadratic form in $\alpha$ that corresponds to the symmetric matrix $T+T^t$; thus it is zero for all $\alpha$ iff $T+T^t=0$. | {"set_name": "stack_exchange", "score": 1, "question_id": 12172} |
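The key identity in the accepted answer, $2x=\alpha^t(T+T^t)\alpha$, is easy to sanity-check numerically; the dimension and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# A skew-symmetric T gives a'Ta = 0 for every vector a
B = rng.standard_normal((n, n))
T = B - B.T                        # T^t = -T, so T is skew-symmetric
a = rng.standard_normal(n)
print(a @ T @ a)                   # 0 up to floating-point rounding

# A quadratic form only sees the symmetric part of its matrix:
# a'Ma = a'((M + M^t)/2)a for any square M
M = rng.standard_normal((n, n))
sym = (M + M.T) / 2
print(np.isclose(a @ M @ a, a @ sym @ a))   # True
```

The second check is exactly why $\alpha^t T\alpha\equiv 0$ forces $T+T^t=0$ rather than $T=0$.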
\typeout{TCILATEX Macros for Scientific Word and Scientific WorkPlace 5.5 <06 Oct 2005>.}
\typeout{NOTICE: This macro file is NOT proprietary and may be
freely copied and distributed.}
\makeatletter
\ifx\pdfoutput\relax\let\pdfoutput=\undefined\fi
\newcount\msipdfoutput
\ifx\pdfoutput\undefined
\else
\ifcase\pdfoutput
\else
\msipdfoutput=1
\ifx\paperwidth\undefined
\else
\ifdim\paperheight=0pt\relax
\else
\pdfpageheight\paperheight
\fi
\ifdim\paperwidth=0pt\relax
\else
\pdfpagewidth\paperwidth
\fi
\fi
\fi
\fi
\def\FMTeXButton#1{#1}
\newcount\@hour\newcount\@minute\chardef\@x10\chardef\@xv60
\def\tcitime{
\def\@time{
\@minute\time\@hour\@minute\divide\@hour\@xv
\ifnum\@hour<\@x 0\fi\the\@hour:
\multiply\@hour\@xv\advance\@minute-\@hour
\ifnum\@minute<\@x 0\fi\the\@minute
}}
\def\x@hyperref#1#2#3{
\catcode`\~ = 12
\catcode`\$ = 12
\catcode`\_ = 12
\catcode`\# = 12
\catcode`\& = 12
\catcode`\% = 12
\y@hyperref{#1}{#2}{#3}
}
\def\y@hyperref#1#2#3#4{
#2\ref{#4}#3
\catcode`\~ = 13
\catcode`\$ = 3
\catcode`\_ = 8
\catcode`\# = 6
\catcode`\& = 4
\catcode`\% = 14
}
\@ifundefined{hyperref}{\let\hyperref\x@hyperref}{}
\@ifundefined{msihyperref}{\let\msihyperref\x@hyperref}{}
\@ifundefined{qExtProgCall}{\def\qExtProgCall#1#2#3#4#5#6{\relax}}{}
\def\FILENAME#1{#1}
\def\QCTOpt[#1]#2{
\def\QCTOptB{#1}
\def\QCTOptA{#2}
}
\def\QCTNOpt#1{
\def\QCTOptA{#1}
\let\QCTOptB\empty
}
\def\Qct{
\@ifnextchar[{
\QCTOpt}{\QCTNOpt}
}
\def\QCBOpt[#1]#2{
\def\QCBOptB{#1}
\def\QCBOptA{#2}
}
\def\QCBNOpt#1{
\def\QCBOptA{#1}
\let\QCBOptB\empty
}
\def\Qcb{
\@ifnextchar[{
\QCBOpt}{\QCBNOpt}
}
\def\PrepCapArgs{
\ifx\QCBOptA\empty
\ifx\QCTOptA\empty
{}
\else
\ifx\QCTOptB\empty
{\QCTOptA}
\else
[\QCTOptB]{\QCTOptA}
\fi
\fi
\else
\ifx\QCBOptA\empty
{}
\else
\ifx\QCBOptB\empty
{\QCBOptA}
\else
[\QCBOptB]{\QCBOptA}
\fi
\fi
\fi
}
\newcount\GRAPHICSTYPE
\GRAPHICSTYPE=\z@
\def\GRAPHICSPS#1{
\ifcase\GRAPHICSTYPE
\special{ps: #1}
\or
\special{language "PS", include "#1"}
\fi
}
\def\GRAPHICSHP#1{\special{include #1}}
\def\graffile#1#2#3#4{
\bgroup
\@inlabelfalse
\leavevmode
\@ifundefined{bbl@deactivate}{\def~{\string~}}{\activesoff}
\raise -#4 \BOXTHEFRAME{
\hbox to #2{\raise #3\hbox to #2{\null #1\hfil}}}
\egroup
}
\def\draftbox#1#2#3#4{
\leavevmode\raise -#4 \hbox{
\frame{\rlap{\protect\tiny #1}\hbox to #2
{\vrule height#3 width\z@ depth\z@\hfil}
}
}
}
\newcount\@msidraft
\@msidraft=\z@
\let\nographics=\@msidraft
\newif\ifwasdraft
\wasdraftfalse
\def\GRAPHIC#1#2#3#4#5{
\ifnum\@msidraft=\@ne\draftbox{#2}{#3}{#4}{#5}
\else\graffile{#1}{#3}{#4}{#5}
\fi
}
\def\addtoLaTeXparams#1{
\edef\LaTeXparams{\LaTeXparams #1}}
\newif\ifBoxFrame \BoxFramefalse
\newif\ifOverFrame \OverFramefalse
\newif\ifUnderFrame \UnderFramefalse
\def\BOXTHEFRAME#1{
\hbox{
\ifBoxFrame
\frame{#1}
\else
{#1}
\fi
}
}
\def\doFRAMEparams#1{\BoxFramefalse\OverFramefalse\UnderFramefalse\readFRAMEparams#1\end}
\def\readFRAMEparams#1{
\ifx#1\end
\let\next=\relax
\else
\ifx#1i\dispkind=\z@\fi
\ifx#1d\dispkind=\@ne\fi
\ifx#1f\dispkind=\tw@\fi
\ifx#1t\addtoLaTeXparams{t}\fi
\ifx#1b\addtoLaTeXparams{b}\fi
\ifx#1p\addtoLaTeXparams{p}\fi
\ifx#1h\addtoLaTeXparams{h}\fi
\ifx#1X\BoxFrametrue\fi
\ifx#1O\OverFrametrue\fi
\ifx#1U\UnderFrametrue\fi
\ifx#1w
\ifnum\@msidraft=1\wasdrafttrue\else\wasdraftfalse\fi
\@msidraft=\@ne
\fi
\let\next=\readFRAMEparams
\fi
\next
}
\def\IFRAME#1#2#3#4#5#6{
\bgroup
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
#6
\parindent=0pt
\leftskip=0pt
\rightskip=0pt
\setbox0=\hbox{\QCBOptA}
\@tempdima=#1\relax
\ifOverFrame
\typeout{This is not implemented yet}
\show\HELP
\else
\ifdim\wd0>\@tempdima
\advance\@tempdima by \@tempdima
\ifdim\wd0 >\@tempdima
\setbox1 =\vbox{
\unskip\hbox to \@tempdima{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}
\unskip\hbox to \@tempdima{\parbox[b]{\@tempdima}{\QCBOptA}}
}
\wd1=\@tempdima
\else
\textwidth=\wd0
\setbox1 =\vbox{
\noindent\hbox to \wd0{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\%
\noindent\hbox{\QCBOptA}
}
\wd1=\wd0
\fi
\else
\ifdim\wd0>0pt
\hsize=\@tempdima
\setbox1=\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
\break
\unskip\hbox to \@tempdima{\hfill \QCBOptA\hfill}
}
\wd1=\@tempdima
\else
\hsize=\@tempdima
\setbox1=\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
}
\wd1=\@tempdima
\fi
\fi
\@tempdimb=\ht1
\advance\@tempdimb by -#2
\advance\@tempdimb by #3
\leavevmode
\raise -\@tempdimb \hbox{\box1}
\fi
\egroup
}
\def\DFRAME#1#2#3#4#5{
\vspace\topsep
\hfil\break
\bgroup
\leftskip\@flushglue
\rightskip\@flushglue
\parindent\z@
\parfillskip\z@skip
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
\vbox\bgroup
\ifOverFrame
#5\QCTOptA\par
\fi
\GRAPHIC{#4}{#3}{#1}{#2}{\z@}
\ifUnderFrame
\break#5\QCBOptA
\fi
\egroup
\egroup
\vspace\topsep
\break
}
\def\FFRAME#1#2#3#4#5#6#7{
\@ifundefined{floatstyle}
{
\begin{figure}[#1]
}
{
\ifx#1h
\begin{figure}[H]
\else
\begin{figure}[#1]
\fi
}
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
\ifOverFrame
#4
\ifx\QCTOptA\empty
\else
\ifx\QCTOptB\empty
\caption{\QCTOptA}
\else
\caption[\QCTOptB]{\QCTOptA}
\fi
\fi
\ifUnderFrame\else
\label{#5}
\fi
\else
\UnderFrametrue
\fi
\begin{center}\GRAPHIC{#7}{#6}{#2}{#3}{\z@}\end{center}
\ifUnderFrame
#4
\ifx\QCBOptA\empty
\caption{}
\else
\ifx\QCBOptB\empty
\caption{\QCBOptA}
\else
\caption[\QCBOptB]{\QCBOptA}
\fi
\fi
\label{#5}
\fi
\end{figure}
}
\newcount\dispkind
\def\makeactives{
\catcode`\"=\active
\catcode`\;=\active
\catcode`\:=\active
\catcode`\'=\active
\catcode`\~=\active
}
\bgroup
\makeactives
\gdef\activesoff{
\def"{\string"}
\def;{\string;}
\def:{\string:}
\def'{\string'}
\def~{\string~}
}
\egroup
\def\FRAME#1#2#3#4#5#6#7#8{
\bgroup
\ifnum\@msidraft=\@ne
\wasdrafttrue
\else
\wasdraftfalse
\fi
\def\LaTeXparams{}
\dispkind=\z@
\def\LaTeXparams{}
\doFRAMEparams{#1}
\ifnum\dispkind=\z@\IFRAME{#2}{#3}{#4}{#7}{#8}{#5}\else
\ifnum\dispkind=\@ne\DFRAME{#2}{#3}{#7}{#8}{#5}\else
\ifnum\dispkind=\tw@
\edef\@tempa{\noexpand\FFRAME{\LaTeXparams}}
\@tempa{#2}{#3}{#5}{#6}{#7}{#8}
\fi
\fi
\fi
\ifwasdraft\@msidraft=1\else\@msidraft=0\fi{}
\egroup
}
\def\TEXUX#1{"texux"}
\def\BF#1{{\bf {#1}}}
\def\NEG#1{\leavevmode\hbox{\rlap{\thinspace/}{$#1$}}}
\def\limfunc#1{\mathop{\rm #1}}
\def\func#1{\mathop{\rm #1}\nolimits}
\def\unit#1{\mathord{\thinspace\rm #1}}
\long\def\QQQ#1#2{
\long\expandafter\def\csname#1\endcsname{#2}}
\@ifundefined{QTP}{\def\QTP#1{}}{}
\@ifundefined{QEXCLUDE}{\def\QEXCLUDE#1{}}{}
\@ifundefined{Qlb}{\def\Qlb#1{#1}}{}
\@ifundefined{Qlt}{\def\Qlt#1{#1}}{}
\def\QWE{}
\long\def\QQA#1#2{}
\def\QTR#1#2{{\csname#1\endcsname {#2}}}
\let\QQQuline\uline
\let\QQQsout\sout
\let\QQQuuline\uuline
\let\QQQuwave\uwave
\let\QQQxout\xout
\long\def\TeXButton#1#2{#2}
\long\def\QSubDoc#1#2{#2}
\def\EXPAND#1[#2]#3{}
\def\NOEXPAND#1[#2]#3{}
\def\PROTECTED{}
\def\LaTeXparent#1{}
\def\ChildStyles#1{}
\def\ChildDefaults#1{}
\def\QTagDef#1#2#3{}
\@ifundefined{correctchoice}{\def\correctchoice{\relax}}{}
\@ifundefined{HTML}{\def\HTML#1{\relax}}{}
\@ifundefined{TCIIcon}{\def\TCIIcon#1#2#3#4{\relax}}{}
\if@compatibility
\typeout{Not defining UNICODE U or CustomNote commands for LaTeX 2.09.}
\else
\providecommand{\UNICODE}[2][]{\protect\rule{.1in}{.1in}}
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
\providecommand{\CustomNote}[3][]{\marginpar{#3}}
\fi
\@ifundefined{lambdabar}{
\def\lambdabar{\errmessage{You have used the lambdabar symbol.
This is available for typesetting only in RevTeX styles.}}
}{}
\@ifundefined{StyleEditBeginDoc}{\def\StyleEditBeginDoc{\relax}}{}
\def\QQfnmark#1{\footnotemark}
\def\QQfntext#1#2{\addtocounter{footnote}{#1}\footnotetext{#2}}
\@ifundefined{TCIMAKEINDEX}{}{\makeindex}
\@ifundefined{abstract}{
\def\abstract{
\if@twocolumn
\section*{Abstract (Not appropriate in this style!)}
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}
\quotation
\fi
}
}{
}
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}
\@ifundefined{maketitle}{\def\maketitle#1{}}{}
\@ifundefined{affiliation}{\def\affiliation#1{}}{}
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}
\@ifundefined{newfield}{\def\newfield#1#2{}}{}
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }
\newcount\c@chapter}{}
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}
\@ifundefined{subsection}{\def\subsection#1
{\par(Subsection head:)#1\par }}{}
\@ifundefined{subsubsection}{\def\subsubsection#1
{\par(Subsubsection head:)#1\par }}{}
\@ifundefined{paragraph}{\def\paragraph#1
{\par(Subsubsubsection head:)#1\par }}{}
\@ifundefined{subparagraph}{\def\subparagraph#1
{\par(Subsubsubsubsection head:)#1\par }}{}
\@ifundefined{therefore}{\def\therefore{}}{}
\@ifundefined{backepsilon}{\def\backepsilon{}}{}
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}
\@ifundefined{registered}{
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\text{R}$}\hfil\crcr
\mathhexbox20D}}}}{}
\@ifundefined{Eth}{\def\Eth{}}{}
\@ifundefined{eth}{\def\eth{}}{}
\@ifundefined{Thorn}{\def\Thorn{}}{}
\@ifundefined{thorn}{\def\thorn{}}{}
\def\TEXTsymbol#1{\mbox{$#1$}}
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}
\newdimen\theight
\@ifundefined{Column}{\def\Column{
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{
\rightline{\rlap{\box\z@}}
\vss
}
}
}}{}
\@ifundefined{qed}{\def\qed{
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}
}}{}
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap c/}}}{}
\@ifundefined{tciLaplace}{\def\tciLaplace{\ensuremath{\mathcal{L}}}}{}
\@ifundefined{tciFourier}{\def\tciFourier{\ensuremath{\mathcal{F}}}}{}
\@ifundefined{textcurrency}{\def\textcurrency{\hbox{\rm\rlap xo}}}{}
\@ifundefined{texteuro}{\def\texteuro{\hbox{\rm\rlap C=}}}{}
\@ifundefined{euro}{\def\euro{\hbox{\rm\rlap C=}}}{}
\@ifundefined{textfranc}{\def\textfranc{\hbox{\rm\rlap-F}}}{}
\@ifundefined{textlira}{\def\textlira{\hbox{\rm\rlap L=}}}{}
\@ifundefined{textpeseta}{\def\textpeseta{\hbox{\rm P\negthinspace s}}}{}
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}
\@ifundefined{vvert}{\def\vvert{\Vert}}{}
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{}
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{}
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{}
\@ifundefined{note}{\def\note{$^{\dag}}}{}
\def\newfmtname{LaTeX2e}
\ifx\fmtname\newfmtname
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}
\def\beta{{\Greekmath 010C}}
\def\gamma{{\Greekmath 010D}}
\def\delta{{\Greekmath 010E}}
\def\epsilon{{\Greekmath 010F}}
\def\zeta{{\Greekmath 0110}}
\def\eta{{\Greekmath 0111}}
\def\theta{{\Greekmath 0112}}
\def\iota{{\Greekmath 0113}}
\def\kappa{{\Greekmath 0114}}
\def\lambda{{\Greekmath 0115}}
\def\mu{{\Greekmath 0116}}
\def\nu{{\Greekmath 0117}}
\def\xi{{\Greekmath 0118}}
\def\pi{{\Greekmath 0119}}
\def\rho{{\Greekmath 011A}}
\def\sigma{{\Greekmath 011B}}
\def\tau{{\Greekmath 011C}}
\def\upsilon{{\Greekmath 011D}}
\def\phi{{\Greekmath 011E}}
\def\chi{{\Greekmath 011F}}
\def\psi{{\Greekmath 0120}}
\def\omega{{\Greekmath 0121}}
\def\varepsilon{{\Greekmath 0122}}
\def\vartheta{{\Greekmath 0123}}
\def\varpi{{\Greekmath 0124}}
\def\varrho{{\Greekmath 0125}}
\def\varsigma{{\Greekmath 0126}}
\def\varphi{{\Greekmath 0127}}
\def\nabla{{\Greekmath 0272}}
\def\FindBoldGroup{
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}
}
\def\Greekmath#1#2#3#4{
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}
\else
\mathchar"#1#2#3#4
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}
\else
\mathchar"#1#2#3#4
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{
\newcounter{equationnumber}
\def\mathletters{
\addtocounter{equation}{1}
\edef\@currentlabel{\theequation}
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}
\edef\theequation{\@currentlabel\noexpand\alph{equation}}
}
\def\endmathletters{
\setcounter{equation}{\value{equationnumber}}
}
}{}
\@ifundefined{BibTeX}{
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}
\@ifundefined{AmS}
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\TCItag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{
\global\tag@true
\global\def\@taggnum{(#1)}
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}
\global\def\@currentlabel{#1}}
\def\QATOP#1#2{{#1 \atop #2}}
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}
\def\QABOVE#1#2#3{{#2 \above#1 #3}}
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\tint{\msi@int\textstyle\int}
\def\tiint{\msi@int\textstyle\iint}
\def\tiiint{\msi@int\textstyle\iiint}
\def\tiiiint{\msi@int\textstyle\iiiint}
\def\tidotsint{\msi@int\textstyle\idotsint}
\def\toint{\msi@int\textstyle\oint}
\def\tsum{\mathop{\textstyle \sum }}
\def\tprod{\mathop{\textstyle \prod }}
\def\tbigcap{\mathop{\textstyle \bigcap }}
\def\tbigwedge{\mathop{\textstyle \bigwedge }}
\def\tbigoplus{\mathop{\textstyle \bigoplus }}
\def\tbigodot{\mathop{\textstyle \bigodot }}
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}
\def\tcoprod{\mathop{\textstyle \coprod }}
\def\tbigcup{\mathop{\textstyle \bigcup }}
\def\tbigvee{\mathop{\textstyle \bigvee }}
\def\tbigotimes{\mathop{\textstyle \bigotimes }}
\def\tbiguplus{\mathop{\textstyle \biguplus }}
\newtoks\temptoksa
\newtoks\temptoksb
\newtoks\temptoksc
\def\msi@int#1#2{
\def\@temp{{#1#2\the\temptoksc_{\the\temptoksa}^{\the\temptoksb}}}
\futurelet\@nextcs
\@int
}
\def\@int{
\ifx\@nextcs\limits
\typeout{Found limits}
\temptoksc={\limits}
\let\@next\@intgobble
\else\ifx\@nextcs\nolimits
\typeout{Found nolimits}
\temptoksc={\nolimits}
\let\@next\@intgobble
\else
\typeout{Did not find limits or no limits}
\temptoksc={}
\let\@next\msi@limits
\fi\fi
\@next
}
\def\@intgobble#1{
\typeout{arg is #1}
\msi@limits
}
\def\msi@limits{
\temptoksa={}
\temptoksb={}
\@ifnextchar_{\@limitsa}{\@limitsb}
}
\def\@limitsa_#1{
\temptoksa={#1}
\@ifnextchar^{\@limitsc}{\@temp}
}
\def\@limitsb{
\@ifnextchar^{\@limitsc}{\@temp}
}
\def\@limitsc^#1{
\temptoksb={#1}
\@ifnextchar_{\@limitsd}{\@temp}
}
\def\@limitsd_#1{
\temptoksa={#1}
\@temp
}
\def\dint{\msi@int\displaystyle\int}
\def\diint{\msi@int\displaystyle\iint}
\def\diiint{\msi@int\displaystyle\iiint}
\def\diiiint{\msi@int\displaystyle\iiiint}
\def\didotsint{\msi@int\displaystyle\idotsint}
\def\doint{\msi@int\displaystyle\oint}
\def\dsum{\mathop{\displaystyle \sum }}
\def\dprod{\mathop{\displaystyle \prod }}
\def\dbigcap{\mathop{\displaystyle \bigcap }}
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}
\def\dbigodot{\mathop{\displaystyle \bigodot }}
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}
\def\dcoprod{\mathop{\displaystyle \coprod }}
\def\dbigcup{\mathop{\displaystyle \bigcup }}
\def\dbigvee{\mathop{\displaystyle \bigvee }}
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}
\def\dbiguplus{\mathop{\displaystyle \biguplus }}
\if@compatibility\else
\RequirePackage{amsmath}
\fi
\def\ExitTCILatex{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\ExitTCILatex
\else
\@ifpackageloaded{amsmath}
{\if@compatibility\message{amsmath already loaded}\fi\aftergroup\ExitTCILatex}
{}
\@ifpackageloaded{amstex}
{\if@compatibility\message{amstex already loaded}\fi\aftergroup\ExitTCILatex}
{}
\@ifpackageloaded{amsgen}
{\if@compatibility\message{amsgen already loaded}\fi\aftergroup\ExitTCILatex}
{}
\fi
\egroup
\typeout{TCILATEX defining AMS-like constructs in LaTeX 2.09 COMPATIBILITY MODE}
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}
\def\FN@{\futurelet\next}
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}
\def\ints@{\findlimits@\ints@@}
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}
\def\intic@{
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@}
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}
\def\intdots@{\mathchoice{\plaincdots@}
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}
\glb@settings}
\def\textdef@#1#2#3{\hbox{{
\everymath{#1}
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}
\def\Sb{_\multilimits@}
\def\endSb{\crcr\egroup\egroup\egroup}
\def\Sp{^\multilimits@}
\let\endSp\endSb
\newdimen\ex@
\ex@.2326ex
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\overrightarrow{\mathpalette\overrightarrow@}
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\underrightarrow{\mathpalette\underrightarrow@}
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\TCItag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{
\global\tag@true
\global\def\@taggnum{(#1)}
\global\def\@currentlabel{#1}}
\def\@TCItagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}
\global\def\@currentlabel{#1}}
\@ifundefined{tag}{
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}}
}{}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}
\def\binom#1#2{{#1 \choose #2}}
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}
\makeatother
\endinput
TITLE: Finding the nearest quadratic Bézier curve
QUESTION [2 upvotes]: Given a set of three-dimensional quadratic Bézier curves.
I'm looking for some analytical solution to find the nearest curve to an arbitrary point in space.
I already have a brute force solution with the calculation of the closest point on each curve and selection one with the minimum distance. But this algorithm requires a lot of computational resources.
REPLY [0 votes]: It sounds like you already have a way to calculate the distance from a point to a Bézier curve, and you’d just like to speed things up by avoiding doing this calculation for every curve in your collection.
The way to do this is as outlined in the comment from @J.J Green: you need to enclose each curve in some simple shape for which distance calculations are easy. Then, for a given curve, if the distance to its enclosing shape is larger than your current minimum, you don't need to bother calculating the distance to that curve itself.
Suitable enclosing shapes for quadratic Bézier curves are spheres, axis-aligned bounding boxes, or triangles.
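To make the pruning idea concrete, here is a minimal Python sketch (function names are illustrative; `distance_to_curve` uses dense sampling as a stand-in for whatever exact closest-point routine you already have). The bounding sphere is valid because a Bézier curve lies in the convex hull of its control points:

```python
import numpy as np

def bezier_point(p0, p1, p2, t):
    """Point on the quadratic Bezier curve (p0, p1, p2) at parameter t."""
    return (1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2

def bounding_sphere(p0, p1, p2):
    """A sphere enclosing the curve: by the convex-hull property the
    curve lies inside the control triangle, so a sphere centred at the
    centroid of the control points works."""
    pts = np.array([p0, p1, p2], dtype=float)
    center = pts.mean(axis=0)
    radius = float(np.max(np.linalg.norm(pts - center, axis=1)))
    return center, radius

def distance_to_curve(q, p0, p1, p2, samples=256):
    """Distance from q to the curve by dense sampling (a placeholder
    for an exact closest-point routine)."""
    ts = np.linspace(0.0, 1.0, samples)[:, None]
    pts = (1 - ts)**2 * p0 + 2 * (1 - ts) * ts * p1 + ts**2 * p2
    return float(np.min(np.linalg.norm(pts - q, axis=1)))

def nearest_curve(q, curves):
    """Index and distance of the curve nearest to q, skipping curves
    whose bounding sphere is already farther than the current best."""
    q = np.asarray(q, dtype=float)
    # Cheap lower bound on the distance to each curve.
    bounds = [max(0.0, float(np.linalg.norm(q - c) - r))
              for c, r in (bounding_sphere(*cv) for cv in curves)]
    best_i, best_d = -1, float("inf")
    for i in np.argsort(bounds):          # most promising curves first
        if bounds[i] >= best_d:
            break                          # nothing left can be closer
        d = distance_to_curve(q, *curves[i])
        if d < best_d:
            best_i, best_d = int(i), d
    return best_i, best_d
```

Visiting curves in order of increasing lower bound means the loop usually terminates after evaluating the exact distance for only a handful of curves.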
TITLE: Using trig Identity to show equality of integral and piece-wise function
QUESTION [0 upvotes]: Using the trig identity $$2\sin A\sin B = \cos(A-B)-\cos(A+B)$$
show that $$\int_0^\pi \sin(mx)\sin(nx)dx = \left\{ \begin{array}{lr}0&\text{when }m\neq n\\\pi/2&\text{when }m=n\end{array}\right. $$
where $m, n >0$
After integration, I came with $$\frac{1}{2}\bigg(\frac{\sin((m-n)x}{m-n}-\frac{\sin((m+n)x)}{m+n}\bigg)\Bigg|_0^\pi$$
which isn't entirly clear as why this holds for the piecewise function. So using a sum trig identity I came to $$\frac{\sin(mx)\cos(nx)-\sin(nx)\cos(mx)}{2(m-n)}-\frac{\sin(mx)\cos(nx)+\sin(nx)\cos(mx)}{2(m+n)}\Bigg|_0^\pi$$
but still have no idea where to go from here.
REPLY [1 votes]: For $m\ne n$ you are OK with your expression: for positive integers $m,n$ we have $\sin((m-n)\pi)=\sin((m+n)\pi)=0$, so both terms vanish at $x=\pi$ (and trivially at $x=0$), and the integral is $0$.
Let $m=n:$
$2 \sin A \sin B = \cos(A-B) - \cos(A+B);$
$\sin(mx)\sin(mx) = \frac{1}{2}(1 - \cos(2mx)).$
Integrate both sides from $0$ to $\pi$:
Note that
$$\int_{0}^{\pi} \cos(2mx)\,dx = \frac{1}{2m}\sin(2mx)\Big|_{0}^{\pi} = 0.$$
Hence:
$$\int_{0}^{\pi}\sin^2(mx)\,dx = \frac{\pi}{2}.$$
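Both cases can also be checked symbolically for concrete frequencies; here is a short SymPy sketch (the frequencies $3$, $5$ and $4$ are chosen arbitrarily):

```python
import sympy as sp

x = sp.symbols('x')

# m != n (here 3 and 5): the antiderivative is a combination of
# sin((m-n)x) and sin((m+n)x), which vanish at both 0 and pi.
assert sp.integrate(sp.sin(3*x) * sp.sin(5*x), (x, 0, sp.pi)) == 0

# m == n (here 4): sin^2(mx) = (1 - cos(2mx))/2 integrates to pi/2.
assert sp.integrate(sp.sin(4*x)**2, (x, 0, sp.pi)) == sp.pi/2
```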
TITLE: Field characteristic for a finite product of fields of characteristic $0$
QUESTION [0 upvotes]: Kind of a silly question, but is a finite product of fields of characteristic $0$ also of characteristic $0$? For instance, $\mathbb{C}$ has characteristic $0$, but then does $\mathbb{C}^n, n>1$ also have characteristic $0$?
REPLY [3 votes]: More generally, if $A,B$ are two rings then $\operatorname{char}(A\oplus B)=\operatorname{lcm}(\operatorname{char}(A),\operatorname{char}(B))$. Indeed, if $k\in\Bbb Z$ then $k\cdot (a,b)=(0,0)$ for all $(a,b)\in A\oplus B$ iff $ka=0$ for all $a\in A$ and $kb=0$ for all $b\in B$.
In particular for any (ring or) field $F$ and $n\ge1$, $F^n$ is a ring of same characteristic.
Note that we need to use the definition of characteristic that is applicable to non-unitary rings here.
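The lcm formula can be illustrated in the finite-characteristic case with a small brute-force check over $\mathbb{Z}_m\oplus\mathbb{Z}_n$ (`additive_char` is an illustrative name, and `math.lcm` requires Python 3.9+):

```python
from math import lcm

def additive_char(m, n):
    """Smallest k >= 1 with k*(a, b) = (0, 0) for every (a, b) in Z_m x Z_n."""
    k = 1
    while not all((k * a) % m == 0 and (k * b) % n == 0
                  for a in range(m) for b in range(n)):
        k += 1
    return k

# char(Z_m x Z_n) agrees with lcm(m, n) on these examples.
for m, n in [(2, 3), (4, 6), (5, 5), (1, 7)]:
    assert additive_char(m, n) == lcm(m, n)
```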
TITLE: Derivative of $\int_c^{\sqrt{x}}1\,dt$ - Application of Fundamental Theorem of Calculus
QUESTION [0 upvotes]: I am trying to determine the $\frac{d}{dx}\int_c^{\sqrt{x}}1\,dt$.
I know that the fundamental theorem states the following,
$$\frac{d}{dx}\int_c^x f(t)\,dt=f(x)$$
However in this case the function is a constant, usually if our upper limit of integration is a function itself we end up taking the derivative of the function when applying chain rule as follows,
$\frac{d}{dx}\int_c^{g(x)} f(t) \, dt = f(g(x))\cdot g'(x)$
However, in this case, since the integrand is a constant, can we even apply the chain rule? I think the answer should be as follows,
$$\frac{d}{dx}\int_c^{\sqrt{x}}1\,dt= 1$$
But I'm pretty sure the right answer is
$\frac{d}{dx}\int_c^{\sqrt{x}}1\,dt= \frac{1}{2\sqrt{x}}$
Could someone explain to me why my answer is wrong?
REPLY [2 votes]: Let $f(x) = \displaystyle \int_c^{\sqrt{x}} 1\,dt$, $u(x) = \sqrt{x}, g(w) = \displaystyle \int_c^w 1 \,dt$. You have: $f(x)= (g\circ u)(x)\displaystyle \implies g'(u) =1$ by the Fundamental Theorem of Calculus. By the Chain Rule, you have $f'(x) = g'(u)\cdot u'(x)= 1\cdot \dfrac{1}{2\sqrt{x}}= \dfrac{1}{2\sqrt{x}}$.
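This computation can be confirmed symbolically; a short SymPy sketch evaluates the integral and then differentiates:

```python
import sympy as sp

x, c, t = sp.symbols('x c t', positive=True)

# F(x) = integral from c to sqrt(x) of 1 dt = sqrt(x) - c,
# so F'(x) = 1/(2*sqrt(x)) by the chain rule.
F = sp.integrate(1, (t, c, sp.sqrt(x)))
dF = sp.diff(F, x)
assert sp.simplify(dF - 1 / (2 * sp.sqrt(x))) == 0
```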
TITLE: Convert constant of gravitation to days and AUs
QUESTION [1 upvotes]: I'm working on a problem with celestial bodies and for my purpose days and AUs are more appropriate units than seconds and meters. So I tried to convert the constant of gravitation, $G$, like this:
$$
k= \frac{\frac{\text{AU}^3}{\text{kg}\times \text{D}^2}}{\frac{\text{m}^3}{\text{kg}\times \text{s}^2}}=(\frac{\text{AU}}{\text{m}})^3(\frac{s}{D})^2=\frac{(1.496\times 10^{-11})^3}{(24\times 60\times 60)^2}=4.485\times 10^{-43}
$$
$$
G=6.673\times10^{-11}\times k = 6.673\times10^{-11}\times4.485\times 10^{-43}=2.993\times 10^{-53}
$$
Now I want to do some calculations with this value. First of all, what is the orbital period of a circular orbit around the sun?
$$
v_0=\sqrt{\frac{G M_\odot}{r}}
$$
$$
t = \frac{2\pi r}{v_0}
$$
The initial velocity has been adapted so that the centripetal force equals the force of gravitation, so the orbit will be circular.
For $r=1$ I expected that the orbital period would be a couple of hundred days, since it takes about 365 days for the earth to circle the sun and one AU is the distance between the sun and the earth. However, when I do the calculation I get that $t=8.14462\times 10^{11}$. What did I do wrong?
REPLY [0 votes]: G = 6.673 × 10 −11 N*m^2/kg^2
(Freedman, Roger; Geller, Robert M.; Kaufmann, William J.. Universe (Page A6). W. H. Freeman. Kindle Edition.)
N = kg*m/s^2, so the units of G as defined above is (kg * m^3)/(kg^2 * S^2) or m^3/(kg * S^2)
To convert from seconds to days:
60s/min * 60min/hr * 24hr/day = 86,400 sec/day
and from meters to AU:
1 AU = 1.4960 × 10^11 m
(Freedman, Roger; Geller, Robert M.; Kaufmann, William J.. Universe (Page A6). W. H. Freeman. Kindle Edition.)
G = 6.673 × 10^-11 (kg * m^3)/(kg^2 * s^2)
= 6.673 × 10^-11 (kg * m^3)/(kg^2 * s^2) * ((1 AU / 1.4960 × 10^11 m)^3) / ((1 d / 86400 s)^2)
= 6.673 × 10^-11 * (6.68445 x 10^-12)^3 / (1.1574 x 10^-05)^2
= 6.673 × 10^-11 * 2.9868x10^-34 / 1.33959x10^-10
= 6.673 × 10^-11 * 2.2296 x 10^-24
G = 1.4878 x 10^-34 AU^3 /(kg*d^2)
Then you make an assumption of circular motion.
To do so the following equations apply (Giancoli Douglas. General Physics (Page 88) Prentice-Hall, 1984):
a = v^2 / r, the acceleration of a particle moving with tangential velocity, v, in a circle of radius, r.
g = G * (m / r^2), the acceleration due to gravity of a body of mass, m, at distance, r.
a = g
and
v^2 / r = G * (m / r^2)
thus
[1] v^2 = G * (m / r), which your first equation.
Then you use a form of the average velocity equation:
(Giancoli Douglas. General Physics (Page 15, 90) Prentice-Hall, 1984)
v = (x1 - x0) / (t1 - t0)
= x1 / t1 for x0 = 0, t0 = 0
x1 = 2*(3.14)r for the distance around the circumference of a circle
t1 = T, the revolution period
thus
v = 2(3.14)r / T
or
[2] T = 2(3.14)*r / v
Solving [1]
G = 1.4878 x 10^-34 AU^3 /(kg*d^2)
M = 1.989 × 10^30 kg (Freedman, Roger; Geller, Robert M.; Kaufmann, William J.. Universe (Page A6). W. H. Freeman. Kindle Edition)
r = 1 AU
v = SQRT[(1.4878 x 10^-34 AU^3 /(kg*d^2)) * 1.989 × 10^30 kg / 1 AU]
= SQRT[2.9593 x 10^-04]
= 1.72 x 10^-02 AU/day
Then using the velocity to solve [2]
T = 2 * (3.14) * 1 AU / 1.72 x 10^-02 AU/day
T = 3.65246 X 10^02 days
However, you can go further if you substitute [2] in [1]:
[1] v^2 = G * (m / r)
[2] v = 2*(pi)*r / T
(4 * (pi)^2 * r^2) / T^2 = G * (m / r)
[3] (4*pi^2)/(G*m) = T^2 / r^3
Thus any body orbiting a mass, m, will have its period and orbital distance related by a constant, (4*pi^2)/(G*m).
This is core of Kepler's third law, which allows you to relate the orbital distance and period of any two bodies orbiting the same mass with the following equation:
[4] T1^2 / T2^2 = r1^3 / r2^3
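The unit conversion and the period check above can be reproduced in a few lines of Python (the constants are the approximate values quoted in the answer):

```python
import math

# Constants, using the approximate values quoted above.
G_SI = 6.673e-11        # m^3 / (kg * s^2)
AU = 1.4960e11          # m
DAY = 86400.0           # s
M_SUN = 1.989e30        # kg

# Convert G to AU^3 / (kg * day^2): multiply by (1 AU / AU in m)^3
# and divide by (1 day / day in s)^2.
G = G_SI * DAY**2 / AU**3           # ~1.4878e-34

# Circular-orbit speed at r = 1 AU from equation [1], then the
# period from equation [2].
r = 1.0                              # AU
v = math.sqrt(G * M_SUN / r)         # AU / day
T = 2 * math.pi * r / v              # ~365 days
print(f"G = {G:.4e} AU^3/(kg d^2), T = {T:.1f} days")
```

Getting roughly 365 days out confirms that the converted value of G is consistent.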
TITLE: Prove that $\{f_n\}$ does not converge uniformly on $(0,\infty)$.
QUESTION [0 upvotes]: $\left\{f_n\right\}$ is given as $\frac{\sqrt{n}x}{2+\sqrt{n}x^4}$, prove that this function does not converge uniformly on $(0,\infty)$. I tried to approach this question using the smallest upper bound:
$$\sup_{x \in (0,\infty)}\left|f_n(x)-f(x)\right|$$
$$=\sup_{x \in (0,\infty)}\left|\frac{\sqrt{n}x}{2+\sqrt{n}x^4}-\frac{1}{x^3}\right|$$
$$\geq f_n\left(\frac{1}{\sqrt{n}}\right)$$
$$=\frac{1}{2+\frac{1}{\sqrt{n}^3}}$$
$$=\frac{1}{2} $$ as $n \to \infty$
which is $\neq \frac{1}{x^3}$ ?
But it is possible that $\frac{1}{x^3} = \frac{1}{2}$ by letting $x=2^{1/3}$. Which part did I get wrong?
REPLY [0 votes]: You have to find the supremum of
$$\left|\frac{\sqrt nx}{2+\sqrt nx^4}-\frac{1}{x^3}\right|$$
on the interval $(0,\infty)$. And I can't follow your work after that. It's not clear to me where that inequality came from. Note that for all $n\in\mathbb{N}$,
$$\lim_{x\to 0^+}\left(\frac{\sqrt nx}{2+\sqrt nx^4}-\frac{1}{x^3}\right)=-\infty$$
Thus
$$\sup_{x\in (0,\infty)}\left|\frac{\sqrt nx}{2+\sqrt nx^4}-\frac{1}{x^3}\right|=+\infty $$
Since this is true for all $n\in\mathbb{N}$, it follows that we don't have uniform convergence on $(0,\infty)$. You can also say $d_n\to\infty$, where for each $n$, $d_n$ is defined as the supremum above.
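A quick numerical illustration of why the convergence is pointwise but not uniform (a NumPy sketch; the sampled points near $0$ stand in for the supremum over $(0,\infty)$):

```python
import numpy as np

def f_n(x, n):
    return np.sqrt(n) * x / (2 + np.sqrt(n) * x**4)

def f(x):
    return 1.0 / x**3

# Pointwise convergence at a fixed x > 0: the error shrinks with n.
errs = [abs(f_n(1.5, n) - f(1.5)) for n in (10, 10**4, 10**8)]
assert errs[0] > errs[1] > errs[2]

# But for every fixed n the difference blows up near x = 0, so the
# supremum of |f_n(x) - f(x)| over (0, inf) is +inf for all n.
xs = 10.0 ** -np.arange(1, 8)   # x = 0.1, 0.01, ..., 1e-7
for n in (10, 10**4, 10**8):
    assert np.max(np.abs(f_n(xs, n) - f(xs))) > 1e10
```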
TITLE: If everything in the universe doubled in size overnight, would it be noticeable?
QUESTION [3 upvotes]: By my understanding, if everything doubled in size, such as the Sun and the Earth, and because the space in between them (which is nothing) can't expand, would the gravities greatly change and the Earth be pulled into the Sun?
REPLY [0 votes]: If everything expands in the same proportion, it requires more energy to do everything, because the law of conservation of energy states that energy cannot be created or destroyed but only transformed from one form to another.
TITLE: Proof that every field is perfect?
QUESTION [19 upvotes]: The following must be wrong, since it shows that every field is perfect, which I gather is not so. But I can't find the error:
Suppose $E/K$ is a field extension and $p\in K[x]$ is irreducible (in $K[x]$). Then every root of $p$ in $E$ is simple.
Proof: Suppose OTOH that $\lambda\in E$ and $(x-\lambda)^2\mid p(x)$. Then $(x-\lambda)\mid p'$, so $\gcd_E(p,p')\ne1$. But the euclidean algorithm shows that $\gcd_K(p,p')=\gcd_E(p,p')$, hence $p$ is not irreducible.
REPLY [29 votes]: The problem is that you can have $p'=0$, so $\gcd(p,p')=p$, which doesn't imply that $p$ is reducible.
To understand how this might happen, suppose
$q$ is prime.$\\[4pt]$
$\text{char}(K)=q$.
and suppose $c\in E$ is such that $c^q\in K$, but $c\not\in K$.
Then letting $p(x)=x^q-c^q$, we get $p'(x)=0$.
Claim $p$ is irreducible in $K[x]$.
To verify the irreducibility of $p$, suppose $f\mid p$ in $K[x]$, where $f$ is a monic polynomial of degree $n$, with $0 < n < q$.
Since $f\mid p$ in $K[x]$, we also have $f\mid p$ in $E[x]$.
Since $q$ is prime and $\text{char}(K)=q$, we have the identity
$$x^q-c^q=(x-c)^q$$
hence, since $f$ is monic and $\text{deg}(f)=n$, it follows that $f=(x-c)^n$.
By the binomial theorem, the coefficient of the $x^{n-1}$ term of $f$ is $-nc$.
But then from $0 < n < q$ and $c\not\in K$, we get $-nc\not\in K$, contrary to $f\in K[x]$.
Therefore $p$ is irreducible in $K[x]$, as claimed.
For an explicit example of such a polynomial $p$, let $t$ be an indeterminate, and let
$K=F_q(t^q)$.$\\[4pt]$
$E=F_q(t)$.$\\[4pt]$
$p(x)=x^q-t^q$.
Then we have
$\text{char}(K)=q$.$\\[4pt]$
$t\in E$.$\\[4pt]$
$t^q\in K$, but $t\not\in K$.
hence $p$ is irreducible in $K[x]$ and $p'=0$.
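The "freshman's dream" identity $x^q-c^q=(x-c)^q$ used above can be sanity-checked with SymPy by verifying that every middle binomial coefficient of $(x-c)^q$ is divisible by $q$ (here $q=5$; any prime works):

```python
from sympy import symbols, expand, Poly

x, c = symbols('x c')
q = 5  # any prime

# In characteristic q, (x - c)^q = x^q - c^q because every middle
# binomial coefficient C(q, k), 0 < k < q, is divisible by q.
diff = expand((x - c)**q - (x**q - c**q))
assert all(coef % q == 0 for coef in Poly(diff, x, c).coeffs())
```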
\section{A two-arms estimate in a slab}
\label{sec5}
Let $q : \R^d \rightarrow \R$ satisfying Assumption \ref{ass1} for some $\beta>d$. In this section, we consider two parameters $0<\gamma<a^2<1$ and study connectivity properties in $[-4R,4R]^2 \times [0,R^a]$. Our main goal is to prove the following uniqueness quasi-planar result that plays the role of \eqref{eq:2armsintro} from the sketch of proof. Recall that for $d'\leq d$, we routinely view $D\subset\R^{d'}$ as the subset $D \times \{ 0\}^{d-d'}$ and that we let $\calP_t:=\{x \in \R^3 : x_3=t\}$.
\begin{proposition}\label{prop:2arms*}
There exists $b>0$ that depends only on the dimension $d$ such that the following holds if $0<\gamma<a^2<b$. For all $\delta>0$ there exist $R_0,c>0$ such that for every $R \ge R_0$,
\[
\mathbb P\Big[\begin{array}{c}\text{all the c.c.~of $\{ f_{R^\gamma} \ge 2R^{-3/2} \} \cap [-2R,2R]^2$ of diameter at least $ \delta R$}\\
\text{belong to the same c.c.\ of $\{ f_{R^\gamma} \ge R^{-3/2} \}\cap[-4R,4R]^2\times[0,R^a]$}\end{array}\Big] \ge 1-\exp(-R^c).
\]
\end{proposition}
\begin{remark}
The same proof gives Proposition \ref{prop:2arms*} with $2R^{-3/2}$ (resp.\ $R^{-3/2}$) replaced by $R^{-3/2}$ (resp.\ $0$).
\end{remark}
Before proving Proposition~\ref{prop:2arms*}, let us state the following elementary planar existence result which, in combination with Proposition \ref{prop:2arms*}, will help us create large components in $\{ f_{R^\gamma} \ge R^{-3/2}\}$ with high probability.
\begin{lemma}[Existence of a macroscopic planar component]\label{lem:macro*}
For every $\delta > 0$ there exists $R_0>0$ such that for every $R \ge R_0$,
\[
\mathbb{P}\big[\text{$\{ f_{R^\gamma} \ge 2R^{-3/2} \} \cap [-R,R]^2$ contains a c.c.~of diameter $ \ge \delta R$}\big] \ge 1-\left( \frac{1}{2} \right)^{\lfloor 1/(2\delta) \rfloor^2-1}.
\]
\end{lemma}
\begin{proof}
For every $\delta>0$, there exist at least $\lfloor 1/(2\delta) \rfloor^2$ squares of side length $\delta R$ which are included in $[-R,R]^2$ and are at a mutual distance larger than $\delta R$. An easy application of Lemma \ref{lem:mani*} and the duality between $\{f_{R^\gamma}>0\}$ and $\{f_{R^\gamma}<0\}$ implies that each square is crossed by $\{f_{R^\gamma}\geq 0\}$ with probability $1/2$. Choose $R_0$ such that $R^\gamma \le \delta R$ for all $R \ge R_0$. Then, for all such $R$,
\[
\mathbb{P}\big[\text{$\{ f_{R^\gamma} \ge 0\} \cap [-R,R]^2$ contains a c.c.~of diameter at least $\delta R$}\big] \ge 1-\left( \frac{1}{2} \right)^{\lfloor 1/(2\delta) \rfloor^2}.
\]
We conclude by applying Lemma \ref{lem:Cameron-Martin} to the event on the left-hand side with $r=R^\gamma$ and $t=-2R^{-3/2}$.
\end{proof}
Before proving Proposition \ref{prop:2arms*}, let us make two remarks that are central to the proof. The first remark is mainly about the use of the local FKG inequality in the proof (which is more subtle than one may expect). The second remark is about the fact that we can use RSW results at level $R^{-3/2}$ instead of $0$.
\begin{remark}\label{rmk:conditioning_and_fkg}
In the proof of Proposition \ref{prop:2arms*}, we will work in $\R^3 \subset \R^d$ and frequently condition the field $f_{R^\gamma}$ with respect to its values on certain subsets. Note that, since the field is a.s.\ analytic, it is determined by its values on a countable number of points. In particular, for any subset $U\subseteq\R^3$, the law of $f_{R^\gamma}$ conditioned on $(f_{R^\gamma})_{|U}$ is still Gaussian.
Moreover, if $V\subseteq\R^3$ is such that for each $x\in U$ and $y\in V$, $\E[f_{R^\gamma}(x)f_{R^\gamma}(y)]=0$, then the law of $(f_{R^\gamma})_{|V}$ is unaffected under the conditioning on $(f_{R^\gamma})_{|U}$. Furthermore, by the local FKG inequality, see Corollary \ref{cor:FKG}, the following property holds, which is tailored to the purposes of the present section:
\medskip
Fix $W,U_A,U_B\subseteq\R^3$ and let $A \in \mathcal{F}_{U_A}$ and $B \in \mathcal{F}_{U_B}$ (recall the definition of these $\sigma$-algebras from Section \ref{ssec:not}). Assume that $B$ is increasing, that $A$ is increasing on the set of points in $\R^3$ at a distance at most $R^\gamma+1$ from $U_B$ and that $\textup{dist}(U_A,W) \ge 2R^\gamma+1$. Then, applying Corollary \ref{cor:FKG} to the conditional (Gaussian) measure $\prob[\,\cdot\, |\, (f_{R^\gamma})_{|W}]$ with the choices $U=U_A$, $V=U_B$, $\phi=1_A$, $\psi=1_B$ and $\delta=1$, one obtains that
\begin{equation}
\label{eq:conditionalFKGlocal}
\prob[f_{R^\gamma} \in A\cap B \mid (f_{R^\gamma})_{|W}]\geq\prob[f_{R^\gamma} \in A \mid (f_{R^\gamma})_{|W}] \times \prob[f_{R^\gamma} \in B \mid (f_{R^\gamma})_{|W}].
\end{equation}
\end{remark}
\begin{remark}\label{rem:sprinkling_rswetc}
As one can notice in Proposition \ref{prop:2arms*}, our goal in this section is to connect planar components by using paths in $\{ f_{R^\gamma} \ge R^{-3/2} \}$. As a result, it will be very useful to have at our disposal RSW results at level $R^{-3/2}$ instead of $0$. Let us first recall that since the RSW theorem (Theorem \ref{thm:rsw}) holds with universal constants, then it holds with $f_{R^\gamma}$ instead of $f$. Next, by applying Lemma \ref{lem:Cameron-Martin} (to $r=R^\gamma$ and $t=-R^{-3/2}$), we obtain that there exits $R_0>0$ such that the following holds:
\begin{equation}\label{eq:sprinkling_rsw**}
\begin{array}{c}
\text{If $R>R_0$ then Theorem \ref{thm:rsw} holds with $\{ f \ge 0 \}$ replaced by $\{ f_{R^\gamma} \ge R^{-3/2} \}$}\\
\text{in the definition of $\cross(\rho R,R)$.}
\end{array}
\end{equation}
The same argument implies that there exists $R_0>0$ such that we have the following:
\begin{equation}\label{eq:sprinkling_poly**}
\begin{array}{c}
\text{If $R>R_0$ then Proposition \ref{prop:polynom_absctract} holds for any $\rho \in [R^\gamma,R]$}\\
\text{and with $\{ f_r \ge 0 \}$ replaced by $\{ f_{R^\gamma} \ge R^{-3/2} \}$.}
\end{array}
\end{equation}
\end{remark}
Let us now prove Proposition \ref{prop:2arms*}.
\begin{proof}[Proof of Proposition \ref{prop:2arms*}]
Recall that we consider two parameters $0<\gamma<a^2<1$ which are both assumed to be small. Throughout the proof, we will frequently condition on $(f_{R^\gamma})_{|\calP_0}$. We will denote by $\widetilde{\prob}$ the probability law with this conditioning (viewed as a regular conditional probability measure). Consider the event
\begin{equation}\label{eq:mw_def}
\textup{Sprouts}(R):=\left\{\begin{array}{c}\text{every continuous path included in $\{f_{R^\gamma} \geq 2R^{-3/2} \} \cap [-2R,2R]^2$ of}\\
\text{diameter at least $R^a$ is connected to $\mathcal{P}_{R^{a^2}}$ by a path included in}\\
\text{$\{ f_{R^\gamma} \ge R^{-3/2} \} \cap ([-2R,2R]^2\times[0,R^{a^2}])$ of diameter $\le 3R^a$}\end{array}\right\}.
\end{equation}
At this point, we assume that $a$ and $\gamma/a$ are small enough for Proposition \ref{prop:mw} to apply at scale $R^a$, with truncation exponent $\gamma/a$ instead of $\gamma$ and with $\theta_0=1/2$. By Proposition~\ref{prop:mw} (at $\ell=R^{-3/2}$) followed by a union bound, there exist $R_0,c_0>0$ such that if $R\geq R_0$,
\begin{equation}\label{eq:sprouts_1}
\prob[\textup{Sprouts}(R)]\geq 1-e^{-R^{c_0}}.
\end{equation}
Let $\widetilde{\textup{Sprouts}}(R)$ be the $(f_{R^\gamma})_{|\calP_0}$-measurable event
\begin{equation}\label{eq:sprouts_tilde_def}
\widetilde{\textup{Sprouts}}(R):=\left\{\widetilde{\prob}[\textup{Sprouts}(R)]\geq 1-e^{-R^{c_0}/2}\right\}.
\end{equation}
Then, by \eqref{eq:sprouts_1} and Markov's inequality applied to $\widetilde{\textup{Sprouts}}(R)^c$, one finds for $R\geq R_0$,
\begin{equation}\label{eq:sprouts_2}
\prob[\widetilde{\textup{Sprouts}}(R)]\geq 1-e^{-R^{c_0}/2}.
\end{equation}
Let $\delta>0$ and $x\in[-2R,2R]^2$ and consider a deterministic path $\mathcal{C}_0\subset D(x,5\delta R)$ of diameter $\delta R$. By Proposition \ref{prop:polynom_absctract} (that we can apply since $\gamma<a$) and \eqref{eq:sprinkling_poly**}, there exist $R_0>0$, a universal $\eta>0$, and a family $(y_i)_i$ of points at mutual distances at least $20R^a$ and at a distance at most $R^a$ from $\mathcal{C}_0$ such that if $R \ge R_0$ and if we let
\begin{align}\label{eq:poly_def} \nonumber
\text{Conn}_i(R)&:=\Big\{\begin{array}{c} D(y_i,R^a)\text{ is connected~to $\partial D(x,20\delta R)$}\\
\text{ in $(\{f_{R^\gamma} \ge R^{-3/2} \} \cap D(x,20\delta R) ) \setminus ( \cup_{j \ne i} D(y_j,10R^a))$}\end{array}\Big\},\\
\textup{Poly}_{\mathcal{C}_0}(R)&:=\{ \#\{i : \textup{Conn}_i(R) \text{ holds} \} \ge (\delta R^{(1-a)})^\eta \},
\end{align}
then
\begin{equation}\label{eq:poly*}
\Pro [\textup{Poly}_{\mathcal{C}_0}(R)] \geq \eta.
\end{equation}
The following lemma is the core of the proof of Proposition \ref{prop:2arms*}. Conditionally on $(f_{R^\gamma})_{|\calP_0}$ and on the (very likely) event $\widetilde{\textup{Sprouts}}$, one may connect two given large connected components of $\{f_{R^\gamma}\geq 2R^{-3/2}\}\cap[-2R,2R]^2$ by a path in $\{f_{R^\gamma}\geq R^{-3/2}\}\cap[-4R,4R]^2\times[0,R^a]$ with very good probability. To conclude the proof, we will essentially apply this lemma to all pairs of connected components of this set.
\begin{lemma}\label{lemma:two_arms_1}
Suppose that $a$ satisfies $5a/(1-a)<\eta$. There exist $c,R_0>0$ such that the following holds. Assume that $\widetilde{\textup{Sprouts}}(R)$ is satisfied and let $\calC$ and $\calC'$ be two continuous paths included in $\{f_{R^\gamma}\geq 2R^{-3/2}\}\cap[-2R,2R]^2$ of diameter $\delta R$ and at a mutual distance at least $100\delta R$. If $R \ge R_0$ then
\[
\widetilde{\prob}\left[\calC\leftrightarrow\calC' \text{ in } ([-4R,4R]^2\times[0,R^a])\cap\{f_{R^\gamma}\geq R^{-3/2}\}\right]\geq 1-e^{-R^c}.
\]
\end{lemma}
Before proving the lemma, we briefly conclude the proof of Proposition~\ref{prop:2arms*}. Recall that we have already assumed that $a$ and $\gamma/a$ are small enough. In order to apply Lemma \ref{lemma:two_arms_1}, we assume in addition that $5a/(1-a)<\eta$. This also holds if $a$ is small enough.
\medskip
Now let $\delta_0:=\delta/1000$ and observe that if $\mathcal{D}$ and $\mathcal{D}'$ are two connected components of $\{ f_{R^\gamma} \ge 2R^{-3/2} \} \cap [-2R,2R]^2$ of diameter larger than or equal to $ \delta R$ then there exist two continuous paths $\mathcal{C}$ and $\mathcal{C}'$ of diameter $\delta_0 R$, included in $\mathcal{D},\mathcal{D}'$ respectively and satisfying $\textup{dist}(\mathcal{C},\mathcal{C}') \ge 100\delta_0 R$. As a result, Proposition \ref{prop:2arms*} is a consequence of Lemma \ref{lemma:two_arms_1} and a union bound over all pairs of connected components of $\{ f_{R^\gamma} \ge 2R^{-3/2} \} \cap [-2R,2R]^2$ of diameter $\ge \delta R$, the number of which can be controlled except on an event of suitably small probability as follows.
\medskip
Let $c>0$ be as in Lemma \ref{lemma:two_arms_1}. The probability that there are more than $e^{R^c/2}$ connected components in $\{ f_{R^\gamma}\ge 2R^{-3/2} \} \cap [-2R,2R]^2$ is less than $e^{-R^c/4}$ if $R$ is sufficiently large. This is a direct consequence of Markov's inequality and the fact that the expectation of the number of connected components of $\{ f_{R^\gamma}\ge 2R^{-3/2} \} \cap [-2R,2R]^2$ is less than $CR^2$ for some $C>0$ that depends only on $q$, as soon as $R$ is sufficiently large, see Lemma \ref{L:numbercomps}. This completes the proof of Proposition~\ref{prop:2arms*}, subject to the validity of Lemma~\ref{lemma:two_arms_1}.
\end{proof}
We now proceed with the proof of Lemma~\ref{lemma:two_arms_1}.
\begin{proof}[Proof of Lemma~\ref{lemma:two_arms_1}]
Throughout the proof, crucially, the constants do not depend on $\calC$ and $\calC'$. The proof is split into two steps.
\paragraph{Step 1.} In this step, we consider copies of the event $\textup{Poly}_{\calC}(R)$ at different heights and show, using \eqref{eq:poly*}, that with very good probability, at some height between $R^a/2$ and $R^a$ there exist polynomially many paths that, when projected onto $\calP_0$, connect $\calC$ and $\calC'$ up to distance $O(R^a)$.
\medskip
Let $x,x'\in[-2R,2R]^2$ be such that $\calC\subset D(x,5\delta R)$ and $\calC'\subset D(x',5\delta R)$. Let $(y_i)_i$ (resp.~$(y_j')_j$) be a family of points associated to $\calC$ (resp.~$\calC'$) in the same way as the points associated to $\calC_0$ in the definition of $\textup{Poly}_{\calC_0}(R)$ in \eqref{eq:poly_def} above. For every $i$ (resp.~$j$), let $\mathcal{C}_i$ (resp.~$\mathcal{C}_j'$) be a path included in $\mathcal{C}$ (resp.~$\calC'$) of diameter $R^a$ and at a distance at most $ R^a$ from $y_i$ (resp.~$y_j'$). (These sub-paths exist and are disjoint as $i$ and $j$ vary if $R$ is sufficiently large.)
\medskip
Let $\textup{Circ}_{\mathcal{C},\mathcal{C}'}(R)$ denote the event that
\begin{itemize}[noitemsep]
\item[i)] there is a circuit in $\{ f_{R^\gamma} \ge R^{-3/2} \} \cap ( D(x,20\delta R) \setminus D(x,10\delta R) )$ which surrounds the inner disc $D(x,10\delta R)$,
\item[ii)] the analogous event holds with $x'$ instead of $x$, and
\item[iii)] $D(x,10\delta R)$ is connected to $D(x',10\delta R)$ by a path included in $\{ f_{R^\gamma} \ge R^{-3/2} \} \cap [-4R,4R]^2$.
\end{itemize}
Introduce
\[
\textup{Poly}_{\mathcal{C},\mathcal{C}'}(R):=\textup{Poly}_{\mathcal{C}}(R) \cap \textup{Poly}_{\mathcal{C}'}(R) \cap \textup{Circ}_{\calC,\calC'}(R).
\]
For $0\le k\le \bar k$ with $\bar{k}:=\lfloor R^{a-\gamma}/2 \rfloor$, let $\textup{Poly}_{\mathcal{C},\mathcal{C}'}^k(R)$ (resp.~$\textup{Conn}_i^k(R)$) denote the event $\textup{Poly}_{\mathcal{C},\mathcal{C}'}(R)$ (resp.~$\textup{Conn}_i(R)$) translated by $h_k\textbf{e}_3$, where $h_k:=\frac{R^a}{2}+kR^\gamma$.
\medskip
Note that $\mathcal{P}_{h_0}=\mathcal{P}_{R^a/2}$ and that the planes $\mathcal{P}_{h_k}$ are at mutual distances $R^\gamma$ and are included in $\R^2 \times [R^a/2,R^a]$. Since $f_{R^\gamma}$ is $R^\gamma$-dependent, \eqref{eq:poly*}, the RSW theorem (see \eqref{eq:sprinkling_rsw**}) and standard gluing constructions (as described in Section \ref{ss:rsw}) imply the existence of $c>0$ that depends only on $\delta$ such that for all $R$ sufficiently large
\begin{equation}\label{eq:layers_poly*}
\widetilde{\Pro} \bigg[\bigcup_{ 0\le k\le \bar{k}}\textup{Poly}_{\mathcal{C},\mathcal{C}'}^k(R)\bigg]=\Pro \bigg[\bigcup_{ 0\le k\le \bar{k}}\textup{Poly}_{\mathcal{C},\mathcal{C}'}^k(R)\bigg] \geq 1-e^{-cR^{a-\gamma}}.
\end{equation}
If there exists $k \in \{0,\dots,\bar{k} \}$ such that $\textup{Poly}_{\mathcal{C},\mathcal{C}'}^k(R)$ holds, let $k_{\rm max}$ be the largest such $k$ and define $\bar{I},\bar{I}'$ by $i \in \bar{I}$ if and only if $\textup{Conn}_i^{k_{\rm max}}(R)$ occurs (and similarly for $\bar{I}'$). For every $k_0 \in \{0,\dots,\bar{k}\}$ and every $\bar{I}_0,\bar{I}_0'$, let
\[
A_{k_0,I_0,I_0'}=\Big(\bigcup_k\textup{Poly}^k_{\calC,\calC'}(R) \Big)\cap\{ k_{\rm max}=k_0,\bar{I}=\bar{I}_0,\bar{I}'=\bar{I}_0'\}.
\]
The event $A_{k_0,I_0,I_0'}$ is measurable with respect to the set
\begin{equation}\label{eq:a_meas}
\{x_3\geq h_{k_0}\}\setminus\Big(\bigcup_i D(y_i,R^a)\cup\bigcup_j D(y_j',R^a)\Big)
\end{equation}
and increasing in the set
\begin{equation}\label{eq:a_increasing}
\Big(\bigcup_{i \in I_0} D(y_i,10R^a) \cup \bigcup_{j \in I_0'} D(y_j',10R^a)\Big) \times [0,h_{k_0}+R^\gamma/2].
\end{equation}
\paragraph{Step 2.} In this step we show that, given $k_0\in\{1,\dots,\overline{k}\}$ and $I_0$ a family of indices, it is very likely that there exists some $i\in I_0$ such that $\calC_i$ is connected to a path in $D(y_i,5R^a)\times\{h_{k_0}\}$ of diameter at least $R^a$. Here the difficulty is that we want to do this under $\widetilde{\prob}$, i.e., conditionally on $(f_{R^\gamma})_{|\calP_0}$. In doing so we lose the independence of $f_{R^\gamma}$ restricted to the tubes $D(y_i,5R^a)\times[0,h_{k_0}]$. This lack of independence is replaced with Claim \ref{cl:two_arm_1} below.
\medskip
Fix $k_0\in\{1,\dots,\overline{k}\}$ and $I_0$ a family of indices such that
\begin{equation}\label{eq:index_set_size}
|I_0|\geq (\delta R^{(1-a)})^\eta\, .
\end{equation}
Denote the elements of $I_0$ by $i_1,\dots,i_{|I_0|}$. For each $l\in\{1,\dots,|I_0|\}$, let $H_l$ be the event that there exists an index $1\leq l'\leq l$ such that $\calC_{i_{l'}}$ is connected by a path in $(D(y_{i_{l'}},5R^a)\times[0,h_{k_0}])\cap\{f_{R^\gamma}\geq R^{-3/2}\}$ to a circuit surrounding $D(y_{i_{l'}},R^a)\times\{h_{k_0}\}$ in $(D(y_{i_{l'}},5R^a)\times\{h_{k_0}\})\cap\{f_{R^\gamma}\geq R^{-3/2}\}$.
\begin{claim}\label{cl:two_arm_1}
Let $c_0$ be as in \eqref{eq:sprouts_1}. There exist $c,R_0>0$ such that the following holds if $R\geq R_0$. Let $l\in\{1,\dots,|I_0|-1\}$ and assume that
\[
\widetilde{\prob}\left[H_l\right]\leq 1-e^{-R^{c_0}/4}.
\]
Then, on $\widetilde{\textup{Sprouts}}(R)$,
\[
\widetilde{\prob}\left[H_{l+1}\, |\, H_l^c\right]\geq cR^{-5a}.
\]
Moreover, on $\widetilde{\textup{Sprouts}}(R)$, we have $\widetilde{\prob}[H_1]\geq cR^{-5a}$.
\end{claim}
Before proving Claim \ref{cl:two_arm_1}, let us complete the proof of Lemma~\ref{lemma:two_arms_1}. By Claim \ref{cl:two_arm_1}, we deduce that on $\widetilde{\textup{Sprouts}}(R)$, the probability of the event $H_{|I_0|}$ that there exists $i\in I_0$ for which $\calC_i$ is connected by a path in $(D(y_i,5R^a)\times[0,h_{k_0}])\cap \{f_{R^\gamma} \ge R^{-3/2} \}$ to a circuit surrounding $D(y_i,R^a) \times \{h_{k_0}\}$ in $(D(y_i,5R^a) \times \{h_{k_0}\})\cap\{f_{R^\gamma}\geq R^{-3/2}\}$ satisfies
\[
\widetilde{\prob}[H_{|I_0|}] \geq 1-(1-c R^{-5a})^{|I_0|}\geq 1-e^{-R^{c'}}
\]
for $R$ large enough and some $c'>0$ since $|I_0|\geq (\delta R^{(1-a)})^\eta$ by \eqref{eq:index_set_size} and since we have assumed that $5a<\eta(1-a)$. Clearly, the same holds for the analogous construction with $\calC'$ instead of $\calC$. In addition to $k_0$ and $I_0$, let $I_0'$ be a set of indices $j$ of the family $(y_j')_j$ satisfying $|I_0'|\geq (\delta R^{(1-a)})^\eta$. Let $B_{k_0,I_0,I_0'}$ be the event that $H_{|I_0|}$ holds and that the analogous event for $\calC'$ holds as well. By union bound, we deduce that on the event $\widetilde{\textup{Sprouts}}(R)$, for $R$ large enough,
\begin{equation}\label{eq:final_up_lower_bound}
\widetilde{\prob}[B_{k_0,I_0,I_0'}]\geq 1-2e^{-R^{c'}}.
\end{equation}
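In the display before \eqref{eq:final_up_lower_bound}, the lower bound $1-(1-cR^{-5a})^{|I_0|}\geq 1-e^{-R^{c'}}$ follows from the elementary inequality $1-x\leq e^{-x}$ together with \eqref{eq:index_set_size}:
\[
(1-cR^{-5a})^{|I_0|}\leq e^{-cR^{-5a}|I_0|}\leq e^{-c\,\delta^{\eta}R^{\eta(1-a)-5a}}\leq e^{-R^{c'}}
\]
for any $0<c'<\eta(1-a)-5a$ and $R$ large enough; the exponent $\eta(1-a)-5a$ is positive precisely because $5a/(1-a)<\eta$.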
Now this event is increasing and measurable with respect to the set
\[
\Big( \bigcup_{i\in I_0} D(y_i,5R^a)\cup \bigcup_{j\in I_0'} D(y_j',5R^a) \Big) \times [0,h_{k_0}].
\]
Since $A_{k_0,I_0,I_0'}$ is measurable with respect to the set \eqref{eq:a_meas} and increasing in the set \eqref{eq:a_increasing}, we deduce by applying \eqref{eq:conditionalFKGlocal} that on the event $\widetilde{\textup{Sprouts}}(R)$, for $R$ large,
\[
\widetilde{\prob}[A_{k_0,I_0,I_0'}\cap B_{k_0,I_0,I_0'}]\geq \widetilde{\prob}[B_{k_0,I_0,I_0'}] \widetilde{\prob}[A_{k_0,I_0,I_0'}] \ge (1-2e^{-R^{c'}})\widetilde{\prob}[A_{k_0,I_0,I_0'}].
\]
Recall now that the disjoint union of the $A_{k_0,I_0,I_0'}$ over all the choices of $k_0$, $I_0$ and $I_0'$ is $\bigcup_{k=1}^{\overline{k}}\textup{Poly}_{\calC,\calC'}^k(R)$. Consequently, summing over all these possible choices, we deduce that, on the event $\widetilde{\textup{Sprouts}}(R)$, for $R$ large enough,
\begin{align}\label{eq:two_arm_proof_1}
\nonumber \widetilde{\prob}\bigg[\bigsqcup_{k_0,I_0,I_0'}A_{k_0,I_0,I_0'}\cap B_{k_0,I_0,I_0'}\bigg]&\geq (1-2e^{-R^{c'}})\widetilde{\prob}[\cup_{k=1}^{\overline{k}}\textup{Poly}_{\calC,\calC'}^k(R)]\\
&\overset{\eqref{eq:layers_poly*}}{\geq} (1-2e^{-R^{c'}})(1-e^{-c_2R^{a-\gamma}})\, .
\end{align}
But notice that the event $\bigsqcup_{k_0,I_0,I_0'}(A_{k_0,I_0,I_0'}\cap B_{k_0,I_0,I_0'})$ implies that $\calC$ and $\calC'$ are connected by a continuous path in $([-4R,4R]^2 \times [0,R^a]) \cap \{ f_{R^\gamma} \ge R^{-3/2}\}$. Therefore, \eqref{eq:two_arm_proof_1} completes the proof of Lemma~\ref{lemma:two_arms_1} (assuming Claim~\ref{cl:two_arm_1} holds true).
\end{proof}
It remains to give the proof of Claim \ref{cl:two_arm_1}.
\begin{proof}[Proof of Claim \ref{cl:two_arm_1}]
Recall the definition \eqref{eq:sprouts_tilde_def} of $\widetilde{\textup{Sprouts}}(R)$. Assume that
\[
\widetilde{\prob}[H_l]\leq 1-e^{-R^{c_0}/4}.
\]
Let $\textup{Up}_{l+1}$ be the event that $\calC_{i_{l+1}}$ is connected to $D(y_{i_{l+1}},5R^a)\times\{R^{a^2}\}$ by a continuous path in $\{f_{R^\gamma}\geq R^{-3/2}\}\cap (D(y_{i_{l+1}},5R^a)\times[0,R^{a^2}])$. Then, $\textup{Sprouts}(R)\subset \textup{Up}_{l+1}$ so that, on $\widetilde{\textup{Sprouts}}(R)$, for $R$ large enough,
\[
\widetilde{\prob}[\textup{Up}_{l+1} \mid H_l^c]\geq \widetilde{\prob}[\textup{Sprouts}(R) \mid H_l^c] \ge 1-\frac{\widetilde{\prob}[\textup{Sprouts}(R)^c]}{\widetilde{\prob}[H_l^c]}\geq 1-e^{-R^{c_0}/4}.
\]
In particular, by union bound and FKG, we deduce that there exist a constant $c_1>0$ and a (random $(f_{R^\gamma})_{|\calP_0}$-measurable) $u_{l+1}\in D(y_{i_{l+1}},5R^a)\times\{R^{a^2}\}$ such that the event $\textup{Up}^\star_{l+1}$ that $\calC_{i_{l+1}}$ is connected to $u_{l+1}$ by a continuous path in $\{f_{R^\gamma}\geq R^{-3/2}\}\cap (D(y_{i_{l+1}},5R^a)\times[0,R^{a^2}])$ satisfies, for $R$ large enough,
\begin{equation}\label{eq:claim_two_arm_1}
\widetilde{\prob}[\textup{Up}^\star_{l+1}\, |\, H_l^c]\geq c_1R^{-2a}.
\end{equation}
Let
\[
\textup{Tube}_{l+1}
:=\Bigg\{\begin{array}{c}\exists \text{ a c.c.\ of } \{ f_{R^\gamma} \ge R^{-3/2} \} \cap (D(y_{i_{l+1}},5R^a) \times [R^{a^2},h_{k_0}])\text{ that contains both}\\
\text{$u_{l+1}$ and a circuit in $\{f_{R^\gamma} \ge R^{-3/2} \} \cap (D(y_{i_{l+1}},5R^a)\times \{h_{k_0}\})$}\\
\text{that surrounds $D(y_{i_{l+1}},R^a) \times \{h_{k_0}\}$} \end{array}\Bigg\}.
\]
Note that $\textup{Up}_{l+1}^\star\cap\textup{Tube}_{l+1} \subset H_{l+1}$ so our goal will be to show that, for some constant $c>0$ and $R$ large enough,
\begin{equation}\label{eq:claim_two_arm_2}
\widetilde{\prob}[\textup{Up}_{l+1}^\star\cap\textup{Tube}_{l+1}\, |\, H_l^c]\geq c R^{-5a}.
\end{equation}
Let $\calE_{l+1}=\R^3 \setminus (D(y_{i_{l+1}},10 R^a)\times [0,h_{k_0}])$. Note that $H_l$ is measurable with respect to $(f_{R^\gamma})_{|\calE_{l+1}}$ while $\textup{Tube}_{l+1}$ is measurable with respect to $f_{R^\gamma}$ restricted to $D(y_{i_{l+1}},5 R^a)\times [0,h_{k_0}]$. On the other hand, by the RSW theorem (see \eqref{eq:sprinkling_rsw**}) and by using standard gluing constructions from Section \ref{ss:rsw} (more precisely, by using three times Item vi) of this section at scale $R^a$: once in $\calP_{h_{k_0}}$ and twice in a vertical plane in order to connect $u_{l+1}$ to a well-chosen point of $\calP_{h_{k_0}}$), one can show that $\textup{Tube}_{l+1}$ occurs with probability $\Omega(R^{-3a})$ so that, for some constant $c_2>0$,
\begin{equation}\label{eq:claim_two_arm_3}
\widetilde{\prob}[\textup{Tube}_{l+1} \, | \, (f_{R^\gamma})_{|\calE_{l+1}}]=\prob[\textup{Tube}_{l+1}]\geq c_2 R^{-3a}.
\end{equation}
Now, note that the events $\textup{Up}^\star_{l+1}$ and $\textup{Tube}_{l+1}$ are both increasing and recall that $\gamma<a^2$. Hence, for $R$ large enough,
\begin{align*}
\widetilde{\prob}[\textup{Up}^\star_{l+1}\cap\textup{Tube}_{l+1}\, |\, (f_{R^\gamma})_{|\calE_{l+1}}]&\overset{\eqref{eq:conditionalFKGlocal}}{\geq}\widetilde{\prob}[\textup{Up}^\star_{l+1}\, |\, (f_{R^\gamma})_{|\calE_{l+1}}]\widetilde{\prob}[\textup{Tube}_{l+1}\, |\, (f_{R^\gamma})_{|\calE_{l+1}}]\\
&\overset{\eqref{eq:claim_two_arm_3}}{\geq} c_2 R^{-3a}\widetilde{\prob}[\textup{Up}^\star_{l+1}\, |\, (f_{R^\gamma})_{|\calE_{l+1}}]
\end{align*}
so that
\begin{align*}
\widetilde{\prob}[\textup{Up}^\star_{l+1}\cap\textup{Tube}_{l+1}\, |\, H_l^c]&=\frac{1}{\widetilde{\prob}[H_l^c]}\widetilde{\E}[\widetilde{\prob}[\textup{Up}^\star_{l+1}\cap\textup{Tube}_{l+1}\, |\, (f_{R^\gamma})_{|\calE_{l+1}}]\mathbf{1}_{H_l^c}]\\
&\geq c_2R^{-3a}\frac{1}{\widetilde{\prob}[H_l^c]}\widetilde{\E}[\widetilde{\prob}[\textup{Up}^\star_{l+1}\, |\, (f_{R^\gamma})_{|\calE_{l+1}}]\mathbf{1}_{H_l^c}]\\
&\geq c_2R^{-3a}\widetilde{\prob}[\textup{Up}^\star_{l+1}\, |\, H_l^c]\\
&\overset{\eqref{eq:claim_two_arm_1}}{\geq} c_1c_2R^{-5a},
\end{align*}
which yields \eqref{eq:claim_two_arm_2} as required.
The remaining statement, i.e.~the proof that on $\widetilde{\textup{Sprouts}}(R)$, $\widetilde{\prob}[H_1]\geq c R^{-5a}$, follows from the same construction.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{2arms.pdf}
\caption{The events $\textup{Up}_i(R)$, $\textup{Tube}_i^k(R)$ and $\textup{Conn}_i^k(R)$.\label{fig:2arms}}
\end{center}
\end{figure}
We conclude this section with a result analogous to Proposition \ref{prop:2arms*}. Below, given some $d' \in [3,d]$, $F_R^1,\dots,F_R^{N_{d'}}$ denote all the $2$-dimensional faces of the cube $[-R/2,R/2]^{d'}$, i.e.\ the $F_R^i$'s are the sets obtained from the sets $[-R/2,R/2]^2\times\{\pm R/2\}^{d'-2}$ by permuting the coordinates. Moreover, for every $i \in \{1,\dots,N_{d'}\}$ we let $\tilde{F}_R^i \subset F_R^i$ be the concentric square with side length equal to $R/2$.
\begin{proposition}\label{prop:2armsbis*}
There exists $b>0$ that depends only on the dimension $d$ such that the following holds if $0<\gamma<b$. Let $d' \in [3,d]$. For all $\delta>0$ there exist $R_0,c>0$ such that for every $R \ge R_0$,
\[
\mathbb P\Bigg[\bigcap_{i,j}\Bigg\{ \begin{array}{c}\text{all the c.c.~of $\{ f_{R^\gamma} \ge 2R^{-3/2} \} \cap \tilde{F}_R^i$ of diameter at least $ \delta R$}\\
\text{and all the c.c.~of $\{ f_{R^\gamma} \ge 2R^{-3/2} \} \cap \tilde{F}_R^j$ of diameter at least }\\
\text{$ \delta R$ belong to the same c.c.\ of $\{ f_{R^\gamma} \ge R^{-3/2} \}\cap [-R/2,R/2]^{d'}$}\end{array}\Bigg\} \Bigg] \ge 1-e^{-R^c},
\]
where the intersection in the probability is taken over the pairs of indices $1 \le i,j \le N_{d'}$ for which $F_R^i$ and $F_R^j$ share a side.
\end{proposition}
\begin{proof}
As in Proposition \ref{prop:2arms*}, we also consider some $a \in (0,1)$ such that $0 < \gamma < a^2 < b$ and let $h_k:=\frac{R^a}{2}+kR^\gamma$. The proof is essentially the same as that of Proposition \ref{prop:2arms*}, but we need to replace the construction in $[-4R,4R]^2 \times \{h_k\}$ by a construction in the $2$-dimensional faces of $[-(R/2-h_k),R/2-h_k]^{d'}$ that correspond to $F_R^i$ and $F_R^j$.
\medskip
The only new technicality (which appears only if $i \ne j$, so let us assume this) is about how to adapt point iii) in the definition of $\textup{Circ}_{\mathcal{C},\mathcal{C}'}(R)$ (see the beginning of the proof of Lemma~\ref{lemma:two_arms_1} for the definition of this event). In order to connect two macroscopic paths which belong to two adjacent $2$-dimensional faces -- which is what will replace $\textup{Circ}_{\mathcal{C},\mathcal{C}'}(R)$ -- one would probably like to use a box-crossing property for $f_{R^\gamma}$ restricted to the union of two orthogonal half-spaces: $(\R \times \{0\} \times [0,+\infty)) \cup (\R \times [0,+\infty)\times \{0\})$. Such a property is probably tractable (at least for $\gamma$ sufficiently small) but it is sufficient for us to use the following weaker result (together with Lemma \ref{lem:Cameron-Martin} as in Remark \ref{rem:sprinkling_rswetc}).
\begin{lemma}\label{lemma:angle_crossing}
Given some $r \ge 1$, consider the following union $\mathcal{S}_r:=([0,r] \times \{0\} \times [0,r]) \cup ([0,r] \times [0,r] \times \{0\})$ of two orthogonal squares. There exists $c>0$ such that, if $R \ge 1$ satisfies $r_q \le R^\gamma \le r$, then
\[
\Pro \Big[ \begin{array}{c}\text{$\exists$ a c.\ path in $\mathcal{S}_r \cap \{f_{R^\gamma} \ge 0 \}$ that connects the}\\
\text{two sides $[0,r]\times \{0\} \times \{r\}$ and $[0,r]\times \{r\} \times \{0\}$}\end{array} \Big] \ge \frac{c}{\log(r)+R^\gamma}.
\]
\end{lemma}
The proof of Lemma \ref{lemma:angle_crossing} is a variation of a standard percolation argument. We present it in Appendix \ref{ss:angle_crossing} below. Using Lemma \ref{lemma:angle_crossing} with $r$ of the order of $\delta R$, one can follow the proof of Proposition \ref{prop:2arms*} in order to prove Proposition \ref{prop:2armsbis*}. The only difference is that we need to replace point iii) in the definition of $\textup{Circ}_{\mathcal{C},\mathcal{C}'}(R)$ by an event of probability at least $c/R^\gamma$. As a result, we need to replace the lower bound from \eqref{eq:layers_poly*} by $1-e^{-c_2R^{a-2\gamma}}$. Taking $a$ small enough followed by $\gamma$ small enough, we obtain that the analogue of Lemma \ref{lemma:two_arms_1} holds with $\mathcal{C} \subset \tilde{F}_R^i$ and $\mathcal{C}' \subset \tilde{F}_R^j$ where $F_R^i$ and $F_R^j$ share a side. The rest of the proof is exactly the same as that of Proposition \ref{prop:2arms*}, so we omit the details.
\end{proof} | {"config": "arxiv", "file": "2108.08008/section12.tex"} |
TITLE: Showing that $\lim\limits_{n\to\infty}x_n$ exists, where $x_{n} = \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}}$
QUESTION [1 upvotes]: Let $x_{n} = \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}}$
a) Show that $x_{n} < x_{n+1}$
b) Show that $x_{n+1}^{2} \leq 1+ \sqrt{2} x_{n}$
Hint : Square $x_{n+1}$ and factor a 2 out of the square root
c) Hence Show that $x_{n}$ is bounded above by 2. Deduce that $\lim\limits_{n\to \infty} x_{n}$ exists.
Any help? I don't know where to start.
REPLY [0 votes]: A 10-day-old question, but here is an answer anyway.
a) It is already clear that $ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} < \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n+1}}}}$, because $\sqrt{n} < \sqrt{n+\sqrt{n+1}}$, which is trivial.
My point here is to give some opinion about b) and c); for me it is better to do c) first. We know that $$ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} < \sqrt{p+\sqrt{p+\sqrt{p+ ... }}} $$
But this is only true for $q\leq p<\infty$ for some $q \in \mathbb{Z}^{+}$, because it is trivial that
$$ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} > \sqrt{1+\sqrt{1+\sqrt{1+ ... }}} $$
Let $x=\sqrt{2+\sqrt{2+\sqrt{2+ ... }}}$, then $x^2=2+ \sqrt{2+\sqrt{2+\sqrt{2+ ... }}} \rightarrow x^2-x-2=0 $, thus $x=2$, because $x>0$.
Now let's probe this equation :
$$\sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} \leq \sqrt{2+\sqrt{2+\sqrt{2+ ... }}}=2 \tag{1}$$
2 is bigger than 1, and their difference is 1. So for $x_{n}$ to be bigger than 2, it would be required that $\sqrt{2+\sqrt{3+\sqrt{4+ ... \sqrt{n}}}} \geq 3$,
but if we square both sides of (1) and subtract, we get that $\sqrt{2+\sqrt{3+\sqrt{4+ ... \sqrt{n}}}} \leq 3$.
For (b), first square both sides; the '1' is gone. Square again until the '2' is gone, and we arrive at this equation:
$$\sqrt{3+\sqrt{4+...\sqrt{n}}} \leq 2.\sqrt{2+\sqrt{3+...\sqrt{n}}}$$
which is true, because from (1) we know that $\sqrt{3+\sqrt{4+ ... +\sqrt{n}}} \leq 2$ and $\sqrt{2+\sqrt{3+ ... +\sqrt{n}}} > 0$
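As a quick numerical sanity check of (a) and (c) (plain Python, evaluating the radical from the inside out; not part of the proof):

```python
import math

def x(n):
    """x_n = sqrt(1 + sqrt(2 + ... + sqrt(n))), built from the innermost term."""
    v = 0.0
    for k in range(n, 0, -1):
        v = math.sqrt(k + v)
    return v

vals = [x(n) for n in range(1, 13)]
assert all(u < w for u, w in zip(vals, vals[1:]))   # (a): x_n < x_{n+1}
assert all(x(n) < 2 for n in range(1, 31))          # (c): bounded above by 2
assert 1.7579 < x(30) < 1.758                       # the limit, the "nested radical constant"
```

The values increase and converge rapidly to about $1.7579$, comfortably below the bound $2$.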
In fact, if you can prove (b) then (c) is trivial and vice versa. | {"set_name": "stack_exchange", "score": 1, "question_id": 428841} |
TITLE: How to show $I_k(t)=\int_\Omega (u(x,t)-k)_+^2 dx$ is absolutely continuous for any $u\in \mathring W^{1,1}_2(Q_T)$?
QUESTION [0 upvotes]: $\Omega$ is bounded smooth domain of $\mathbb R^n$.
$Q_T=\Omega\times [0,T]$.
$\overset{\circ}{W}^{1,1}_2(Q_T)$ is the parabolic Sobolev space and is zero at $\partial \Omega \times[0,T]$.
$u_+=\max\{u,0\}$.
How to show for any $k>\sup_\limits{\partial_pQ_T} u$, (where $\partial_pQ_T=\partial \Omega \times[0,T] \cup\Omega\times \{t=0\}$)
$$
I_k(t)=\int_\Omega (u(x,t)-k)^2_+dx
$$
is absolutely continuous on $[0,T]$ ?
REPLY [1 votes]: It is enough to show that $I_k$ is in $W^{1,1}(0,T)$. So multiply by $f'(t)$, where $f\in C^1_c(0,T)$, integrate in $t$ over $(0,T)$, and then use the definition of the weak derivative to integrate by parts. Since the function $g(z)=(z-k)_+$ is Lipschitz continuous, you can apply the chain rule, which tells you that $v(x,t)=(u(x,t)-k)_+$ is in $W^{1,2}$ with
$$
\frac{\partial v}{\partial t}(x,t)=\left\{
\begin{array}
[c]{ll}%
\frac{\partial u}{\partial t}(x,t) & \text{if }u(x,t)>k,\\
0 & \text{otherwise.}%
\end{array}
\right.
$$
Hence,
\begin{align*}
\int_{0}^{T}f^{\prime}(t)I_{k}(t)\,dt & =\int_{0}^{T}\int_{\Omega}f^{\prime
}(t)v^{2}(x,t)\,dxdt\\&=-\int_{0}^{T}\int_{\Omega}f(t)2v(x,t)\frac{\partial
v}{\partial t}(x,t)\,dxdt\\
& =-\int_{0}^{T}\int_{\Omega}f(t)2(u(x,t)-k)_{+}\frac{\partial u}{\partial
t}(x,t)\,dxdt\\
& =-\int_{0}^{T}f(t)\int_{\Omega}2(u(x,t)-k)_{+}\frac{\partial u}{\partial
t}(x,t)\,dxdt.
\end{align*}
This shows that the weak derivative of $I_{k}(t)$ is the function
$$
\omega(t)=\int_{\Omega}2(u(x,t)-k)_{+}\frac{\partial u}{\partial t}(x,t)\,dx,
$$
which is integrable since by Hölder's inequality
\begin{align*}
\int_{0}^{T}|\omega(t)|\,dt & \leq2\int_{0}^{T}\int_{\Omega}(u(x,t)-k)_{+}%
\left\vert \frac{\partial u}{\partial t}(x,t)\right\vert \,dxdt\\
& \leq2\left( \int_{0}^{T}\int_{\Omega}(u(x,t)-k)_{+}^{2}\,dxdt\right)
^{1/2}\left( \int_{0}^{T}\int_{\Omega}\left\vert \frac{\partial u}{\partial
t}(x,t)\right\vert ^{2}dxdt\right) ^{1/2}<\infty.
\end{align*} | {"set_name": "stack_exchange", "score": 0, "question_id": 3961365} |
TITLE: Is there a fundamental reason why "dynamic energy storage" is lossy?
QUESTION [6 upvotes]: Consider springs and moving masses - both can be used to store energy, the spring via tension $E=\frac{1}{2}kx^2$ and the mass via kinetic energy $E=\frac{1}{2}mv^2$. But day to day experience tells us that while springs can essentially hold this energy indefinitely, a moving object always slows and stops eventually.
A very similar phenomenon can be observed in electric circuits - capacitors behave like springs, and can hold charge and energy for long times, while inductors hold energy in the form of magnetic field induced by a moving charge - which once again decays pretty fast.
I realize that ideally both methods are lossless, and it is only due to "imperfections" that energy is lost - friction in the case of moving masses and electric resistance in the case of inductors' magnetic field. But is there some fundamental or "philosophical" reason explaining why we might expect it to be so? Why isn't there a mechanism similar to friction/resistance causing a rapid loss of energy in a compressed spring or charged capacitor? Is there some underlying cause explaining the rarity of both frictionless-surfaces and superconductors? Or is it just a coincidence?
REPLY [0 votes]: Another possible explanation is that potential energy is not lost if no "action" is done, while kinetic energy is lost if the "action" is not continued.
In the case of the spring and the capacitor, potential energy is stored, while for a mass and an inductor, it is the motion that generates the kinetic energy.
Since there are a number of things that can interfere with the "motion," kinetic energy is subject to losses. | {"set_name": "stack_exchange", "score": 6, "question_id": 418001} |
TITLE: Evaluate the indefinite integral $\int\frac{(x(\pi + 49))^{\frac{15}{7}}}{\pi ^ {2} (x^{\pi}+7)} dx $
QUESTION [3 upvotes]: I was looking at a website that contained 5 integrals that supposedly have beautiful solutions; I managed to solve #4 yet I can't figure out how to solve #1:
$$\int\frac{(x(\pi + 49))^{\frac{15}{7}}}{\pi ^ {2} (x^{\pi}+7)} dx $$
I've tried doing a u substitution with $x^\pi$ and $\pi^2 x^\pi$ with no success. Any hints or solutions would be greatly appreciated. Thanks!
REPLY [1 votes]: Ignoring, for the time being, the constant terms, we face an integral which looks like
$$A=\int \frac {x^a}{x^b+c}\,dx$$ Changing variable $x^b= c t$ we end with
$$A=\frac{c^{\frac{a-b+1}b}}{b }\int \frac {t^{\frac{a-b+1}b} }{t+1}\,dt=k \int \frac {t^{\alpha}}{t+1}\,dt$$ and, except for a few values of $\alpha$, the last integral does not show a closed form but expresses in terms of hypergeometric function as mickep commented.
$$\int \frac {t^{\alpha}}{t+1}\,dt=\frac{t^{\alpha +1}}{\alpha +1} \, \, _2F_1(1,\alpha +1;\alpha +2;-t)$$
Using this result, for the integral of concern, after simplifications,
$$I=\int\frac{(x(\pi + 49))^{\frac{15}{7}}}{\pi ^ {2} (x^{\pi}+7)}\, dx=\frac{(49+\pi )^{15/7}}{22 \pi ^2}\,\, x^{22/7} \,\, _2F_1\left(1,\frac{22}{7 \pi };1+\frac{22}{7 \pi
};-\frac{x^{\pi }}{7}\right)$$
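As a numerical sanity check of the change of variables above (plain Python with composite Simpson quadrature; the truncation point $X=2$ is an arbitrary choice of mine, and the exponents are those of the problem):

```python
import math

a, b, c = 15 / 7, math.pi, 7.0     # a = 15/7, b = pi, c = 7 as in the problem
alpha = (a - b + 1) / b            # tiny and positive here

def simpson(f, lo, hi, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

X = 2.0
left = simpson(lambda t: t**a / (t**b + c), 0.0, X)   # original integral A on [0, X]
k = c**alpha / b                                      # prefactor c^{(a-b+1)/b} / b
right = k * simpson(lambda t: t**alpha / (t + 1), 0.0, X**b / c)
assert abs(left - right) < 1e-3 * left                # both sides of x^b = c t agree
```

Both quadratures agree to well within the assertion tolerance, confirming the reduction $A = k\int t^{\alpha}/(t+1)\,dt$.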
Comparing to $$J=\frac{(\pi + 49)^{\pi-1}}{\pi ^ {3}}\,\,\log\mathopen{}\left(x^\pi+7\right)+C_1$$ as nicely proposed by alex.jordan in his answer, or to $$K= \frac{7 (49+\pi )^{15/7} }{22 \pi ^2}\log \left(x^{22/7}+7\right)+C_2$$ what is obtained for the integrals between $0$ and $t$
$$\left(
\begin{array}{cccc}
t & I & J & K\\
1 & 20.5889 & 20.4948 & 20.5892 \\
2 & 125.869 & 125.193 & 125.845 \\
3 & 263.344 & 261.826 & 263.208 \\
4 & 385.308 & 382.990 & 385.003 \\
5 & 487.088 & 484.071 & 486.602 \\
6 & 572.727 & 569.099 & 572.062 \\
7 & 646.142 & 641.976 & 645.307 \\
8 & 710.205 & 705.557 & 709.210 \\
9 & 766.954 & 761.871 & 765.807 \\
10 & 817.852 & 812.371 & 816.561 \\
11 & 863.975 & 858.128 & 862.548 \\
12 & 906.133 & 899.946 & 904.576 \\
13 & 944.947 & 938.444 & 943.267 \\
14 & 980.907 & 974.107 & 979.109 \\
15 & 1014.40 & 1007.32 & 1012.49 \\
16 & 1045.74 & 1038.40 & 1043.73 \\
17 & 1075.20 & 1067.60 & 1073.07 \\
18 & 1102.97 & 1095.14 & 1100.75 \\
19 & 1129.25 & 1121.19 & 1126.93 \\
20 & 1154.18 & 1145.91 & 1151.77
\end{array}
\right)$$ which confirms (even if not needed) how good his approximation is. | {"set_name": "stack_exchange", "score": 3, "question_id": 2565967}
{\bf Problem.} Seven points are marked on the circumference of a circle. How many different chords can be drawn by connecting two of these seven points?
{\bf Level.} Level 4
{\bf Type.} Prealgebra
{\bf Solution.} We can choose two out of seven points (without regard to order) in $\dfrac{7 \times 6}{2} = 21$ ways, so there are $\boxed{21}$ chords. | {"set_name": "MATH"} |
TITLE: How to show sequence of partial sum $S_n$ is bounded given the the sequence $a_k$ is bounded?
QUESTION [1 upvotes]: Define the sum, $S_n = \sum\limits_{k=1}^{n} a_k$
Given that the sequence $a_k$ is bounded above, so can I show that $S_n$ is bounded above this way? :
Since $a_k$ is bounded above, there exists a real number $M$ such that $a_k \leq M$, for all $k$. So, we have,
$$S_n = \sum_{k=1}^{n} a_k = a_1+a_2+a_3+...+a_n \leq M+M+M+\cdots+M =nM$$
Hence $S_n$ is also bounded above.
Is it correct?
REPLY [3 votes]: I choose $a_k=1$ for every $k$; you did not forbid it. Then we can have $M=1$. So $S_n=n$, which is not bounded above. Your proposition is not true. | {"set_name": "stack_exchange", "score": 1, "question_id": 4173305} |
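To see the counterexample above concretely (a throwaway Python illustration; here $a_k\equiv 1$, so $M=1$ bounds every term while $S_n=n$ is unbounded):

```python
def a(k):
    return 1            # a_k is bounded above, e.g. by M = 1

def S(n):
    return sum(a(k) for k in range(1, n + 1))

assert all(a(k) <= 1 for k in range(1, 1001))   # the terms are bounded
assert S(1000) == 1000                          # yet S_n = n, which is unbounded:
assert S(1001) > 1000                           # S_n eventually exceeds any fixed M
```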
TITLE: Proving that distinct edge atoms of a graph are vertex-disjoint.
QUESTION [2 upvotes]: The Question
In Godsil and Royle's Algebraic Graph Theory, they prove that any two distinct edge atoms of a graph $X$ are vertex-disjoint, making the following argument, where $A$ and $B$ are such edge atoms:
So we may assume that $A\cup B$ is a proper subset of $V(X)$. Now, the previous lemma yields
$$|\partial(A\cup B)| + |\partial(A\cap B)|\le 2\kappa_1(X),$$
and, since $A\cup B\ne V(X)$ and $A\cap B\ne\emptyset$, this implies that
$$|\partial(A\cup B)| = |\partial(A\cap B)| = \kappa_1(X).$$
I don't understand why the second assertion is true, nor why it follows from $A\cup B\ne V(X)$ and $A\cap B\ne\emptyset$.
Background
I'm unsure how common their notation/definitions are, so I've included some below.
We let $\kappa_1(X)$ denote edge-connectivity;
For $S\subseteq V(X)$, we define $\partial S$ to be $\{xy\in E(X):x\in S, y\notin S\}$. That is, it is the set of edges with one end in $S$ and one end not in $S$;
An edge atom of $X$ is a subset $S\subseteq V(X)$ such that $|\partial S|=\kappa_1(X)$, and $|S|$ is minimal.
The lemma referenced states that $|\partial(A\cup B)|+|\partial(A\cap B)|\le |\partial A| + |\partial B|$ for $A,B\subseteq V(X)$.
REPLY [0 votes]: In Godsill & Royle's book :
"So we may assume that A∪B is a proper subset of V(X)."
This is proved immediately above the statement on page 38. So A∪B≠V(X) since it is a proper subset (and is also not the empty set since neither A nor B are empty).
"A∩B≠∅" should be stated in the proof as an assumption because the object of corollary 3.3.2 is to prove A and B are disjoint.
The final line:
" |∂(A∪B)|=|∂(A∩B)|=κ1(X)."
follows from the inequality in the previous lemma: since A∪B is a nonempty proper subset of V(X) and A∩B is nonempty (and proper), neither |∂(A∪B)| nor |∂(A∩B)| can be less than κ1, because κ1 is the minimum boundary size over such subsets; their sum is at most 2κ1, so the only option is that both equal κ1.
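For intuition, here is a small brute-force check on a toy graph (plain Python; the graph, two triangles joined by a bridge, is my own example and not from Godsil and Royle) that the distinct edge atoms are indeed vertex-disjoint:

```python
from itertools import combinations

V = range(6)
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]  # two triangles + bridge

def boundary(S):
    """∂S: edges with exactly one end in S."""
    return [e for e in E if (e[0] in S) != (e[1] in S)]

# kappa_1 = min |∂S| over nonempty proper subsets S (the graph is connected)
subsets = [set(comb) for r in range(1, 6) for comb in combinations(V, r)]
kappa1 = min(len(boundary(S)) for S in subsets)
min_size = min(len(S) for S in subsets if len(boundary(S)) == kappa1)
atoms = [S for S in subsets if len(boundary(S)) == kappa1 and len(S) == min_size]

assert kappa1 == 1                      # the bridge is the unique minimum edge cut
assert atoms == [{0, 1, 2}, {3, 4, 5}]  # the edge atoms are the two triangles
assert atoms[0].isdisjoint(atoms[1])    # distinct edge atoms are vertex-disjoint
```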
So the corollary is proved by contradiction. | {"set_name": "stack_exchange", "score": 2, "question_id": 2377803} |
\section{A Method with Fixed Storage}\label{approx-collinear}
Inspired by the results of Theorem~\ref{thm.collinear-resid-exist}, we consider an
alternative different from the one briefly discussed in Section~\ref{sect.shifted-gcrodr}.
If we enforce the fixed-storage requirement (i.e., only one recycled subspace $\CU$ is stored
and all approximations are drawn from the same augmented Krylov subspace) then
a prospective algorithm must overcome two obstacles.
First, we cannot conveniently update the residual of the shifted system.
For the shifted system, we construct
approximations of the form (\ref{xst.eq}).
As already discussed, without a $\vek U^{(\sigma)}$ defined as in (\ref{eqn.AUCsig}),
we cannot project $\vek r_{-1}^{(\sigma)}$
and update $\vek x_{-1}^{(\sigma)}$, as in (\ref{eqn.init-proj-update}).
As a remedy,
we can perform an update of the shifted system approximation which implicitly updates the
residual by the perturbation of an orthogonal projection.
We set
\begin{equation}\nonumber
\vek x_{0}^{(\sigma)} = {\vek x}_{-1}^{(\sigma)} + \vek U\vek C^{\ast}{\vek r}_{-1}^{(\sigma)}.
\end{equation}
The updated residual can be written as
\begin{eqnarray}
{\vek r}_{0}^{(\sigma)}& = & \vek b-(\vek A+\sigma\vek I)\vek x_{0}^{(\sigma)}\nonumber\\
& = & \vek b-(\vek A+\sigma\vek I)({\vek x}_{-1}^{(\sigma)}+\vek U\vek C^{\ast}{\vek r}_{-1}^{(\sigma)}) \nonumber\\
& = & {\vek r}_{-1}^{(\sigma)}-(\vek A+\sigma\vek I)\vek U\vek C^{\ast}{\vek r}_{-1}^{(\sigma)} \nonumber\\
& = & \underbrace{{\vek r}_{-1}^{(\sigma)}- \vek C\vek C^{\ast}{\vek r}_{-1}^{(\sigma)}}_{\text{true orthogonal projection}} -\underbrace{\sigma\vek U\vek C^{\ast}{\vek r}_{-1}^{(\sigma)}}_{\text{perturbation}}. \label{eqn.proj-pert}
\end{eqnarray}
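The projection-plus-perturbation identity above is easy to verify numerically. The following NumPy sketch (illustrative only; random data constructed so that $\vek A\vek U = \vek C$ and $\vek C^{\ast}\vek C = \vek I_k$ via a QR factorization) checks it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 12, 3, 0.7

# Random data with A U = C and C* C = I (QR of A U0).
A = rng.standard_normal((n, n))
U0 = rng.standard_normal((n, k))
C, R = np.linalg.qr(A @ U0)
U = U0 @ np.linalg.inv(R)

b = rng.standard_normal(n)
x_prev = rng.standard_normal(n)
r_prev = b - (A + sigma * np.eye(n)) @ x_prev

# x_0 = x_{-1} + U C* r_{-1}, and its residual for the shifted system
x0 = x_prev + U @ (C.T @ r_prev)
r0 = b - (A + sigma * np.eye(n)) @ x0

# projection-minus-perturbation form of the updated residual
r0_formula = r_prev - C @ (C.T @ r_prev) - sigma * U @ (C.T @ r_prev)
err = np.linalg.norm(r0 - r0_formula)
```

The two expressions agree to machine precision.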
Second, the collinear residual does not exist.
Deriving this result yields clues to another way forward.
Neglecting a term from (\ref{eqn.gcrodr-colin-resid}) allows us to solve a nearby approximate collinearity
condition (which we will explain shortly, after Algorithm \ref{alg.GCRODR-approx-collinear}) and update the
approximation for the shifted system. This update is of the
form (\ref{xst.eq}) with $\vek s_{m}^{(\sigma)}\in\CU$.
These corrections tend to improve the residual but do not, by themselves, lead to convergence
for the shifted system; when that system is later solved, it starts from an improved initial approximation.
We present analysis showing how much improvement is possible with this method.
After convergence of the base system,
the algorithm can be applied recursively on the remaining unconverged systems.
This recursive method of solving one seed system at a time while choosing corrections
for the approximations for the other systems has been previously suggested in the context
of linear systems with multiple
right-hand sides; see, e.g., \cite{Chan1997,SPM1.1981}.
We begin by providing
an overview of the strategy we are proposing and encode this into a schematic algorithm.
This algorithm solves the base system with \RGMRES\ while cheaply computing better initial
approximations for the shifted systems.
We present this outline in Algorithm \ref{alg.GCRODR-approx-collinear}.
\begin{algorithm}[hb!]
\caption{Schematic of Shifted \RGMRES\ with an Approximate Collinearity Condition}
\label{alg.GCRODR-approx-collinear}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}\SetKwComment{Comment}{}{}
\Input{$\vek A\in\C^{n\times n}$; $\curl{\sigma^{(\ell)}}_{\ell=1}^{L}\subset\C$; $\vek U,\vek C\in\C^{n\times k}$ such that $\vek A\vek U=\vek C$ and $\vek C^{\ast}\vek C = \vek I_{k}$; Initial Approximations $\vek x_{0}$ and $\vek x_{0}^{(\sigma^{(\ell)})}$ such that residuals are collinear; $\varepsilon > 0$}
$\vek x \leftarrow \vek x_{0}$, $\vek r = \vek b-\vek A\vek x$\\
$\vek x\leftarrow \vek x + \vek U\vek C^{\ast}\vek r$, $\vek r \leftarrow \vek r - \vek C\vek C^{\ast}\vek r$\Comment*[r]{Project base residual}
$\vek x^{(\sigma^{(\ell)})}\leftarrow \vek x_{0}^{(\sigma^{(\ell)})}$, $\vek r ^{(\sigma^{(\ell)})}= \vek b-\vek A\vek x^{(\sigma^{(\ell)})}$ for all $\ell$\\
\For{$\ell=1\text{ to }L$}{
$\vek x^{(\sigma^{(\ell)})}\leftarrow \vek x^{(\sigma^{(\ell)})} + \vek U\vek C^{\ast}\vek r^{(\sigma^{(\ell)})}$\Comment*[r]{Update shifted approximation, but not an implicit residual projection}\label{alg.line.noproj}
}
\While{$\norm{\vek r}>\varepsilon$}{
Construct a basis of the subspace $\CK_{m}((\vek I-\vek C\vek C^{\ast})\vek A,\vek r)$\\
Compute update $\vek t\in \cal{R}(\vek U) + \CK_{m}((\vek I-\vek C\vek C^{\ast})\vek A,\vek r)$ by minimizing residual using
\RGMRES \\
$\vek x\leftarrow\vek x+\vek t$; $\vek r\leftarrow\vek b-\vek A\vek x$\\
\For{$\ell=1\text{ to }L$}{
Compute update $\vek t^{(\sigma^{(\ell)})}\in \cal{R}(\vek U) + \CK_{m}((\vek I-\vek C\vek C^{\ast})\vek A,\vek r)$ according to the approximate collinearity condition\label{alg.line.approx-collin}\\
$\vek x^{(\sigma^{(\ell)})} \leftarrow \vek x^{(\sigma^{(\ell)})} + \vek t^{(\sigma^{(\ell)})}$
}
Compute updated recycled subspace information $\vek U$ and $\vek C$
}
Clear any variables no longer needed\\
\If{$L>1$}{
Make a recursive call to Algorithm \ref{alg.GCRODR-approx-collinear} with $\vek A\leftarrow \vek A + \sigma^{(1)}\vek I$, shifts $\curl{\sigma^{(\ell)}-\sigma^{(1)}}_{\ell=2}^{L}$,
approximations $\curl{\vek x^{(\sigma^{(\ell)})}}_{\ell=2}^{L}$ and updated recycled subspace matrix $\vek U$
}
\Else{
Apply \RGMRES\ to the last unconverged system
}
\end{algorithm}
This algorithm relies on dropping the term $\vek E\vek F\prn{\tilde{\vek y}_{m}}_{1:k}$
from (\ref{eqn.gcrodr-colin-resid}),
which yields an augmented linear system that can be solved directly,
\begin{eqnarray}
\vek z_{m+1}\tilde{\beta}_{m} + \widetilde{\vek G}_{m}^{(\sigma)}\tilde{\vek y}^{(\sigma)} _{m}& = & \beta_{0}\norm{\vek r_0}\vek e_{k+1}^{(m+1)}\label{eqn.approx-collinear-system}\mbox{,\ \ \ or}\\
\brac{\begin{matrix} \widetilde{\vek G}_{m}^{(\sigma)} & \vek z_{m+1} \end{matrix}}\brac{\begin{matrix}\tilde{\vek y}^{(\sigma)} _{m}\\ \tilde{\beta}_{m} \end{matrix}} & = & \beta_{0}\norm{\vek r_0}\vek e_{k+1}^{(m+1)}.\nonumber
\end{eqnarray}
Thus, we proceed by solving this nearby problem and updating the shifted solution,
\begin{equation}\label{eqn.shifted-incorrect-update}
\vek x_{m}^{(\sigma)} = \vek x_{0}^{(\sigma)} +\widehat{\vek V}_{m}\tilde{\vek y}^{(\sigma)} _{m}.
\end{equation}
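Dropping the coupling term thus turns the shifted subproblem into a single square $(m+1)\times(m+1)$ solve per cycle. The following NumPy illustration uses random stand-in data (the matrices are generic placeholders for $\widetilde{\vek G}_{m}^{(\sigma)}$, $\vek z_{m+1}$, and the right-hand side, carrying none of the Hessenberg-like structure of the actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
G = rng.standard_normal((m + 1, m))   # stand-in for G_m^{(sigma)} tilde
z = rng.standard_normal(m + 1)        # stand-in for z_{m+1}
rhs = 1.7 * np.eye(m + 1)[0]          # stand-in for beta_0 ||r_0|| e_{k+1}

# Solve [G  z] [y; beta] = rhs as one square (m+1) x (m+1) system.
sol = np.linalg.solve(np.column_stack([G, z]), rhs)
y, beta = sol[:-1], sol[-1]
resid = np.linalg.norm(G @ y + z * beta - rhs)
```

The solve simultaneously yields the update coefficients $\tilde{\vek y}_m^{(\sigma)}$ and the collinearity factor $\tilde{\beta}_m$.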
For each restart cycle, we repeat this process for the shifted system. We stop when the base residual
norm is below tolerance.
When the residual norm for the base
system reaches the desired tolerance, the residual norm
of the shifted system will have been reduced at little additional cost;
but, generally, the reduction is insufficient.
Thus, we apply the GMRES with
recycling algorithm with this approximate collinearity scheme to the remaining unsolved systems,
taking one of the shifted systems as
our new base system. This method is amenable to recursion on the number of shifts.
When only one system remains,
\RGMRES\ is applied.
Observe that for any number of shifts, we can easily form $\widetilde{\vek G}_{m}^{(\sigma)}$
for each $\sigma$ at little additional cost.
The matrices $\vek Y$ and $\vek Z$ in (\ref{eqn.U-decompose}) must be computed only once
per cycle, regardless of the number of shifted
systems we are solving. However, additional shifts will require more recursive calls to the
algorithm and, thus, more iterations.
Why does the approximate collinearity condition
produce an improved approximation to the solution of the shifted system?
How well can we expect the algorithm to perform? The following analysis answers these
questions and also
yields a cheap way in which we can monitor the progress of the residuals of the shifted systems.
Theorem~\ref{thm.gcrodr-shift-convergence}
shows how the algorithm behaves when we start with already non-collinear residuals.
This allows for the treatment of the case when the perturbed initial projection of the
residual (\ref{eqn.proj-pert}) renders collinearity invalid at the start.
\begin{theorem}\label{thm.gcrodr-shift-convergence}
Suppose we begin the cycle as in {\rm (\ref{eqn.gcrodr-colin-resid})}, with approximate collinearity between the base and shifted residuals, satisfying the relation
\begin{equation}\label{eqn.initial-resids-noncollinear}
\vek r_{0}^{(\sigma)} = \tilde{\beta}_{0}\vek r_{0} + \vek w^{(\sigma)}.
\end{equation}
If we perform a cycle of \RGMRES\ to reduce the residual of the base system and
apply the approximate collinearity condition {\rm(\ref{eqn.approx-collinear-system})} to the shifted residual,
then we have the relation
\begin{equation}\label{eqn.shifted-resid-relation2}
\tilde{\vek r}_{m}^{(\sigma)} = \tilde{\beta}_{m}\vek r_{m} - \sigma \vek E\vek F\prn{\tilde{\vek y}_{m}^{(\sigma)}}_{1:k} + \vek w^{(\sigma)}.
\end{equation}
\end{theorem}
\textit{Proof.} We can write the residual produced by the approximate collinearity procedure for the shifted system
as follows, using (\ref{eqn.z-def}),
\begin{eqnarray*}
\tilde{\vek r}_{m}^{(\sigma)} & = & \vek b - (\vek A+\sigma\vek I)\vek x_{m}^{(\sigma)}\label{eqn.approx-collin-anal}\\
& = &\vek r_{0}^{(\sigma)} - (\vek A+\sigma\vek I)\widehat{\vek V}_{m}\widetilde{\vek y}_{m}^{(\sigma)}\nonumber\\
& = & \tilde{\beta}_{0}\vek r_{0} + \vek w^{(\sigma)} - (\vek A+\sigma\vek I)\widehat{\vek V}_{m}\tilde{\vek y}_{m}^{(\sigma)}\nonumber\\
& = & \tilde{\beta}_{0}\vek r_{0} - \prn{\widehat{\vek W}_{m+1}\widetilde{\vek G}_{m}^{(\sigma)} + \sigma \brac{\begin{matrix}\vek E\vek F & \vek 0 \end{matrix}}}
\tilde{\vek y}_{m}^{(\sigma)} + \vek w^{(\sigma)} \nonumber\\
& = & \tilde{\beta}_{0}\norm{\vek r_{0}}\widehat{\vek W}_{m+1}\vek e_{k+1}^{(m+1)} - \widehat{\vek W}_{m+1}\widetilde{\vek G}_{m}^{(\sigma)}\tilde{\vek y}_{m}^{(\sigma)} - \sigma
\brac{\begin{matrix}\vek E\vek F & \vek 0 \end{matrix}}\tilde{\vek y}_{m}^{(\sigma)} + \vek w^{(\sigma)}\nonumber\\
& = & \tilde{\beta}_{0}\norm{\vek r_{0}}\widehat{\vek W}_{m+1}\vek e_{k+1}^{(m+1)} - \widehat{\vek W}_{m+1}\widetilde{\vek G}_{m}^{(\sigma)}\tilde{\vek y}_{m}^{(\sigma)} - \tilde{\beta}
_{m}\widehat{\vek W}_{m+1}\vek z_{m+1} + \tilde{\beta}_{m}\widehat{\vek W}_{m+1}\vek z_{m+1} \nonumber\\
& & - \sigma \brac{\begin{matrix}\vek E\vek F & \vek 0 \end{matrix}}\tilde{\vek y}_{m}^{(\sigma)} + \vek w^{(\sigma)} \nonumber \\
& = & \widehat{\vek W}_{m+1}\prn{\tilde{\beta}_{0}\norm{\vek r_{0}}\vek e_{k+1}^{(m+1)} - \widetilde{\vek G}_{m}^{(\sigma)}\tilde{\vek y}_{m}^{(\sigma)} - \tilde{\beta}_{m}\vek z_{m+1}}+
\tilde{\beta}_{m}\widehat{\vek W}_{m+1}\vek z_{m+1} \nonumber\\
& & - \sigma \brac{\begin{matrix}\vek E\vek F & \vek 0 \end{matrix}}\tilde{\vek y}_{m}^{(\sigma)} + \vek w^{(\sigma)}.\nonumber
\end{eqnarray*}
Now using the approximate collinearity condition (\ref{eqn.approx-collinear-system}) and the fact that by definition $\vek r_{m} = \widehat{\vek W}_{m+1}\vek z_{m
+1}$, we have that
\begin{equation} \nonumber \label{eqn.shifted-resid-rel}
\tilde{\vek r}_{m}^{(\sigma)} = \tilde{\beta}_{m}\vek r_{m} - \sigma \brac{\begin{matrix}\vek E\vek F & \vek 0 \end{matrix}}\tilde{\vek y}_{m}^{(\sigma)} + \vek w^{(\sigma)},
\end{equation}
which can be rewritten in the form (\ref{eqn.shifted-resid-relation2}). \cvd
It should be noted that the term $-\sigma\vek E\vek F\prn{\tilde{\vek y}_{m}^{(\sigma)}}_{1:k} + \vek w^{(\sigma)}$ is a
function of the quality of the recycled subspaces as well as of $\sigma$. With the use of simple inequalities, we obtain
an important corollary estimating the amount of residual norm reduction we can expect for the shifted systems.
\begin{corollary}\label{cor.shifted-resid-bound}
The shifted system residual norm satisfies the following inequality,
\begin{equation}\label{eqn.shifted-resid-norm-estimate}
\norm{\tilde{\vek r}_{m}^{(\sigma)}} \leq \ab{\tilde{\beta}_{m}}\norm{\vek r_{m}} + \ab{\sigma} \norm{\vek E\vek F}\norm{\prn{\tilde{\vek y}_{m}^{(\sigma)}}
_{1:k}} + \norm{\vek w^{(\sigma)}}.
\end{equation}
\end{corollary}
As long as
$\ab{\tilde{\beta}_{m}}\norm{\vek r_{m}}$ dominates the right-hand side, we will observe
a reduction of the shifted residual norm.
This reduction is controlled by $\ab{\sigma}
$, $\norm{\vek E\vek F}$, and $\norm{\prn{\tilde{\vek y}_{m}^{(\sigma)}}_{1:k}}$. We cannot control $\norm{\prn{\tilde{\vek y}_{m}^{(\sigma)}}_{1:k}}$, and $\sigma$ is
dictated by the problem. The size of $\norm{\vek E\vek F}$ is connected to the quality of $\vek U$ as an approximation to an invariant subspace of $\vek A
$. This can be seen by writing
\begin{equation}\label{eqn.EF}
\vek E\vek F = \vek U - \prn{\vek C\vek Y + \vek V_{m+1}\vek Z}
\end{equation}
and observing that the norm of this difference decreases as $\vek U$ becomes a better approximation of an invariant subspace of $\vek A$.
Thus, choosing $\vek U$ as an approximate invariant subspace may improve performance
of the method.
Ideally, we would like to detect
when $\ab{\tilde{\beta}_{m}}\norm{\vek r_{m}}$ ceases to dominate (\ref{eqn.shifted-resid-norm-estimate})
in order to cease updating the approximations to the shifted system
once such an update no longer leads to a decrease in residual norm.
Our analysis gives us a way to monitor both quantities. Observe that given $\sigma$, $\vek U$, and
$\vek C$, if we compute $\widetilde{\vek y}_{m}^{(\sigma)}$ according to (\ref{eqn.approx-collinear-system}),
then from (\ref{eqn.EF}), we can compute the product $\vek E\vek F\prn{\tilde{\vek y}_{m}^{(\sigma)}}_{1:k}$.
Thus, we can keep track of the vector $\vek w^{(\sigma)}$, and use it to construct $\vek r_{m}^{(\sigma)}$
using (\ref{eqn.shifted-resid-relation2}). Rather than detecting that
$\ab{\tilde{\beta}_{m}}\norm{\vek r_{m}}$ ceases to dominate (\ref{eqn.shifted-resid-norm-estimate}),
it is simpler to calculate $\norm{\vek r_{m}^{(\sigma)}}$ after each cycle and detect when it has ceased
to be reduced by the correction from that cycle. At this point, we cease updating $\vek x^{(\sigma)}_{m}$
for the remaining cycles.
It should be noted that $\vek w^{(\sigma)}$ can be easily accumulated.
At the beginning of Algorithm \ref{alg.GCRODR-approx-collinear}, we compute an initial value of $\vek w^{(\sigma)}$
according to (\ref{eqn.initial-resids-noncollinear}). At Line \ref{alg.line.noproj} of Algorithm~\ref{alg.GCRODR-approx-collinear}, we
update $\vek w^{(\sigma)} \leftarrow \vek w^{(\sigma)} - \sigma\vek U\vek C^{\ast}\widetilde{\vek r}^{(\sigma)}$
according to (\ref{eqn.proj-pert}). At Line \ref{alg.line.approx-collin}, we
update $\vek w^{(\sigma)}~\leftarrow~\vek w^{(\sigma)}~-~\sigma\vek E\vek F(\widetilde{\vek y}_{m})_{1:k}$ according to (\ref{eqn.shifted-resid-relation2}).
Does the linear system (\ref{eqn.approx-collinear-system}) correspond to an exact
collinearity condition for some choice of deflation space?
Observe that if we write
\begin{equation}\nonumber
\vek U^{(\sigma)} = \vek U - \sigma(\vek A+\sigma\vek I)^{-1}\vek E\vek F,
\end{equation}
then we obtain an exact Arnoldi-like relation
\begin{equation}\label{eqn.new-arnoldi-rel}
(\vek A+\sigma\vek I)\widehat{\vek V}_{m}^{(\sigma)} = \widehat{\vek W}_{m+1}\widetilde{\vek G}_{m}^{(\sigma)},
\end{equation}
where $\widehat{\vek V}_{m}^{(\sigma)} = \begin{bmatrix} \vek U^{(\sigma)} & \vek V_{m-k}\end{bmatrix}$.
If we select $\vek x^{(\sigma)}_{m}\in\vek x_{0}^{(\sigma)}+\CR(\widehat{\vek V}_{m}^{(\sigma)})$ and enforce the collinearity condition $\vek r_{m}^{(\sigma)} = \beta_{m}\vek r_{m}$,
then we see that (\ref{eqn.approx-collinear-system}) is the exact collinearity equation which must be solved to obtain the collinear residual. Thus, the failure of the approximate collinearity condition,
due to singularity of (\ref{eqn.approx-collinear-system}), corresponds to the nonexistence
of an exactly collinear residual for the shifted system
over a different augmented subspace (which is unavailable in practice). | {"config": "arxiv", "file": "1301.2650/approx-collinear-correction.tex"} |
\newcommand{\Lip}{M}
\section{The fine grids settings}
\label{sec:finegrid-mirror}
Particularly challenging settings correspond to cases where the columns of $A$ are highly correlated.
This is a typical situation for inverse problems in imaging sciences, and in particular deconvolution-type problems~\cite{candes2014towards,chizat2021convergence}.
In these settings, $A$ arises from the discretization of some continuous operator, and the dimension $n$ grows as the grid refines. For the sake of concreteness, we consider an ideal low-pass filter in dimension $d$ (for instance $d=2$ for images), which is equivalent to the computation of low Fourier frequencies, up to some cut-off frequency $p$. The rows $A_k \in \RR^n$ of $A$ are indexed by $k=(k_i)_{i=1}^d\in [p]^d \eqdef \ens{0,\ldots, p}^d$,
\begin{equation}\label{eq:fouriersys}
A_k = \phi\pa{\theta_k}, \qwhereq \theta_k\eqdef \frac{1}{p} (k_i)_{i=1}^d,\; \phi(\theta) \eqdef \frac{1}{m^{d/2}} \pa{e^{2\pi \sqrt{-1} \dotp{\theta}{ \ell}}}_{\ell\in [m/2]^d},
\end{equation}
where $[m/2]\eqdef\ens{-\frac{m}{2},\ldots, \frac{m}{2}}$, and so, $A$ corresponds to the Fourier operator discretized on a uniform grid on $[0,1]^d$.
To better cope with the ill-conditioning of the resulting optimization problem, it is possible to use a descent method with respect to some adapted metric. This can be conveniently achieved using the so-called mirror descent / Bregman proximal descent scheme, which we review below in Section \ref{sec:mirror_overview}, since it is closely linked to the Hadamard parameterization (as exposed in Section~\ref{sec:hypentropy-mirror}).
As discussed below, in the mirror descent scheme,
the usual $\ell^2$ proximal gradient descent is retrieved when using a squared Euclidean entropy function. This Euclidean scheme suffers from an exponential dependency on $d$ in the convergence rate. Using non-quadratic entropy functions (such as the so-called hyperbolic entropy), together with a dimension-dependent parameter tuning, leads in sharp contrast to dimension independent rates \cite{chizat2021convergence}.
After this review of mirror descent, we then analyse in Section~\ref{sec:hadamard-finegrid} the performance of gradient descent on the Hadamard parameterized function $G(u,v)$ on the case of the Lasso. The key observation is that the Lipschitz constant of $G$ is independent of the grid size $n$ and hence, one can derive dimension-free convergence rates on the gradient. Moreover, we draw in Section~\ref{sec:hypentropy-mirror} connections to mirror descent by showing that the continuous time limit (as the gradient descent stepsize tends to 0) corresponds to the mirror descent ODE with a hyperbolic entropy map whose parameter changes with time.
\subsection{Overview of mirror descent}\label{sec:mirror_overview}
We consider a structured optimization problem of the form
\begin{equation}\label{eq-Phi-def}
\min_{x\in\RR^n} \Phi(x)\eqdef R(x) + F(x)
\end{equation}
where $R:\RR^n \to [0,\infty]$ is a (nonsmooth) convex function and $F:\RR^n \to \RR$ is assumed to be convex and Lipschitz continuous on a closed convex set $\Xx\supset \mathrm{dom}(R)$ with
\begin{equation}\label{eq:mirror_lip_assumption}
\norm{\nabla F(x) - \nabla F(x')}_{\Xx^*} \leq \Lip \norm{x-x'}_{\Xx}
\end{equation}
where $\norm{\cdot}_{\Xx}$ is some norm on $\Xx$ and $\norm{\cdot}_{\Xx^*}$ is the dual norm.
This includes in particular sparsity regularized problems of the form~\eqref{eq:gen-tv}.
A natural algorithm to consider is the Bregman proximal gradient descent method, of which the celebrated iterative soft thresholding algorithm is a special case. In this section, we provide a brief overview of this method and the associated convergence results.
Given a strictly convex function (called an entropy function) $\eta:\Ee\to [-\infty,\infty)$ that is differentiable on an open set $\Ee\supset \mathrm{int}(\Xx)$, its associated Bregman divergence is defined to be
\begin{equation}
D_{\eta}(a,b)\eqdef \eta(a) - \eta(b) - \dotp{\eta'(b)}{a-b}.
\end{equation}
By possibly rescaling $\eta$, assume that
\begin{equation}\label{eq:pinkser}
D_\eta(a,b) \geq \frac12 \norm{a-b}_{\Xx}^2.
\end{equation}
The Bregman proximal gradient descent method (BPGD) \cite{tseng2010approximation} is
\begin{equation}\label{BPG}
x_{k+1} = \argmin_{x} F(x_k) +\nabla F(x_k)^\top (x - x_k) +R(x) + \frac{\Lip}{2} D_{\eta}(x,x_k),
\end{equation}
which corresponds to taking constant stepsize $1/\Lip$.
\begin{rem}
The case of $R = 0$ corresponds to the mirror descent method, this dates back to \cite{blair1985problem,alber1993metric} and has in more recent years been revitalised by \cite{beck2003mirror}. The proximal version given here is due to \cite{tseng2010approximation}.
\end{rem}
It is shown in \cite{tseng2010approximation} that this is a descent method with $\Phi(x_{k+1}) \leq\Phi(x_k)$ and for any $x\in \mathrm{dom}(R)$,
\begin{equation}\label{eq:mirror_tseng}
\Phi(x_k) - \Phi(x) \leq \frac{1}{k} \Lip D_\eta(x,x_0).
\end{equation}
\subsection{The Lasso ($\ell_1$) special case}
\label{sec:l1setting}
The BPGD algorithm~\eqref{BPG} is mainly interesting when the update step (the so-called proximal operator associated to $R$) can be computed in closed form. This is not the case for an arbitrary operator $L$, and we thus focus on the setting $L=\Id$. For the sake of simplicity, we also consider the case where there is no group structure (Lasso), so that $R(x) = \norm{x}_1$. The most natural norm in which to perform the convergence analysis is $\norm{\cdot}_\Xx = \norm{\cdot}_1$, so that
$\norm{\nabla F(x) - \nabla F(x')}_\infty \leq \Lip_1 \norm{x-x'}_1$.
For this choice of $R$, \eqref{BPG} can be rewritten as
\begin{equation}\label{eq:BPG_l1}
\nabla \eta(x_{k+1}) =\Tt_{\lambda\tau}\pa{ \nabla \eta(x_k) - \frac{\tau}{n} \nabla F(x_k)},
\end{equation}
where $\Tt_\tau(z) = \max(\abs{z}-\tau,0)\odot \sign(z)$ is the soft thresholding operator.
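When $\eta$ is the (scaled) squared Euclidean norm, $\nabla\eta$ is the identity up to scaling, and the update above reduces to the classical iterative soft-thresholding step (ISTA). The following self-contained sketch (illustrative only; random data, stepsize $\tau = 1/\norm{A}_2^2$, and the $1/n$ scaling of the display above omitted) runs it on a small Lasso instance:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[3], x_true[17] = 1.0, -0.5
b = A @ x_true
lam = 0.1

def soft(z, t):
    """Soft-thresholding operator T_t(z) = max(|z|-t, 0) * sign(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def objective(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)

tau = 1.0 / np.linalg.norm(A, 2) ** 2   # stepsize 1/L with L = ||A||_2^2
x = np.zeros(n)
start = objective(x)
for _ in range(500):
    x = soft(x - tau * (A.T @ (A @ x - b)), lam * tau)
final = objective(x)
```

Since the method is a descent scheme for this stepsize, `final` is well below `start`.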
Let us now single out notable choices of entropy functions, in order to particularize the convergence bound~\eqref{eq:mirror_tseng}:
\begin{itemize}
\item \textit{The quadratic entropy:} observe that $n\norm{x-x'}^2 \geq \norm{x-x'}_1^2$, so \eqref{eq:pinkser} holds by choosing $\eta(x) = \frac{n}{2}\norm{x}^2$. For $\norm{x}_1\leq 1$ and $ \norm{x_0}_1\leq 1$, we have $D(x,x_0) \leq n \norm{x}^2 +n \norm{x_0}^2 \leq 2n$. The error bound \eqref{eq:mirror_tseng} is therefore $\Oo(\frac{n\Lip_1}{k})$.
\item \textit{The hyperbolic entropy:} introduced in \cite{ghai2020exponentiated}, it is defined for $c>0$ by
\begin{equation}\label{eq:hyp_ent_fn}
\eta_c(s) = s\cdot \mathrm{arcsinh}(s/c) - \sqrt{s^2 + c^2} + c,
\end{equation}
so that $
\eta_c'(s) = \mathrm{arcsinh}(s/c) \qandq \eta_c''(s) = \frac{1}{\sqrt{s^2 +c^2}}.
$
It is shown in \cite{ghai2020exponentiated} that $\eta_c$ satisfies
$
D_\eta(x,x') \geq \frac{1+cn}{2} \norm{x-x'}_1^2
$.
In particular, \eqref{eq:pinkser} holds by choosing $c = \frac{1}{n}$ and rescaling $\eta$ by $\frac12$. On the other hand, for $\norm{x}_1,\norm{x'}_1\leq 1$,
$D_\eta(x,x') = \Oo(\log(n)).
$
The error bound \eqref{eq:mirror_tseng} is therefore $\Oo(\frac{\log(n) \Lip_1}{k})$.
\end{itemize}
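As a quick numerical check of the stated derivative formulas for the hyperbolic entropy (illustrative; an arbitrary sample point $s$ and parameter $c$, compared against central finite differences):

```python
import numpy as np

c, s, h = 0.25, 1.3, 1e-5

def eta(t):
    """Hyperbolic entropy eta_c(t)."""
    return t * np.arcsinh(t / c) - np.sqrt(t**2 + c**2) + c

# central finite differences for the first and second derivative
d1 = (eta(s + h) - eta(s - h)) / (2 * h)
d2 = (eta(s + h) - 2 * eta(s) + eta(s - h)) / h**2
err1 = abs(d1 - np.arcsinh(s / c))
err2 = abs(d2 - 1.0 / np.sqrt(s**2 + c**2))
```

Both errors are at the level of finite-difference accuracy, confirming $\eta_c'(s) = \mathrm{arcsinh}(s/c)$ and $\eta_c''(s) = (s^2+c^2)^{-1/2}$.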
\paragraph{Grid-free convergence rates}
The above results show that the error bound \eqref{eq:mirror_tseng} in general has a dependency on $n$, either through $\Lip$ or through $D_\eta(x,x_0)$.
This is thus unable to cope with very fine grids, and the analysis breaks in the ``continuous'' (often called off-the-grid) setting where discrete vectors with bounded $\ell^1$ are replaced by measures with bounded total variation~\cite{bredies2013inverse,candes2014towards}.
To address this issue, a more refined analysis of BPGD is carried out in \cite{chizat2021convergence}, and this led to the first grid-free convergence rates for BPGD. In particular, it is shown that the objective for the quadratic entropy converges at rate $\Oo(k^{-2/(d+2)})$, independent of grid size $n$ but dependent on the underlying dimension $d$.
In contrast, BPGD with hyperbolic entropy satisfies $\Phi(x_k) - \min_x \Phi(x) = \Oo(d\log(k)/k)$. | {"config": "arxiv", "file": "2205.01385/sections/mirror_litreview.tex"} |
TITLE: Completeness of a finite dimensional linear subspace of X
QUESTION [0 upvotes]: How can I show that a finite dimensional linear subspace F of an arbitrary normed space X is complete, hence closed?
REPLY [2 votes]: Let $(F, ||\cdot||)$ be a finite dimensional normed space over $\mathbb{F}$ (where $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$). Choose some linear isomorphism $T \colon \mathbb{F}^n \rightarrow F$ and define a norm $||\cdot||_1$ on $\mathbb{F}^n$ by $||v||_1 := ||Tv||$. Then $T \colon (\mathbb{F}^n, ||\cdot||_1) \rightarrow (F,||\cdot||)$ becomes an isometry. Since all the norms on $\mathbb{F}^n$ are equivalent and $(\mathbb{F}^n, ||\cdot||_2)$ is complete (here, $||\cdot||_2$ is the standard norm on $\mathbb{F}^n$), this implies that $(\mathbb{F}^n, ||\cdot||_1)$ is complete which implies in turn that $(F, ||\cdot||)$ is complete. | {"set_name": "stack_exchange", "score": 0, "question_id": 1628103} |
TITLE: $\ln x \leq x-1$ for all $x\ge 1$
QUESTION [0 upvotes]: Let $x \geq 1$. Show that $\ln x \leq x-1$.
I used the mean value theorem to show that $\ln x \leq x-1$.
But then they ask us to deduce that $e^x \geq x+1$. I know that $\ln x$ and $e^x$ are inverse functions, but I am not able to solve it.
REPLY [5 votes]: Let $y\geq 0$. Then $e^y\geq 1$, and from what you already proved, it follows
$$y= \ln (e^y)\leq e^y-1 \quad \Longrightarrow y+1 \leq e^y\,.$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 4011004} |
TITLE: Find the eigenspaces of $\;\;\pmatrix{7 & 0 & 4 \\ 0 & 10 & 0 \\ 4 & 0 & -8}$
QUESTION [0 upvotes]: Let $$ A = \pmatrix{7 & 0 & 4 \\ 0 & 10 & 0 \\ 4 & 0 & -8},$$
Using the fact that $(A-\lambda I)v=0$, where $\lambda \in \{-9, 10, 8\},$
I have found
The eigenspace for $\lambda=-9$ to be the set of non-zero vectors $(\frac{1}{4}k,0,k)$, where $k \ne 0$,
The eigenspace for $\lambda=8$ to be the set of non-zero vectors $(4k,0,k)$, where $k \ne 0$,
The eigenspace for $\lambda=10$ to be the set of non-zero vectors $(0,k,0)$, where $k \ne 0$.
I'm not totally sure about the last eigenspace, as in solving the equations I obtained $x=\frac{4}{3}z$ and $x=\frac{9}{2}z$, which hold simultaneously if and only if $x=z=0$.
Are these the correct eigenspaces? Is my last reasoning sound?
REPLY [1 votes]: Your eigenspace for $\lambda=-9$ is wrong.
You want
$$-9\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}7x+4z\\10y\\4x-8z\end{pmatrix}$$
so $y=0$, $-9x=7x+4z$ and $-9z=4x-8z$ giving $-4x=z$, therefore it should be $(-k/4,0,k)$.
Your reasoning for the other two is right.
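A quick NumPy sanity check of the corrected eigenpairs (taking one representative from each eigenspace, e.g. $k=4$ gives $(-1,0,4)$ for $\lambda=-9$):

```python
import numpy as np

A = np.array([[7, 0, 4], [0, 10, 0], [4, 0, -8]], dtype=float)
# eigenvalue -> representative eigenvector from each eigenspace
pairs = {-9: [-1, 0, 4], 8: [4, 0, 1], 10: [0, 1, 0]}
max_err = max(
    np.linalg.norm(A @ np.array(v, dtype=float) - lam * np.array(v, dtype=float))
    for lam, v in pairs.items()
)
```

All three satisfy $Av = \lambda v$ exactly.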
REPLY [1 votes]: We can solve this using Wolfram Alpha to compute the reduced row echelon form of the matrices of the form $A-\lambda I$.
$\lambda=-9$ gives us $(-x, 0, 4x)$. (The reduced row echelon form of $A+9I$ shows why.)
$\lambda=8$ gives us $(4x, 0, x)$.
$\lambda=10$ gives us $(0, x, 0)$. | {"set_name": "stack_exchange", "score": 0, "question_id": 1770241} |
TITLE: Propositional Dynamic Logic Compactness
QUESTION [1 upvotes]: I know that Propositional Dynamic Logic is NOT compact, but I don't exactly know how to show that. I know that the given set:
$$ \def\<#1>{\langle#1\rangle}\left\{\<a^*>b\right\} \cup \left\{¬b,\;¬\<a>b,\;¬\<a^2>b,\;\ldots\right\} $$
is finitely satisfiable, but not satisfiable. Can anyone help me prove that?
PS: $\<a>$ stand for diamond and $a^n$ says that the program $a$ is iterated $n$ times; here $a$ represents a program and $b$ an assertion.
REPLY [1 votes]: $\def\<#1>{\langle#1\rangle}$If you take any finite subset of the $\neg\<a^n>b$ sentences, you can find the maximum $n$ in it. Let it be $m$. Then a model with $m+2$ states exists that satisfies that finite subset together with $\<a^*>b$. If PDL enjoyed compactness, then the whole set would be satisfiable, but it obviously isn't: a model for the whole set of sentences satisfies $\<a^*>b$, which requires $\<a^\ell>b$ to be true for some $\ell$, contradicting $\neg\<a^\ell>b$. Hence compactness does not hold for PDL.
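To make the $m+2$-state model concrete, here is a tiny sketch (illustrative only, with $a$ interpreted as the deterministic successor program on a chain) checking the finite subset with maximum index $m=5$:

```python
# Chain model with m + 2 states 0, 1, ..., m+1: the program `a` moves one
# step to the right, and b holds only at the final state.
m = 5
num_states = m + 2
b = [s == num_states - 1 for s in range(num_states)]

def diamond_a_n(n, state):
    """Truth of <a^n> b at `state`: b holds after exactly n a-steps."""
    return state + n < num_states and b[state + n]

# At state 0: <a*> b holds (take n = m+1), while each ¬<a^n> b with n <= m holds.
star_holds = any(diamond_a_n(n, 0) for n in range(num_states))
all_negations_hold = all(not diamond_a_n(n, 0) for n in range(m + 1))
```

So every finite subset is satisfiable at state 0 of such a chain, while the full infinite set cannot be.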
TITLE: Normal subgroups and $xH=Hx$ in Lang's Algebra
QUESTION [1 upvotes]: In Lang's Algebra (3rd Ed.) §3 p.13 he writes:
Let $f:G \to G'$ be a group-homomorphism, and let $H$ be its kernel. If $x$ is an element of G, then $xH=Hx$, because both are equal to $f^{-1} (f(x))$.
I can't understand his justification of this statement.
First, how do I know for sure that $f$ has an inverse? Second, it is easy to see that $f(xH)=f(Hx)$, since $H$ is $f$'s kernel, but this isn't a reason to conclude that $xH=Hx$. Is he implicitly assuming that $f$ is injective by any chance?
REPLY [3 votes]: No, he does not assume $f$'s infectivity, by $f^{-1}(\cdot)$ he denotes the inverse image operation, given for subsets $B \subseteq G'$ by
$$ f^{-1}(B) := \{x \in G: f(x) \in B\} $$
For singletons, e.g. $\{f(x)\}$, he seems to omit the curly brackets, writing $f^{-1}(f(x))$ instead of the correct $f^{-1}(\{f(x)\})$. Now note that
$$ y \in f^{-1}(f(x)) \iff f(y) \in \{f(x)\} \iff f(y) = f(x) $$
which happens exactly iff $x^{-1}y \in H$, that is, $y \in xH$. Hence $f^{-1}(f(x))= xH$. Along the same lines, starting from $yx^{-1} \in H$, one sees that $f^{-1}(f(x)) = Hx$.
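A concrete check in a small non-abelian group — $S_3$ with the sign homomorphism onto $\{\pm 1\}$, so $H = A_3$ — confirms that $xH$, $Hx$, and $f^{-1}(\{f(x)\})$ coincide for every $x$:

```python
from itertools import permutations

G = list(permutations(range(3)))        # S_3 as tuples

def compose(p, q):
    """(p * q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def sign(p):
    """Sign homomorphism S_3 -> {1, -1} via inversion count."""
    inversions = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    return -1 if inversions % 2 else 1

H = [h for h in G if sign(h) == 1]      # kernel of sign, i.e. A_3
cosets_match = all(
    {compose(x, h) for h in H}                      # xH
    == {compose(h, x) for h in H}                   # Hx
    == {y for y in G if sign(y) == sign(x)}         # f^{-1}({f(x)})
    for x in G
)
```

All three sets agree for every $x$, exactly as Lang's remark asserts.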
REPLY [1 votes]: $f^{-1}(f(x))$ is talking about the elements in the pre-image of $f(x)$, so it is all the elements of $G$ which get mapped to $f(x)$. It is not talking about the inverse function. In fact $f$ is injective if and only if the kernel is only the identity.
Think about $f(x)=x^2$ just as a real valued function. $f^{-1}(f(2))=f^{-1}(2^2)=f^{-1}(4)=\{-2,2 \}$ | {"set_name": "stack_exchange", "score": 1, "question_id": 1523703} |