Suri Cruise, Tom Cruise Spend Thanksgiving Together in London
9th Wonder & Big Daddy Kane Inducted Into North Carolina Music Hall Of Fame The acclaimed producer and rapper join other notable artists like Nina Simone and John Coltrane. 9th Wonder and Big Daddy Kane have been inducted into North Carolina's Music Hall of Fame. According to a report from the Winston-Salem Journal, Wonder and Kane were inducted Thursday (October 17). Speaking to the Journal, Wonder spoke of the importance of his parents being present during the ceremony. "You never know how long you're going to have your parents. For them to see me go through this, coming from where they came from on the east side of Winston-Salem, it means a lot," he said. He also spoke of how he talked to his parents about quitting college back in 2000. "It was a hard conversation, but she said, 'It's your life,'" he recalled. "And here we are." Although born in New York City, Kane moved to Raleigh, North Carolina, back in 2000. "I like the laid-back feel here. I can do without the drama. Raleigh feels like home. It has before I even moved here. During the late '90s I just fell in love with this city," he once said when explaining why he moved. In related news, Little Brother recently explained why Wonder wasn't part of their latest project, May the Lord Watch. Source: Winston-Salem Journal
Electricity generation, transmission, and distribution equipment is designed to provide safe, efficient, and reliable power when properly maintained. But years of continued use and exposure to the elements can adversely affect equipment reliability, which can result in unplanned outages . . . and associated financial losses. 16th Era offers Asset Management Programs for hydro, wind and solar generation, data centers, utilities, government facilities, universities, and other major power users. These services reflect 16th Era’s expertise with medium- and high-voltage electrical equipment and our proven systematic approach to maintenance. The 16th Era Asset Management Program will reduce the risk of equipment failure, extend the life of your equipment, and minimize the occurrence of costly unplanned outages. Our broad range of asset management services allows 16th Era to be your single resource for managing, monitoring, and maintaining your critical assets.
Toronto Raptors Best and Worst Seasons by Winning Percentage: Raptors regular-season records sorted from best to worst, with details of games won and lost and stats like points, rebounds, assists, steals and blocks. [Table: Toronto Raptors seasons from best to worst by winning %; legend marks NBA championship seasons and the season in progress]
From its founding, Pella has always been about bringing you innovative windows and doors that help make life easier.
1925 – Pete and Lucille Kuyper invest in the Rolscreen Company to manufacture Rolscreen® window screens that roll up and down like a shade.
1934 – The Rolscreen Company introduces Pella Venetian Blinds.
1937 – The first Pella window makes its debut: the deluxe casement window with a steel frame, wood interior, divided windowpanes, exterior wash feature and removable insulating glass panel.
1958 – Introduction of the first roomside removable wood grilles (windowpane dividers).
1960 – Pella wood sliding glass doors become an important design element in American homes.
1964 – Pella invents the first double-hung window with a sash that pivots so exterior glass can be washed from inside the home.
1966 – Pella introduces Slimshade® blinds – the world's first window with blinds tucked between-the-glass.
1970 – The company introduces wood windows with low-maintenance aluminum-clad exteriors.
1985 – The first Pella Window Store opens, redefining how customers shop for windows and doors.
1990 – Pella Architect Series® wood windows and patio doors are introduced with patented Integral Light Technology® grilles that combine the realistic look of individual window panes with superior energy efficiency.
1992 – The Rolscreen Company becomes Pella Corporation. Designer Series® wood windows and patio doors – featuring between-the-glass blinds, shades and grilles – are introduced.
1992 – Pella 450 Series wood windows and patio doors debut, offering Pella quality at a value price.
1993 – In our commitment to people, products and customers, Pella launches a companywide program to continuously improve its work processes and efficiencies.
1995 – Pella Precision Fit® pocket replacement windows install quickly and easily with little mess.
2000 – The first of six consecutive years that Fortune magazine recognizes Pella as one of the 100 Best Companies To Work For.
2000 – Pella entry doors are introduced, featuring another Pella breakthrough – Jamb-On-Sill design.
2000 – Pella creates the first casement window crank with a fold-away handle.
2001 – Pella's new HurricaneShield® windows and doors meet tough standards for impact resistance.
2002 – Pella debuts the sliding glass patio door with Rolscreen retractable screen.
2003 – Pella® Impervia® windows and patio doors are launched – made with Duracast® fiberglass composite.
2003 – The company introduces its first vinyl window and door line.
2003 – Pella improves its between-the-glass blinds and shades by making them cordless so they're safer for children and pets.
2004 – Pella premieres its exclusive VividView® screen, which is virtually invisible and dramatically increases light and airflow.
2004 – Pella storm doors with Rolscreen retractable screens are introduced.
2004 – Pella offers a new Architect Series double-hung window with a historically correct profile, authentic spoon-style hardware, hidden sash tilt mechanisms and wood jambliners.
2005 – Pella's revolutionary new Designer Series windows and patio doors feature exclusive snap-in between-the-glass window fashions that are easy to change.
2005 – The company launches its Express Install storm doors that can be installed in as little as 60 minutes.
2007 – Pella acquires EFCO Corporation – one of the largest manufacturers of commercial aluminum fenestration systems in the United States.
2009 – Pella is the first to introduce a new high-altitude insulating glass option with argon.
2011 – Pella launches its patent-pending PerformaSeal® Design on Architect Series in-swing hinged patio doors featuring performance levels up to PG55 plus a lower sill height of 1.5 inches.
2011 – Pella launches Pella® 350 Series vinyl windows and patio doors with premium aesthetics, exceptional energy efficiency, plus blinds- or shades-between-the-glass on sliding patio doors.
2011 – Pella introduces three new entry door lines – Architect Series® featuring premium grained fiberglass or wood panels; Pella fiberglass panels; and Encompass by Pella® fiberglass or steel panels. Both the Architect Series and Pella entry doors feature the new PerformaSeal design – the first in the industry to be rated for air and water performance.
2014 – Pella® 250 Series Vinyl Windows are introduced.
2015 – Pella launches Pella® Insynctive® technology – a family of smart window and door products.
2015 – Pella acquires Grabill Windows and Doors – a premier designer and manufacturer of custom luxury windows and doors for elite properties.
2015 – Pella expands its commercial offering with Architect Series® Monumental Hung Windows.
2016 – Pella introduces expansive Scenescape™ Patio Doors.
2016 – The Vibrancy Collection comes on the market – a trend-setting collection of colorful entry-door paint finishes to choose from.
2017 – Pella offers new hardware styles and finishes for wood windows and patio doors, providing more design choices than ever before.
2017 – Pella introduces Architect Series Reserve™ – our finest line of wood windows and patio doors – and adds Architect Series Contemporary windows and patio doors with sleek sightlines and minimalist design.
2018 – Pella revolutionizes single- and double-hung window screens with the patent-pending Integrated Rolscreen® retractable screen that seamlessly appears when you open the window and rolls out of sight when you close it.
TITLE: How to simplify $\left(\frac{3}{2}\right)^0 + \left(\frac{3}{2}\right)^1 + \dots + \left(\frac{3}{2}\right)^{n-1}$? QUESTION [0 upvotes]: I have a recurrence relation with a geometric series within it. I want to simplify the series to something more useful. $$\left(\frac{3}{2}\right)^0 + \left(\frac{3}{2}\right)^1 + \left(\frac{3}{2}\right)^2 + \dots + \left(\frac{3}{2}\right)^{n-2} + \left(\frac{3}{2}\right)^{n-1}$$ Some notes I am looking at suggest it can be simplified to: $$\begin{align} &= \frac{\left(\frac{3}{2}\right)^n - 1}{\left(\frac{3}{2}\right) - 1} \\ &= 2\left(\left(\frac{3}{2}\right)^n - 1\right) \\ &= 2\left(\frac{3}{2}\right)^n - 2 \end{align}$$ I'm a bit lost here. How was this answer reached, specifically the first and second simplifications? The third is trivial algebra; I'm just unsure of the first two. REPLY [1 votes]: $1 + a + a^2 + a^3 + \cdots + a^n = \frac{a^{n+1}-1}{a-1}$ whenever $a \ne 1$. That's a formula you should know and love.[1] So plug in $a = \frac 32$ (and replace $n$ and $n+1$ with $n-1$ and $n$, since your last exponent is $n-1$). And for the second: $\frac 32 - 1 = \frac 12$, so $\frac{\left(\frac 32\right)^n - 1}{\frac 32 - 1} = \frac{\left(\frac 32\right)^n - 1}{\frac 12} = 2\left(\left(\frac 32\right)^n - 1\right)$. ===== [1] $(a-1)(1 + a + a^2 + \cdots + a^n) = (a + a^2 + \cdots + a^{n+1}) - (1 + a + a^2 + \cdots + a^n) = a^{n+1} - 1$: every power from $a$ to $a^n$ cancels, leaving only $a^{n+1}$ and $-1$. So if $a \ne 1$ then $a - 1 \ne 0$ and we can divide both sides by $a - 1$ to get $1 + a + a^2 + \cdots + a^n = \frac{a^{n+1} - 1}{a-1}$.
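A quick sanity check of the closed form is easy to run. Here is a minimal sketch in Python (the function names `geometric_sum` and `closed_form` are mine, not from the thread), using exact rational arithmetic so floating-point error cannot mask a mistake:

```python
from fractions import Fraction

def geometric_sum(ratio, n):
    """Sum ratio**0 + ratio**1 + ... + ratio**(n-1) term by term."""
    return sum(ratio**k for k in range(n))

def closed_form(n):
    """2*(3/2)**n - 2, the simplified expression from the notes."""
    r = Fraction(3, 2)
    return 2 * r**n - 2

# The two expressions agree exactly for every n tested.
for n in range(1, 20):
    assert geometric_sum(Fraction(3, 2), n) == closed_form(n)
```

Using `Fraction(3, 2)` rather than the float `1.5` is the key design choice here: equality of exact rationals is a genuine identity check, not an approximate one.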
TITLE: Surjectivity of a map between a module and its double dual QUESTION [5 upvotes]: Let $R = \mathbb{Z}$ be our base ring. I am trying to show that for a countable direct sum of $\mathbb{Z}$-modules there is an isomorphism between it and its double dual. I am stuck on the part about surjectivity, and I am a little confused because according to Dummit and Foote you can only get surjectivity of the map if the module is projective and finitely generated. Let me explain the problem in detail: Let $P = \oplus_{i \in \mathbb{N}} A_i$ where each $A_i = \mathbb{Z}$. How do we show the map $c_P : P \rightarrow P^{**}$ given by $x \mapsto (y^{*} \mapsto \left< x, y^{*} \right>)$ is surjective? I know how to compute the dual $\mathbb{Z}^{*} = Hom_{\mathbb{Z}}(\mathbb{Z},\mathbb{Z})$ by showing the mapping of each $y^{*} \in \mathbb{Z}^{*}$ given by $y^{*} \mapsto y^{*}(1)$ is an isomorphism, so $\mathbb{Z}^{*} \cong \mathbb{Z}$. Now since the dual of a direct sum is the direct product of the corresponding duals, we have $P^{*} \cong \prod_{i \in \mathbb{N}} \mathbb{Z} \cong \mathbb{Z} \times \mathbb{Z} \times \ldots $ From here I don't know what to do to prove the map $c_P$ is surjective. I am confused about the statements I have read saying we need the module to be projective and finitely generated. Is it just the fact that the dual of a direct product should be the direct sum of the duals? REPLY [5 votes]: As stated by Matt E in the comments, the question has been asked (and answered) on MathOverflow: Is it true that, as Z-modules, the polynomial ring and the power series ring over integers are dual to each other?. Hailong Dao's answer links to a text by Grzegorz Bobinski, based on a talk by Lutz Hille, proving the result for any non-local principal ideal domain. Here is the pdf file, and here is an html page it can be accessed from. Here is a mild simplification of the proof.
Let $A$ be a principal ideal domain, let $(p)$ and $(q)$ be two distinct maximal ideals, let $$A^{\mathbb N}$$ be the $A$-module formed by all the sequences in $A$, and let $$A^{(\mathbb N)}$$ be the submodule consisting of the finitely supported elements of $A^{\mathbb N}$. We claim: The canonical map from $A^{(\mathbb N)}$ into its double dual is an isomorphism. Let $f:A^{\mathbb N}\to A$ be a nonzero $A$-linear map. Define $a\in A^{\mathbb N}$ by the condition $$ f(x)=\sum_i\ a_i\,x_i\quad\forall\ x\in A^{(\mathbb N)}. $$ It suffices to show: (1) $a\not=0$, (2) $a\in A^{(\mathbb N)}$. Proof of (1). Let $x$ be an element of $A^{\mathbb N}$ satisfying $f(x)\neq0$. For each $i$ in $\mathbb N$ there are $u_i,v_i$ in $A$ such that $$p^i\,u_i+q^i\,v_i=x_i.$$ Then $a=0$ would imply $$ f((p^i)_{i\in\mathbb N})=0=f((q^i)_{i\in\mathbb N}), $$ and thus $f(x)=0$. Proof of (2). Suppose by contradiction that $a$ is not in $A^{(\mathbb N)}$. We can assume $a_i\neq0$ for all $i$. Write $$a_i=p^{r(i)}b_i$$ with $b_i$ prime to $p$. We can also assume $r(i)\le r(i+1)$ for all $i$, and $r(0)=0$. Put $$x_0:=p^{1+r(1)}.$$ Let $y$ be in $A^{\mathbb N}$, and set $$x_i:=p^i\,q\,y_i\quad\forall\ i > 0.$$ It is easy to see that $y$ can be chosen so that $$ p^{n+r(n)}\ |\ a_0\,x_0+\cdots+a_{n-1}\,x_{n-1}\quad\forall\ n > 0. $$ Then $q$ doesn't divide $f(x)$, but $p^n$ does for all $n$, contradiction.
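The step "for each $i$ there are coefficients with $p^i\,\cdot + q^i\,\cdot = x_i$" is just Bézout's identity, since $p^i$ and $q^i$ are coprime for distinct primes $p$, $q$. A small sketch in Python for the concrete case $A = \mathbb{Z}$, $p = 2$, $q = 3$ (the helper names `extended_gcd` and `split`, and the choice of primes, are illustrative assumptions, not part of the proof):

```python
def extended_gcd(a, b):
    """Return (g, u, v) with a*u + b*v == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def split(x, i, p=2, q=3):
    """Write x = p**i * u + q**i * v, returning (u, v)."""
    g, u, v = extended_gcd(p**i, q**i)
    assert g == 1  # p**i and q**i are coprime
    return u * x, v * x

# Every integer splits this way, for every exponent i.
for i in range(1, 6):
    for x in range(-5, 6):
        u, v = split(x, i)
        assert 2**i * u + 3**i * v == x
```

Scaling a Bézout relation $p^i u + q^i v = 1$ by $x$ gives the required decomposition; the proof above uses exactly this, with $A$ an arbitrary PID in place of $\mathbb{Z}$.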
\begin{document} \allowdisplaybreaks \renewcommand{\PaperNumber}{085} \FirstPageHeading \ShortArticleName{On the Projective Algebra of Randers Metrics of Constant Flag Curvature} \ArticleName{On the Projective Algebra of Randers Metrics\\ of Constant Flag Curvature} \Author{Mehdi RAFIE-RAD~$^{\dag\ddag}$ and Bahman REZAEI~$^\S$} \AuthorNameForHeading{M.~Rafie-Rad and B.~Rezaei} \Address{$^\dag$~School of Mathematics, Institute for Research in Fundamental Sciences (IPM),\\ \hphantom{$^\dag$}~P.O. Box 19395-5746, Tehran, Iran} \Address{$^\ddag$~Department of Mathematics, Faculty of Mathematical Sciences, University of Mazandaran,\\ \hphantom{$^\ddag$}~P.O. Box 47416-1467, Babolsar, Iran} \EmailD{\href{mailto:rafie-rad@umz.ac.ir}{rafie-rad@umz.ac.ir}, \href{mailto:m.rafiei.rad@gmail.com}{m.rafiei.rad@gmail.com}} \Address{$^\S$~Department of Mathematics, Faculty of Sciences, University of Urmia, Urmia, Iran} \EmailD{\href{mailto:b.rezaei@urmia.ac.ir}{b.rezaei@urmia.ac.ir}} \ArticleDates{Received February 26, 2011, in f\/inal form August 20, 2011; Published online August 31, 2011} \Abstract{The collection of all projective vector f\/ields on a Finsler space $(M, F)$ is a f\/inite-dimensional Lie algebra with respect to the usual Lie bracket, called the projective algebra, denoted by $p(M,F)$; it is the Lie algebra of the projective group $P(M,F)$. The projective algebra $p(M,F=\alpha+\beta)$ of a Randers space is characterized as a certain Lie subalgebra of the projective algebra $p(M,\alpha)$. Certain subgroups of the projective group $P(M,F)$ and their invariants are studied. 
The projective algebra of Randers metrics of constant f\/lag curvature is studied and it is proved that the dimension of the projective algebra of Randers metrics of constant f\/lag curvature on a compact $n$-manifold either equals $n(n+2)$ or is at most $\frac{n(n+1)}{2}$.} \Keywords{Randers metric; constant f\/lag curvature; projective vector f\/ield; projective al\-gebra} \Classification{53C60; 53B50, 58J60} \section{Introduction} The motion of freely falling particles def\/ines a projective structure on spacetime. Mathematically speaking, this provides a projective connection or an equivalence class of symmetric af\/f\/ine connections all possessing the same unparameterized geodesic curves. This may be regarded as a~mathematical formulation of the weak principle of equivalence valid both in the Newtonian and relativistic theory of spacetime and gravity~\cite{Israel1973}. Many physical considerations require metric structures on spacetime in liaison to af\/f\/ine connections. A necessary condition for two such metric structures to have the same (unparameterized) geodesic curves is that their Weyl projective tensors are identical. The locally anisotropic space-times are studied in \cite{Stavrinos2009} from a geometrical point of view and may thus shed some light on the Weyl projective tensor. As we will see in this paper, certain subgroups of the Lorentz group may simultaneously be subgroups of the projective group of the Finsler metric $F=\sqrt{\eta_{\mu\nu}dx^\mu dx^\nu}+{\bf A}_\mu dx^\mu$, where $\eta$ and ${\bf A}$ denote the Lorentz metric and the electromagnetic potential vector of the f\/lat space-time. Some reduced forms of the Weyl projective tensor $W$ have been introduced in \cite{Akbar-Zadeh1986,NajafiTayebi2010}, which are invariant among projectively related constant curvature Finsler metrics but not identical among scalar f\/lag curvature metrics. 
Any two Finsler metrics possessing the same (unparameterized) geodesics have the same Weyl projective tensor. Studying Weyl projective vector f\/ields (i.e.\ those vector f\/ields preserving the Weyl projective tensor and also the reduced Weyl projective tensors) and projective vector f\/ields plays a leading role in obtaining projective symmetries, which provide some conservation laws in physical terms. On the other hand, there are many papers devoted to projective symmetry in metric-af\/f\/ine gravity and cosmology, see for example \cite{HallLonie2008,Barnes1993}. Randers metrics are the most popular Finsler metrics in dif\/ferential geometry and physics, simply obtained from a Riemannian metric $\alpha=\sqrt{a_{ij}(x)y^iy^j}$ and a $1$-form $\beta=b_i(x)y^i$ as $F=\alpha+\beta$; they were introduced by G.~Randers in \cite{Randers1941} in the context of general relativity. They arise naturally as the geometry of light rays in stationary spacetimes \cite{GibbonsHerdeiroWerner2009}. One may refer to \cite{BaoRobles2003,BaoShen2002,Mo2008,Robles2003} for an extensive series of results about the Einstein Randers metrics and the Randers metrics of constant f\/lag curvature. The present paper is closely related to the problem of projective relatedness of Randers metrics which is investigated in~\cite{ShenYu2008}. To avoid ambiguity, given a~Randers metric $\alpha+\beta$, the geometric objects in $(M,F)$ and $(M,\alpha)$ are denoted by the pref\/ixes ``$F$-'' and ``$\alpha$-'', respectively; for instance, an $F$-projective vector f\/ield means a~projective vector f\/ield on $(M,F)$, an $\alpha$-projective vector f\/ield means a~projective vector f\/ield on $(M,\alpha)$, an $\alpha$-Killing vector f\/ield stands for a Killing vector f\/ield on $(M,\alpha)$, etc. We use the usual notations for Randers metrics in \cite{BaoShen2002,ShenYu2008}. 
Given any vector f\/ield $V$, its complete lift to $TM_0=TM\backslash\{0\}$ is denoted by $\hat{V}$ and the Lie derivative along $\hat{V}$ is denoted by ${\cal L}_{\hat{V}}$. One may consult~\cite{Yano1957} for an extensive discussion of the theory of Lie derivatives of various geometric objects in Finsler spaces. We use the traditional notations for the so-called $(\alpha,\beta)$-metrics as in~\cite{Shen2001}, whereby $s^i_{\ \circ}=a^{ik}s_{kj}y^j$, where $s_{kj}=\frac{1}{2}\big(\frac{\partial b_k}{\partial x^j}-\frac{\partial b_j}{\partial x^k}\big)$. We characterize the projective vector f\/ields on a Randers space by proving the following theorem: \begin{theorem} \label{Liethm} Let $(M,F=\alpha+\beta)$ be a Randers space and $V$ be a vector field on $M$. $V$ is $F$-projective if and only if $V$ is $\alpha$-projective and ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=0$. \end{theorem} Determining the dimension of the projective algebra of constant curvature and Einstein spaces is of interest in physical and geometrical discussions, see \cite{Barnes1993}; interested readers may consult~\cite{Yano1957} for an extensive discussion of this f\/ield. This amounts to calculating the number of independent projective vector f\/ields and is closely related to the number of independent Killing vector f\/ields in each case. It is well known that in an $n$-dimensional Riemannian space of constant curvature the dimension of the projective algebra is $n(n+2)$, and conversely, see \cite{Barnes1993,Yano1957}. This motivates the analogous problem for a Randers space $(M,F=\alpha+\beta)$. If we have $s^i_{\ j}=0$, then the respective projective algebras $p(M,F)$ of $F$ and $p(M,\alpha)$ of $\alpha$ coincide. If, moreover, $F$ is locally projectively f\/lat, then $\alpha$ is too and hence the dimension of the projective algebra $p(M,F)$ is $n(n+2)$. Notice that our discussions are closely related to the algebra $k(M,\alpha)$ of $\alpha$-Killing vector f\/ields. 
The important case is when $s^i_{\ j}\neq0$; it uncovers a non-Riemannian feature of Finsler metrics in comparison with the analogous Riemannian case. We summarize the argument by establishing the following result: \begin{theorem} \label{admitting} Let $(M,F)$ be an $n$-dimensional $(n\geq3)$ Randers space of constant flag curvature with $M$ compact. The dimension of the projective algebra $p(M,F)$ is either $n(n+2)$ or at most equals $\frac{n(n+1)}{2}$. \end{theorem} The study of spaces admitting certain vector f\/ields has a long history in Riemannian geometry, see for example \cite{Akbar-Zadeh1978,Barnes1993,Hiramoto1980,Obata1962,Tanno1978,Tashiro1965,Yamauchi1974}. Existence of some special vector f\/ields on a Riemannian space may pertain to some global properties of the underlying Riemannian space. We prove the following result to uncover such an interaction for Randers spaces: \begin{theorem} \label{mainthm} Let $(M,F=\alpha+\beta)$ be a Randers space of vanishing ${\bf S}$-curvature and dimension $n\geq3$. If $(M,F)$ admits a non-$\alpha$-affine projective vector field $V$, then $(M,F)$ is a Berwald space. \end{theorem} \section{Preliminaries}\label{sectionP} Let $M$ be an $n$-dimensional $C^\infty$ connected manifold. $T_x M$ denotes the tangent space of~$M$ at~$x$. The tangent bundle of $M$ is the union of tangent spaces $TM:=\bigcup _{x \in M} T_x M$. We will denote the elements of $TM$ by $(x, y)$ where $y\in T_xM$. Let $TM_0 = TM\setminus \{ 0 \}.$ The natural projection $\pi: TM_0 \rightarrow M$ is given by $\pi (x,y):= x$. A~\textit{Finsler metric} on $M$ is a function $ F:TM \rightarrow [0,\infty )$ with the following properties: $(i)$~$F$~is $C^\infty$ on $TM_0$, $(ii)$~$F$~is positively 1-homogeneous on the f\/ibers of the tangent bundle $TM$, and $(iii)$~the Hessian of $F^{2}$ with elements $ g_{ij}(x,y):=\frac{1}{2}[F^2(x,y)]_{y^iy^j} $ is a positive def\/inite matrix on $TM_0$. The pair $(M,F)$ is then called a {\it Finsler space}. 
Throughout this paper, we denote a Riemannian metric by $\alpha=\sqrt{a_{ij}(x)y^iy^j}$ and a 1-form by $\beta=b_i(x)y^i$. A globally def\/ined vector f\/ield ${\bf G}$ is induced by $F$ on $TM_0$, which in a standard coordinate $(x^i,y^i)$ for $TM_0$ is given by ${\bf G}=y^i {{\partial} \over {\partial x^i}}-2G^i(x,y){{\partial} \over {\partial y^i}}$, where $G^i(x,y)$ are local functions on $TM_0$ satisfying $G^i(x,\lambda y)=\lambda^2 G^i(x,y)$, $\lambda>0$. Assume the following conventions: \[ G^i_{\ j}=\frac{\partial G^i}{\partial y^j},\qquad G^i_{\ jk}=\frac{\partial G^i_{\ j}}{\partial y^k},\qquad G^i_{\ jkl}=\frac{\partial G^i_{\ jk}}{\partial y^l}. \] Notice that the local functions $G^i_{\ jk}$ give rise to a torsion-free connection in $\pi^*TM$ called \textit{the Berwald connection}, which will be used in this paper, see~\cite{Shen2001}. The local functions $G^i_{\ j}$ def\/ine a~nonlinear connection ${\cal H}TM$ spanned by the horizontal frame $\{\frac{\delta}{\delta x^i}\}$, where $\frac{\delta}{\delta x^j}=\frac{\partial}{\partial x^j}-G^i_{\ j}\frac{\partial}{\partial y^i}$. The nonlinear connection ${\cal H}TM$ splits $TTM$ as $TTM=\ker\pi_*\oplus{\cal H}TM$, see~\cite{Shen2001}. A Finsler metric is called a {\it Berwald metric} if the $G^i_{\ jk}(x, y)$ are functions of $x$ only at every point $x\in M$; equivalently, $F$ is a Berwald metric if and only if $G^i_{\ jkl}=0$. For a Finsler metric $F$ on an $n$-dimensional manifold $M$ {\it the Busemann--Hausdorff volume form} $dV_F = \sigma_F(x) dx^1 \cdots dx^n$ is def\/ined by \[ \sigma_F(x) := \frac{\textrm{Vol} ({\mathbb B}^n(1))}{ \textrm{Vol} \{ (y^i)\in \mathbb{R}^n \; | \; F ( y^i \frac{\partial}{\partial x^i}|_x ) < 1 \} }. \] Assume $\underline{g}={\det ( g_{ij}(x,y) )}$ and def\/ine $\tau (x, y):=\ln{\sqrt{\underline{g}}\over \sigma_F(x)}$. $\tau=\tau(x,y)$ is a scalar function on~$TM_0$, which is called the {\it distortion}~\cite{Shen2001}. 
For a vector $y\in T_xM$, let $c(t)$, $-\epsilon < t <\epsilon $, denote the geodesic with $c(0)=x$ and $\dot{c}(0)=y$. The function ${\bf S}(y):= {d \over dt} [ \tau (\dot{c}(t) ) ]_{|_{t=0}}$ is called the ${\bf S}$-curvature with respect to the Busemann--Hausdorf\/f volume form. A Finsler space is said to be {\it of isotropic ${\bf S}$-curvature} if there is a function $\sigma=\sigma(x)$ def\/ined on $M$ such that ${\bf S}=(n+1)\sigma(x)F$. It is called a Finsler space {\it of constant {\bf S}-curvature} once $\sigma$ is a constant. Every Berwald space is of vanishing {\bf S}-curvature \cite{Shen2001}. The {\bf E}-curvature of the Finsler space $(M,F)$ is def\/ined by ${\bf E}_y={\bf E}_{ij}(y)dx^i\otimes dx^j$, where ${\bf E}_{ij}=\frac{1}{2}\frac{\partial^2{\bf S}}{\partial y^i\partial y^j}$. $(M,F)$ is called a weakly-Berwald space if ${\bf E}=0$. It is easy to see that we have ${\bf E}_{ij}=\frac{1}{2}G^r_{\ irj}$. Let $(M,\alpha)$ be a Riemannian space and $\beta=b_i(x)y^i$ be a 1-form def\/ined on $M$ such that $\|\beta\|_x :=\sup\limits_{y \in T_xM} \beta(y)/\alpha(y) < 1$. The Finsler metric $F = \alpha+\beta$ is called a Randers metric on the manifold $M$. Denote the geodesic spray coef\/f\/icients of $\alpha$ and $F$ by $G_\alpha^i$ and $G^i$, respectively, and the Levi-Civita connection of $\alpha$ by $\nabla$. Def\/ine $\nabla_jb_i$ by $(\nabla_jb_i) \theta^j := db_i -b_j \theta_i^{\ j}$, where $\theta^i :=dx^i$ and $\theta_i^{\ j} :=\tilde{\Gamma}^j_{ik} dx^k$ denote the Levi-Civita connection forms and $\nabla$ denotes the associated covariant derivation of $\alpha$. Let us put \begin{gather*} r_{ij} := {1\over 2} ( \nabla_jb_i+\nabla_ib_j), \qquad s_{ij}:= {1\over 2}(\nabla_jb_i-\nabla_ib_j),\\ s^i_{\ j} := a^{ih}s_{hj}, \qquad s_j:=b_i s^i_{\ j}, \qquad e_{ij} := r_{ij}+ b_i s_j + b_j s_i. 
\end{gather*} Then $G^i$ are given by \begin{gather*} G^i = G_\alpha^i + \left({e_{\circ\circ} \over 2F} -s_\circ\right)y^i+ \alpha s^i_{\ \circ}, \end{gather*} where $e_{\circ\circ}:= e_{ij}y^iy^j$, $s_\circ:=s_iy^i$, $s^i_{\ \circ}:=s^i_{\ j} y^j$ and $G_\alpha^i$ denote the geodesic coef\/f\/icients of $\alpha$, see~\cite{Shen2001}. Notice that the {\bf S}-curvature of a Randers metric $F=\alpha+\beta$ can be obtained as follows \begin{gather*} {\bf S}=(n+1)\left\{\frac{e_{\circ\circ}}{F}-s_\circ-\rho_\circ\right\}, \end{gather*} where $\rho=\ln \sqrt{1-\|\beta\|^2}$ and $\rho_\circ=\frac{\partial \rho}{\partial x^k}y^k$. It is well-known that every weakly-Berwald Randers space is of vanishing {\bf S}-curvature~\cite{Shen2001}. Let $F$ be a Finsler metric on an $n$-manifold and $G^i$ denote the geodesic coef\/f\/icients of $F$. Def\/ine ${\bf R}_y= K^i_{\ k}(x, y) dx^k \otimes \frac{\partial}{\partial x^i}|_x: T_xM \to T_xM$ by \[ K^i_{\ k} := 2 \frac{\partial G^i}{\partial x^k} - y^j \frac{\partial^2 G^i}{\partial x^j\partial y^k} + 2 G^j \frac{\partial^2 G^i}{\partial y^j \partial y^k} - \frac{\partial G^i}{\partial y^j} \frac{\partial G^j}{\partial y^k}. \] The family ${\bf R}:=\{{\bf R}_y\}_{y\in TM_0}$ is called the Riemann curvature~\cite{Shen2001}. The {\it Ricci scalar} is denoted by ${\bf Ric}$ and is def\/ined by ${\bf Ric}:=K^k_{\ k}$. The Ricci scalar ${\bf Ric}$ is a generalization of the Ricci tensor in Riemannian geometry. A~Finsler space $(M,F)$ is called an {\it Einstein space} if there is a function~$\sigma$ def\/ined on $M$ such that ${\bf Ric}=\sigma(x)F^2$. D.~Bao and C.~Robles proved in~\cite{BaoRobles2003,Robles2003} the following theorem: \begin{theorem} Let $(M,F=\alpha+\beta)$ be an $n$-dimensional Randers space and $n\geq3$. If $(M,F)$ is an Einstein space with ${\bf Ric} = (n-1)K(x)F^2$, then it is of constant ${\bf S}$-curvature and $K(x)$ is constant. 
\end{theorem} The Berwald--Riemannian curvature tensor ${\bf K}_y=K^i_{\ jkl}(y)\frac{\partial}{\partial x^i}\otimes dx^j\otimes dx^k\otimes dx^l$ and the Berwald--Ricci tensor $K_{jl}(y)dx^j\otimes dx^l$ are respectively def\/ined by \begin{gather*} K^i_{\ jkl} := \frac{1}{3}\left\{\frac{\partial^2K^i_{\ k}}{\partial y^j\partial y^l}-\frac{\partial^2K^i_{\ l}}{\partial y^j\partial y^k}\right\},\qquad K_{jl}:=K^i_{\ jil}. \end{gather*} Due to a result in \cite{Mo2008}, every Finsler metric of constant {\bf S}-curvature on a compact manifold is of vanishing {\bf S}-curvature. Therefore, the {\bf S}-curvature of every Einstein Randers metric on an $n$-dimensional ($n\geq3$) compact manifold is vanishing. Denote the horizontal and vertical covariant derivation of the Berwald connection of $F$ respectively by ``$_|$" and ``$_.$". The rather recent non-Riemannian quantity ${\bf H}_y={\bf H}_{ij}(y)dx^i\otimes dx^j$ is simply def\/ined by ${\bf H}_{ij}={\bf E}_{ij|k}y^k$, see \cite{Akbar-Zadeh1988,Mo2009,NajafiTayebiShen2008}. Consider the following \textit{Bianchi identity} for the Berwald connection \cite{Akbar-Zadeh1988}: \[ G^i_{\ jkl|m}-G^i_{\ jkm|l}=K^i_{\ jkl.m}. \] After contracting the indices $i$ and $k$ and taking into account the equation $G^i_{\ jil}=2{\bf E}_{jl}$, we obtain $G^k_{\ jkl|m}-G^k_{\ jkm|l}=2({\bf E}_{\ jl|m}-{\bf E}_{\ jm|l})=K_{\ jl.m}$. From this it follows that \begin{gather} y^jK_{\ jl.m}=0,\qquad y^lK_{\ jl.m}=-2{\bf H}_{jm}.\label{Ricci identity} \end{gather} \section{Projectively related metrics and projective invariants} Two Finsler metrics $F$ and $\tilde{F}$ on a manifold $M$ are said to be \textit{$($pointwise$)$ projectively related} if they have the same geodesics as point sets. In this case, there is a function $P(x,y)$ def\/ined on $TM_0$ such that $\tilde{G}^i=G^i+Py^i$ in coordinates $(x^i,y^i)$ on $TM_0$, where $\tilde{G}^i$ and $G^i$ are the geodesic spray coef\/f\/icients of $\tilde{F}$ and $F$, respectively. 
A Finsler metric $F$ on an open subset $\textsl{U}\subseteq\mathbb{R}^n$ is called \textit{projectively flat} if all geodesics are straight lines in~$\textsl{U}$. In this case, $F$ and the Euclidean metric on $\textsl{U}$ are projectively related. A Finsler metric is called \textit{locally projectively} f\/lat if at any point $x\in M$, there is a local coordinate $(x^i, U)$ in which $F$ is projectively f\/lat. We consider projectively related Finsler metrics, namely those having the same geodesics as point sets. Let~$\tilde{F}$ and~$F$ be two projectively related Finsler metrics. Consider a natural coordinate system $((x^i,y^i),\pi^{-1}(U))$. There is a function~$P$ def\/ined on $TM_0$ such that $\widetilde{G}^i=G^i+Py^i$. Let us put $P_i=P_{.i}$ and $P_{ij}=P_{i.j}$. Observe that we have \begin{gather} \widetilde{G}^i_{\ j} = G^i_{\ j}+P_jy^i+P\delta^i_{\ j},\qquad \widetilde{G}^i_{\ jk}=G^i_{\ jk}+P_{jk}y^i+P_{k}\delta^i_{\ j}+P_{j}\delta^i_{\ k}, \label{1}\\ \widetilde{\bf E}_{ij} = {\bf E}_{ij}+\frac{(n+1)}{2}P_{ij}.\label{3} \end{gather} The Berwald--Riemannian curvature and the Berwald--Ricci tensors of $\tilde{F}$ and $F$ are related as follows \begin{gather} \widetilde{K}^i_{\,\,hjk} = K^i_{\,\,hjk}+y^i(P_{jh|k}-P_{kh|j})+\delta^i_h(P_{j|k}-P_{k|j})\nonumber\\ \phantom{\widetilde{K}^i_{\,\,hjk} =}{} +\delta^i_j(P_{h|k}-P_hP_k-PP_{hk})-\delta^i_k(P_{h|j}-P_hP_j-PP_{hj}),\nonumber\\ \widetilde{K}_{\,\,hk} = K_{hk}+(P_{h|k}-P_{k|h})+(n-1)(P_{h|k}-P_hP_k-PP_{hk})-P_{hk|\circ}.\label{RicK} \end{gather} Finally we f\/ind that $(\widetilde{K}_{\,\,hk}-\widetilde{K}_{\,\,kh})_{.j}=(K_{hk}-K_{kh})_{.j}+(n+1)(P_{h|k}-P_{k|h})_{.j}$. The non-Riemannian quantity {\bf H} was introduced in~\cite{Akbar-Zadeh1986,NajafiTayebiShen2008} and developed in~\cite{Mo2009}. We would like to consider projectively related Finsler metrics with the same {\bf E}- and {\bf H}-curvatures. 
Observe that, according to~(\ref{1}) and~(\ref{3}), the {\bf H}-curvatures of $\tilde{F}$ and $F$ are related as follows \begin{gather} \widetilde{{\bf H}}_{ij}= y^r\frac{\tilde{\delta}}{\tilde{\delta}x^r}\widetilde{\bf E}_{ij}-\widetilde{\bf E}_{rj}\widetilde{G}^r_{\ i}-\widetilde{\bf E}_{ir}\widetilde{G}^r_{\ j}=\left(y^r\frac{\delta}{\delta x^r}-2Py^r\frac{\partial}{\partial y^r}\right) \left({\bf E}_{ij}+\frac{(n+1)}{2}P_{ij}\right)\nonumber\\ \phantom{\widetilde{{\bf H}}_{ij}}{} -\left({\bf E}_{rj}+\frac{(n+1)}{2}P_{rj}\right)\!(G^r_{\ i}+P_iy^r+P\delta^r_{\ i})- \left({\bf E}_{ir}+\frac{(n+1)}{2}P_{ir}\right)\!(G^r_{\ j}+P_jy^r+P\delta^r_{\ j})\nonumber \\ \phantom{\widetilde{{\bf H}}_{ij}}{} = {\bf E}_{ij|\circ}+\frac{(n+1)}{2}P_{ij|\circ} ={\bf H}_{ij}+\frac{(n+1)}{2}P_{ij|\circ}, \label{Hc} \end{gather} where $\frac{\tilde{\delta}}{\tilde{\delta} x^k}=\frac{\partial}{\partial x^k}-2\widetilde{G}^i_{\ k}\frac{\partial}{\partial y^i}$ and $\frac{\delta}{\delta x^k}=\frac{\partial}{\partial x^k}-2G^i_{\ k}\frac{\partial}{\partial y^i}$. From (\ref{Hc}) one may conclude the following lemma. \begin{lemma} \label{Lemma 1} Suppose that $\widetilde{F}$ and $F$ are projectively related with projective factor $P$. Then~$\widetilde{F}$ and~$F$ have the same {\bf H}-curvature if and only if $P_{ij|\circ}=0$. \end{lemma} The Finsler metrics $\widetilde{F}$ and $F$ are said to be \textit{{\bf H}-projectively related} if they are projectively related and have the same {\bf H}-curvature. From (\ref{3}) and Lemma~\ref{Lemma 1}, it follows that if, for every $x\in M$, the function~$P(x,y)$ is linear with respect to $y$, in other words $P_{ij}=0$, then $F$ and $\widetilde{F}$ are {\bf H}-projectively related. Hereby the Finsler metrics $\widetilde{F}$ and $F$ are said to be \textit{specially projectively related} if~$P(x,y)$ is linear with respect to $y$.
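Indeed, this can be checked directly: if $P(x,y)=p_k(x)y^k$ is linear in $y$, then
\[
P_{ij}=\frac{\partial^2P}{\partial y^i\partial y^j}=0,
\]
so $\widetilde{\bf E}_{ij}={\bf E}_{ij}$ by~(\ref{3}); in particular $P_{ij|\circ}=0$ holds trivially, and hence $\widetilde{F}$ and $F$ have the same {\bf H}-curvature by Lemma~\ref{Lemma 1}.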
\begin{example} The Funk metric $\Theta$ on the Euclidean unit ball $\mathbb{B}^n(1)$ is a Randers metric given by \[ \Theta(x,y):=\frac{\sqrt{|y|^2-(|x|^2|y|^2-\langle x,y\rangle^2)}}{1-|x|^2}+\frac{\langle x,y\rangle}{1-|x|^2}, \qquad y\in T_x\mathbb{B}^n(1)\simeq \mathbb{R}^n, \] where $\langle,\rangle$ and $|.|$ denote the Euclidean inner product and norm on $\mathbb{R}^n$, respectively. Given any constant vector $a\in \mathbb{R}^n$, the generalized Funk metric $\Theta_a$ is given by $\Theta_a:=\Theta+d\varphi_a$, where $\varphi_a=\ln (1+\langle a,x\rangle)+C$ and $C$ is a constant. From the variational point of view, this changes the length function by a quantity which depends only on the end-points of a path, not on the path between them. One may also refer to \cite{Shen2001} to f\/ind an analytic proof. Both $\Theta$ and $\Theta_a$ are projectively f\/lat and {\bf H}-projectively related, and they are of constant {\bf S}-curvature with $\sigma=\frac{1}{2}$. It is not hard to see that the projective factor is $P=-\frac{1}{2}d\varphi_a$. \end{example} \begin{example} Given any vector $a\in\mathbb{R}^n$, def\/ine the Finsler metric $F$ on $\mathbb{B}^n(1)$ by \[ F := (1+\langle a,x\rangle)\big(\Theta+\Theta_{x^k}x^k\big). \] $F$ is projectively f\/lat with projective factor $P =\Theta$, and $F$ is of constant f\/lag curvature ${\bf K} = 0$. Thus $F$ and the Euclidean metric on $\mathbb{B}^n(1)$ have the same vanishing {\bf H}-curvature and are {\bf H}-projectively related. This example is borrowed from~\cite{Shen2002}. \end{example} \subsection{Projective invariants} Any geometric object which is the same for any two projectively related metrics is called a~\textit{projective invariant}. There are many projectively invariant tensors in Finsler geometry, such as the {\it Douglas} tensor ${\bf D}_y=D^i_{\ jkl}(y)\frac{\partial}{\partial x^i}\otimes dx^j\otimes dx^k\otimes dx^l$ and the \textit{Weyl} curvature ${\bf W}_y=W^i_{\ jkl}(y)\frac{\partial}{\partial x^i}\otimes dx^j\otimes dx^k\otimes dx^l$.
However, the notion of a projective connection in Finsler geometry encounters some dif\/f\/iculties in being globally projectively invariant. The tensors ${\bf D}$ and ${\bf W}$ are def\/ined as follows \begin{gather*} D^i_{\ jkl}=\frac{\partial^3}{\partial y^j\partial y^k\partial y^l}\left\{G^i-\frac{1}{n+1}G^m_{\ m}y^i\right\},\nonumber\\ W^i_{\ jkl}=K^i_{\ jkl}-\frac{1}{n^2-1}\big\{\delta^i_{\ j}(\hat{K}_{kl}-\hat{K}_{lk})+(\delta^i_{\ k}\hat{K}_{jl}-\delta^i_{\ l}\hat{K}_{jk})+y^i(\hat{K}_{kl}-\hat{K}_{lk})_{.j}\big\}, \end{gather*} where $\hat{K}_{jk}=nK_{jk}+K_{kj}+y^rK_{kr.j}$. In 1986, H.~Akbar-Zadeh introduced a tensor which is invariant only under a~subgroup of projective transformations, not under all of them~\cite{Akbar-Zadeh1988}. In fact, this is a~non-Riemannian generalization of Weyl's curvature. It is denoted by $\overset{*}{W}{^i_{\ jkl}}$ and is def\/ined by \[ \overset{*}{W}{^i_{\ jkl}}=K^i_{\ jkl}-\frac{1}{n^2-1}\big\{\delta^i_{\ k}(nK_{jl}+K_{lj})-\delta^i_{\ l}(nK_{jk}+K_{kj})+(n-1)\delta^i_{\ j}(K_{kl}-K_{lk})\big\}. \] Put $W^i_{\ k}=y^jy^lW^i_{\ jkl}$. Using (\ref{Ricci identity}), ${\bf W}$ can be written in terms of $\overset{*}{W}$ and the {\bf H}-curvature by the following equation \begin{gather} \label{W H} W^i_{\ jkl}=\overset{*}{W}{^i_{\ jkl}}-\frac{2}{n^2-1}\big\{\delta^i_{\ l}{\bf H}_{jk}-\delta^i_{\ k}{\bf H}_{jl}\big\} -\frac{y^i}{n+1}(K_{kl}-K_{lk})_{.j}. \end{gather} One may easily check from (\ref{RicK}) and (\ref{Hc}) that any two specially projectively related metrics $\tilde{F}$ and $F$ have the same tensors $W$, {\bf H} and $(K_{hk}-K_{kh})_{.j}$. Observe that from (\ref{W H}) it results that they also have the same tensor $\overset{*}{W}$. There is the following identity for $W$ given in~\cite{Akbar-Zadeh1986,Shen2001}: \begin{gather} \label{W identity} W^i_{\ jkl}=\frac{1}{3}\big\{W^i_{\ k.l.j}-W^i_{\ l.k.j}\big\}. \end{gather} \begin{theorem} Let $(M,F)$ be an $n$-dimensional Finsler manifold $(n\geq3)$.
${\bf W}=0$ if and only if~$F$ is of scalar flag curvature. \end{theorem} Let $\overset{*}{W}{^i_{\ k}}=y^jy^l\overset{*}{W}{^i_{\ jkl}}$. The following theorem is proved in \cite{Akbar-Zadeh1986}; however, we give a modif\/ied proof of it. \begin{theorem} Let $(M,F)$ be an $n$-dimensional Finsler manifold $(n\geq3)$. $\overset{*}{W}=0$ if and only if~$F$ is of constant flag curvature. \end{theorem} \begin{proof} From (\ref{W identity}), (\ref{W H}) and $y^l{\bf H}_{jl}=0$, it follows that we have $W^i_{\ k}=\overset{*}{W}{^i_{\ k}}$ and \[ W^i_{\ jkl}=\frac{1}{3}\big\{\overset{*}{W}{^i_{\ k.l.j}}-\overset{*}{W}{^i_{\ l.k.j}}\big\}. \] Now let $\overset{*}{W}=0$. It follows immediately that $W=0$ and, from Theorem~B, that $(M,F)$ is of scalar f\/lag curvature. On the other hand, from (\ref{W H}) it results \[ \frac{2}{n^2-1}\big\{\delta^i_{\ l}{\bf H}_{jk}-\delta^i_{\ k}{\bf H}_{jl}\big\}+\frac{y^i}{n+1}(K_{kl}-K_{lk})_{.j}=0. \] Contracting with $y^k$ and applying (\ref{Ricci identity}) yields \[ -\frac{2}{n^2-1}y^i{\bf H}_{jl}+\frac{2}{n+1}y^i{\bf H}_{jl}=\frac{2(n-2)}{n^2-1}y^i{\bf H}_{jl}=0, \] and f\/inally ${\bf H}_{jl}=0$, since $n\geq3$. Now it results that $(M,F)$ is of constant f\/lag curvature, since ${\bf H}=0$. Conversely, suppose that $(M,F)$ is of constant f\/lag curvature. Then ${\bf H}=0$, $K_{kl}=K_{lk}$, and from (\ref{W H}) it follows that $\overset{*}{W}=W=0$, since $(M,F)$ is of constant (scalar) f\/lag curvature. \end{proof} \begin{remark} Projectively related Finsler metrics certainly have the same Weyl and Douglas curvatures. In \cite{ShenYu2008}, the authors studied projectively related Randers metrics. Their discussion is closely related to the subject of the present paper.
\end{remark} \section{Projective vector f\/ields on Randers spaces} Every vector f\/ield $V$ on $M$ naturally induces the following inf\/initesimal coordinate transformation on $TM$, $(x^i,y^i)\longrightarrow(\bar{x}^i,\bar{y}^i)$, given by \[ \bar{x}^i=x^i+V^idt,\qquad \bar{y}^i=y^i+y^k\frac{\partial V^i}{\partial x^k}dt. \] This leads to the notion of \textit{the complete lift} $\hat{V}$ (traditionally also denoted by $V^C$, see \cite{Yano1957}) of $V$ to a vector f\/ield on $TM_0$ given by \[ \hat{V}=V^i\frac{\partial}{\partial x^i}+y^k\frac{\partial V^i}{\partial x^k}\frac{\partial}{\partial y^i}. \] Since almost every geometric object in Finsler geometry depends on both points and velocities, the Lie derivatives of such geometric objects should be taken with respect to $\hat{V}$. One may consult \cite{Yano1957} for the theory of Lie derivatives in Finsler geometry. It is worth noting for the Lie derivative computations that ${\cal L}_{\hat{V}}y^i=0$ and that the dif\/ferential operators ${\cal L}_{\hat{V}}$, $\frac{\partial}{\partial x^i}$ and~$\frac{\partial}{\partial y^i}$ commute. A smooth vector f\/ield $V$ on $(M,F)$ is called \textit{projective} if each local f\/low dif\/feomorphism associated with $V$ maps geodesics onto geodesics. If $V$ is projective and each such map preserves af\/f\/ine parameters, then $V$ is called \textit{affine}; otherwise it is said to be \textit{proper projective}. The collection of all projective vector f\/ields on $M$ is a f\/inite-dimensional Lie algebra with respect to the usual Lie bracket operation on vector f\/ields, called the projective algebra, and is denoted by~$p(M,F)$. It is easy to prove that a vector f\/ield $V$ on the Finsler space $(M,F)$ is projective if and only if there is a function $P$ def\/ined on $TM_0$ such that \begin{gather} \label{Lie} {\cal L}_{\hat{V}}G^i=Py^i, \end{gather} and $V$ is af\/f\/ine if and only if $P=0$.
When $F$ is Riemannian, equation (\ref{Lie}) is just ${\cal L}_{\hat{V}}\Gamma^i_{jk}=\omega_j\delta^i_{\ k}+\omega_k\delta^i_{\ j}$, where $\omega_j$ are the components of a globally def\/ined 1-form on $M$, and thus $P(x,y)=\omega_i(x)y^i$. \begin{proof}[Proof of Theorem \ref{Liethm}] Suppose that $V$ is $F$-projective. Hence it preserves the Douglas tensor, i.e.\ ${\cal L}_{\hat{V}}D^i_{\ jkl}=0$. Let us put $T^i=\alpha s^i_{\ \circ}$. The sprays $G^i$ of $F$ and $\widehat{G}^i=G^i_\alpha+T^i$ are projectively related and thus they have the same Douglas tensor, hence \[ D^i_{\ jkl}=\widehat{D}^i_{\ jkl}=\frac{\partial^3}{\partial y^j\partial y^k\partial y^l}\left\{T^i-\frac{1}{n+1}T^m_{\ m}y^i\right\}. \] A simple calculation shows that $T^m_{\ m}=0$. From that we have \[ {\cal L}_{\hat{V}}D^i_{\ jkl}={\cal L}_{\hat{V}}T^i_{.j.k.l}={\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}_{.j.k.l}=0. \] Therefore, there are functions $H^i(x,y)$, $i=1,2,\dots,n$, quadratic in $y$ such that \begin{gather} \label{Wee} {\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=H^i. \end{gather} Let us put $t_{ij}={\cal L}_{\hat{V}}a_{ij}$. Observe that ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=\frac{t_{\circ\circ}}{2\alpha}s^i_{\ \circ}+\alpha{\cal L}_{\hat{V}} s^i_{\ \circ}$. Now equation (\ref{Wee}) can be re-written as follows: \begin{gather} \label{Hii} t_{\circ\circ}s^i_{\ \circ}+2\alpha^2{\cal L}_{\hat{V}}s^i_{\ \circ}=\alpha H^i. \end{gather} Here we emphasize that $\alpha^2=a_{ij}(x)y^iy^j$, $t_{\circ\circ}s^i_{\ \circ}=t_{jl}(x)s^i_{\ k}(x)y^jy^ly^k$ and ${\cal L}_{\hat{V}}s^i_{\ \circ}=({\cal L}_{V}s^i_{\ k})(x)y^k$ are polynomials in $y^1,y^2,\dots,y^n$. Hence the left-hand side of (\ref{Hii}) is a polynomial in $y^1,y^2,\dots,y^n$ for every $i$, while the right-hand side is not, unless it vanishes. It follows immediately that $H^i=0$ for every index $i$ and~(\ref{Wee}) reads as ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=0$.
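Let us make the last step explicit; this is the standard rationality argument for Randers metrics. Since $a_{ij}$ is positive def\/inite and $n\geq2$, the quadratic form $\alpha^2=a_{ij}y^iy^j$ is not the square of a linear form, and therefore $\alpha$ is not a rational function of $(y^i)$. If $H^{i_0}\neq0$ at some $y$ for some index $i_0$, then near this $y$ equation~(\ref{Hii}) would give
\[
\alpha=\frac{t_{\circ\circ}s^{i_0}_{\ \circ}+2\alpha^2{\cal L}_{\hat{V}}s^{i_0}_{\ \circ}}{H^{i_0}},
\]
exhibiting $\alpha$ as a quotient of polynomials in $y^1,y^2,\dots,y^n$, a contradiction. Hence $H^i\equiv0$ for every index $i$.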
Recall that the geodesic coef\/f\/icients of $F$ are of the following form: \begin{gather} \label{Randers gco} G^i = G^i_\alpha + \left({e_{\circ\circ} \over 2F} -s_\circ\right)y^i+ \alpha s^i_{\ \circ}. \end{gather} From ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=0$ and ${\cal L}_{\hat{V}}G^i=Py^i$ it now follows that \[ {\cal L}_{\hat{V}}G^i={\cal L}_{\hat{V}}\left\{G^i_\alpha + \left({e_{\circ\circ} \over 2F} -s_\circ\right)y^i\right\}=Py^i, \] and f\/inally we obtain \[ {\cal L}_{\hat{V}}G^i_\alpha=\left\{P-{\cal L}_{\hat{V}}\left({e_{\circ\circ} \over 2F} -s_\circ\right)\right\}y^i, \] which shows that $V$ is an $\alpha$-projective vector f\/ield. Conversely, suppose that $V$ is $\alpha$-projective (i.e.\ ${\cal L}_{\hat{V}}G^i_\alpha=\omega_\circ y^i$ for some 1-form $\omega_\circ=\omega_k(x)y^k$ on $M$) and ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=0$. From (\ref{Randers gco}) it follows \begin{gather*} {\cal L}_{\hat{V}}G^i = {\cal L}_{\hat{V}}\left\{G^i_\alpha+\left({e_{\circ\circ} \over 2F} -s_\circ\right)y^i+ \alpha s^i_{\ \circ}\right\}={\cal L}_{\hat{V}}G^i_\alpha+{\cal L}_{\hat{V}}\left({e_{\circ\circ} \over 2F} -s_\circ\right)y^i \\ \phantom{{\cal L}_{\hat{V}}G^i}{} = \left\{\omega_\circ+{\cal L}_{\hat{V}}\left({e_{\circ\circ} \over 2F} -s_\circ\right)\right\}y^i, \end{gather*} which proves that $V$ is an $F$-projective vector f\/ield. \end{proof} Let us f\/irst prove the following lemma: \begin{lemma} \label{lemkomaki} Let $(M, F=\alpha+\beta)$ be an $n$-dimensional Randers space. If $s^i_{\ j}\neq0$, then $V$ is an $F$-projective vector field if and only if it is an $\alpha$-homothety and ${\cal L}_{\hat{V}}d\beta = \mu d\beta$, where ${\cal L}_{\hat{V}} a_{ij} = 2\mu a_{ij}$. \end{lemma} \begin{proof} Suppose that $s^i_{\ \circ}\neq0$. By Theorem \ref{Liethm}, $V$ is $F$-projective if and only if it is $\alpha$-projective and ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=0$.
Let us put $t_{ij}={\cal L}_{\hat{V}}a_{ij}$ and observe that ${\cal L}_{\hat{V}}\{\alpha s^i_{\ \circ}\}=0$ is equivalent to \begin{gather} \label{tt} t_{\circ\circ}s^i_{\ \circ}+2\alpha^2{\cal L}_{\hat{V}}s^i_{\ \circ}=0. \end{gather} It follows that $\alpha^2$ divides $t_{\circ\circ}s^i_{\ \circ}$ for every index $i$. This is equivalent to saying that either $s^i_{\ j}=0$ or $\alpha^2$ divides $t_{\circ\circ}$. Since $s^i_{\ j}\neq0$, the quadratic form $\alpha^2$ divides $t_{\circ\circ}$, which means that $V$ is a conformal vector f\/ield on $(M,\alpha)$. Since $V$ is already $\alpha$-projective, it follows that $V$ is an $\alpha$-homothety and there is a constant $\mu$ such that ${\cal L}_{V}a_{ij}=2\mu a_{ij}$. From (\ref{tt}) we obtain ${\cal L}_{V}s^i_{\ j}=-\mu s^i_{\ j}$. Now observe that \begin{gather*} {\cal L}_{V}s_{ij} = {\cal L}_{V}\{a_{ik}s^k_{\ j}\}=({\cal L}_{V}a_{ik})s^k_{\ j}+a_{ik}{\cal L}_{V}s^k_{\ j}=2\mu a_{ik}s^k_{\ j}-\mu a_{ik}s^k_{\ j}=\mu s_{ij}.\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof} \begin{proof}[Proof of Theorem \ref{admitting}] Let us suppose that $M$ is compact and $F=\alpha+\beta$ is a Randers metric of constant f\/lag curvature with $n\geq3$. Following \cite{BaoRobles2003}, $F$ is of constant {\bf S}-curvature, and due to a result about Finsler spaces of constant {\bf S}-curvature in \cite{Mo2008}, it follows that ${\bf S}=0$. This yields $e_{\circ\circ}=r_{\circ\circ}+2\beta s_\circ=0$. Now let us suppose that $s^i_{\ j}\neq0$. By Lemma~\ref{lemkomaki}, every $F$-projective vector f\/ield $V$ is an $\alpha$-homothety, and since $M$ is compact, every $F$-projective vector f\/ield~$V$ is in fact $\alpha$-Killing. Hence in this case we have the inclusion $p(M,F)\subseteq k(M,\alpha)$, where $k(M,\alpha)$ denotes the Lie algebra of $\alpha$-Killing vector f\/ields. It is well-known that the dimension of the algebra of $\alpha$-Killing vector f\/ields is at most $\frac{n(n+1)}{2}$. Therefore $\dim(p(M,F))\leq\frac{n(n+1)}{2}$. Now let us suppose that $s^i_{\ j}=0$.
In this case we have $p(M,F)=p(M,\alpha)$ and moreover, since $e_{\circ\circ}=0$, one concludes that $\nabla_jb_i=0$ and~$F$ is a Berwald metric. Since $F$ is of constant f\/lag curvature, both~$F$ and~$\alpha$ are metrics of zero f\/lag curvature. Indeed, as $F$ is of constant f\/lag curvature, its Weyl curvature vanishes; since $s^i_{\ j}=0$, $F$ and $\alpha$ are projectively related, hence $\alpha$ has vanishing Weyl curvature and, by Beltrami's theorem, $\alpha$ is of constant sectional curvature. It is well-known that in this case the dimension of $p(M,\alpha)$ is $n(n+2)$. Hence we have $\dim (p(M,F) )=n(n+2)$. \end{proof} The following result also follows from the proof of Theorem \ref{admitting}. \begin{corollary} Let $(M,F=\alpha+\beta)$ be a Randers space of constant flag curvature. The following statements hold: \begin{enumerate}\itemsep=0pt \item[$(a)$] if $\beta$ is closed, then $p(M,F)=p(M,\alpha)$; \item[$(b)$] if $\beta$ is not a closed $1$-form, then $p(M,F)\subseteq h(M,\alpha)$, where $h(M,\alpha)$ denotes the Lie algebra of $\alpha$-homothety vector fields. \end{enumerate} \end{corollary} \begin{proof}[Proof of Theorem \ref{mainthm}] To obtain general formulae, let us assume that $(M,F=\alpha+\beta)$ is a~Randers space of isotropic ${\bf S}$-curvature ${\bf S}=(n+1)\sigma(x)F$ and that $V$ is a~non-af\/f\/ine projective vector f\/ield. By a result in \cite{ShenXing2008}, we have $e_{\circ\circ}=2\sigma(x)(\alpha^2-\beta^2)$. Suppose that there is a~function $\Psi(x,y)$, linear with respect to $y$, such that ${\cal L}_{\hat{V}}G^i=\Psi y^i$. By applying Theorem \ref{Liethm}, we have \[ {\cal L}_{\hat{V}}G^i={\cal L}_{\hat{V}}\widetilde{G}^i+{\cal L}_{\hat{V}}(\sigma(\alpha-\beta)y^i)-{\cal L}_{\hat{V}}s_\circ y^i=\Psi y^i. \] Put $t_{ij}={\cal L}_{\hat{V}}a_{ij}$.
It is well-known that ${\cal L}_{\hat{V}}y^i=0$; it follows that $t_{\circ\circ}={\cal L}_{\hat{V}}\alpha^2$ and \[ {\cal L}_{\hat{V}}\widetilde{G}^i+\alpha{\cal L}_{\hat{V}}\sigma y^i-\beta{\cal L}_{\hat{V}}\sigma y^i +\frac{t_{\circ\circ}}{2\alpha}\sigma y^i-\sigma y^i{\cal L}_{\hat{V}}\beta -{\cal L}_{\hat{V}}s_\circ y^i =\Psi y^i. \] Given any natural coordinate system $((x^i,y^i),\pi^{-1}(U))$ and $x\in U$, we can regard both sides of the above equation as functions of $y^1,y^2,\dots,y^n$. Multiplying the two sides of the last equation by $\alpha$, we obtain the following relation: \[ {\rm Rat}^i+\alpha\, {\rm Irrat}^i=0,\qquad i=1,2,\dots,n, \] where ${\rm Rat}^i$ and ${\rm Irrat}^i$ are given by \begin{gather*} {\rm Rat}^i=\alpha^2{\cal L}_{\hat{V}}\sigma y^i+\frac{1}{2}t_{\circ\circ}\sigma y^i,\\ {\rm Irrat}^i={\cal L}_{\hat{V}}\widetilde{G}^i-(\beta{\cal L}_{\hat{V}}\sigma+\sigma{\cal L}_{\hat{V}}\beta+{\cal L}_{\hat{V}}s_\circ +\Psi) y^i. \end{gather*} Now let us assume ${\bf S}=(n+1)\sigma(x)F=0$. By Lemma \ref{lemkomaki}, we must have $s_{ij}=0$; otherwise $V$ would be an $\alpha$-homothety, which contradicts the assumption that $V$ is not an $\alpha$-homothety. Since ${\bf S}=0$ gives $e_{ij}=0$, from $e_{ij}=r_{ij}+b_is_j+b_js_i$ we get $r_{ij}=0$. This is equivalent to $\nabla_ib_j=0$, and $(M,F)$ is a Berwald space. \end{proof} From Theorem \ref{admitting} the following theorem results. \begin{theorem} \label{fini} Let $(M,F=\alpha+\beta)$ be a compact $n$-dimensional Randers space of constant flag curvature. The following statements hold: \begin{enumerate}\itemsep=0pt \item[$(a)$] if $\dim (p(M,F) )=\frac{n(n+1)}{2}$, then $\alpha$ is of constant sectional curvature; \item[$(b)$] if $\dim (p(M,F) )>\frac{n(n+1)}{2}$, then $F$ is a locally Minkowski metric. \end{enumerate} \end{theorem} \begin{proof} Let us suppose $\dim(p(M,F))=\frac{n(n+1)}{2}$.
Due to the discussion in the proof of Theorem~\ref{admitting}, in this case we have $p(M,F)\subseteq k(M,\alpha)$ and thus \[ \frac{n(n+1)}{2}=\dim (p(M,F))\leq \dim(k(M,\alpha)). \] Hence the algebra of Killing vector f\/ields of $(M,\alpha)$ has the maximal dimension $\frac{n(n+1)}{2}$, and it is well-known that in this case $\alpha$ is of constant sectional curvature. This proves $(a)$. To prove $(b)$, we notice that if $\dim (p(M,F) )>\frac{n(n+1)}{2}$, then we must have $s^i_{\ j}=0$; otherwise, by the proof of Theorem \ref{admitting}, we would have $\dim (p(M,F) )\leq\frac{n(n+1)}{2}$. Now, $s^i_{\ j}=0$ and ${\bf S}=0$ imply that $F$ is a Berwald metric, which is moreover of constant f\/lag curvature. $F$ is not Riemannian, and Numata's theorem ensures that ${\bf K}=0$. Finally, Akbar-Zadeh's classif\/ication theorem entails that $F$ is a locally Minkowski metric. \end{proof} \begin{example} In \cite{BaoShen2002}, the authors presented a valuable family of examples: a 1-parameter family of Randers metrics $F_\kappa=\alpha_\kappa+\beta_\kappa$ on the Lie group $S^3$, all of constant positive f\/lag curvature~$\kappa$. Due to their construction, none of the Riemannian metrics $\alpha_\kappa$ is of constant sectional curvature and hence, by Theorem \ref{fini}, it follows that $\dim\big(p(S^3,F_\kappa)\big)<6$. \end{example} \section{The Lorentz nonlinear connection\\ and Randers projective symmetry} The stage on which special relativity is played out is a specif\/ic four-dimensional manifold, known as Minkowski spacetime. The $x^\mu$, $\mu=0,1,2,3$, are coordinates on this manifold, and conventionally we set $x^0=t$. The elements of spacetime are known as events; an event is specif\/ied by giving its location in both space and time. The inf\/initesimal distance between two events, known as the \textit{spacetime interval}, is def\/ined by \[ ds^2=\eta_{\mu\nu}dx^\mu dx^\nu=-dt^2+dx^2+dy^2+dz^2. \] The matrices $\Lambda$ satisfying $\Lambda^T\eta\Lambda=\eta$ are known as the \textit{Lorentz transformations}.
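A standard example may help to f\/ix the conventions: for a boost along the $x$-axis with rapidity $\phi$, and with $\eta=\operatorname{diag}(-1,1,1,1)$ as above, one takes
\[
\Lambda=\begin{pmatrix}
\cosh\phi & \sinh\phi & 0 & 0\\
\sinh\phi & \cosh\phi & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix},
\]
and the def\/ining identity $\Lambda^T\eta\Lambda=\eta$ reduces to $\cosh^2\phi-\sinh^2\phi=1$.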
As a notable well-known case, consider the celebrated Randers metric of the form $F=\sqrt{\eta_{\mu\nu}dx^\mu dx^\nu}+{\bf A}_idx^i$ on the 4-manifold of spacetime, where ${\bf A}$ is the electromagnetic vector potential and ${\bf F}=d{\bf A}$ is the electromagnetic f\/ield tensor, given in the Cartesian coordinates $(t,x,y,z)$ by \begin{gather*} {\bf F}_{\mu\nu}= \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & B_z & -B_y \\ E_y & -B_z & 0 & B_x \\ E_z & B_y & -B_x & 0 \end{pmatrix}. \end{gather*} Notice that ${\bf F}=0$ if and only if $F$ is locally projectively f\/lat. Hence the presence of a nonzero electromagnetic f\/ield entails that $F$ is not locally projectively f\/lat. Consider a Lorentz transformation $\Lambda$ which maps the coordinates $(t,x,y,z)$ onto $(\bar{t},\bar{x},\bar{y},\bar{z})$. $\Lambda$ changes $F=\sqrt{\eta_{\mu\nu}dx^\mu dx^\nu}+{\bf A}_idx^i$ to $\bar{F}=\sqrt{\eta_{\mu\nu}dx^\mu dx^\nu}+\bar{{\bf A}}_idx^i$. Following an extensive discussion on projectively related Randers metrics in~\cite{ShenYu2008}, we conclude that the Lorentz transformation~$\Lambda$ is $F$-projective if and only if $\Lambda^T{\bf F}\Lambda={\bf F}$. The collection of all such Lorentz transformations forms a subgroup of the Lorentz group, which is at once a subgroup of the projective group $P(M,F)$. The theory of Finsler spaces with $(\alpha,\beta)$-metrics has been studied, as a natural extension of the theory of Randers spaces, by such geometers as M.~Matsumoto, D.~Bao and many others. Associated to any $(\alpha,\beta)$-metric $F=F(\alpha,\beta)$ one may consider a nonlinear connection, called the Lorentz connection, which has physical applications in the study of gravitational and electromagnetic f\/ields~\cite{Miron2006}. In this section, we uncover some results about its projective symmetry in a Finsler space with a~Randers metric. Let $F=F(\alpha,\beta)$ be an $(\alpha,\beta)$-metric on the manifold $M$.
Through a variational approach, the Lorentz equations are derived from the Euler--Lagrange equations in the following form: \[ \frac{d^2x^i}{ds^2}+\Gamma^i_{jk}\frac{dx^j}{ds}\frac{dx^k}{ds}+\sigma\left(x,\frac{dx}{ds}\right)s^i_{\ j}\frac{dx^j}{ds}=0, \] where $ds^2=\alpha^2(x,\frac{dx}{dt})dt^2$ and $\sigma=\alpha{F^2_\beta}/{F^2_\alpha}$. The \textit{Lorentz nonlinear connection} $\overset{\circ}{G}{^i_j}$ is now def\/ined~by \begin{gather*} \overset{\circ}{G}{^i_{\ j}}(x,y)=\Gamma^i_{jk}(x)y^k+\sigma(x,y)s^i_{\ j}. \end{gather*} Every geometric object associated to the Lorentz connection will be denoted by the sign ``$^\circ$'' on top. Notice that the Lorentz nonlinear connection is determined by the Finsler--Lagrange metric $F=F(\alpha,\beta)$ only. Notice that the autoparallel curves of the nonlinear connection $\overset{\circ}{G}{^i_j}$, in the natural parameterization (i.e.~$\alpha(x, \frac{dx}{ds}) = 1$), coincide with the integral curves of the canonical semispray $S$ given by \begin{gather*} S=y^i\frac{\partial}{\partial x^i}-2\overset{\circ}{G}{^i}\frac{\partial}{\partial y^i},\qquad \textrm{where} \qquad 2\overset{\circ}{G}{^i}=\Gamma^i_{jk}y^jy^k+\alpha s^i_{\ \circ}. \end{gather*} Akbar-Zadeh in \cite{Akbar-Zadeh1995} considers the Berwald connection of the semispray $\overset{\circ}{G}{^i}$ to obtain a covariant derivative and derives a unif\/ied formulation for electromagnetism and gravity. However, this encounters a physical consistency problem: the whole formulation is required to be invariant under the Lorentz group in f\/lat space-time, which is not satisf\/ied in general. It can be shown that the only Lorentz transformations which preserve the spray $\overset{\circ}{G}{^i}$ are those satisfying~$\Lambda^T{\bf F}\Lambda={\bf F}$. \subsection*{Acknowledgements} The authors would like to express their grateful thanks to the referees for their valuable comments.
This work was supported in part by the Institute for Research in Fundamental Sciences (IPM) by the grant No.~89530036. The f\/irst author wishes to thank Borzoo Nazari for many fruitful conversations. \vspace{-2mm} \pdfbookmark[1]{References}{ref}
\begin{document} \maketitle \begin{abstract} We construct examples to show that having nef cotangent bundle is not preserved under finite ramified covers. Our examples also show that a projective manifold with Stein universal cover may not have nef cotangent bundle, disproving a conjecture of Liu--Maxim--Wang~\cite{LiuMaximWang+2021}. \end{abstract} \section{Introduction} The notion of nefness of the cotangent bundle is closely related to various notions of hyperbolicity of a projective manifold (or more precisely, non-ellipticness), and it entails some topological obstructions. For example, any perverse sheaf on a projective manifold with nef cotangent bundle has nonnegative Euler characteristic \cite[Proposition 3.6]{LiuMaximWang+2021}. In \cite{LiuMaximWang+2021}, inspired by the Singer--Hopf conjecture and the Shafarevich conjecture, the authors made the following conjecture. \begin{conjecture}(\cite[Conjecture 6.3]{LiuMaximWang+2021})\label{Main_Conj} Let $Y$ be a projective manifold. If the universal cover of $Y$ is a Stein manifold, then the cotangent bundle of $Y$ is nef. \end{conjecture} Since having Stein universal cover is preserved under finite ramified covering, one might expect that the property of having nef cotangent bundle is also preserved under finite ramified covering. However, we show in this paper that this is not the case. \begin{theorem}\label{Main_Theorem} For any positive integer $n\geq2$, there exists a finite surjective map between smooth projective $n$-folds $f: X\to Y$, such that $Y$ has nef cotangent bundle, while $X$ does not. \end{theorem} As a corollary, we give a counterexample to the conjecture above. \begin{corollary}\label{Main_Corollary} There is a smooth projective variety $X$ whose universal cover is Stein while its cotangent bundle is not nef. \end{corollary} By a theorem of Kratz \cite[Theorem 2]{kratz1997compact}, any compact quotient of a bounded domain in $\mathbb{C}^n$ (or any Stein manifold) has nef cotangent bundle.
\cref{Main_Theorem} shows that boundedness in Kratz's theorem is necessary. There are other hyperbolic-type properties that are preserved under finite surjective morphisms, for example the property of having large fundamental group. A normal variety is said to have \emph{large fundamental group} if its universal cover does not contain any positive dimensional proper complex subspaces (see for example \cite{Kollar1993}). If a variety has Stein universal cover, then it has large fundamental group (\cite[Proposition 6.7]{LiuMaximWang+2021}). Having large fundamental group is also related to various notions of hyperbolicity, as the Shafarevich conjecture predicts that varieties with nonpositive sectional curvature have Stein universal covers (also see \cite{LiuMaximWang+2021}). \begin{corollary}\label{Corollary_large_fundamental_group} There is a projective variety $X$ which has large fundamental group while its cotangent bundle is not nef. \end{corollary} The example in \cref{Main_Theorem} is constructed using the cyclic covering trick. We produce a ramified covering map with smooth branched locus. Using a lemma of Sommese, we prove that the ramified cover cannot have nef cotangent bundle. \subsubsection*{Acknowledgements.} The author would like to thank his advisor Botong Wang for suggesting this problem and helpful discussions. He also would like to thank Conner Simpson for polishing the first draft. \section{A Lemma of Sommese} A line bundle $L$ on a projective variety is called \textit{nef} if for every irreducible curve $C$, we have $L\cdot C\geq 0$. A vector bundle $E$ is called \textit{nef} if the tautological line bundle $\mathcal{O}_{\mathbf{P}(E)}(1)$ is nef on $\mathbf{P}(E)$. The next lemma is due to Sommese \cite[Lemma 1.9]{Sommese1984}. Sommese used this lemma to classify all Hirzebruch surfaces with ample cotangent bundle.
\begin{lemma}\label{rami_locus_negative} Suppose $f:X\to Y$ is a double covering between $n$-dimensional smooth projective varieties over $\mathbb{C}$, $n\geq 2$, with smooth ramification locus $R\subset X$ and branched locus $B \subset Y$. Suppose furthermore that at any point $p\in R$, $f$ can be written (analytically) locally as $(u_1,u_2,\cdots, u_n) \to (u_1^2,u_2,\cdots, u_n)$. If $X$ has nef cotangent bundle, then $R$ has nef conormal bundle. \end{lemma} \begin{proof} Consider the natural exact sequence \begin{equation*} f^* \Omega_Y^1 \to \Omega_X^1 \to \Omega_f^1 \to 0, \end{equation*} and restrict it to $R$: \begin{equation*} \Omega_X^1|_R \to \Omega_f^1|_R \to 0. \end{equation*} We claim that $\Omega_f^1|_R\simeq \mathcal{O}_R(-R)$. Assuming this is true, since $\Omega_X^1|_R$ is nef and any quotient bundle of a nef bundle is also nef, we must have that $\mathcal{O}_R(-R)$ is nef. To prove that $\Omega_f^1|_R\simeq \mathcal{O}_R(-R)$, consider the natural short exact sequence \begin{equation}\label{fund_ses} 0\to \mathcal{I}_R/\mathcal{I}_R^2 \to \Omega_X^1|_R \to \Omega_R^1\to 0. \end{equation} Let $\varphi$ be the composition $\mathcal{O}_R(-R)\simeq \mathcal{I}_R/\mathcal{I}_R^2 \to \Omega_X^1|_R \to \Omega_f^1|_R$; we want to prove that $\varphi$ is an isomorphism. This can be proved locally. In a coordinate patch $U$ that satisfies the second condition of this lemma, we know that $\Omega_f^1$ can be locally written as \begin{equation*} \frac{\bigoplus_{i=1}^n \mathcal O_U\{du_i\}}{\mathcal O_U\{u_1du_1\}\bigoplus_{i=2}^n \mathcal O_U\{du_i\}} \simeq \mathcal{O}_U/u_1\mathcal{O}_U\{du_1\}, \end{equation*} which is a line bundle over $R\cap U$, generated by $du_1$. Moreover, the equivalence class of $u_1$ which generates $\mathcal{I}_R/\mathcal{I}_R^2$ maps to the class of $du_1$, which generates $\Omega_f^1|_R$. So $\varphi$ must be an isomorphism.
\end{proof} \begin{corollary}\label{negative_intersection} Under the assumptions of the above lemma, for any irreducible curve $C\subset R$, we must have $C\cdot R \leq 0$. In particular, if $X$ is of dimension 2, then $R\cdot R\leq 0$. \end{corollary} \begin{proof} For such a curve $C$, $R\cdot C=\mathrm{deg}_C(\mathcal{O}_X(R))=\mathrm{deg}_C(\mathcal{O}_R(R))\leq 0$, since $\mathcal{O}_R(-R)$ is nef. \end{proof} \begin{remark} What we actually proved here is that the short exact sequence (\ref{fund_ses}) splits. This is clear geometrically: the tangent bundle $TX$ of $X$ restricted to $R$ has two natural subbundles, one is the tangent bundle of $R$, and the other is the kernel bundle of $df|_R:TX|_R\to TY|_B$. Moreover, the assumption ``analytically locally'' can be replaced by ``formally locally'', since for checking that $\varphi$ is an isomorphism, we only need to verify it stalk by stalk formally. Therefore the argument works in all characteristics. \end{remark} \section{The Cyclic Covering} We recall the construction of cyclic coverings. \begin{proposition}[{\cite[Proposition 4.1.6]{Lazarsfeld2004}}] Let $Y$ be an algebraic variety, and $L$ a line bundle on $Y$. Suppose given a positive integer $m \geq 1$ plus a non-zero section $$s \in \Gamma(Y,L^m)$$ defining a divisor $D \subset Y$. Then there exists a finite flat covering $\pi : X \to Y$, where $X$ is a scheme having the property that the pullback $L'= \pi^{*} L$ of $L$ carries a section $s'\in \Gamma(X,L')$ with $(s')^m = \pi^*s$. The divisor $D'= \mathrm{div}(s')$ maps isomorphically to $D$. Moreover, if $Y$ and $D$ are non-singular, then so too are $X$ and $D'$. \end{proposition} The construction of $X$ can be described in detail locally. On an affine open subset $U$ of $Y$ on which $L^m$ is trivial, $s$ can be viewed as a regular function on $U$. Then $\pi^{-1}(U)\subset U \times \mathbf{A}^1$ is defined by the equation $t^m-s=0$.
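As a simple illustration of the construction (not needed in the sequel), take $Y=\mathbf{P}^1$, $L=\mathcal{O}_{\mathbf{P}^1}(1)$, $m=2$, and $s=x_0x_1\in\Gamma(\mathbf{P}^1,\mathcal{O}_{\mathbf{P}^1}(2))$, whose divisor $D$ consists of the two coordinate points. On the chart $U=\{x_0\neq 0\}$ with coordinate $u=x_1/x_0$, the cover $\pi^{-1}(U)\subset U\times\mathbf{A}^1$ is the smooth curve $\{t^2=u\}$, and $\pi$ is ramified exactly over $u=0$. Globally $X\simeq\mathbf{P}^1$, a double cover branched at the two points of $D$, consistent with the Riemann--Hurwitz formula
\[
2g(X)-2=2\,\big(2g(Y)-2\big)+\deg D, \qquad \text{i.e. } -2=-4+2.
\]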
In particular, if $m=2$ and $D$ is nonsingular, then the covering we construct here satisfies the second condition of \cref{rami_locus_negative}. The reason is that analytically locally, the map $\pi$ can be written as $(x_1,x_2,\cdots,x_n,t) \to (x_1,x_2,\cdots,x_n)$. Since $D$ is nonsingular, at any given point $p\in D$, there exists some $i$ such that $(x_1,x_2,\cdots,\widehat{x_i},\cdots,x_n,s)$ is a local coordinate system on $Y$. Then $(x_1,\cdots,\widehat{x_i},\cdots,x_n,t) \to (x_1,\cdots,\widehat{x_i},\cdots,x_n,s)$ is the desired coordinate presentation of $\pi$. Now we can prove the results claimed in the introduction. \begin{proof}[Proof of \cref{Main_Theorem}] We pick any smooth $n$-fold $Y$ with nef cotangent bundle, a very ample line bundle $H$, and any smooth member $D\in |2H|$. By the construction above, we get a double covering map $f:X\to Y$, branched over $D$, satisfying all the conditions of \cref{rami_locus_negative}. In particular, if $X$ has nef cotangent bundle, then for any irreducible curve $C\subset D$, $D'\cdot f^*C=2D'\cdot f^{-1}(C) \leq 0$ by \cref{negative_intersection} ($f^{-1}(C)$ is still irreducible since $f$ maps $D'$ isomorphically to $D$). On the other hand, $D'\cdot f^*C=D\cdot C=2H\cdot C>0$, since $H$ is very ample. Therefore $X$ cannot have nef cotangent bundle. \end{proof} \begin{proof}[Proof of \cref{Main_Corollary}] Let $Y$ in the above construction be an abelian variety. Then $Y$ has universal cover $\mathbb{C}^n$. Taking the fiber product in the category of complex spaces, we have the Cartesian diagram \[ \begin{tikzcd} X' \arrow[d] \arrow[r] & \mathbb{C}^n \arrow[d] \\ X \arrow[r] & Y \end{tikzcd} \] where $X'$ is an infinite unramified cover of $X$. Since $X \to Y$ is finite, $X'\to \mathbb{C}^n$ is also finite, so by \cite[Theorem V.1.1]{Grauert2004} $X'$ is Stein.
Therefore, the universal cover of $X$ is Stein, because any unramified covering (not necessarily finite) of a Stein manifold is Stein (see \cite{Stein1956}). But the cotangent bundle of $X$ is not nef. \end{proof} \begin{proof}[Proof of \cref{Corollary_large_fundamental_group}] For a normal variety $X$, having large fundamental group is equivalent to saying that for any positive dimensional cycle $w:W\to X$, the image of $\pi_1(W)\to\pi_1(X)$ is infinite (see \cite{Kollar1993}). From this definition it is clear that having large fundamental group is preserved under finite surjective morphisms. Take $Y$ to be an abelian variety and $X$ as in the proof of \cref{Main_Theorem}; then $X$ has large fundamental group while its cotangent bundle is not nef. \end{proof} Kobayashi hyperbolicity is an important example of hyperbolicity of complex manifolds. If a projective variety has ample cotangent bundle, then it is Kobayashi hyperbolic (see \cite[Theorem 6.3.26]{Lazarsfeld2004_2}). However, the above construction shows \begin{corollary} There is a projective variety $X$ which is Kobayashi hyperbolic while its cotangent bundle is not nef. \end{corollary} \begin{proof} Take $Y$ to be any projective Kobayashi hyperbolic variety in the above construction. Then $f:X\to Y$ is finite surjective but $X$ does not have nef cotangent bundle. The variety $X$ is nevertheless Kobayashi hyperbolic: if $g:\mathbb{C} \to X$ is a holomorphic map, then $f\circ g$ is constant since $Y$ is Kobayashi hyperbolic, so $g(\mathbb{C})$ is contained in a fiber of $f$, which is a finite set, and hence $g$ is constant. \end{proof} \begin{remark} Such an example in dimension 2 has been known for a long time (see for example \cite{10.2307/2041670}). \end{remark} All these examples show that nefness of the cotangent bundle is too strong as a hyperbolicity condition.
Inspired by \cite{arapura2021perverse}, one possible adjustment is to look at the class of projective manifolds that admit a finite morphism to a manifold with nef cotangent bundle. Since the pushforward of a perverse sheaf under a finite morphism is still perverse, by \cite[Proposition 3.6]{LiuMaximWang+2021} any perverse sheaf on such a manifold has nonnegative Euler characteristic. In \cite{arapura2021perverse}, the authors proved that the same property holds for projective varieties that admit a complex variation of Hodge structures with finite period map. This result also indicates that admitting a finite morphism to a projective manifold with nef cotangent bundle is a more appropriate hyperbolicity condition. \printbibliography \end{document}
\begin{document} \begin{abstract}{We study the subsets $V_k(A)$ of a complex abelian variety $A$ consisting of the points $x\in A$ such that the zero-cycle $\{x\}-\{0_A\}$ is $k$-nilpotent with respect to the Pontryagin product in the Chow group. These sets were introduced recently by Voisin, who showed that $\dim V_k(A) \leq k-1$ and that $V_k(A)$ is countable for a very general abelian variety of dimension at least $2k-1$. We study in particular the locus $\mathcal V_{g,2}$ in the moduli space of abelian varieties of dimension $g$ with a fixed polarization, where $V_2(A)$ is positive dimensional. We prove that an irreducible subvariety $\mathcal Y \subset \mathcal V_{g,2}$, $g\ge 3$, such that for a very general $y \in \mathcal Y $ there is a curve in $V_2(A_y)$ generating $A_y$ satisfies $\dim \mathcal Y \le 2g - 1.$ The hyperelliptic locus shows that this bound is sharp. MSC codes: 14K10, 14C15.} \end{abstract} \maketitle \begin{flushright} \textit{Dedicated to the memory of our friend Alberto Collino} \end{flushright} \section{Introduction} Claire Voisin in \cite{Vo} defines the subset $V_k(A)$ of a complex abelian variety $A$ consisting of the points $x\in A$ such that the zero-cycle $\{x\}-\{0_A\}$ is $k$-nilpotent with respect to the Pontryagin product in the Chow group: \[ V_k(A):=\{x\in A \mid (\{x\}-\{0_A\})^{*k}=0 \text{ in } CH_0(A)_{\mathbb Q}\}. \] Here we denote by $\{x\}$ the zero-cycle of degree $1$ corresponding to the point $x\in A$. These sets are naturally defined in the sense that they exist in all abelian varieties, are functorial and move in families. Moreover they are related to the gonality of the abelian variety itself (the minimal gonality of a curve contained in $A$) in a natural way. We consider the following subsets of the moduli space of abelian varieties of dimension $g$ with a polarization of type $\delta$: \[ \mathcal V_{g,k,l}=\{ A \in \mathcal A_g^\delta \mid \dim V_k(A) \ge l \}.
\] Since the sets $V_k$ are naturally defined, $\mathcal V_{g,k,l}$ is a union of countably many closed subvarieties of $\mathcal A_g^\delta$. Hence it makes sense to ask about its dimension. Put $\mathcal V_{g,k}:=\mathcal V_{g,k,1}$. For an abelian subvariety $B\subset A$ the inclusion $V_k(B)\subset V_k(A)$ holds, and a well-known theorem of Bloch implies that $B=V_k(B)$ if $\dim B+1\le k$; hence in this situation $B\subset V_k(A)$. These are in some sense ``degenerate examples''. In this paper we concentrate on the case $k=2$ and treat the non-degenerate case, that is, we will assume that $V_2(A)$ contains some curve generating the abelian variety $A$. Our main result is: \vskip 3mm \begin{thm}\label{m.res} Let $g\ge 3$ and consider an irreducible subvariety $\mathcal Y \subset \mathcal V_{g,2}$ such that for a very general $y\in \mathcal Y$ there is a curve in $V_2(A_y)$ generating $A_y$. Then $\dim \mathcal Y\le 2g-1.$ \end{thm} This result is sharp due to the fact that the hyperelliptic locus is contained in $\mathcal V_{g,2}$, see section 2. In fact, the motivation for this study is to understand the geometrical meaning of the positive dimensional components of $V_2$. Our result gives some evidence that there is a link between the existence of hyperelliptic curves in abelian varieties and the fact that $V_2$ is positive dimensional. We remark that the statement of Theorem (\ref{m.res}) was suggested by the analogous result in \cite{Isog_hyp} concerning hyperelliptic curves. \vskip 3mm Section $2$ is devoted to giving some preliminaries and some useful properties of the loci $V_k(A)$, focusing especially on the case $k=2$. A remarkable property is that $V_2(A)$ is the preimage of the orbit of the image of the origin with respect to rational equivalence in the Kummer variety $Kum(A)$.
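The simplest degenerate example can be made explicit. For an elliptic curve $E$ and any $x\in E$, the zero-cycle
\[
(\{x\}-\{0_E\})^{*2}=\{2x\}-2\{x\}+\{0_E\}
\]
is the class of a degree-zero divisor on $E$ whose sum under the group law is $2x-2x+0_E=0_E$, hence it is principal by Abel's theorem. Therefore $V_2(E)=E$, and any elliptic curve passing through the origin of an abelian variety $A$ is contained in $V_2(A)$.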
We also prove the following interesting facts (see Corollaries (\ref{cor1}) and (\ref{cor2})): \vskip 3mm \begin{prop} For any abelian variety $A$ the inclusion $$V_k(A)+V_l(A)\subset V_{k+l-1}(A)$$ holds for all $\, 1\le k, l \le g$. Moreover if $C$ is a hyperelliptic curve of genus $g$ and $J(C)$ is its Jacobian variety, then for all $1\le k \le g+1$, we have that $\dim V_{k}(JC)=k-1$ (the maximal possible value). \end{prop} The rest of the paper is devoted to the proof of the main theorem. The beginning follows closely the same strategy as in \cite{Isog_hyp}, since we reduce to proving the vanishing of a certain adjoint form. The novelty here is that we prove this vanishing by using the action of a family of rationally trivial zero-cycles, in the spirit of the results of Mumford and Roitman revisited by Bloch--Srinivas, Voisin and others. More precisely, we can assume that there is a relative map $f:\cC \lra \cA$ of curves in abelian varieties over a base $\cU$ such that $f(C_y)\subset V_2(A_y)$ generates the abelian variety $A_y$ for all $y\in \cU$. We use the properties of the sets $V_2(A_y)$ to construct a cycle on the family of abelian varieties which acts trivially on the differential forms. Then, by using deformation of differential forms as in \cite{Isog_hyp}, we compute the so-called adjoint form at a generic point of the family. This technique can be traced back to \cite{collino_pirola}, where it was introduced for the first time. In section $4$ we assume, by contradiction, that the dimension of the family is $\ge 2g$ and we prove that this implies the existence of a non-trivial adjoint form. This contradicts the results of section 3. \textbf{Acknowledgments:} We warmly thank Olivier Martin for his careful reading of the paper, and the referee for valuable suggestions that have simplified and clarified the paper.
\section{Preliminaries on the subsets $V_k$ of an abelian variety $A$} \subsection{On the dimension of $V_k$} Most of this subsection comes from \cite{Vo}, where the sets $V_k(A)$ appear for the first time. \begin{defn} Let $A$ be an abelian variety and denote by $CH_0(A)$ the Chow group of zero-cycles in $A$ with rational coefficients. We also denote by $*$ the Pontryagin product in the Chow group. Given a point $x \in A$, we write $\{x\}$ for the class of $x$ in $CH_0(A)$. Then we define the Voisin sets (cf. \cite{Vo}): \[ V_k=V_k(A):=\{x\in A \mid (\{x\}-\{0_A\})^{*k}=0\}. \] \end{defn} It is known that the set $V_k$ is a countable union of closed subvarieties of $A$. A typical way to obtain points in $V_k$ is given by the following Proposition: \vskip 3mm \begin{prop}(\cite[Prop. (1.9)]{Vo}) \label{Vo_Be} Assume that $\{x_1\}+\dots +\{x_k\}=k\{0_A\}$ in $CH_0(A)$ for some points $x_i\in A$. Then for all $i$ we have $x_i\in V_k$. \end{prop} Since the orbits $\vert k \{0_A\}\vert$ with respect to rational equivalence are hard to compute, there are only a few examples of positive dimensional components in $V_k(A)$ that we can construct from this proposition. The simplest instance comes from a $k$-gonal curve $C$ contained in $A$ such that there is a totally ramified point $p$ for the degree $k$ map $f:C\lra \mathbb P^1$. Translating, we can assume that $p$ is the origin $0_A$, and then the fibers of $f$ provide a $1$-dimensional family in the $k$-th symmetric product of $A$. Therefore, by the proposition above, we obtain that $C$ is contained in $V_k(A)$. Observe that the sets $V_k$ are invariant under isogenies, hence these positive dimensional components also appear in many other abelian varieties. In particular, for any integer $n$ we have that $n_* V_2(A)\subset V_2(A)$.
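For instance, let $C$ be a hyperelliptic curve embedded in $JC$ by the Abel--Jacobi map based at a Weierstrass point $p$, so that $p$ maps to $0_{JC}$. Each fiber of the degree $2$ map $f:C\lra \mathbb P^1$ consists of a pair $\{x,\iota(x)\}$, where $\iota$ is the hyperelliptic involution, and $x+\iota(x)\sim 2p$ as divisors on $C$. Pushing this rational equivalence forward to $JC$ gives
\[
\{x\}+\{\iota(x)\}=2\{0_{JC}\} \quad \text{in } CH_0(JC),
\]
so Proposition (\ref{Vo_Be}) with $k=2$ shows that $C\subset V_2(JC)$.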
\begin{rem}\label{remarks_on_V_k} We have the following properties: \begin{enumerate} \item [a)] All the abelian varieties containing hyperelliptic curves have positive dimensional components in $V_2$. \item [b)] A well-known theorem of Bloch (see \cite{bloch}) implies that $$V_{g+1}(A)=A,$$ hence the natural filtration \[ V_1(A) \subset V_2(A)\subset \ldots \subset V_g(A) \subset V_{g+1}(A)=A \] has at most $g$ steps. It is natural to ask about the behaviour of the dimension of $V_k(A)$, with $k\le g$, for very general abelian varieties, and which geometric properties of $A$ these sets encode. \item [c)] Assume that $B\subset A$ is an abelian subvariety; then $V_k(B)\subset V_k(A)$. In particular, if $k\ge \dim B+1$, then $B\subset V_k(A)$. For instance: all the elliptic curves in $A$ passing through the origin are contained in $V_2(A)$. \item [d)] Let $C$ be a smooth quartic plane curve and let $p$ be a flex point with tangent $t$; then $t\cdot C=3p+q$ and the projection from $q$ provides a collection of zero-cycles of degree $3$ in $JC$ rationally equivalent to $3\{0_{JC}\}$, hence $C\subset V_3(JC)$. Using isogenies we get that there are in fact countably many curves in $V_3(A)$ for a very general abelian variety of dimension $3$. \end{enumerate} \end{rem} The following is proved in Theorem (0.8) of \cite{Vo} by using some ideas from \cite{Mumford} and improving the techniques of \cite{Kummer}: \vskip 3mm \begin{thm}\label{dim_V_k} Let $A$ be an abelian variety of dimension $g$. Then: \begin{enumerate} \item [a)] $\dim V_k(A) \le k-1$. \item [b)] If $A$ is very general and $g\ge 2k-1$ we have that $\dim V_k(A)=0$. \end{enumerate} \end{thm} In the specific case of $V_2(A)$ we have the following properties. \begin{prop}\label{properties_V_2} Let $A$ be an abelian variety and let $Kum(A)$ be its Kummer variety.
\begin{enumerate} \item [a)] We have the equality $V_2(A)=\{x\in A \mid \{x\}+\{-x\}=2\{0_A\}\}.$ \item [b)] Let $\alpha : A\lra Kum(A)$ be the quotient map. Then $$ V_2(A)=\alpha^{-1}(\{y\in Kum(A) \mid \{y\} \sim_{rat} \{\alpha (0_A)\} \}). $$ \end{enumerate} \end{prop} \begin{proof} Part a) follows from the observation that \[ (\{x\}-\{0_A\})^{*2}=\{2x\}-2\{x\}+\{0_A\}=0 \] is equivalent, after translating by $-x$, to $\{x\}+\{-x\}=2\{0_A\}$. To prove b) we first see that $x\in V_2(A)$ if and only if $\{ \alpha (x) \} \sim_{rat} \{\alpha (0_A)\}$. Indeed, assume that $\{x\}+\{-x\}=2\{0_A\}$; applying $\alpha$ we get that $2 \{\alpha(x)\}\sim_{rat} 2\{\alpha ( 0_A)\}$. Since $Alb({Kum(A)})=0$, the Chow group has no torsion (see \cite{Roitman_tors}); therefore $\{\alpha(x)\}\sim_{rat} \{\alpha (0_A)\}$. In the opposite direction, if $\{\alpha (x)\}\sim_{rat} \{\alpha (0_A)\}$ we apply $\alpha^*$ at the level of Chow groups and obtain that $x\in V_2(A)$. Hence $V_2(A)$ is the preimage by $\alpha $ of the points rationally equivalent to $\alpha (0_A)$. \end{proof} \subsection{Relation with the Chow ring} In this part we collect some computations on $0$-cycles on abelian varieties which are more or less implicit in \cite{beauville_quelques}, \cite{beauville} and \cite{Vo}. Let us first recall some facts on the Chow group (with rational coefficients) of an abelian variety $A$ of dimension $g$ which are proved in \cite{beauville}. Let us define the subgroups: \[ CH^g_s(A):=\{z\in CH^g(A)_{\mathbb Q} \mid k_*(z)=k^s z, \quad \forall k\in\mathbb Z \}. \] Then: \[ CH^g(A)_{\mathbb Q}=CH^g_0(A) \oplus CH^g_1(A) \oplus \ldots \oplus CH^g_g(A). \] Moreover $CH^g_0(A)=\mathbb Q \{0_A\}$ and $I=\bigoplus _{s\ge 1} CH^g_s(A)$ is the ideal, with respect to the Pontryagin product, of the zero-cycles of degree $0$.
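The compatibility of this grading with the Pontryagin product can be checked directly. Writing $z*w=\sigma_*(z\times w)$ for the sum map $\sigma:A\times A\lra A$, and using $k\circ\sigma=\sigma\circ(k\times k)$, we get for $z\in CH^g_s(A)$ and $w\in CH^g_t(A)$:
\[
k_*(z*w)=\sigma_*\big((k\times k)_*(z\times w)\big)=k_*(z)*k_*(w)=k^{s+t}\, z*w,
\]
so that $CH^g_s(A)*CH^g_t(A)\subset CH^g_{s+t}(A)$.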
It is known that $I^{*\, r}= \bigoplus _{s\ge r} CH^g_s(A)$ and that $I^{*\, 2}$ is the kernel of the Albanese map tensored with $\mathbb Q$: \[ I \lra A_{\mathbb Q}, \] sending a zero-cycle $\sum n_i\{a_i\}$ to the sum $\sum n_i \, a_i$ in $A$. Another useful property is that $CH^g_s(A)*CH^g_t(A)\subset CH^g_{s+t}(A)$. We point out that the filtration $V_1(A)\subset V_2(A) \subset \ldots \subset A$ is, in some sense, induced by the filtration $\ldots \subset I^{* \, 2}\subset I \subset CH^g(A)_{\mathbb Q}$. Indeed, given a point $x\in A$ we use the notation: \begin{equation}\label{decomposition} \{x\}=\{0_A\}+x_1+\ldots +x_g, \qquad x_i \in CH^g_i(A). \end{equation} Then we have: \begin{prop}\label{V_k_vs_Chow} For all $x\in A$, $x$ belongs to $V_k(A)$ if and only if $$x_{k}=\ldots =x_g=0.$$ In particular $x \in V_2(A)$ if and only if $\{x\}-\{0_A\}\in CH^g_1(A)$. \end{prop} \begin{proof} We apply to (\ref{decomposition}) the multiplication by $l$ in $A$: \[ \{l x\}=\{0_A\}+l x_1+\ldots + l^g x_g. \] Using this we have: \[ \begin{aligned} (\{x\}-\{0_A\})^{* k}&= \sum_{i=0}^k (-1)^i\binom{k}{i}\{(k-i)x\}=\\ &= \sum_{i=0}^k (-1)^i\binom{k}{i}(\{0_A\}+(k-i)x_1+\ldots +(k-i)^g x_g)= \\ &= \sum_{i=0}^k (-1)^i\binom{k}{i} \{0_A\}+ \sum_{i=0}^k (-1)^i\binom{k}{i} (k-i)x_1+\ldots \\ &+ \sum_{i=0}^k (-1)^i\binom{k}{i}(k-i)^g x_g. \end{aligned} \] Now we use the following formulas (see the proof of Lemma 3.3 in \cite{Vo} or prove them by induction): \[ \begin{aligned} \sum_{i=0}^k (-1)^i\binom{k}{i}(k-i)^l=& 0 \qquad \text{ if }\, l<k \\ \sum_{i=0}^k (-1)^i\binom{k}{i}(k-i)^k=& k! \end{aligned} \] Therefore we obtain that: \[ (\{x\}-\{0_A\})^{* k}=k! x_k+\ldots \] and similarly $ (\{x\}-\{0_A\})^{* l}=l! x_l+\ldots $ for any $l\ge k$. Hence $x\in V_k(A)$ if and only if $x\in V_l(A)$ for all $l\ge k$, if and only if $x_k=\ldots =x_g=0$.
\end{proof} We have several consequences of this Proposition and of its proof: \begin{cor} With the same notations we have that $x_k=\frac 1 {k!}x_1^k$. Hence $\{x\}=\exp(x_1).$ Moreover $x\in V_k$ if and only if $x_k=x_1^k=0$. \end{cor} \begin{proof} We have seen along the proof of the Proposition that \[ (\{x\}-\{0_A\})^{* k}=k! x_k+\ldots \text{ higher degree terms}. \] Computing directly we get that \[ (\{x\}-\{0_A\})^{* k}=(x_1+\ldots +x_k)^{*k}=x_1^k+\ldots \text{higher degree terms}. \] By comparing both formulas we obtain the equality. \end{proof} \begin{rem} Notice that our computations are more or less contained in \cite{beauville_quelques}. Indeed, define as in section $4$ of loc. cit. the map \[ \gamma: A \lra I, \qquad a \mapsto \{0\} -\{a\} + \frac 12 (\{0\} -\{a\})^{*2} +\frac 13 (\{0\} -\{a\})^{*3}+\ldots \] This is a morphism of groups. Then, with our notations, $\gamma (x)= -x_1$. In particular the image of $\gamma $ is contained in $CH^g_1(A)$. \end{rem} \begin{cor}\label{points_V_2} Let $a, b \in A$ be two points such that $$n \{a\} + m\{b\}=(n+m)\{0_A\}$$ for some integers $1\le n,m$. Then $a,b \in V_2(A)$. \end{cor} \begin{proof} Decomposing as before: \[ \{a\}=\{0\}+a_1+\frac 12 a_1^2+\ldots \qquad \{b\}=\{0\}+b_1+\frac 12 b_1^2+\ldots \] the equality of the statement implies that $n a_1+m b_1=0=n a_1^2+m b_1^2$. Then $b_1=-\frac nm a_1$ and thus $n a_1^2+ \frac {n^2}{m} a_1^2=0$, so $a_1^2=b_1^2=0$ and $a,b \in V_2(A)$. \end{proof} \begin{cor} Let $\varphi:A\lra B$ be an isogeny; then $\varphi^{-1}(V_k(B))\subset V_k(A)$. In particular $\varphi (V_k(A))=V_k(B)$ and $\varphi ^{-1}(V_k(B))=V_k(A)$. \end{cor} \begin{proof} Since we work with Chow groups with rational coefficients, it is clear that for an integer $n\neq 0$ the map $n_*:CH^g_k(A)\lra CH^g_k(A)$ is bijective. Let $\psi : B\lra A$ be an isogeny such that $\psi \circ \varphi =n$; we deduce that $\varphi _*: CH^g_k(A) \lra CH^g_k(B)$ is injective.
Let $x\in \varphi^{-1}(V_k(B))$ and set $\{x\}=\{0_A\}+x_1+\ldots + x_g$. By hypothesis \[ \varphi (\{x\}) =\{0_B\} +\varphi _*(x_1)+\ldots +\varphi _*(x_g) \in V_k(B). \] Hence $\varphi_* (x_k)=0$ and $x_k=0$. Therefore $x \in V_k(A)$ and we are done. \end{proof} \begin{rem} This corollary also follows from the definition of $V_k$ and the fact that $\varphi _*$ is an isomorphism modulo torsion on Chow groups compatible with the Pontryagin product. \end{rem} Another interesting consequence of this characterization is the following property: \begin{cor}\label{cor1} For any $0\le k,l \le g$ we have that $$V_k(A)+V_l(A)\subset V_{k+l-1}(A).$$ \end{cor} \begin{proof} Let $x\in V_k(A)$, $y\in V_l(A)$. Then $\{x\}=\{0_A\}+x_1+\ldots +x_{k-1}$ and $\{y\}=\{0_A\}+y_1+\ldots +y_{l-1}$. Since $x_i*y_j \in CH^g_{i+j}(A)$ we obtain \[ \{x+y\}= \{x\}*\{y\}=\{0_A\}+(x_1+y_1)+ (x_2+x_1*y_1+y_2)+\ldots+ x_{k-1}*y_{l-1}. \] Thus $x+y\in V_{k+l-1}(A)$. \end{proof} As an application we have: \begin{cor}\label{cor2} Let $C$ be a hyperelliptic curve of genus $g$. Then \[ \dim V_k(JC)=k-1 \] for $1\le k\le g$, that is, the maximal possible dimension is attained. \end{cor} \begin{proof} Choosing a Weierstrass point to define the Abel-Jacobi map, we can assume that the curve $C$ is contained in $V_2(JC)$; using the previous Corollary inductively, we have that \[ C+\stackrel{(k-1)}{\ldots} +C=W^0_{k-1}(C)\subset V_{k}(JC). \] \end{proof} For instance, for the Jacobian of a genus $3$ curve $C$ we have, in the hyperelliptic case, that $\dim V_2(JC)=1$ and $\dim V_3(JC)=2$. If instead $C$ is a generic quartic plane curve, we have that $\dim V_2(JC)=0$ by Theorem (\ref{dim_V_k}) and $\dim V_3(JC)\ge 1$ by Remark (\ref{remarks_on_V_k},d). Denoting, as in that remark, by $p$ a flex point and by $q$ its residual point, we have also the following: for any $x\in C$ such that the tangent line to $C$ at $x$ goes through $q$, $x\in V_2(JC)$ (we identify $C$ with its Abel-Jacobi image in $JC$ using $p$).
Indeed: there exists a $y\in C$ with $2x+y+q\sim 3p+q$, hence in $JC$ there is a relation of the form $2\{x\}+\{y\}=3\{0\}$, and then Corollary (\ref{points_V_2}) implies that $\{x\}, \{y\} \in V_2(JC)$. Assume now that $C$ is a plane quartic with a hyperflex, that is, a point $p$ such that $\mathcal O_C(1)\cong \mathcal O_C (4p)$. This condition defines a divisor in $\mathcal M_3$. Embed $C$ in its Jacobian using $p$ as base-point. Then for any bitangent $2x+2y$ we have $\{x\}, \{y\} \in V_2(JC)$. Also, with the same argument, for a standard flex $q$ we have $\{q\}\in V_2(JC)$. Everything suggests that the points in ``$C\cap V_2(JC)$'' could have some geometrical meaning. Notice that this leaves open the question of whether the dimension of $V_3(JC)$ is $1$ or $2$ for a generic quartic plane curve $C$. \vskip 3mm \section{A family of zero-cycles and the action on differential forms} In this section we begin the proof of the main theorem. We proceed by contradiction, hence we assume that there exists an irreducible component $\mathcal Y$ of $\mathcal V_{g,2}$ of dimension $\ge 2g$. By Theorem (\ref{dim_V_k}) we have $\dim V_2(A)\le 1$ for any abelian variety $A$. Hence $V_2(A_y)$ contains curves for all $y\in \mathcal Y$ and, by hypothesis, at least one of these curves generates the abelian variety. By a standard argument (involving the properness and countability of relative Chow varieties and the existence of universal families of abelian varieties up to base change) we can assume the existence of the following diagram: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal {C} \ar[rd] \ar[rr]^ f && \mathcal {A} \ar[ld]^{\pi } \\ &\mathcal U, } \] where the parameter space $\cU$ comes equipped with a generically finite map $\Phi: \cU \lra \cY $ such that $\Phi(y)$ is the isomorphism class of $A_y$ for all $y\in \mathcal U$. Moreover $f_y:C_y \lra A_y$ is the normalization map of an irreducible curve $f_y(C_y)$ contained in $V_2(A_{y})$ and generating $A_y$, followed by the inclusion.
We can also assume that $\cC \lra \cU$ has a section, and then that $f$ induces a map of families of abelian varieties $F:\cJC \lra \mathcal A$ over $\mathcal U$. To start with, we pull back the families of curves and abelian varieties to $\cC$ itself: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal A_{\cC} \ar[d]_{\pi_{\cC }} \ar[rr] && \mathcal {A} \ar[d]^\pi\\ \cC \ar[rr] && \cU. } \] Now we define a family of zero-cycles in $\mathcal A_{\cC }$ parametrized by $\cC$. Let $s_+: \cC \lra \cA_{\cC}$ be the section given by the maps $(id_{\cC},f)$: \[ \xymatrix@C=1.pc@R=1.8pc{ \cC \ar@/_/[ddr]_{id_{\cC}} \ar@/^/[drr]^f \ar[dr]^{s_+} \\ & \cA_{\cC} \ar[d]^{\pi_{\cC}} \ar[r] & \cA \ar[d]^{\pi} \\ & \cC \ar[r] & \cU } \] Put $\mathcal Z^+:=s_+(\cC)$. Analogously, by considering $-1_{\mathcal A}\circ f$, where $-1_{\mathcal A} $ is the relative $-1$ map on the family of abelian varieties, we define a section $s_-:\cC \lra \cA_{\mathcal C}$ of $\pi_{\mathcal C}$ and a cycle $\mathcal Z^-:=s_-(\cC)$. Finally the zero section $0_{\mathcal A}$ induces a section $s_0$ and a cycle $\mathcal Z_0$. Set $\mathcal Z=\mathcal Z^++\mathcal Z^- - 2 \mathcal Z_0$, a cycle on $\mathcal A_{\cC}$. In fact $\mathcal Z^++\mathcal Z^-, 2 \mathcal Z_0 \subset Sym^2 \mathcal A_{\cC}$. Observe that for $t\in \cC$, $\mathcal Z_{t}$ is the $0$-cycle \[ \mathcal Z_t=\{f(t)\}+\{-f(t)\}-2\{0_{A_y}\} \] on $A_y$, where $\pi(t)=y$. Since $f(t)\in f(C_y)\subset V_2(A_y)$, we have that $\mathcal Z_{t}\sim _{rat}0$ in $A_y$ (see Proposition \ref{properties_V_2}). We are interested in an infinitesimal deformation of a curve $C_y$ for a general $y\in \mathcal U$. Thus, let us denote by $\Delta $ the spectrum of the ring of dual numbers, $Spec \, \mathbb C[\varepsilon ]/(\varepsilon ^2)$. We consider a tangent vector $\xi \in T_{\mathcal U}(y)$ and we take a smooth quasi-projective curve $B\subset {\mathcal U}$ passing through $y$ with $\xi \in T_B(y)$.
This induces the maps \[ \alpha_{\xi }:\Delta \lra B \lra \mathcal U. \] We pull back to $B$ and to $\Delta $ the families of curves and abelian varieties and the cycle $\mathcal Z$; thus we have \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal A_{\cC_{\Delta }} \ar[d]_{\pi_{\Delta}} \ar[rr] && \mathcal A_{\cC_B} \ar[d]^\pi \ar[rr] && \mathcal A_{\cC} \ar[d]^{\pi_{\mathcal C}}\\ \mathcal \cC_{\Delta } \ar[d] \ar[rr] && \mathcal C_B \ar[d] \ar[rr] && \mathcal C \ar[d] \\ \Delta \ar[rr]^{\alpha_{\xi}} && B \ar[rr] && \cU, } \] and $\mathcal Z_{\Delta }$ (resp. $\mathcal Z_{B }$) denotes the pull-back to $\mathcal A_{\mathcal C_{\Delta}}$ (resp. to $\mathcal A_{\mathcal C_{B}}$) of the cycle $\mathcal Z$. The cycle $\mathcal Z_B$ determines a cohomological class $[\mathcal Z_B]\in H^g(\mathcal A_{\cC_B},\Omega^g_{\mathcal A_{\cC_B}})$ and its restriction a class $[\mathcal Z_B]_t$ in $H^g(A_{t}, \Omega ^g_{\mathcal A_{\cC_B} \vert A_{ t} } )$, where $\pi(t)=y$. The following is well known to experts and is a version of the classical results of Mumford and Roitman on zero-cycles (see \cite[Lemma 2.2]{voisin_AG} and also \cite{Bloch_Srinivas}): \begin{prop}\label{Bloch-Srinivas} If for any $t\in \cC_B$ the restricted cycle $\mathcal Z_t$ is rationally equivalent to $0$, then there is a dense Zariski open set $\mathcal V\subset \cC_B$ such that $[\mathcal Z_{\mathcal V}]=0$ in $H^g(\mathcal A_{\mathcal V}, \Omega ^g_{\mathcal A_{\mathcal V}})$. In particular for all $t\in \mathcal V$ we have $[\mathcal Z_{\mathcal V}]_t=[\mathcal Z_{B}]_t =0$ in $H^g( A_{t}, \Omega ^g_{\mathcal A_{\mathcal V}\vert A_t})$. \end{prop} Now we look at the action of $\mathcal Z_{\Delta }$ on the space of differential forms on the infinitesimal family of abelian varieties. This works as follows. Let $\mathcal A_{C_y}=A_y\times C_y$ be the restriction of $\mathcal A_{C_{\Delta }}$ over $C_y$.
Consider $\Omega \in H^0(\mathcal A_{C_y}, \Omega^2_{\mathcal A_{\cC_ {\Delta}}|\mathcal A_{C_y}}) $ and define: \begin{equation}\label{def_action} \mathcal Z_{\Delta }^*(\Omega )=(s_+)^*(\Omega)+(s_-)^*(\Omega )-2\, s_0^*(\Omega), \end{equation} which belongs to $H^0( C_y, \Omega^2_{\mathcal C_{\Delta }|C_y}). $ Then, by Proposition (\ref{Bloch-Srinivas}), this action is trivial, which gives the vanishing $ \mathcal Z_{\Delta }^*(\Omega )=0$. Consider also the family of abelian varieties on $\Delta $: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal \cA_{\Delta } \ar[d] \ar[rr] && \mathcal A \ar[d] \\ \Delta \ar[rr] && \cU. } \] Notice that there is a natural map $\mathcal A_{\cC_{\Delta } } \xrightarrow{h} \mathcal A_{\Delta}$ and that the composition: \[ \cC_{\Delta } \xrightarrow{s_+} \mathcal A_{\cC_{\Delta } } \xrightarrow{ h} \mathcal A_{\Delta} \] is simply the original family of curves $f_{\Delta}: \cC_{\Delta } \lra \mathcal A_{\Delta }$ over $\Delta $. Therefore, for any form $\Omega' \in H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y}) $, denoting $\Omega = h^*(\Omega ')$, we have that \[ f_{\Delta}^*(\Omega')=s_+^*(\Omega). \] An almost identical computation can be done with $s_-$ since $-1_{\mathcal A_{\cC_{\Delta}}}$ acts trivially on the $(2,0)$-forms. Finally, if $\Omega' \in H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y})^0 \subset H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y})$ is a form vanishing at the origin, then $s^*_0(h^*(\Omega ') )=0$. These considerations, combined with (\ref{def_action}), give that for any form $\Omega '\in H^0(A_y , \Omega ^2 _{\cA_{\Delta }|A_y})^0$: \[ \mathcal Z_{\Delta}^* h^*(\Omega')=2f_{\Delta}^*(\Omega'). \] Then, using the vanishing $\mathcal Z_{\Delta }^*(\Omega )=0$, this implies: \begin{prop} \label{vanishing_Delta} With the above notations, \[ f_{\Delta}^*:H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y})^0 \longrightarrow H^0(C_y,\Omega^2_{\cC_\Delta \vert C_y}) \cong H^0(C_y,\omega _{C_y}) \] is the zero map. 
\end{prop} We will see in the next section that, if the dimension of the family is $\ge 2g$, then for a generic point of $\mathcal U$ and a convenient infinitesimal deformation there is a form in $ H^0(A_y,\Omega^2_{\mathcal A_{\Delta } \vert A_y})^0$ with non-trivial image in $H^0(C_y,\Omega^2_{\cC_\Delta \vert C_y})$. This contradicts the Proposition. \vskip 3mm \section{The geometry of the adjoint form and end of the proof} In this section we end the proof of the main Theorem. Assuming that the dimension of the family is $\ge 2g$, we will find a contradiction with Proposition (\ref{vanishing_Delta}). As at the beginning of Section 3, we can assume the existence of the following diagram: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal {C} \ar[rd] \ar[rr]^ f && \mathcal {A} \ar[ld]^{\pi } \\ &\mathcal U, } \] where the parameter space $\cU$ comes equipped with a generically finite map $\Phi: \cU \lra \cY$. We fix a generic point $y$ in $\cU$ and we denote by $\mathbb T$ the tangent space of $\cU$ at $y$. Observe that $\mathbb T \hookrightarrow Sym^2 H^{1,0}(A_y)^\ast$. Moreover, the surjective map $F_y$ induces an inclusion of $W_y:=H^{1,0}(A_y)$ in $H^0(C_y,\omega_{C_y})$. Let $D$ be the base locus of the linear system generated by $W_y$; therefore $W_y\subset H^{0}(C_y,\omega_{C_y}(-D))$. Lemma 3.1 in \cite{Isog_hyp} states that for a generic two-dimensional subspace $E$ of $W_y$ the base locus of the pencil attached to $E$ is still $D$. As in the proof of Theorem 1.4 in \cite{Isog_hyp}, we consider the map sending $\xi \in \mathbb T$, seen as a symmetric map $\, \cdot \xi:\, W_y=H^{1,0}(A_y)\lra H^{1,0}(A_y)^*$, to its restriction to $E$. Let $E_0$ be a complement of $E$ in $W_y$. Then we have an element in $E^*\otimes E^* +E^*\otimes E_0^*$ which, by the symmetry, belongs to $Sym^2 E^* + E^* \otimes E_0^*$. This last space has dimension $3+2(g-2)=2g-1$. 
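In detail, since $\dim E=2$ and $\dim E_0=g-2$, the dimension count reads \[ \dim \left( Sym^2 E^* + E^* \otimes E_0^* \right) \le \dim Sym^2 E^* + \dim \left( E^* \otimes E_0^* \right) = 3 + 2(g-2) = 2g-1. \]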
Since $\dim \mathcal Y \ge 2g$, we get that the linear map \[ \mathbb T \lra Sym^2 E^* + E^* \otimes E_0^* \] sending $\xi$ to $\delta_{\xi \mid E} $ has nontrivial kernel. Therefore we conclude the following: \begin{lem}\label{existence_xi} For any $2$-dimensional vector space $E \subset W_y$ there exists $\xi \in \mathbb T$ killing all the forms in $E$. That is, if $\omega_1,\omega_2$ is a basis of $E$, then $\xi \cdot \omega_1=\xi \cdot \omega _2=0$. \end{lem} We want to compute the adjoint form of a basis $\omega_1, \omega _2$ of $E$, as defined in \cite{collino_pirola}. Observe that $\xi$ can be seen as an infinitesimal deformation $\mathcal A_{\Delta }$ of $A_y$. We denote by $F_\xi $ the rank $g+1$ vector bundle on $A_y$ attached to $\xi$ via the isomorphism $H^ 1(A_y,T_{A_y})\cong Ext^ 1(\Omega^1_{A_y},\mathcal O_{A_y})$. With this notation, the sheaf $F_\xi$ can be identified with $\Omega ^1_{\cA_{\Delta}\vert A_y}$. By definition there is a short exact sequence of sheaves: \[ 0\lra \mathcal O_{A_y} \lra \Omega ^1_{\cA_{\Delta}\vert A_y} \lra \Omega^1_{A_y} \lra 0. \] The connecting homomorphism $H^0(A_y,\Omega^1_{A_y})\lra H^1(A_y,\mathcal O_{A_y})$ is the cup-product with $\xi \in H^1(A_y,T_{A_y})$. Since $\xi \cdot \omega_1=\xi \cdot \omega_2=0$, the forms $\omega_1, \omega_2$ lift to sections $s_1,s_2\in H^ 0(A_y,\Omega ^1_{\cA_{\Delta}\vert A_y})$. These sections are not unique, but they become unique once we impose that they vanish on the zero section of $\mathcal A_{\Delta }\lra \Delta$. Then the adjoint form of $\omega_1, \omega_2$ with respect to $\xi$ is defined as the restriction of \[ s_1\wedge s_2 \in H^0(A_y, \Omega ^2_{\cA_{\Delta}\vert A_y} )^0 \] to $C_y$. This is a section of $H^0(C_y,\Omega ^2 _{\cC_{\Delta}|C_y} )\cong H^0(C_y,\omega _{C_y})$. \vskip 3mm \begin{prop} \label{vanishing_implies_end} If the adjoint form vanishes, then $\xi$ belongs to the kernel of $d\Phi: \mathbb T \lra T_{\cA_g}(A_y)=Sym^2 H^{1,0}(A_y)^*$. 
\end{prop} \begin{proof} According to Theorem 1.1.8 in \cite{collino_pirola}, the adjoint form vanishes if and only if the image of $\xi $ in \[ H^1(C_y, T_{C_y}(D))\cong Ext^1(\omega_{C_y}(-D), \mathcal O_{C_y}) \] is zero. This says that the corresponding extension is trivial, so the short exact sequence in the first row of the next diagram splits (i.e. $i^* \Omega ^1_{\cC_{\Delta}\vert C_y}=\mathcal O_{C_y} \oplus \omega _{C_y}(-D)$): \[ \xymatrix@C=1.pc@R=1.8pc{ 0 \ar[r] & \mathcal O_{C_y} \ar[r] \ar@{=}[d] & i^* \Omega ^1_{\cC_{\Delta}\vert C_y} \ar[r] \ar[d] & \omega _{C_y}(-D) \ar[r] \ar@{^{(}->}[d]^{i} & 0 \\ 0 \ar[r]& \mathcal O_{C_y} \ar[r] & \Omega ^1_{\cC_{\Delta}\vert C_y} \ar[r] & \omega _{C_y} \ar[r] & 0 } \] which implies that the connecting homomorphism in the associated long exact sequence of cohomology $H^0(C_y,\omega _{C_y}(-D))\lra H^1(C_y,\mathcal O_{C_y})$ is trivial. Therefore $\xi \cdot H^0(C_y,\omega _{C_y}(-D))=0$ and in particular $\xi \cdot W_{y}=0$. Hence $\xi$ is in the kernel of $d\Phi_y$. \end{proof} \vskip 3mm \textbf{End of the proof of \ref{m.res}:} Since $d\Phi$ is injective at a generic point, it is enough to prove the vanishing of the adjoint form to reach a contradiction. Our aim is to use the vanishing obtained in Proposition (\ref{vanishing_Delta}). We fix a generic point $y\in \mathcal U$ and consider $(\xi, E)$ as in Lemma (\ref{existence_xi}). As in Section 3, set $\Delta:=Spec \, \mathbb C[\varepsilon]/(\varepsilon ^2)$ and let $\alpha_{\xi }:\Delta \lra \mathcal U $ be the map attached to $\xi $. From now on we restrict our family over $\mathcal U$ to a family over $\Delta $. Moreover, we denote by $\cC_{\Delta }$ the pull-back of $\mathcal C$ to $\Delta$; hence we have an infinitesimal deformation of $C_y$: \[ \cC_{\Delta } \lra \Delta. 
\] Again we pull back to $\Delta $ the family of abelian varieties and the family of curves, obtaining the diagram: \begin{equation}\label{definition_Gamma} \xymatrix@C=1.pc@R=1.8pc{ &\mathcal A_{\Delta } \ar[dd] \ar[rr] && \mathcal A \ar[dd]^{\pi} \\ \cC_{\Delta} \ar[ru]^{f_{\Delta }} \ar[rd] \ar[rr] && \mathcal {C} \ar[ru]^f \ar[rd]& \\ &\Delta \ar[rr] &&\mathcal U. } \end{equation} Notice that $E$ is generated by two linearly independent forms $\omega_1, \omega_2 \in H^0(A_y,\Omega ^1_{A_y})\subset H^0(C_y,\omega_{C_y})$; as before, we denote by $s_1,s_2\in H^ 0(A_y,\Omega ^1_{\cA_{\Delta}\vert A_y})$ the liftings of these forms. Then by Proposition (\ref{vanishing_Delta}) the restriction to $C_y$ of the form $s_1 \wedge s_2$ is zero. Hence the adjoint form vanishes, so by Proposition (\ref{vanishing_implies_end}) $\xi$ lies in the kernel of $d\Phi$, contradicting the injectivity of $d\Phi$ at the generic point. This proves the Theorem. \qed \vskip 3mm \begin{rem} Since the Voisin set $V_2$ can be seen as the preimage of the rational orbit of the image of the origin in the corresponding Kummer variety, our Theorem gives a bound on the dimension of the locus of Kummer varieties where these orbits are positive-dimensional and ``non-degenerate'' (that is, the preimage in the abelian variety generates the abelian variety itself). \end{rem}
Tongariro Skiing Accommodation Package Winter trips are more fun with our Ski Ruapehu package. The ski and snowboard season at Mt Ruapehu in the central North Island starts from late June and usually finishes around late October, depending on weather conditions. The three ski fields - Whakapapa, Turoa and Tukino - have your bases covered whether you are a beginner, intermediate or more advanced skier or snowboarder. All levels of skill are catered for with forty-three trails making up the Turoa ski field - so there is no shortage of snow adventures to be had. The Mt Ruapehu, Tukino and Turoa ski area also has a ski hire facility, cafés and chair lifts. The Turoa mountain road is the closest to Ruapehu Country Lodge - just a short 10-minute drive away. This means you can spend all day on the slopes and get the most out of your lift pass. Our Ski Ruapehu package lets you enjoy scenic Mt Ruapehu in all its glory. We suggest you book your stay with us as early as possible, as our rooms fill up quickly during the winter season. Your Mt Ruapehu Skiing Accommodation package includes: • Two nights in a luxury guest room with ensuite at Ruapehu Country Lodge (twin share) • A hearty full breakfast per person • A two-course meal including a glass of wine per person NZ$325 per person
Professor George Kyeyune, Uganda and Siegrun Salmanian, Bayreuth As part of the project “African Art History and the Formation of a Modern Aesthetic”, Professor George Kyeyune (Makerere University, Kampala, Uganda) and Siegrun Salmanian (M.A. Iwalewa House, Bayreuth University) are guests at the Weltkulturen Museum in February 2016. “African Art History and the Formation of a Modern Aesthetic” is a Weltkulturen Museum Frankfurt research project in cooperation with Bayreuth University (Iwalewa House) supported by the Volkswagen Foundation’s “Research in Museums” funding initiative. George Kyeyune is Professor of Fine Art at the Margaret Trowell School of Industrial and Fine Art, where he also served as Dean from 2006. In addition, he is the Director of the Institute of Heritage Conservation and Restoration, Makerere University, Kampala. In 2003, he obtained his Ph.D. from the School of Oriental and African Studies at the University of London, where he specialised in African art and specifically in the development of Uganda’s contemporary art scene. As an artist, George Kyeyune is well known for his art works in public space in Uganda. He was a Fulbright Scholar from 2012–13, and a Commonwealth Scholar from 2013–14. George Kyeyune is a research member of the “African Art History and the Formation of a Modern Aesthetic” project sponsored by the VW Foundation. Siegrun Salmanian has completed an M.A. in Africa Studies at Bayreuth University, majoring in art history and curating. During her studies, she began working in the Iwalewa House collection, and is also involved in exhibitions there as a curator. In the “African Art History and the Formation of a Modern Aesthetic” project, she is working as a junior researcher in the collections for modern and contemporary art in the participating institutions. “African Art History and the Formation of a Modern Aesthetic” In 1974, the Weltkulturen Museum began collecting modern and contemporary art from Africa. 
Today, the museum holdings include an extensive collection of approximately 3,000 works of contemporary art, primarily focused on Nigeria, Senegal, South Africa and, in particular, Uganda. This area of the museum’s holdings was acquired by German collectors, with the Jochen Schneider collection as the largest single addition. From 1960 to the 1980s, during the years he lived in Uganda, Jochen Schneider, a German engineer, built up a collection of contemporary art by local artists. Above all, he bought works by students at the Makerere School of Fine Arts in Kampala. At present, these account for around 1,000 works in the Weltkulturen Museum’s collection. This research project critically explores the collections of paintings, sculpture and graphic works by modern African artists, primarily from Nigeria and Uganda. In their research, George Kyeyune and Siegrun Salmanian consider how far the collections are informed by a variety of narratives of African art history – on the one hand, those of the artists and, on the other, those of the collectors. The exhibition focuses on the links between the Jochen Schneider collection in the Weltkulturen Museum and the Makerere Art Gallery collection in Kampala. The objective is to obtain an insight into the reception of African art history in Germany through analysing individual works from the 1940s to the 1980s, the composition of the collections and the collectors’ relationship to the local art scene. Aside from conducting basic research into the individual objects, the project also aims to compile biographies of the artists and collectors. During their month in Frankfurt, the visiting research scholars will be supported by Dr. Yvette Mutumba, the museum’s Africa Curator. The Volkswagen Foundation’s “Research in Museums” funding initiative particularly supports small and medium-sized museums in creating the well-researched exhibitions needed to fulfil their educational mandate.
Abdirizak was shot when he tried to pass through a checkpoint near a public park in Mogadishu, Peace Garden, while riding in a bajaj, a three-wheeled public transportation vehicle, according to Hassan Omar Mohammed, the journalist’s brother, and Mohamed Shiil, the coordinator of the Somalia Mechanism for the Safety of Journalists, a press freedom coalition that monitors attacks on the media. Witnesses said the police officer ordered the bajaj driver to reverse and said that he could not go beyond the checkpoint, according to Mohamed Ibrahim Moalimuu, secretary general of the government-recognized National Union of Somali Journalists (NUSOJ), and Mohamed Shiil. CPJ was unable to determine the driver’s identity or contact details. The bajaj driver and Abdirizak argued with the police officer to let them beyond the checkpoint; during the dispute the police officer shot Abdirizak twice in the head, according to the same sources. The journalist was then taken to Madina Hospital, where he was pronounced dead, according to a report by the privately owned Radio Dalsan. Mohamed Shiil told CPJ that the police officer fled the scene after the shooting. Hassan Omar told CPJ that the journalist’s family reported the incident to police in Mogadishu the same day the journalist was shot. Local police questioned the bajaj driver, who was unharmed in the shooting, according to Mohamed Moalimuu, whose colleagues spoke with the driver. “Somali authorities must act swiftly to investigate the killing of Abdirizak Kasim Iman, determine the motive, and bring those responsible to justice,” said CPJ Africa Program Coordinator Angela Quintal from Harare. “Dozens of unsolved killings of journalists are a grave reminder of the dangers facing the press in Somalia. 
The police should be working to ensure the media are able to do their job without fear, not adding to the dangers they face.” At least 26 journalists in Somalia have been murdered with complete impunity over the last decade, according to CPJ research. Mogadishu’s mayor, Abdirahman Omar Osman, referred CPJ to police for comment on the status of investigations. Police Commissioner Bashir Abdi Mohamed yesterday declined to comment on the case, saying that investigations were being carried out by the Criminal Investigations Department (CID). CPJ was unable to determine where the journalist was travelling at the time he was shot. Mohamed Shiil and Mohamed Moalimuu told CPJ that Abdirizak was going to work at his television station when he was shot. Hassan Omar Mohammed, the journalist’s brother, told CPJ that the journalist was going home from work. Ismail Sheikh Khalifa, chairman of the press rights organization Human Rights Journalists (HRJ), told CPJ that the Mogadishu director of SBS TV, Mohamud Dhakane Nur, had told him that the journalist was going to work, while Abdirizak’s father said that the journalist was coming home from the Somali Institute of Management and Administration Development (SIMAD) University, where he was a student. Abdirizak had his camera with him at the time of the attack, according to these sources. According to Mohamed Moalimuu, Abdirizak covered current affairs for SBS TV.
LAST Holiday Gift Guides | Last Minute Gifts and Stocking Stuffers! Merry Christmas, Happy Holidays! I have been really getting into the holiday spirit the past couple days. I am finally home for a few weeks, getting a couple weeks off from travelling for work. Working from home is the greatest :) I am nearly done with my shopping... are you?! Today, I'm posting my LAST gift guide before Christmas!! Here are last minute gifts and stocking stuffer ideas. If you missed my previous gift guides, check them out below... For the host(ess) and road warrior | For the glamorous and home body | For the adventurer and foodie | For the athlete and workaholic | For man's best friend and my wish list These are some last minute gift ideas! All of these gifts are good for pretty much anyone on your list - friends, family, coworkers, even secret santa gift exchanges. Order quick to ensure delivery in time for your event! 1. Voluspa Prosecco Rose Candle | Voluspa is known for their high quality candles; this one smells incredible and is pretty to boot. 2. Clinique Happy Gift Set | The Clinique Happy scent will make anyone happy - this super rich moisturizer set will help rough and dry winter skin. 3. Michael Wainwright Trinket Box | Everyone needs a pretty trinket box! This pretty decor item can be used as a jewelry box or simply as part of your #shelfie. 4. Italian Leather Gloves | Who doesn't need leather gloves? Super durable and effortlessly stylish - these ones are perfect for a brother, dad, significant other, or friend! 5. Cashmere Scarf | Cashmere is obviously the softest and warmest - you already knew that. This simple and timeless scarf will be a staple in any wardrobe. 6. FitBit Alta HR | I love my Alta HR! It tracks sleep, exercise, heart rate, and integrates with the FitBit app perfectly. 10/10 would recommend. 7. Statement Necklace | Can you ever have too many statement necklaces? Nope! This one is sparkly and has a cute ribbon detail. Always love some sparkle. 8. 
Dip Mug | Calling all coffee and tea lovers! Give them a set of mugs, or a kit with hot cocoa mix. 9. Knit Mittens | These adorable knit mittens will keep your hands and heart warm. 10. Bose SoundLink Color | I have a SoundLink in the teal color and adore it, we use it constantly! The sound quality is on point - especially at this price point. 11. BaubleBar Bracelet Trio | This gorgeous set of bracelets will fit into anyone's wrist stack. Classic and simple. These are some adorable stocking stuffer ideas! These are all relatively small items (to fit inside a gift basket or stocking!) at a good price point. 1. Plaid Wine Bag | Love this as a 'stocking'! Fill with your favorite bottle of wine to share. This would be great as a host/hostess gift as well. 2. Art of Shaving Razor Set | Honestly, anyone who shaves regularly could probably use an upgrade in razor. This fancy one comes with a razor and an extra cartridge. 3. BaubleBar Tassel Earrings | These tiered tassel earrings are very popular right now and come in a ton of colors. Perfect for NYE! 4. Kiehl's Body Moisturizer | Kiehl's is known for their high quality skin care products, and this body lotion doesn't disappoint. Self care themed stocking, anyone? 5. Baking Set | For the aspiring baker in your life! Or anyone who just moved into a new home and has no baking stuff yet :) 6. Marble Notepad | This cute notebook is great for journals and planners, like myself. Starting out the new year with a fresh pad of paper is so refreshing! 7. Peppermint Bark Cookies Tin | Is any stocking complete without a sweet treat? Enjoy these peppermint bark COOKIES in an adorable tin. 8. Confetti Socks | I always seemed to get fuzzy socks in my stocking - why stop now? These confetti socks come in a couple colors too. 9. Blossom Honey Jasmine Roll On Perfume | Doesn't honey jasmine just sound amazing? This roll on perfume comes in a bunch of different scents and is less than $10! Luxury on the cheap. 10. 
Heart Wine Stopper | This cute wine stopper would go great with the plaid wine bag, just saying :) 11. Barr-Co. Reserve Bath Bomb | I like the idea of giving a bunch of bath things in a stocking - this bath bomb, some lotions, a loofah, maybe a cute nail polish or something. Spa day! That's all folks! Happy shopping and happy holidays.
\begin{document} \textcopyright~2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. \thispagestyle{empty} \newpage \setcounter{page}{1} \title{Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems} \author[1]{Jingyi Zhu} \author[1]{Long Wang} \author[1,2]{James C. Spall \thanks{Correspondence should be addressed to James C. Spall: \href{mailto:james.spall@jhuapl.edu}{james.spall@jhuapl.edu}}} \affil[1]{Department of Applied Mathematics and Statistics, Johns Hopkins University} \affil[2]{Applied Physics Laboratory, Johns Hopkins University} \date{} \maketitle \onehalfspacing \begin{abstract} Stochastic approximation (SA) algorithms have been widely applied in minimization problems when the loss functions and/or the gradient information are only accessible through noisy evaluations. Stochastic gradient (SG) descent---a first-order algorithm and a workhorse of much machine learning---is perhaps the most famous form of SA. Among all SA algorithms, the second-order simultaneous perturbation stochastic approximation (2SPSA) and the second-order stochastic gradient (2SG) are particularly efficient in handling high-dimensional problems, covering both gradient-free and gradient-based scenarios. However, due to the necessary matrix operations, the per-iteration floating-point-operations (FLOPs) cost of the standard 2SPSA/2SG is $O(p^3)$, where $p$ is the dimension of the underlying parameter. Note that the $O(p^3)$ FLOPs cost is distinct from the classical SPSA-based per-iteration $O(1)$ cost in terms of the number of noisy function evaluations. 
In this work, we propose a technique to efficiently implement the 2SPSA/2SG algorithms via the symmetric indefinite matrix factorization and show that the FLOPs cost is reduced from $O(p^3)$ to $O(p^2)$. The formal almost sure convergence and rate of convergence for the newly proposed approach are directly inherited from the standard 2SPSA/2SG. The improvement in efficiency and numerical stability is demonstrated in two numerical studies. \textbf{Keywords:} Newton Method, Modified-Newton Method, Quasi-Newton Method, Simultaneous Perturbation Stochastic Approximation (SPSA), Stochastic Optimization, Symmetric Indefinite Factorization \end{abstract} \section{Introduction} \subsection{Problem Context} Stochastic approximation (SA) has been widely applied in minimization and/or root-finding problems, when only \emph{noisy} loss function and/or gradient evaluations are accessible. Consider minimizing a differentiable loss function $ L(\btheta): \real^p \to \real $, $p\ge 1$ being the dimension of $\btheta$, where only noisy evaluations of $L\parenthese{\cdot}$ and/or its gradient $\bg\parenthese{\cdot} $ are accessible. The key distinction between SA and classical deterministic optimization is the presence of noise, which is largely inevitable when the function measurements are collected from either physical experiments or computer simulation. Besides, the noise term comes into play when the loss function is only evaluated on a small subset of an entire (inaccessible) dataset as in online training methods popular with neural network and machine learning. In the era of big-data, we deal with applications where the solution is data dependent such that the cost is minimized over a given set of sampled data rather than the entire distribution. Overall, SA algorithms have numerous applications in adaptive control, natural language processing, facial recognition and collaborative filtering, to name a few. 
In modern machine learning, there is a growing need for algorithms to handle high-dimensional problems. Particularly for deep learning, the need arises as the number of parameters (including both weights and bias) explodes quickly as the network depth and width increase. First-order methods based on back-propagation are widely applied, yet they suffer from slow convergence rate in later iterations after a sharp decline during early iterations. Second-order methods are occasionally utilized to speed up convergence in terms of the number of iterations, but, still, at a computational burden of $O(p^3)$ per-iteration floating-point-operations (FLOPs). The adaptive second-order methods here differ in fundamental ways from stochastic quasi-Newton and other similar methods in the machine learning literature. First, most of the machine learning-based methods are designed for loss functions of the empirical risk function (ERF) form, namely for functions represented as summations, where each summand represents the contribution of one data vector. Such a structure, together with an assumption of strong convexity, has been exploited in \cite{johnson2013accelerating,martens2015optimizing}. Second, first- or second-order derivative information is often assumed to be directly available on the summands in the loss function (e.g., \cite{byrd2016stochastic, sohl2014fast, schraudolph2007stochastic}). Ref. \cite{saab2019multidimensional} also assumes direct information on the Hessian is available in a second-order stochastic method, but allows for loss functions more general than the ERF. Ref. \cite{byrd2016stochastic} applies the BFGS method to stochastic optimization, but under a nonstandard setup where noisy Hessian information can be gathered. In our work, we assume only noisy loss function evaluations or noisy gradient information are available. 
Third, notions of convergence and rates of convergence are in line with those in deterministic optimization when the loss function (the ERF) is composed of a finite (although possibly large) number of summands. For example, in \cite{johnson2013accelerating, martens2015optimizing, byrd2016stochastic, sohl2014fast, schraudolph2007stochastic}, rates of convergence are linear or quadratic as a measure of iteration-to-iteration improvement in the ERF. In contrast, we follow the traditional notion of stochastic approximation, including applicability to general noisy loss functions, no availability of direct derivative information, and stochastic notions of convergence and rates of convergence based on sample-points (almost surely, a.s.) and convergence in distribution. To achieve a faster convergence rate at a reasonable computational cost, we present a second-order simultaneous perturbation (SP) method that incurs only $O(p^2)$ per-iteration FLOPs, in contrast to the standard $O(p^3)$. The idea of SP is an elegant generalization of the finite difference (FD) scheme and can be applied in both first-order and second-order SA algorithms. Our proposed method rests on factorization of symmetric indefinite matrices. \subsection{Summary of SP-Based Methods} Among various stochastic optimization schemes, SP algorithms are particularly efficient compared with the FD methods. Under certain regularity conditions, \cite{spall1992multivariate} shows that the simultaneous perturbation stochastic approximation (SPSA) algorithm uses only $ 1/p $ of the required number of loss function observations needed in the FD form to achieve the same level of mean-squared-error (MSE) for the SA iterates. To further explore the potential of SP algorithms, \cite{spall2000adaptive} presents the second-order SP-based methods, including second-order SPSA (2SPSA) for applications in the gradient-free case and the second-order stochastic gradient (2SG) for applications in the gradient-based case. 
Those methods estimate the Hessian matrix to achieve near-optimal or optimal convergence rates and can be viewed as the stochastic analogs of the deterministic Newton-Raphson algorithm. Ref. \cite{spall2009feedback} incorporates both a feedback process and an optimal weighting mechanism in the averaging of the per-iteration Hessian estimates to improve the accuracy of the cumulative Hessian estimate in E2SPSA (enhanced 2SPSA) and E2SG (enhanced 2SG). The guidelines for practical implementation details and the choice of gain coefficients are available in \cite{spall1998implementation}. SPSA is also capable of dealing with discrete variables, as shown in \cite{wang2011discrete, wang2018mixed}. More details on the related methods are discussed in \cite[Chaps. 7\textendash 8]{bhatnagar2012stochastic}. \subsection{Our Contribution} Refs. \cite{spall2000adaptive,spall2009feedback} show that the 2SPSA/2SG methods can achieve near-optimal or optimal convergence rates with a much smaller number (independent of dimension $p$) of loss or gradient function evaluations relative to other second-order stochastic methods in \cite{fabian1971stochastic, ruppert1985newton}. However, after obtaining the function evaluations, the per-iteration FLOPs count to update the estimate is $ O(p^3) $, as discussed below. The computational burden becomes more severe as $p$ gets larger. This is usually the case in many modern machine learning applications. Here we propose a scheme to implement 2SPSA/2SG efficiently via the symmetric indefinite factorization, which reduces the per-iteration FLOPs from $ O(p^3) $ to $ O(p^2) $. We also show that the proposed scheme inherits the almost sure convergence and the rate of convergence from the original 2SPSA/2SG in \cite{spall2000adaptive}. The remainder of the paper is as follows. Section~\ref{sec:2SPSA} reviews the original 2SPSA/2SG in \cite{spall2000adaptive} along with the computational complexity analysis. 
Section~\ref{sec:efficient_implementation} discusses the proposed efficient implementation. Section~\ref{sec:theory} covers the almost sure convergence and asymptotic normality. Section~\ref{sec:discussion} discusses some practical issues. Numerical studies and conclusions are in Section~\ref{sec:numerical} and Section~\ref{sec:conclusion}, respectively. \section{Review of 2SPSA/2SG}\label{sec:2SPSA} Before proceeding, let us review the original 2SPSA/2SG algorithms and explain their $ O(p^3) $ per-iteration FLOPs. \subsection{2SPSA/2SG Algorithm} Following the standard SA framework, we find the root(s) of $\bg\parenthese{\btheta}\equiv \partial L\parenthese{\btheta}/\partial\btheta$ in order to solve the problem of finding $ \arg\min L\parenthese{\btheta} $. Our central task is to streamline the computing procedure, so we do not dwell on differentiating the global minimizer(s) from the local ones. Such a root-finding formulation is widely used in neural network training and other machine learning literature. We consider optimization under two different settings: \begin{enumerate} \item Only noisy measurements of the loss function, denoted by $ y(\btheta), $ are available. \item Only noisy measurements of the gradient function, denoted by $ \bY(\btheta) $, are available. \end{enumerate} The conditions for noise can be found in \cite[Assumptions C.0 and C.2]{spall2000adaptive}, which include various types of noise such as Gaussian, multiplicative and impulsive noise as special cases. The main updating recursion for 2SPSA/2SG in \cite{spall2000adaptive} is \begin{equation} \label{eq:theta_update} \hbtheta_{k+1} = \hbtheta_k - a_k \oobH_k^{-1} \bG_k(\hbtheta_k), \quad k = 0, 1, \dots , \end{equation} where $ \{a_k\}_{k\geq0} $ is a positive decaying scalar gain sequence, $ \bG_k(\hbtheta_k) $ is the direct noisy observation or the approximation of the gradient information, and $ \oobH_k $ is the approximation of the Hessian information. 
The true gradient $ \bg(\hbtheta_k) $ is estimated by \begin{numcases} {\bG_k(\hbtheta_k)=} \frac{y(\hbtheta_k+c_k\bDelta_k)-y(\hbtheta_k-c_k\bDelta_k)}{2c_k\bDelta_k}, &\hspace{-.25in}\text{for 2SPSA,}\label{eq:gradient_estimate_2SPSA}\\ \bY_k(\hbtheta_k), &\hspace{-.25in}\text{for 2SG,}\label{eq:gradient_estimate_2SG} \end{numcases} where $ \bDelta_k = [\Delta_{k1}, \dots, \Delta_{kp}]^\transpose $ is a mean-zero $p$-dimensional stochastic perturbation vector with bounded inverse moments \cite[Assumption B.6$^{\prime\prime}$ on p. 183]{spall2005introduction}, $ 1 / \bDelta_k = \bDelta_k^{-1} \equiv [\Delta_{k1}^{-1}, \dots, \Delta_{kp}^{-1}]^\transpose $ is the vector of reciprocals of the (nonzero) components of $ \bDelta_k $ ($\bDelta_k^{-\transpose}$ is the transpose of $\bDelta_k^{-1}$), and $ \{c_k\}_{k\geq0} $ is a positive decaying scalar gain sequence satisfying the conditions in \cite[Sect. 7.3]{spall2005introduction}. A valid choice for $ c_k $ is $ c_k = 1 / (k+1)^{1/6}$. For the Hessian estimate $ \oobH_k $, \cite{spall2000adaptive} proposes \begin{numcases}{} \oobH_k = \bm{f}_k(\obH_k), & \label{eq:H_ooverline}\\ \obH_k = (1-w_k) \obH_{k-1} + w_k \hbH_k ,& \label{eq:H_overline}\\ \hbH_k=\frac{1}{2}\left[\frac{\updelta\bG_k}{2c_k}\bDelta_k^{-\transpose}+\left(\frac{\updelta\bG_k}{2c_k}\bDelta_k^{-\transpose}\right)^\transpose \right] , \label{eq:H_hat}&\\ \updelta\bG_k=\bG_k^{(1)}(\hbtheta_k+c_k\bDelta_k)-\bG_k^{(1)}(\hbtheta_k-c_k\bDelta_k) , \nonumber& \end{numcases} where $ \bm{f}_k\hspace{-0.04in}: \real^{p\times p} \to \{$positive definite $p\times p$ matrices$\}$ is a preconditioning step to guarantee the positive-definiteness of $ \oobH_k $, $ \{w_k\}_{k\geq0} $ is a positive decaying scalar weight sequence, and $ \bG_k^{(1)}(\hbtheta_k\pm c_k\bDelta_k) $ are one-sided gradient estimates calculated by \begin{align*} &\bG_k^{(1)}(\hbtheta_k\pm c_k\bDelta_k) = \begin{cases} \frac{y(\hbtheta_k\pm c_k\bDelta_k+\tilde{c}_k\tbDelta_k)-y(\hbtheta_k\pm 
c_k\bDelta_k)}{\tilde{c}_k\tbDelta_k}, &\hspace{-.1in}\text{in 2SPSA,}\\ \bY_k(\hbtheta_k\pm c_k\bDelta_k), &\hspace{-.1in}\text{in 2SG,} \end{cases} \end{align*} where $ \{\tilde{c}_k\}_{k\geq0} $ is another positive decaying gain sequence, and $ \tbDelta_k = [\tilde{\Delta}_{k1}, \dots, \tilde{\Delta}_{kp}]^\transpose $ is generated independently of $ \bDelta_k $, but in the same statistical manner as $ \bDelta_k $. Some valid choices for $w_k$ include $w_k=1/(k+1)$ and the asymptotically optimal choices in \cite[Eq. (4.2) or Eq. (4.3)]{spall2009feedback}. Ref. \cite{spall2000adaptive} considers the special case where $ w_k=1/\parenthese{k+1} $, i.e., $\obH_k$ is a sample average of the $ \hbH_j $ for $ j = 1, ..., k $. Later, \cite{spall2009feedback} proposes the E2SPSA and E2SG to obtain more accurate Hessian estimates by taking the optimal selection of weights and feedback-based terms in (\ref{eq:H_overline}) into account. While the focus of this paper is the original 2SPSA/2SG in \cite{spall2000adaptive}, we also discuss the applicability of the ideas to the E2SPSA/E2SG algorithms in \cite{spall2009feedback}. Note that, independent of $p$, one iteration of 2SPSA/E2SPSA uses four noisy measurements $ y(\cdot) $ and one iteration of 2SG/E2SG uses three noisy measurements $ \bY(\cdot) $. \subsection{Per-Iteration Computational Cost of $ O(p^3) $} \label{subsect:p3cost} The per-iteration computational cost of $ O(p^3) $ arises from two steps: one is from the preconditioning step in (\ref{eq:H_ooverline}), i.e., obtaining $ \oobH_k $; the other is from the descent direction step in (\ref{eq:theta_update}), i.e., obtaining $ \oobH_k^{-1}\bG_k(\hbtheta_k) $. We now discuss the per-iteration computational cost of these two steps in more detail. \textbf{Preconditioning} The preconditioning step in (\ref{eq:H_ooverline}) is to guarantee the positive-definiteness of the Hessian estimate $ \oobH_k $. 
This step is necessary, because the updating of $ \obH_k $ in (\ref{eq:H_overline}) does not necessarily yield a positive-definite matrix (but $ \obH_k $ is guaranteed to be symmetric). One straightforward way is to perform the following transformation: \begin{equation} \label{eq:f_k_sqrtm} \bm{f}_k(\obH_k) = (\obH_k \obH_k + \delta_k \bI)^{1/2} , \end{equation} where $ \delta_k > 0 $ is a small \emph{decaying} scalar coefficient \cite{spall2000adaptive} and superscript ``1/2'' denotes the symmetric matrix square root. Let $ \lambda_i(\cdot) $ denote the $i$-th eigenvalue of the argument. Since $ \lambda_i(\bA+c\bI) = \lambda_i(\bA)+c $ for any matrix $\bA$ and constant $c$ \cite[Obs. 1.1.7]{horn1990matrix}, we see that (\ref{eq:f_k_sqrtm}) directly modifies the eigenvalues of $ \obH_k\obH_k $ such that $ \lambda_i(\obH_k\obH_k + \delta_k\bI) = \lambda_i(\obH_k\obH_k) + \delta_k $ for $ i = 1, \dots, p $. When $ \delta_k > 0 $, all the eigenvalues of $ \obH_k\obH_k + \delta_k\bI $ are strictly positive and therefore the resulting $ \oobH_k $ is positive definite. However, (\ref{eq:f_k_sqrtm}) has a computational cost of $ O(p^3) $ due to both the matrix multiplication in $ \obH_k \obH_k $ and the matrix square root computation \cite{higham1987computing}. Another intuitive transformation is \begin{equation} \label{eq:f_k_add} \bm{f}_k(\obH_k) = \obH_k + \delta_k \bI \end{equation} for a positive and sufficiently large $ \delta_k $. Again, applying eigen-decomposition on $ \obH_k $, we see that $ \lambda_i(\oobH_k) = \lambda_i(\obH_k) + \delta_k $ for $ i = 1, \dots, p $. Define $ \uplambda_{\min}\parenthese{\cdot}= \min_{1\le i\le p } \uplambda_i\parenthese{\cdot} $ for any argument matrix in $\real^{p\times p}$. Any $ \delta_k > |\lambda_{\min}(\obH_k)| $ will result in $ \lambda_{\min}(\oobH_k) > 0 $, and therefore the output $ \oobH_k $ is positive definite. Unfortunately, (\ref{eq:f_k_add}) cannot avoid the $O(p^3)$ cost in estimating $ \lambda_{\min}(\obH_k) $. 
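For concreteness, both candidate mappings can be sketched in a few lines of Python (illustrative code of ours, not the paper's implementation); each yields a positive definite output, and each involves an $O(p^3)$ operation: a matrix product plus a symmetric square root for (\ref{eq:f_k_sqrtm}), and an extreme-eigenvalue computation for (\ref{eq:f_k_add}):

```python
import numpy as np

def sym_sqrtm(A):
    # Symmetric square root of a symmetric positive semidefinite matrix
    # via eigen-decomposition (itself an O(p^3) operation).
    lam, Q = np.linalg.eigh(A)
    return (Q * np.sqrt(np.clip(lam, 0.0, None))) @ Q.T

def f_sqrtm(H_bar, delta_k):
    # (H_bar H_bar + delta_k I)^{1/2}; the product H_bar @ H_bar is O(p^3).
    p = H_bar.shape[0]
    return sym_sqrtm(H_bar @ H_bar + delta_k * np.eye(p))

def f_shift(H_bar, margin=1e-4):
    # H_bar + delta_k I with delta_k > |lambda_min(H_bar)|; computing
    # lambda_min is itself O(p^3) in general.
    delta_k = abs(np.linalg.eigvalsh(H_bar).min()) + margin
    return H_bar + delta_k * np.eye(H_bar.shape[0])

H_bar = np.array([[1.0, 2.0], [2.0, -3.0]])  # symmetric but indefinite
H1, H2 = f_sqrtm(H_bar, 1e-4), f_shift(H_bar)
```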
Besides the $O(p^3)$ cost in (\ref{eq:f_k_sqrtm}) and (\ref{eq:f_k_add}), the Hessian estimate $ \oobH_k $ may be ill-conditioned, leading to slow convergence. Ref. \cite{zhu2002modified} proposes to replace all negative eigenvalues of $ \obH_k $ with values proportional to its smallest positive eigenvalue. Such a modification is shown to improve the convergence rate for problems with an ill-conditioned Hessian and to achieve smaller mean square errors for problems with a better-conditioned Hessian compared with the original 2SPSA \cite{zhu2002modified}. However, those benefits are gained at the price of computing the eigenvalues of $ \obH_k $, which still costs $ O(p^3) $. \textbf{Descent direction} Another per-iteration computational cost of $ O(p^3) $ originates from computing the descent direction in (\ref{eq:theta_update}), which is typically obtained by solving the linear system $ \oobH_k\bd_k = \bG_k(\hbtheta_k) $ for $ \bd_k $. The estimate is then updated recursively as follows: \begin{equation}\label{eq:theta_update_s} \hbtheta_{k+1} = \hbtheta_k - a_k\bd_k . \end{equation} With matrix left-division, it is possible to solve for $ \bd_k $ efficiently. However, the computational costs of typical methods, such as $LU$ decomposition or singular value decomposition, are still dominated by $ O(p^3) $. 
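As a baseline for later comparison, this conventional descent step, a dense $O(p^3)$ solve followed by the update in (\ref{eq:theta_update_s}), might be sketched as follows (illustrative Python with our own names):

```python
import numpy as np

def baseline_descent_step(theta, a_k, H_bbar, G_k):
    # Solve H_bbar d = G_k with a dense factorization (O(p^3) in general),
    # then take the descent step theta - a_k * d.
    d_k = np.linalg.solve(H_bbar, G_k)
    return theta - a_k * d_k

H = np.array([[4.0, 1.0], [1.0, 3.0]])  # a positive definite Hessian estimate
G = np.array([1.0, 2.0])
theta_new = baseline_descent_step(np.zeros(2), 0.5, H, G)
```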
To speed up the original 2SPSA/2SG, \cite{rastogi2016efficient} proposes to rearrange (\ref{eq:H_overline}) and (\ref{eq:H_hat}) into the following two sequential rank-one modifications, \begin{numcases}{} \obH_k = t_k\obH_{k-1} + b_k\tbu_k\tbu_k^{\transpose} - b_k\tbv_k\tbv_k^{\transpose}, & \label{eq:two_rank_one_update}\\ \tbu_k = \sqrt{\frac{\norm{\bv_k}}{2\norm{\bu_k}}} \parenthese{\bu_k + \frac{\norm{\bu_k}}{\norm{\bv_k}}\bv_k} , & \label{eq:u_k_tilde}\\ \tbv_k = \sqrt{\frac{\norm{\bv_k}}{2\norm{\bu_k}}} \parenthese{\bu_k - \frac{\norm{\bu_k}}{\norm{\bv_k}}\bv_k} , &\label{eq:v_k_tilde} \end{numcases} where the scalar terms $ t_k $ and $ b_k $ in (\ref{eq:two_rank_one_update}) and the vectors $ \bu_k $ and $ \bv_k $ in (\ref{eq:u_k_tilde}) and (\ref{eq:v_k_tilde}) are listed in Table~\ref{table:u_k_v_k}. Applying the matrix inversion lemma \cite[p. 513]{spall2005introduction}, \cite{rastogi2016efficient} shows that $ \obH_k^{-1} $ can be computed from $ \obH_{k-1}^{-1} $ at a cost of $ O(p^2) $. However, the positive-definiteness of $ \obH_k^{-1} $ is not guaranteed, and an additional eigenvalue modification step similar to either (\ref{eq:f_k_sqrtm}) or (\ref{eq:f_k_add}) is required. As discussed before, for any direct eigenvalue modification, the computational cost of $ O(p^3) $ is unavoidable owing to the lack of knowledge of the eigenvalues of $ \obH_{k-1}^{-1} $. In short, no prior work fully streamlines the entire second-order SP procedure at $O(p^2)$ per-iteration FLOPs, which motivates the procedure proposed below. \begin{table*}[!htbp] \renewcommand{\arraystretch}{2} \caption{Expressions for terms in (\ref{eq:two_rank_one_update})--(\ref{eq:v_k_tilde}). See \cite[Sect. 
7.8.2]{spall2005introduction} for detailed suggestions.} \label{table:u_k_v_k} \centering \begin{tabular}{|l|c|c|c|c|} \hline Algorithm & $ t_k $ & $ b_k $ & $ \bu_k $ & $ \bv_k $\\ \hline\hline 2SPSA \cite{spall2000adaptive} & $ 1 - w_k $ & $ w_k \updelta y_k / (4c_k\tilde{c}_k) $ & $ \tbDelta_k^{-1} $ & \multirow{4}{*}{$ \bDelta_k^{-1}$} \\ \cline{1-4} E2SPSA \cite{spall2009feedback} & $ 1 $ & $ w_k[\updelta y_k / (2c_k\tilde{c}_k) - \bDelta_k^\transpose\obH_{k-1}\tbDelta_k]/2 $ & $ \tbDelta_k^{-1} $ & \\ \cline{1-4} 2SG \cite{spall2000adaptive} & $ 1 - w_k $ & $ w_k / (4c_k) $ & $ \updelta\bG_k $ & \\ \cline{1-4} E2SG \cite{spall2009feedback} & $ 1 $ & $ w_k / 2 $ & $ \updelta\bG_k/(2c_k) - \obH_{k-1} \bDelta_k $ & \\ \hline \end{tabular} \end{table*} \section{Efficient implementation of 2SPSA/2SG} \label{sec:efficient_implementation} \subsection{Introduction}\label{subsect:Introduction} With the motivation for an efficient implementation scheme for 2SPSA/2SG laid out in Subsection~\ref{subsect:p3cost}, we now explain our methodology in more detail. Note that none of the prior attempts at 2SPSA/2SG methods bypasses the end-to-end computational cost of $O(p^3)$ per iteration in high-dimensional stochastic optimization problems. Therefore, we propose to replace $ \obH_k $ by its symmetric indefinite factorization, which enables us to implement 2SPSA/2SG at a per-iteration computational cost of $O(p^2)$. Our work helps alleviate the notorious curse of dimensionality by achieving, to the best of our knowledge, the fastest possible second-order methods based on Hessian estimation. Also, note that the techniques in \cite{rastogi2016efficient} are no longer applicable, because our scheme keeps track of the matrix factorization in lieu of the matrix itself; we therefore propose new algorithms to establish our claims. 
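The two rank-one modifications in (\ref{eq:two_rank_one_update}) rest on the algebraic identity $ \tbu_k\tbu_k^\transpose - \tbv_k\tbv_k^\transpose = \bu_k\bv_k^\transpose + \bv_k\bu_k^\transpose $, which follows by expanding (\ref{eq:u_k_tilde}) and (\ref{eq:v_k_tilde}). The following Python snippet (ours, for illustration only; the vector values are arbitrary) verifies it numerically:

```python
import numpy as np

def tilde_vectors(u, v):
    """Map (u, v) to (u_tilde, v_tilde) as in the two tilde-vector formulas."""
    s = np.sqrt(np.linalg.norm(v) / (2.0 * np.linalg.norm(u)))
    ratio = np.linalg.norm(u) / np.linalg.norm(v)
    u_tilde = s * (u + ratio * v)
    v_tilde = s * (u - ratio * v)
    return u_tilde, v_tilde

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
u_t, v_t = tilde_vectors(u, v)
# The difference of the two rank-one terms recovers the symmetrized outer product:
lhs = np.outer(u_t, u_t) - np.outer(v_t, v_t)
rhs = np.outer(u, v) + np.outer(v, u)
```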
To better illustrate our scheme and to be consistent with the original 2SPSA/2SG, we decompose our approach into the following three main steps and discuss the efficient implementation step by step. \begin{enumerate} \item[i)] \label{item:ranktwomodification} \textbf{Two rank-one modifications}: Update the symmetric indefinite factorization of $ \obH_k $ by the two sequential rank-one modifications in (\ref{eq:two_rank_one_update}). \item[ii)] \label{item:preconditioning} \textbf{Preconditioning}: Obtain the symmetric factorization of a positive definite $ \oobH_k $ from the symmetric factorization of $ \obH_k $. \item[iii)] \label{item:descentdirection} \textbf{Descent direction}: Update $ \hbtheta_{k+1} $ by recursion (\ref{eq:theta_update_s}). \end{enumerate} Note that $ \obH_k $ is guaranteed to be symmetric by (\ref{eq:two_rank_one_update}) as long as $ \obH_0 $ is chosen symmetric. For the sake of comparison, we present the flow-charts of the original 2SPSA and that of our proposed scheme in Figure~\ref{fig:Flow_chart}, along with the per-iteration and per-step computational cost. The comparison of the flow-charts helps to put the extra move of indefinite factorization into perspective. 
{ \tikzset{font=\footnotesize} \tikzstyle{block} = [rectangle, draw, fill=blue!20, text centered, rounded corners, minimum height=2em] \tikzstyle{line} = [draw, -latex'] \begin{figure*}[!hbtp] \centering \subfloat[Flow chart for the original 2SPSA/2SG]{ \begin{tikzpicture}[auto] \node [block] (hbtheta) {$ \hbtheta_k $}; \node [block, right = 1.8cm of hbtheta, text width=1.5cm] (bG) {$ \bG_k(\hbtheta_k)$,\\$\hbH_k $}; \node [block, right = 1.2cm of bG] (obH) {$ \obH_k $}; \node [block, right = 2.4cm of obH] (oobH) {$ \oobH_k $}; \node [block, right = 2.8cm of oobH] (direction) {$ \bd_k $}; \node [block, right = 1.6cm of direction] (hbthetanew) {$ \hbtheta_{k+1} $}; \path [line] (hbtheta) -- node {2SPSA/2SG} node[below] {$ O(p^2) $} (bG); \path [line] (bG) -- node {(\ref{eq:H_overline})} node[below] {$ O(p^2) $} (obH); \path [line] (obH) -- node[align=center] {maintain positive- \\ [-0.4ex] definiteness (\ref{eq:H_ooverline})} node[below] {$ O(p^3) $} (oobH); \path [line] (oobH) -- node[align=center]{solve full-rank system \\ [-0.3ex] via back-division} node[below] {$ O(p^3) $} (direction); \path [line] (direction) -- node[align=center] {recursive \\ [-0.4ex] update (\ref{eq:theta_update})} node[below] { $ O(p) $} (hbthetanew); \end{tikzpicture} \label{fig:Original}} \hfil \subfloat[Flow chart for the proposed efficient implementation of 2SPSA/2SG (see Section~\ref{subsec:complexity} for detailed description)]{ \begin{tikzpicture}[auto] \node [block] (hbtheta) {$ \hbtheta_k $}; \node [block, right = 1.7cm of hbtheta, text width=1.5cm] (bG) {$ \bG_k(\hbtheta_k)$, \\ $\tbu_k, \tbv_k $}; \node [block, right = 2.5cm of bG, text width=1.7cm] (obH) {factorization of $ \obH_k $}; \node [block, right = 1.8cm of obH, text width=1.7cm] (oobH) {factorization of $ \oobH_k $}; \node [block, right = 2cm of oobH] (direction) {$ \bd_k $}; \node [block, right = 0.8cm of direction] (hbthetanew) {$ \hbtheta_{k+1} $}; \path [line] (hbtheta) -- node {2SPSA/2SG} node[below] {$ O(p) $} 
(bG); \path [line] (bG) -- node[align=center] {two rank-one updates \\ [-0.4ex] via Algo.~\ref{algo:two_rank_one_update}} node[below] {$ O(p^2) $} (obH); \path [line] (obH) -- node[align=center] {preconditioning\\ [-0.4ex] via Algo.~\ref{algo:preconditioning}} node[below] {$ O(p^2) $} (oobH); \path [line] (oobH) -- node[align=center] {solve triangular systems\\ [-0.4ex] via Algo.~\ref{algo:descent_direction}} node[below] {$ O(p^2) $} (direction); \path [line] (direction) -- node {(\ref{eq:theta_update})} node[below] {$ O(p) $} (hbthetanew); \end{tikzpicture} \label{fig:Proposed}} \captionsetup{justification=centering} \caption{Flow charts showing FLOPs cost at each stage of the original 2SPSA/2SG and the proposed 2SPSA/2SG. Algorithms~\ref{algo:two_rank_one_update}--\ref{algo:descent_direction} in the lower path are described in Section~\ref{subsec:algo}.} \label{fig:Flow_chart} \end{figure*} } The remainder of this section is as follows. We introduce the symmetric indefinite factorization in Subsection~\ref{subsec:IMF} and derive the efficient algorithms in Subsection~\ref{subsec:algo}. The per-iteration computational complexity analysis is included in Subsection~\ref{subsec:complexity}. \subsection{Symmetric Indefinite Factorization} \label{subsec:IMF} This subsection briefly reviews the symmetric indefinite factorization, also called $ \bL\bB\bL^\transpose $ factorization, introduced in \cite{bunch1971direct}, which applies to any symmetric matrix $ \obH $ regardless of the positive-definiteness: \begin{equation} \label{eq:LBL} \bP \obH \bP^\transpose = \bL\bB\bL^\transpose, \end{equation} where $ \bP $ is a permutation matrix, $ \bB $ is a block diagonal matrix with diagonal blocks being symmetric with size $ 1\times1 $ or $ 2\times2 $, and $ \bL $ is a lower-triangular matrix. Furthermore, the matrices $ \bL $ and $ \bB $ satisfy the following properties \cite[Sect. 
4]{bunch1971direct}, which are fundamental for carrying out subsequent steps i) -- iii) at a computational cost of $ O(p^2) $: \begin{itemize} \item The magnitudes of the entries of $ \bL $ are bounded by a fixed positive constant. Moreover, the diagonal entries of $ \bL$ are all equal to $1$. \item $ \bB $ has the same number of positive, negative, and zero eigenvalues as $ \obH $. \item The number of negative eigenvalues of $ \obH $ is the sum of the number of $ 2\times2 $ blocks on the diagonal of $ \bB $ and the number of $ 1\times1 $ diagonal blocks of $ \bB $ with negative entries. (Note: there are no guarantees on the signs of the entries in the $ 2\times2 $ blocks.) \end{itemize} The bound on the magnitudes of the entries of $ \bL $ is approximately $ 2.7808 $ per \cite{bunch1977some} and is \textit{independent} of the size of $ \obH $. As shown in Theorems~\ref{thm:H_barbar_property}--\ref{thm:H_barbar_uniform_bound}, such a constant bound is useful in practice to perform a quick sanity check on the appropriateness of the symmetric indefinite factorization and to provide useful bounds on the eigenvalues of $ \oobH_k $. From (\ref{eq:LBL}), $ \obH $ can be expressed as $ \obH = (\bP^\transpose\bL) \bB(\bP^\transpose\bL)^\transpose $. The second bullet point above then follows easily from \textit{Sylvester's law of inertia}, which states that two congruent matrices have the same numbers of positive, negative, and zero eigenvalues ($ \bA $ and $\bB $ are congruent if $ \bA = \bP\bB\bP^\transpose $ for some nonsingular matrix $ \bP $) \cite{sylvester1852xix}. By the third bullet point, if $ \obH $ is positive semidefinite, the corresponding $\bB$ is a diagonal matrix with nonnegative diagonal entries. \subsection{Main Algorithms} \label{subsec:algo} We now illustrate how the $ \bL\bB\bL^\transpose $ factorization can be used in 2SPSA/2SG and discuss steps i) -- iii) in Sect. \ref{subsect:Introduction} in detail. 
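The $ \bL\bB\bL^\transpose $ factorization is available in standard numerical libraries; for instance, SciPy's \texttt{scipy.linalg.ldl} computes a Bunch-Kaufman-type symmetric indefinite factorization. The sketch below (illustrative Python; the test matrix is arbitrary) checks the reconstruction and the inertia property stated in the second bullet point:

```python
import numpy as np
from scipy.linalg import ldl

H = np.array([[0.0, 1.0, 2.0],
              [1.0, -1.0, 3.0],
              [2.0, 3.0, 0.0]])      # symmetric and indefinite
# H = L @ B @ L.T, where L[perm] is lower triangular and B is block
# diagonal with 1x1 and/or 2x2 blocks.
L, B, perm = ldl(H)
# By Sylvester's law of inertia, B has the same number of positive
# (and negative, and zero) eigenvalues as H.
inertia = lambda A: int((np.linalg.eigvalsh(A) > 0).sum())
```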
The results are presented in three algorithms, with Algorithms~\ref{algo:two_rank_one_update}--\ref{algo:descent_direction} implementing steps \ref{item:ranktwomodification}\textendash \ref{item:descentdirection}, respectively. Algorithm~\ref{algo:2SP} in Subsection~\ref{subsec:complexity} combines these steps to produce the updated $\hbtheta_k$. Code for all algorithms is available at \url{https://github.com/jingyi-zhu/Fast2SPSA}. \textbf{Two rank-one modifications} Although the direct calculation of $ \obH_k $ in (\ref{eq:two_rank_one_update}) costs only $ O(p^2) $, the subsequent preconditioning step incurs a computational cost of $ O(p^3) $ when no factorization of $ \obH_k $ is used. Therefore, in anticipation of the subsequent preconditioning, we propose to keep track of the $ \bL\bB\bL^\transpose $ factorization of $ \obH_k $ instead of the matrix itself. That is, the two direct rank-one modifications in (\ref{eq:two_rank_one_update}) are transformed into two non-trivial modifications of the $ \bL\bB\bL^\transpose $ factorization, which also incur a computational cost of $ O(p^2) $. The matrix $ \obH_k $ need never be explicitly computed in the algorithm, thereby avoiding the $ O(p^3) $ cost of the matrix multiplications otherwise required in the preconditioning step. Lemma~\ref{lem:rank_one_update} states that the $ \bL\bB\bL^\transpose $ factorization can be updated under a rank-one modification at a computational cost of $O(p^2)$. The detailed algorithm is established in \cite{sorensen1977updating}. We adapt that algorithm to our two rank-one modifications in (\ref{eq:two_rank_one_update}) and present the result in Theorem~\ref{thm:two_rank_one_update}. \begin{lemma}\label{lem:rank_one_update} \textit{\cite[Thm. 2.1]{sorensen1977updating}}. Let $ \bA \in \real^{p \times p} $ be symmetric (possibly indefinite) and \emph{non-singular} with $ \bP\bA\bP^\transpose = \bL\bB\bL^\transpose $. 
Suppose that $ \bz \in \real^p, \sigma \in \real $ are such that \begin{equation}\label{eq:IMF_rank_one_update} \tbA = \bA + \sigma\bz\bz^\transpose \end{equation} is also \emph{nonsingular}. Then the factorization $ \tilde{\bP}\tbA\tilde{\bP}^\transpose = \tilde{\bL}\tilde{\bB}\tilde{\bL}^\transpose $ can be obtained from the factorization $ \bP\bA\bP^\transpose = \bL\bB\bL^\transpose $ with a computational cost of $ O(p^2) $. \end{lemma} \begin{theorem}\label{thm:two_rank_one_update} Suppose $ \obH_k $ is given in (\ref{eq:two_rank_one_update}). Further assume that both $ \obH_{k-1} $ and $ \obH_k $ are \emph{nonsingular} and the factorization $ \bP_{k-1}\obH_{k-1}\bP_{k-1}^\transpose = \bL_{k-1}\bB_{k-1}\bL_{k-1}^\transpose $ is available. Then the factorization \begin{equation}\label{eq:IMF_rwo_rank_one_update} \bP_k\obH_k\bP_k^\transpose = \bL_k\bB_k\bL_k^\transpose \end{equation} can be obtained at a computational cost of $ O(p^2) $. \end{theorem} \begin{proof} With Lemma~\ref{lem:rank_one_update}, we see that (\ref{eq:IMF_rwo_rank_one_update}) can be obtained by applying (\ref{eq:IMF_rank_one_update}) twice with $ \sigma = b_k, \bz = \tbu_k $ and $ \sigma = -b_k, \bz = \tbv_k $, respectively. Because each update requires a computational cost of $ O(p^2) $, the total computational cost remains $ O(p^2) $. \end{proof} \begin{remark} The nonsingularity (\emph{not} necessarily positive-definiteness) of $ \obH_k $ is a modest assumption for the following reasons: i) $ \obH_0 $ is often initialized to be a positive definite matrix satisfying the nonsingularity assumption, e.g., $ \obH_0 = c\bI $ for some constant $ c > 0 $. ii) Whenever $ \obH_k $ violates the nonsingularity assumption due to the two rank-one modifications in (\ref{eq:two_rank_one_update}), a new pair of $ \bDelta_k $ and $ \tbDelta_k $ along with the noisy measurements can be generated to redo the modifications in (\ref{eq:two_rank_one_update}). 
In practice, the singularity of $ \obH_k $ can be detected via the entry-wise bounds of $ \bL_k $ per \cite{bunch1977some}. Namely, if $ \bL_k $ has an entry exceeding $ 2.7808 $, the nonsingularity assumption on $ \obH_k $ is violated. It is indeed possible to compute the probability of obtaining a singular $ \obH_k $; however, we deem it a minor practical issue and do not pursue further analysis in this work. iii) Because the second-order method is often recommended to be implemented only after $ \hbtheta_k $ reaches the vicinity of the optimal point $ \btheta^* $, and the true Hessian matrix at $ \btheta^* $ is assumed to be positive definite \cite{spall2000adaptive}, the estimate $ \obH_k $ is ``pushed'' towards nonsingularity. The bottom line is that we are able to run second-order methods at any iteration $ k $, but are most interested in the case when $\hbtheta_k$ is near $\btheta^*$. \end{remark} We summarize the two rank-one modifications of $ \obH_k $ in Algorithm~\ref{algo:two_rank_one_update} below. The outputs of Algorithm~\ref{algo:two_rank_one_update} are used to achieve a computational cost of $ O(p^2) $ in the preconditioning step, as eigenvalue modification of $ \bB_k $, a block diagonal matrix, is more efficient than the direct eigenvalue modifications in (\ref{eq:f_k_sqrtm}) and (\ref{eq:f_k_add}). Algorithm~\ref{algo:two_rank_one_update} is the key that renders steps ii) and iii) in Subsection~\ref{subsect:Introduction} achievable at a computational cost of $ O(p^2) $. \begin{algorithm}[!htbp] \caption{Two rank-one updates of $ \obH_k $} \label{algo:two_rank_one_update} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE matrices $ \bP_{k-1}, \bL_{k-1}, \bB_{k-1} $ in the symmetric indefinite factorization of $\obH_{k-1}$, scalars $ t_k, b_k $, and vectors $ \bu_k, \bv_k $ computed per Table~\ref{table:u_k_v_k}. 
\ENSURE matrices $ \bP_k, \bL_k, \bB_k $ in the symmetric indefinite factorization of $ \obH_k $ per (\ref{eq:LBL}). \STATE \textbf{set} $ \bP_k \gets \bP_{k-1}, \bL_k \gets \bL_{k-1}, \bB_k \gets t_k\bB_{k-1} $. \STATE \textbf{update} $ \bP_k, \bL_k, \bB_k $ with the rank-one modifications $ b_k\tbu_k\tbu_k^\transpose $ with $ \tbu_k $ computed in (\ref{eq:u_k_tilde}) and $ -b_k\tbv_k\tbv_k^\transpose $ with $ \tbv_k $ computed in (\ref{eq:v_k_tilde}), using the updating procedure outlined in \cite{sorensen1977updating}. (Recall that the code is available at \url{https://github.com/jingyi-zhu/Fast2SPSA}.) \RETURN matrices $ \bP_k, \bL_k, \bB_k $. \end{algorithmic} \end{algorithm} \begin{remark} Though $ \obH_k $ is not explicitly computed during each iteration, whenever needed it can be recovered easily from its $ \bL\bB\bL^\transpose $ factorization, i.e., $ \obH_k = \bP_k^\transpose\bL_k\bB_k\bL_k^\transpose\bP_k $, though at a computational cost of $ O(p^3) $. This calculation yields the same $ \obH_k $ as (\ref{eq:H_overline}) or (\ref{eq:two_rank_one_update}). The $ \bL\bB\bL^\transpose $ factorization of $ \obH_0 $ requires a computational cost of at most $ O(p^3) $ \cite[Table 2]{bunch1971direct}. However, as a one-time sunk cost, it does not compromise the overall computational cost. Moreover, this cost can be avoided altogether by initializing $ \obH_0 $ to a diagonal matrix, which immediately gives $ \bP_0 = \bL_0 = \bB_0 = \bI $; in general, the initialization cost is trivial whenever $ \obH_0 $ is diagonal. \end{remark} \textbf{Preconditioning} Given the factorization of the estimated Hessian information $ \obH_k $, which is symmetric yet potentially \emph{indefinite} (especially during early iterations), we aim to output a factorization of the Hessian approximation $ \oobH_k $ such that $ \oobH_k $ is symmetric and sufficiently positive definite, i.e., $ \lambda_{\min}(\oobH_k) \geq \tau $ for some constant $ \tau > 0 $. 
With the above $ \bL\bB\bL^\transpose $ factorization associated with $\obH_k$ obtained from the previous two rank-one modification steps, we can modify the eigenvalues of $ \bB_k $. Note that $\bB_k$ is a block diagonal matrix, so any eigenvalue modification can be carried out inexpensively. This is in contrast to directly modifying the eigenvalues of $\obH_k$ to obtain $ \oobH_k $, which is computationally costly as laid out in Subsection~\ref{subsect:p3cost}. Denote by $ \obB_k $ the modified matrix obtained from $ \bB_k $. Note that $ \oobH_k $ and $ \obB_k $ are congruent, as $ \oobH_k = (\bP_k^\transpose\bL_k)\obB_k(\bP_k^\transpose\bL_k)^\transpose $. By Sylvester's law of inertia, the positive definiteness of $ \oobH_k $ is guaranteed as long as $ \obB_k $ is positive definite. To modify the eigenvalues of $ \bB_k $, we borrow ideas from the modified Newton's method \cite[p. 50]{nocedal2006numerical} and set $ \lambda_j(\obB_k) = \max \set{\tau_k, |\uplambda_j(\bB_k)|} $ for $ j = 1, ..., p $, where $ \tau_k $ is a user-specified stability threshold, which is possibly data-dependent. A possible choice of the uniformly bounded sequence $ \set{\tau_k} $, used in Section~\ref{sec:numerical}, is $ \tau_k = \max\{10^{-4}, 10^{-4}p\max_{1\leq j \leq p} |\uplambda_j(\bB_k)| \} $. The intuition behind the eigenvalue modification in Algorithm~\ref{algo:preconditioning} is to make $ \obB_k $ well-conditioned while behaving similarly to $ \bB_k $. The pseudo-code of the preconditioning step is listed in Algorithm~\ref{algo:preconditioning}. \begin{algorithm}[htbp] \caption{Preconditioning} \label{algo:preconditioning} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE user-specified stability-threshold $ \tau_k > 0 $ and matrix $ \bB_k $ in the symmetric indefinite factorization of $ \obH_k $. 
\ENSURE matrix $ \bQ_k $ in the eigen-decomposition of $ \bB_k $ and the modified matrix $ \obLambda_k $. \STATE \textbf{apply} eigen-decomposition of $ \bB_k = \bQ_k \bLambda_k \bQ_k^\transpose $, where $ \bLambda_k = \text{diag}(\lambda_{k1}, ..., \lambda_{kp}) $ and $ \lambda_{kj} \equiv \lambda_j(\bB_k) $ for $ j = 1, ..., p $. \STATE \textbf{update} $ \obLambda_k = \text{diag}(\bar{\lambda}_{k1}, ..., \bar{\lambda}_{kp}) $ with $ \bar{\lambda}_{kj} = \max \set{\tau_k, |\uplambda_{kj}|} $ for $j = 1, ..., p$. \RETURN eigen-decomposition of $ \obB_k = \bQ_k \obLambda_k \bQ_k^\transpose $. \end{algorithmic} \end{algorithm} \begin{remark} Although the eigen-decomposition, in general, incurs an $ O(p^3) $ cost, the block diagonal structure of $ \bB_k $ allows the operation to be implemented relatively inexpensively. In the worst-case scenario, $ \bB_k $ consists of $ p/2 $ diagonal blocks of size $ 2 \times 2 $, and eigen-decompositions applied to each block separately lead to a total computational cost of $ O(p) $. For the sake of efficiency, the matrix $ \oobH_k $ is not explicitly computed. Whenever needed, however, it can be computed as $ \oobH_k = \bP_k^\transpose\bL_k\bQ_k\obLambda_k\bQ_k^\transpose\bL_k^\transpose\bP_k $ at a cost of $O(p^3)$. \end{remark} Algorithm~\ref{algo:preconditioning} makes our approach different from \cite{spall2000adaptive}. We only modify the eigenvalues of $ \bLambda_k $ (or equivalently of $ \bB_k $), which indirectly affects the eigenvalues of $ \oobH_k $ in a non-trivial way. However, if one constructs $ \obH_k $ and $ \oobH_k $ from their factorizations (formally unnecessary, as mentioned above), Algorithm~\ref{algo:preconditioning} can be viewed as a function that maps $ \obH_k $ to a positive-definite $ \oobH_k $. In this sense, Algorithm~\ref{algo:preconditioning} is just a special choice of $ \bm{f}_k(\cdot) $ in (\ref{eq:H_ooverline}), even though such an $ \bm{f}_k(\cdot) $ would be non-trivial and difficult to write down explicitly. 
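A sketch of Algorithm~\ref{algo:preconditioning} in Python follows (our illustrative code; for brevity it treats $ \bB_k $ as a dense symmetric matrix, whereas an actual implementation would decompose each $1\times1$ or $2\times2$ block separately to keep the cost at $ O(p) $):

```python
import numpy as np

def precondition_blocks(B_k, tau_k):
    """Return (Q_k, Lambda_bar_k): eigen-decomposition of B_k with each
    eigenvalue replaced by max(tau_k, |eigenvalue|)."""
    lam, Q = np.linalg.eigh(B_k)
    lam_bar = np.maximum(tau_k, np.abs(lam))
    return Q, np.diag(lam_bar)

# Example on an indefinite 2x2 "block" with eigenvalues 2 and -3:
B = np.array([[1.0, 2.0], [2.0, -2.0]])
Q, Lam_bar = precondition_blocks(B, tau_k=1e-4)
B_bar = Q @ Lam_bar @ Q.T  # positive definite, with eigenvalues {2, 3}
```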
\textbf{Descent direction} After the preconditioning step, the descent direction $ \bd_k: \oobH_k \bd_k = \bG_k(\hbtheta_k) $ can be computed readily via one forward substitution with respect to (w.r.t.) the lower-triangular matrix $\bL_k$ and one backward substitution w.r.t. the upper-triangular matrix $\bL_k^\transpose$, as the decomposition $\oobH_k=\bP_k^\transpose\bL_k\bQ_k\obLambda_k\bQ_k^\transpose\bL_k^\transpose\bP_k $ is available. The estimate $ \hbtheta_k $ can then be updated as in (\ref{eq:theta_update_s}). Note that $ \oobH_k $ is not directly computed in any iteration, and the forward and backward substitutions are implemented through the terms in the $ \bL\bB\bL^\transpose $ factorization. Algorithm~\ref{algo:descent_direction} below summarizes the details. \begin{algorithm}[H] \caption{Descent Direction Step} \label{algo:descent_direction} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE gradient estimate $ \bG_k(\hbtheta_k) $, and matrices $ \bP_k, \bL_k, \bQ_k, \obLambda_k $ in the $ \bL\bB\bL^\transpose $ factorization of $ \oobH_k $. \ENSURE descent direction $ \bd_k $. \STATE \textbf{Solve} $ \bz$ by forward substitution such that $\bL_k\bz = \bP_k\bG_k(\hbtheta_k) $. \STATE \textbf{Compute} $ \bw $ such that $ \bw = \bQ_k\obLambda_k^{-1}\bQ_k^\transpose \bz $. \STATE \textbf{Solve} $ \by $ by backward substitution such that $ \bL_k^\transpose\by = \bw $. \RETURN $ \bd_k = \bP_k^\transpose\by $. \end{algorithmic} \end{algorithm} Given the triangular structure of $\bL_k$ and that both $\bP_k$ and $\bQ_k$ are permutation matrices, the computational cost of Algorithm~\ref{algo:descent_direction} is dominated by $O(p^2)$. 
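Assuming $ \bP_k, \bL_k, \bQ_k, \obLambda_k $ are available, Algorithm~\ref{algo:descent_direction} amounts to two permutations, two triangular solves, and a diagonal scaling; a hedged Python sketch (our names, using \texttt{scipy.linalg.solve\_triangular}) is:

```python
import numpy as np
from scipy.linalg import solve_triangular

def descent_direction(P, L, Q, Lam_bar, G):
    """Solve (P^T L Q Lam_bar Q^T L^T P) d = G with O(p^2) work."""
    z = solve_triangular(L, P @ G, lower=True)    # forward substitution
    w = Q @ ((Q.T @ z) / np.diag(Lam_bar))        # diagonal scaling
    y = solve_triangular(L.T, w, lower=False)     # backward substitution
    return P.T @ y

# Consistency check on a small synthetic factorization:
p = 4
rng = np.random.default_rng(2)
L = np.tril(rng.standard_normal((p, p)), -1) + np.eye(p)  # unit lower triangular
Lam_bar = np.diag(np.array([1.0, 2.0, 3.0, 4.0]))
P = np.eye(p)[rng.permutation(p)]                          # permutation matrix
Q = np.eye(p)                                              # identity, for simplicity
H = P.T @ L @ Q @ Lam_bar @ Q.T @ L.T @ P
G = rng.standard_normal(p)
d = descent_direction(P, L, Q, Lam_bar, G)
```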
\subsection{Overall Algorithm (Second-Order SP) and Computational Complexity } \label{subsec:complexity} With the aforementioned steps, we present the \emph{complete} algorithm for implementing second-order SP in Algorithm~\ref{algo:2SP} below, which applies to 2SPSA/2SG/E2SPSA/E2SG. A complete computational complexity analysis for 2SPSA is also stated, and suggestions for the user-specified inputs are listed in \cite[Sect. 7.8.2]{spall2005introduction}. Results for 2SG/E2SPSA/E2SG can be obtained similarly. \begin{algorithm}[!htbp] \caption{Efficient Second-order SP (applies to 2SPSA, 2SG, E2SPSA, and E2SG)} \label{algo:2SP} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE initialization $ \hbtheta_0 $ and $ \bP_0, \bL_0, \bB_0 $ in the symmetric indefinite factorization of $ \obH_0 $; user-specified stability-threshold $ \tau_k > 0 $; coefficients $ a_k, c_k, w_k $ and, for 2SPSA/E2SPSA, $ \tilde{c}_k $. \ENSURE terminal estimate $ \hbtheta_k $. \STATE \textbf{set} iteration index $ k = 0 $. \WHILE{terminating condition for $ \hbtheta_k $ has not been satisfied} \STATE \textbf{estimate} gradient $ \bG_k(\hbtheta_k) $ by (\ref{eq:gradient_estimate_2SPSA}) or (\ref{eq:gradient_estimate_2SG}). \STATE \textbf{compute} $ t_k, b_k, \tbu_k $ and $ \tbv_k $ by (\ref{eq:u_k_tilde}), (\ref{eq:v_k_tilde}) and Table~\ref{table:u_k_v_k}. \STATE \textbf{update} the symmetric indefinite factorization of $ \obH_k $ by Algorithm~\ref{algo:two_rank_one_update}. \STATE \textbf{update} the symmetric indefinite factorization of $ \oobH_k $ by Algorithm~\ref{algo:preconditioning}. \STATE \textbf{compute} the descent direction $ \bd_k $ by Algorithm~\ref{algo:descent_direction}. \STATE \textbf{update} $ \hbtheta_{k+1} = \hbtheta_k - a_k\bd_k $. \STATE $ k \leftarrow k + 1 $. \ENDWHILE \RETURN $ \hbtheta_k $. 
\end{algorithmic} \end{algorithm} For the terminating conditions, the algorithm stops when a pre-specified total number of function (applicable for 2SPSA and E2SPSA) or gradient (applicable for 2SG and E2SG) measurements is reached, or when the norms of the differences between several consecutive estimates fall below a pre-specified threshold. Note that, for each iteration, four noisy loss function measurements are required in the gradient-free case and three noisy gradient measurements are required in the gradient-based case. The corresponding computational complexity analysis for Algorithm~\ref{algo:2SP} in the gradient-free case is summarized in Table~\ref{table:computational_complexity}. Analogously, the analysis can be carried out for the gradient-based case (2SG) and the feedback-based case (E2SPSA or E2SG). A floating-point operation is assumed to be either an addition or a multiplication; transposition requires no FLOPs. For the update of $ \obH_k $ in the original 2SPSA, $ 3p^2 $ FLOPs are required per (\ref{eq:H_overline}) and $ 4p^2 $ FLOPs are required per (\ref{eq:H_hat}). In the proposed implementation, $ 10p $ FLOPs are required to get $ \tbu_k $ and $ \tbv_k $ per (\ref{eq:u_k_tilde}) and (\ref{eq:v_k_tilde}), respectively, and $ 22p^2/6 + O(p) $ FLOPs are required to update the symmetric indefinite factorization of $ \obH_k $ \cite[Thm. 2.1]{sorensen1977updating}. For the preconditioning step in the original 2SPSA, if using (\ref{eq:f_k_sqrtm}), $ p^3 + p $ FLOPs are required to get $ \obH_k\obH_k + \delta_k\bI $ and an additional $ 50p^3/3 + O(p^2) $ FLOPs are required for the matrix square root operation \cite{higham1987computing}. In the proposed implementation, at most $ 7p $ FLOPs are required for the eigenvalue decomposition of $ \bB_k $ ($ 14 $ FLOPs for at most $ p/2 $ blocks of size $ 2 \times 2 $) and $ p $ FLOPs are required to update the eigenvalues of $ \bB_k $.
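To see why the eigenvalue decomposition of the block-diagonal $\bB_k$ costs only $O(p)$, note that each symmetric $2\times 2$ block admits a closed-form decomposition at constant cost per block. A small sketch in our own notation (the exact FLOP count in the text may differ):

```python
import numpy as np

def eig_2x2_sym(a, b, d):
    """Closed-form eigendecomposition of a symmetric block [[a, b], [b, d]].

    O(1) per block, so a block-diagonal B with 1x1 and 2x2 blocks is
    decomposed in O(p) overall.
    """
    mean = 0.5 * (a + d)
    r = np.hypot(0.5 * (a - d), b)
    lam = np.array([mean + r, mean - r])      # eigenvalues
    th = 0.5 * np.arctan2(2.0 * b, a - d)     # Jacobi rotation angle
    c, s = np.cos(th), np.sin(th)
    V = np.array([[c, -s], [s, c]])           # orthogonal eigenvectors
    return lam, V

blk = np.array([[2.0, 1.5], [1.5, -1.0]])
lam, V = eig_2x2_sym(blk[0, 0], blk[0, 1], blk[1, 1])
assert np.allclose(V @ np.diag(lam) @ V.T, blk)
```

Stacking these per-block rotations yields the block-diagonal orthogonal matrix $\bQ_k$ used in the preconditioning step.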
For computing the descent direction $ \bd_k $ in the original 2SPSA, $ p^3/3 $ FLOPs are required for the Cholesky decomposition of $ \oobH_k $ and $ 2p^2 $ FLOPs for the triangular substitutions. In the proposed implementation, $ 4p^2 + 2p $ FLOPs are required for the forward and backward substitutions. \begin{table}[!htbp] \caption{Computational complexity analysis in the gradient-free case (2SPSA in Algorithm~\ref{algo:2SP}). Costs shown in FLOPs.} \centering \begin{tabular}{|c|c|c|} \hline Leading Cost & Original 2SPSA & Proposed Implementation\\ \hline\hline Update $ \obH_k $ & $ 7p^2 $ & $ 3.67p^2 + O(p) $ \\ \hline Precondition $ \oobH_k $ & $ 17.67p^3 + O(p^2) $ & $ 8p $ \\ \hline Descent direction $ \bd_k $ & $ 0.33p^3 + O(p^2) $ & $ 4p^2 + O(p) $ \\ \hline\hline Total Cost & $ 18p^3 + O(p^2) $ & $7.67p^2 + O(p) $ \\ \hline \end{tabular}\label{table:computational_complexity} \end{table} Table~\ref{table:computational_complexity} may not reflect the lowest possible computational complexities, because a great deal of existing work on parallel computing\textemdash such as \cite{george1986parallel} on parallelization of Cholesky decomposition, \cite{deadman2012blocked} for computing the principal matrix square root, and \cite{dongarra1987fully} for the symmetric eigenvalue problem\textemdash has substantially accelerated matrix operations in modern data-analysis packages. Nonetheless, even with such enhancements, the FLOP counts remain $ O(p^3) $ in the standard methods. The bottom line is that our proposed implementation reduces the overall computational cost from $ O(p^3) $ to $ O(p^2) $. \section{Theoretical Results and Practical Benefits}\label{sec:theory} This section presents the theoretical foundation related to the almost sure convergence and the asymptotic normality of $ \hbtheta_k $. We also offer comments on the practical benefits of the proposed scheme.
Lemma~\ref{lem:Ostrowski} provides the theoretical guarantee connecting the eigenvalues of $ \oobH_k $ and $ \obLambda_k $, which is important for proving Theorems~\ref{thm:H_barbar_property}--\ref{thm:H_barbar_uniform_bound} on the matrix properties of $\obH_k$ and $\oobH_k$. \begin{lemma}\label{lem:Ostrowski} \textit{\cite[Thm. 4.5.9]{horn1990matrix}} Let $ \bA, \bS \in \real^{p \times p} $, with $ \bA $ being symmetric and $ \bS $ being \emph{nonsingular}. Let the eigenvalues of $ \bA $ and $ \bS\bA\bS^\transpose $ be arranged in \emph{nondecreasing} order. Let $ \sigma_1 \geq \cdots \geq \sigma_p > 0 $ be the singular values of $ \bS $. For each $ j = 1, \cdots, p $, there exists a positive number $ \zeta_j \in [\sigma_p^2, \sigma_1^2] $ such that $ \lambda_j(\bS\bA\bS^\transpose) = \zeta_j\lambda_j(\bA)$. \end{lemma} Before presenting the main theorems, we first discuss the singular values of $ \bL_k $. Denote by $ \{\sigma_i(\bL_k)\}_{i=1}^p $ the singular values of $ \bL_k $. Also let $ \sigma_{\min}(\cdot) = \min_{1\leq i \leq p} \sigma_i(\cdot)$ and $ \sigma_{\max}(\cdot) = \max_{1\leq i \leq p} \sigma_i(\cdot) $. Since $ \bL_k $ is a unit lower triangular matrix, we have $ \lambda_j(\bL_k) = 1 $ for $ j = 1, \ldots, p $ and $ \det(\bL_k) = 1 $. From the entry-wise bounds of $ \bL_k $ in Subsection~\ref{subsec:IMF}, we see that $ p \leq \norm{\bL_k}_F^2 \leq 3p^2/2 - p/2 $ for all $ k $, where $\norm{\cdot}_F$ is the Frobenius norm of the argument matrix in $ \real^{p\times p} $. Using the lower bound on $ \sigma_{\min}(\bL_k) $ in \cite{yi1997note}, there exists a constant $ \underline{\sigma} > 0 $ such that $ \sigma_{\min}(\bL_k) \geq \underline{\sigma} $ for all $ k $.
On the other hand, by the equivalence of matrix norms, i.e., $ \sigma_{\max}(\bL_k) = \norm{\bL_k}_2 \leq \norm{\bL_k}_F $, where $ \norm{\cdot}_2 $ is the spectral norm, there exists a constant $ \overline{\sigma} > 0 $ such that $ \sigma_{\max}(\bL_k) \leq \overline{\sigma} $ for all $ k$. Both $ \underline{\sigma} $ and $ \overline{\sigma} $ are independent of the sample path for $ \bL_k $. By the Rayleigh-Ritz theorem \cite[Thm. 4.2.2]{horn1990matrix}, $ \be_1^\transpose(\bL_k\bL_k^\transpose)\be_1 = 1 $ implies that $ \sigma_{\min}(\bL_k) \leq 1 $ and $ \sigma_{\max}(\bL_k) \geq 1 $. Combined, all the singular values of $ \bL_k $ are bounded uniformly across $k$, i.e., $ \underline{\sigma} \leq \sigma_{\min}(\bL_k) \leq 1 \leq \sigma_{\max}(\bL_k) \leq \overline{\sigma} $. Let $ \kappa(\bL_k) $ be the condition number of $ \bL_k $; then $ 1 \leq \kappa(\bL_k) \leq \overline{\sigma} / \underline{\sigma} $. Because the focus of Algorithm~\ref{algo:preconditioning} is to generate a positive definite $ \obB_k $ (or equivalently its eigen-decomposition), we replace $ \tau_k $ in Theorems~\ref{thm:H_barbar_property}--\ref{thm:H_barbar_uniform_bound} with some constant $ \underline{\tau}\in\left(0,\tau_k\right] $ independent of the sample path for $ \bB_k $ for all $k$. Note that the substitution is solely for succinctness and does not affect the theoretical result that $\obB_k$ is positive definite. Theorem~\ref{thm:H_barbar_property} presents the key theoretical properties of $ \oobH_k $, which satisfy the regularity conditions in \cite[C.6]{spall2000adaptive}. Based on Theorem~\ref{thm:H_barbar_property}, the strong convergence, $ \hbtheta_k \to \btheta^* $ and $ \obH_k \to \bH(\btheta^*) $, can be established conveniently; see Remark~\ref{rmk:strong_convergence}. \begin{theorem}\label{thm:H_barbar_property} Assume there exists a symmetric indefinite factorization $ \obH_k = \bP_k^\transpose\bL_k\bB_k\bL_k^\transpose\bP_k$.
Given any constant $\underline{\tau} \in\left(0,\tau_k\right] $ for all $ k $, the matrix $ \oobH_k = \bP_k^\transpose\bL_k\bQ_k\obLambda_k\bQ_k^\transpose\bL_k^\transpose\bP_k $ with $ \bQ_k$ and $\obLambda_k $ returned from Algorithm~\ref{algo:preconditioning} satisfies the following properties: \begin{itemize} \item[(a)] $ \lambda_{\min}(\oobH_k) \geq \underline{\sigma}^2\underline{\tau} > 0 $. \item[(b)] $\oobH_k^{-1}$ exists a.s., $ c_k^2 \oobH_k ^{-1}\to \bm{0}$ a.s., and for some constants $\updelta, \uprho>0$, $ \mathbb{E} [ \| \oobH_k^{-1} \|^{2+\updelta} ]\le \uprho $. \end{itemize} \end{theorem} \begin{proof} For all $k$, it is easy to see that $ \lambda_{\min}(\obLambda_k) \geq \underline{\tau} > 0 $, implying that $ \obLambda_k $ is positive definite. Since both $ \bQ_k $ and $ \bL_k $ are nonsingular, by Sylvester's law of inertia \cite{sylvester1852xix}, $ \oobH_k $ is also positive definite because $ \obLambda_k $ is positive definite. Moreover, by Lemma~\ref{lem:Ostrowski}, \begin{equation}\label{eq:lambda_min_H_barbar} \lambda_{\min}(\oobH_k) \geq \sigma_{\min}^2(\bL_k)\lambda_{\min}(\obLambda_k) \geq \underline{\sigma}^2\underline{\tau} > 0 . \end{equation} Since the eigenvalues of $ \oobH_k $ are bounded below by a positive constant uniformly across $k$, property (b) follows. \end{proof} \begin{remark}\label{rmk:strong_convergence} Theorem~\ref{thm:H_barbar_property} guarantees that $ \oobH_k $ is positive definite, and therefore the estimates of $\btheta$ in the second-order method move in a descent direction on average. Property (b) is also needed to establish the convergence results. Suppose the standard regularity conditions in \cite[Sect. III and IV]{spall2000adaptive} hold.
To show the strong convergence, $ \hbtheta_k \to \btheta^* $ and $ \obH_k \to \bH(\btheta^*) $, we only need to verify that $ \oobH_k $ satisfies the regularity conditions in \cite[C.6]{spall2000adaptive}, because the key difference between the original 2SPSA/2SG and our proposed method is effectively the preconditioning step. Theorem~\ref{thm:H_barbar_property} verifies Assumption C.6 in \cite{spall2000adaptive} directly, and therefore we have $ \hbtheta_k \to \btheta^* $ a.s. and $ \obH_k \to \bH(\btheta^*) $ a.s. under both the 2SPSA and 2SG settings by \cite[Thms. 1 and 2]{spall2000adaptive}. \end{remark} Theorem~\ref{thm:H_bar_property} discusses the connection between $ \obH_k $ and $ \oobH_k $ when $ k $ is sufficiently large. It also verifies a key condition in proving the asymptotic normality of $ \hbtheta_k $; see Remark~\ref{rmk:asymptotic_normality}. \begin{theorem}\label{thm:H_bar_property} Assume $ \bH(\btheta^*) $ is positive definite. When choosing $ 0 < \underline{\tau} \leq \lambda_{\min}(\bH(\btheta^*)) / (2\overline{\sigma}^2) $, there exists an integer $ K_1 $ such that for all $ k > K_1 $, we have $ \oobH_k = \obH_k $. \end{theorem} \begin{proof} By Remark~\ref{rmk:strong_convergence}, since $ \obH_k \to \bH(\btheta^*) $ a.s., there exists an integer $K_1$ such that for all $ k > K_1 $, $ \lambda_{\min}(\obH_k) \geq \lambda_{\min}(\bH(\btheta^*)) / 2 > 0 $. By Lemma~\ref{lem:Ostrowski}, we obtain a lower bound on the eigenvalues of $ \bLambda_k $: \begin{equation*} \lambda_{\min}(\bLambda_k) \geq \frac{\lambda_{\min}(\obH_k)}{\sigma_{\max}^2(\bL_k)} \geq \frac{\lambda_{\min}(\obH_k)}{\overline{\sigma}^2} \geq \underline{\tau}. \end{equation*} Therefore, for all $ k > K_1$, $ \obLambda_k = \bLambda_k $ and consequently $ \oobH_k = \obH_k $.
\end{proof} \begin{remark}\label{rmk:asymptotic_normality} Theorem~\ref{thm:H_bar_property} shows that when $ k $ is large (i.e., the estimated Hessian $ \obH_k $ is sufficiently positive definite), the proposed preconditioning step automatically yields $ \oobH_k = \obH_k $, which satisfies one of the key conditions required for the asymptotic normality of $ \hat{\btheta}_k $ in \cite{spall2000adaptive}. Besides the additional regularity conditions in \cite[C.10--12]{spall2000adaptive}, we are required to verify that $ \oobH_k - \obH_k \to \bm{0}~\text{a.s.}, $ which follows from Theorem~\ref{thm:H_bar_property}. Following \cite[Thm. 3]{spall2000adaptive}, when the gain sequences have the standard form $ a_k = a/(A+k+1)^\alpha $ and $ c_k = c/(k+1)^\gamma $, the asymptotic normality of $ \hbtheta_k $ gives \begin{equation*} \begin{split} k^{(\alpha-2\gamma)/2}(\hbtheta_k - \btheta^*) \stackrel{\text{dist}}{\longrightarrow} N(\bm{\upmu}, \bm{\Omega}) &\quad \text{for 2SPSA,}\\ k^{\alpha/2}(\hbtheta_k - \btheta^*) \stackrel{\text{dist}}{\longrightarrow} N(\bm{0}, \bm{\Omega'}) &\quad\text{for 2SG,} \end{split} \end{equation*} where the specifications of $ \alpha, \gamma, \bm{\upmu}, \bm{\Omega}$ and $ \bm{\Omega'} $ are available in \cite{spall2000adaptive}. Under the E2SPSA/E2SG settings, the convergence and asymptotic normality results can be derived analogously from \cite[Thms. 1--4]{spall2009feedback}. \end{remark} Because an ill-conditioned matrix may cause an excessive step size in recursion (\ref{eq:theta_update_s}), leading to slow convergence \cite{li2018preconditioned}, we need to make sure that the resulting $\oobH_k $ (or its equivalent factorization) is not only positive definite but also numerically favorable. Theorem~\ref{thm:H_barbar_uniform_bound} below shows that changing the eigenvalues of $ \bLambda_k $ does not lead to the eigenvalues of $ \oobH_k $ becoming either too large or too small.
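The effect of the eigenvalue modification can also be checked numerically. The sketch below is our own code, using SciPy's \texttt{ldl} factorization and a simple clamp-at-$\tau$ variant of the modification (not necessarily the paper's exact rule); it verifies that the rebuilt matrix is positive definite, in the spirit of Theorems~\ref{thm:H_barbar_property} and \ref{thm:H_barbar_uniform_bound}:

```python
import numpy as np
from scipy.linalg import ldl

# Illustration only: clamp the eigenvalues of the block-diagonal factor B
# at tau and rebuild; the result is positive definite even though the
# original symmetric matrix H is generally indefinite.
rng = np.random.default_rng(1)
p, tau = 8, 0.1
A = rng.standard_normal((p, p))
H = (A + A.T) / 2                         # symmetric, generally indefinite
L, B, _ = ldl(H, lower=True)              # H = L B L^T (permutation inside L)
lam, Q = np.linalg.eigh(B)                # B is block diagonal: cheap in theory
lam_bar = np.maximum(np.abs(lam), tau)    # modified (clamped) eigenvalues
H_bar = L @ (Q @ np.diag(lam_bar) @ Q.T) @ L.T
assert np.min(np.linalg.eigvalsh(H_bar)) > 0
```

Here the dense `eigh` call stands in for the $O(p)$ per-block decomposition described in Subsection~\ref{subsec:complexity}; it is used only to keep the sketch short.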
\begin{theorem}\label{thm:H_barbar_uniform_bound} Assume the eigenvalues of $\bH\parenthese{\btheta^*}$ are bounded such that $0< \underline{\lambda}^*<\abs{\lambda_j\parenthese{\bH\parenthese{\btheta^*}}}<\overline{\lambda}^*<\infty $ for $j=1,\ldots,p$. Then there exists an integer $K_2$ such that for all $ k>K_2 $, the eigenvalues and the condition number of $ \oobH_k $ are bounded uniformly. \end{theorem} \begin{proof} Again by Remark~\ref{rmk:strong_convergence}, since $\obH_k\to \bH\parenthese{\btheta^*}$ a.s., there exists an integer $K_2$ such that for all $k>K_2$ the eigenvalues of $ \obH_k $ are bounded uniformly in the sense that $ \underline{\lambda} < |\lambda_j(\obH_k)| < \overline{\lambda} $ for $ j = 1, \ldots, p $, where $ \underline{\lambda} = \underline{\lambda}^*/2 $ and $ \overline{\lambda} =2 \overline{\lambda}^*$ are constants independent of the sample path for $ \obH_k $. Given $ \obH_k = \bP_k^\transpose\bL_k\bB_k\bL_k^\transpose\bP_k $, by Lemma~\ref{lem:Ostrowski}, \begin{equation*} \frac{\lambda_{\min}(\obH_k)}{\sigma_{\max}^2(\bL_k)} \leq \lambda_{\min}(\bB_k) \leq \frac{\lambda_{\min}(\obH_k)}{\sigma_{\min}^2(\bL_k)}, \end{equation*} and \begin{equation*} \frac{\lambda_{\max}(\obH_k)}{\sigma_{\max}^2(\bL_k)} \leq \lambda_{\max}(\bB_k) \leq \frac{\lambda_{\max}(\obH_k)}{\sigma_{\min}^2(\bL_k)}.
\end{equation*} Similarly, since $ \oobH_k = \bP_k^\transpose\bL_k\obB_k\bL_k^\transpose\bP_k $, \begin{equation*} \begin{split} \lambda_{\min}(\oobH_k) & \geq \sigma_{\min}^2(\bL_k) \lambda_{\min}(\obB_k)\\ &\geq \sigma_{\min}^2(\bL_k)\max\left\{\underline{\tau},\frac{\lambda_{\min}(\obH_k)}{\sigma_{\max}^2(\bL_k)}\right\}\\ & \geq \underline{\sigma}^2\max\left\{\underline{\tau},\frac{\underline{\lambda}}{\overline{\sigma}^2}\right\}, \end{split} \end{equation*} \begin{equation*} \begin{split} \lambda_{\max}(\oobH_k)& \leq \sigma_{\max}^2(\bL_k) \lambda_{\max}(\obB_k)\\ &\leq \sigma_{\max}^2(\bL_k)\max\left\{\underline{\tau},\frac{\lambda_{\max}(\obH_k)}{\sigma_{\min}^2(\bL_k)}\right\}\\ &\leq \overline{\sigma}^2\max\left\{\underline{\tau},\frac{\overline{\lambda}}{\underline{\sigma}^2}\right\}. \end{split} \end{equation*} Since $ \underline{\sigma}^2, \overline{\sigma}^2, \underline{\lambda} $, and $ \overline{\lambda} $ are all constants independent of the sample path, the eigenvalues of $ \oobH_k $ are bounded uniformly across $ k > K_2 $. Moreover, for the condition number of $ \oobH_k $, we have \begin{equation*} \kappa(\oobH_k) \leq \frac{\sigma_{\max}^2(\bL_k)}{\sigma_{\min}^2(\bL_k)}\frac{\max\left\{\underline{\tau},\lambda_{\max}(\obH_k)/\sigma_{\min}^2(\bL_k)\right\}}{\max\left\{\underline{\tau},\lambda_{\min}(\obH_k)/\sigma_{\max}^2(\bL_k)\right\}}, \end{equation*} where $ \kappa(\cdot) $ is the condition number of the matrix argument. Hence the condition number of $ \oobH_k $ is also bounded uniformly across $k>K_2$. \end{proof} \begin{remark} Theorem~\ref{thm:H_barbar_uniform_bound} is important for the preconditioning step since it ensures numerical stability. Recall that the preconditioning step listed in Algorithm~\ref{algo:preconditioning} modifies the eigenvalues of $ \obH_k $ by modifying the eigenvalues of $ \bB_k $.
This modification is desirable because it keeps the eigenvalues of $ \oobH_k $ controllable; i.e., for a given dimension $ p $, a uniform bound on $ \lambda_j(\oobH_k) $ holds for all sufficiently large $k$. The controlled condition number in Theorem~\ref{thm:H_barbar_uniform_bound} distinguishes our scheme from the original preconditioning procedure in (\ref{eq:f_k_add}), which does not control the condition number of $\oobH_k$. \end{remark} \section{Discussion} \label{sec:discussion} This short section discusses two practical questions regarding Algorithm~\ref{algo:2SP}, which produces the updated estimate of $\btheta$: 1) What is the difference between the standard adaptive SPSA-based methods and the proposed algorithm if $ \bB_k $ (or $ \obH_k $) is sufficiently positive definite? 2) How can $ \obH_k $ be recovered at any iteration $ k $? In the ideal case, if $ \bB_k $ (or $ \obH_k $) is assumed to always be positive definite, the preconditioning step becomes unnecessary and we can directly take the symmetric indefinite factorization of $ \obH_k $ as that of $ \oobH_k $, i.e., $ \bLambda_k = \obLambda_k $. In this scenario, the proposed method is identical to the original 2SPSA. Even so, because of the symmetric indefinite factorization, the overall computational cost remains $ O(p^2) $ as in Table~\ref{table:computational_complexity}, still favorable relative to the original 2SPSA, where the Gaussian elimination of $ \oobH_k $ in computing the descent direction $ \bd_k $ incurs an $ O(p^3) $ cost. As mentioned in Sect. \ref{subsect:p3cost}, however, \cite{rastogi2016efficient} uses the matrix inversion lemma to show that the computational cost can be reduced to $ O(p^2) $ as well.
Compared with \cite{rastogi2016efficient}, which directly updates the matrix $ \obH_k^{-1} $ using the matrix inversion lemma, our proposed method has more control over the eigenvalues of $ \obH_k $ and performs well even when $ \obH_k $ is ill-conditioned. Regarding the second question, neither $ \obH_k $ nor $ \oobH_k $ is ever explicitly computed during an iteration. By maintaining the corresponding factorization, we avoid expensive matrix multiplications and gain a much faster way to achieve second-order convergence. However, whenever needed, either $ \obH_k $ or $ \oobH_k $ can be computed directly from its factorization at a cost of $ O(p^3) $; see Subsection~\ref{subsec:algo}. \section{Numerical studies} \label{sec:numerical} In this section, we demonstrate the strength of the proposed algorithms by minimizing the skewed-quartic function \cite{spall2000adaptive} using the efficient 2SPSA/E2SPSA and by training a neural network using the efficient 2SG. \subsection{Skewed-Quartic Function} We consider the following skewed-quartic function used in \cite{spall2000adaptive} to show the performance of the efficient 2SPSA/E2SPSA: \begin{equation*} L(\btheta) = \btheta^\transpose\bB^\transpose\bB\btheta+0.1 \sum_{i=1}^{p} (\bB\btheta)_i^3 +0.01 \sum_{i=1}^{p} (\bB\btheta)_i^4, \end{equation*} where $ (\cdot)_i $ is the $i$-th component of the argument vector, and $ \bB $ is such that $ p\bB $ is an upper triangular matrix of all $1$'s. The additive noise in $ y(\cdot) $ is independent $\mathcal{N}\parenthese{0,0.05^2}$, i.e., $ y(\btheta) = L(\btheta) + \varepsilon$, where $ \varepsilon \sim \mathcal{N}\parenthese{0,0.05^2} $. It is easy to check that $ L(\btheta) $ is strictly convex with a unique minimizer $ \btheta^* = \bzero $ such that $L(\btheta^*) = 0$.
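For concreteness, the skewed-quartic loss and its noisy measurement can be coded directly from the definitions above (a sketch in our own notation; the noise level follows the text):

```python
import numpy as np

def make_loss(p, rng):
    """Skewed-quartic test function with p*B upper triangular of all 1's."""
    B = np.triu(np.ones((p, p))) / p
    def L(theta):
        Bt = B @ theta
        # theta^T B^T B theta + 0.1*sum (B theta)_i^3 + 0.01*sum (B theta)_i^4
        return float(Bt @ Bt + 0.1 * np.sum(Bt**3) + 0.01 * np.sum(Bt**4))
    def y(theta):
        # Noisy measurement with independent N(0, 0.05^2) additive noise.
        return L(theta) + rng.normal(0.0, 0.05)
    return L, y

L, y = make_loss(10, np.random.default_rng(0))
assert L(np.zeros(10)) == 0.0    # unique minimizer at theta* = 0
assert L(np.ones(10)) > 0.0
```
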
For the preconditioning step in the original 2SPSA/E2SPSA, we choose $ \oobH_k = \bm{f}_k(\obH_k) = (\obH_k\obH_k + 10^{-4}e^{-k} \bI)^{1/2} $, which satisfies the definition of $ \bm{f}_k(\cdot) $ in (\ref{eq:f_k_sqrtm}) since $ \delta_k = 10^{-4}e^{-k} \to 0 $. In the efficient 2SPSA/E2SPSA, we choose $ \obLambda_k = \text{diag}(\bar{\lambda}_{k1}, ..., \bar{\lambda}_{kp}) $ with $ \bar{\lambda}_{kj} = \max \{10^{-4}$, $10^{-4}p\max_{1\leq i \leq p} |\lambda_{ki}|, |\lambda_{kj}|\} $ for all $ j $, which is consistent with the suggestion in \cite[p. 118]{sorensen1977updating} and satisfies Theorem~\ref{thm:H_barbar_property}. To guard against unstable steps during the iteration process, a blocking step is added to reset $ \hbtheta_{k+1} $ to $ \hbtheta_k $ if $ \norm{\hbtheta_{k+1} - \hbtheta_k} \geq 1 $. We choose an initial value $ \hbtheta_0 = [1, 1, \dots, 1]^\transpose $. We show three plots below. Figures \ref{fig:loss_2SPSA} and \ref{fig:loss_E2SPSA} illustrate how the efficient method here provides essentially the same solution in terms of the loss function values as the $O(p^3)$ methods in \cite{spall2000adaptive} and \cite{spall2009feedback} (2SPSA and the feedback- and weighting-based E2SPSA). Figure~\ref{fig:time} illustrates how the $O(p^3)$ vs. $O(p^2)$ FLOP-based cost in Table~\ref{table:computational_complexity} above is manifested in overall runtimes. Figure~\ref{fig:loss_2SPSA} plots the normalized loss function values $ [L(\hbtheta_k) - L(\btheta^*)] / [L(\hbtheta_0) - L(\btheta^*)] $ of the original 2SPSA and the efficient 2SPSA averaged over 20 independent replicates for $ p = 100 $ and number of iterations $ N = 50,000 $.
Similar to the numerical studies in \cite{spall2009feedback}, the gain sequences of the two algorithms are chosen to be $ a_k = a/(A+k+1)^{0.602} $, $ c_k = \tilde{c}_k = c/(k+1)^{0.101} $, and $ w_k = w/(k+1)^{0.501} $, where $ a = 0.04, A = 1000, c = 0.05 $, and $ w = 0.01 $, following the standard guidelines in \cite{spall1998implementation}. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{compare_2SPSA_loss_per_iteration_2019_06_12} \caption{Similar performance of algorithms with respect to loss values (\emph{different} run times). Normalized terminal loss $ [L(\hbtheta_k) - L(\btheta^*)] / [L(\hbtheta_0) - L(\btheta^*)] $ of the original 2SPSA and the efficient 2SPSA averaged over 20 replicates for $ p = 100 $.} \label{fig:loss_2SPSA} \end{figure} Figure~\ref{fig:loss_E2SPSA} compares the normalized loss function values $ [L(\hbtheta_k) - L(\btheta^*)] / [L(\hbtheta_0) - L(\btheta^*)] $ of the standard E2SPSA and the efficient E2SPSA averaged over 10 independent replicates for $ p = 10 $ and number of iterations $ N = 10,000 $. The gain sequences of the two algorithms are chosen as $ a_k = a/(A+k+1)^{0.602} $ and $ c_k = \tilde{c}_k = c/(k+1)^{0.101} $, where $ a = 0.3, A = 50 $, and $ c = 0.05 $. The weight sequence $ w_k = \tilde{c}_k^2c_k^2/[\sum_{i=0}^{k}(\tilde{c}_i^2c_i^2 )]$ is set according to the optimal weight in \cite[Eq. (4.2)]{spall2009feedback}. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{compare_E2SPSA_loss_per_iteration_2019_06_12} \caption{Similar performance of algorithms with respect to loss values (\emph{different} run times). Normalized terminal loss $ [L(\hbtheta_k) - L(\btheta^*)] / [L(\hbtheta_0) - L(\btheta^*)] $ of the original E2SPSA and the efficient E2SPSA averaged over 10 replicates for $ p = 10 $.
} \label{fig:loss_E2SPSA} \end{figure} In both comparisons, the loss function decreases significantly using only noisy loss function measurements. We see that the two implementations of E2SPSA provide close to the same accuracy for $1000$ or more iterations, although at a computing cost difference of $O(p^2)$ versus $O(p^3)$. Note that the differences (across $k$) between the original 2SPSA and the efficient 2SPSA/E2SPSA in Figure~\ref{fig:loss_E2SPSA} can be made arbitrarily small by picking an appropriate $ \bm{f}_k(\cdot) $ (or equivalently $ \oobH_k $) in the original 2SPSA, although such a choice might be non-trivial. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{compare_time_2019_05_28} \caption{Running time ratio of the original 2SPSA to the efficient 2SPSA averaged over 10 replicates, where the same skewed-quartic loss function is used and the total number of iterations is fixed at 10 for each run. The trend is close to the theoretical linear relationship as a function of dimension $p$.} \label{fig:time} \end{figure} To measure the computational time, Figure~\ref{fig:time} plots the ratio of the running time of the original 2SPSA to that of the efficient 2SPSA (measured by the built-in \textsc{C++} function \texttt{clock()}), averaged over 10 independent replicates, for dimensions up to 10,000. It demonstrates the practicality of the efficient 2SPSA relative to the original 2SPSA. In terms of the general trend, the linear relationship between the running time ratio and the dimension $p$ is consistent with the $ O(p^3) $ cost for the original 2SPSA and the $ O(p^2) $ cost for the efficient 2SPSA. From Figure~\ref{fig:time}, it is clear that the computational benefit of the efficient 2SPSA becomes more apparent as the dimension $ p $ grows.
The slope in Figure~\ref{fig:time} is roughly 0.56, smaller than the theoretical FLOP-ratio slope of 2.35 implied by Table~\ref{table:computational_complexity}; the gap is attributable to storage costs and code efficiency. With a more carefully optimized implementation, the running time ratio is expected to be closer to the theoretical FLOP ratio in Table~\ref{table:computational_complexity}. \subsection{Real-Data Study: Airfoil Self-Noise Data Set} In this subsection, we compare the efficient 2SG with stochastic gradient descent (SGD) and ADAM \cite{kingma2014adam} in training a one-hidden-layer feed-forward neural network to predict sound levels over an airfoil. Although there are many gradient-based methods to train a neural network, we select SGD and ADAM because they are popular and representative algorithms within the machine learning community. Comparison of the efficient 2SG with these two algorithms is appropriate, as all of them use \emph{only} noisy gradient evaluations, despite their different forms. Aside from the application here, neural networks have been widely used as function approximators in the field of aerodynamics and aeroacoustics. Recent applications include airfoil design \cite{rai2000aerodynamic} and aerodynamic prediction \cite{perez2000prediction}. The dataset used in this example is the NASA NACA 0012 airfoil self-noise data set \cite{brooks1981trailing, brooks1989airfoil}, which is also available on the UC Irvine Machine Learning Repository \cite{brooks2014UCI}. This NASA dataset was obtained from a series of aerodynamic and acoustic tests of two- and three-dimensional airfoil blade sections conducted in an anechoic wind tunnel. The inputs contain five variables: frequency (in Hertz); angle of attack (in degrees, not in radians); chord length (in meters); free-stream velocity (in meters per second); and suction side displacement thickness (in meters).
The output contains the scaled sound pressure level (in decibels). Readers may refer to \cite{brooks1989airfoil} and \cite[Sect. 3]{errasquin2009airfoil} for further details. Given the number of samples $ n = 1503 $, we fit the dataset using a one-hidden-layer neural network with 150 hidden neurons and sigmoid activation functions. Other choices of the neural network structure, such as using a different number of layers or different activation functions, have been implemented in \cite{errasquin2009airfoil}. Here, we use a neural network with a greater number of neurons than the one used in \cite{errasquin2009airfoil} to demonstrate the strength of the efficient 2SG in high-dimensional problems. The dimension $ p = 1051 $ is calculated as $ 5 \times 150 $ weights and $ 150 $ bias parameters for the hidden neurons, plus $150$ weights and $1$ bias parameter for the output neuron. Following the principles in \cite{wilson2003general}, we train the neural network in an online manner, where only one training sample is evaluated during each iteration. Denote the dataset as $ \{(y_i, \bx_i)\}_{i=1}^n $ and the parameters in the neural network as $ \btheta $. The loss function is chosen to be the empirical risk function (ERF), i.e., $ L(\btheta) = (1/n) \sum_{i=1}^n (y_i - \hat{y}_i)^2 $, where $ \hat{y}_i $ is the neural network output based on input $ \bx_i $ and parameter $ \btheta $. Consistent with the online training of an ERF in machine learning, the loss function based on one training sample can be deemed a noisy measurement of the loss function based on the entire dataset. We implement SGD and ADAM with 10 epochs, each corresponding to 1503 iterations (one iteration per data point), resulting in a total of 15030 iterations. The gain sequence is chosen to be $ a_k = a / (k + 1 + A)^\alpha $ with $ A = 1503$ (10\% of the total number of iterations) and $ \alpha = 1 $, following \cite[pp. 113\textendash 114]{spall2005introduction}.
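The parameter count and the decaying gain schedule above can be sanity-checked mechanically; the helper names below are ours:

```python
def param_count(n_in, n_hidden, n_out):
    """Weights plus biases of a one-hidden-layer dense network."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

def gain(k, a=1.0, A=1503, alpha=1.0):
    """Decaying SA gain a_k = a / (k + 1 + A)**alpha (no per-epoch reset)."""
    return a / (k + 1 + A) ** alpha

assert param_count(5, 150, 1) == 1051    # matches p = 1051 in the text
assert gain(0) > gain(15029) > 0         # monotone decay across all 10 epochs
```
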
After tuning for optimal performance, we choose $ a = 1 $ for SGD and ADAM \cite{kingma2014adam}. Other hyper-parameters for ADAM are set to the default values in \cite{kingma2014adam}. There is no ``re-setting'' of $ a_k $ imposed at the beginning of each epoch, so the gain sequence decays consecutively across iterations and epochs. The initial value is $ \hbtheta_0 = \bm{0} $. Recall that the efficient 2SG requires three back-propagations per iteration, whereas SGD and ADAM require only one back-propagation per iteration. Therefore, for a fair comparison, we implement the efficient 2SG under two different scenarios: (1) serial computing, and (2) concurrent computing. Within each iteration of the efficient 2SG, the three gradient measurements, $ \bY_k(\hbtheta_k), \bY_k(\hbtheta_k +c_k\bDelta_k) $, and $ \bY_k(\hbtheta_k-c_k\bDelta_k) $, can be computed simultaneously since they do not rely on each other. Using this concurrent implementation, the time spent in back-propagation can be reduced to one-third of the original time. All the remaining steps are unchanged. Although the efficient 2SG spends time performing Algorithm~\ref{algo:preconditioning}, numerical studies indicate that the majority of the time is spent on back-propagation. Hence, under the concurrent implementation, the efficient 2SG has roughly the same running time per iteration as SGD and ADAM. Figure~\ref{fig:real_data_log_MSE_per_iteration} shows the value of the ERF under the concurrent implementation. In the efficient 2SG, the gain sequences are chosen to be $ a_k = a/(A+k+1)^\alpha $, $ w_k = 1/(k+1) $, and $ c_k = c/(k+1)^\gamma $ with $A=1503, \alpha=1 $, and $ \gamma = 1/6 $, following \cite{spall1998implementation}. The other parameters, $ a = 0.1 $ and $ c = 0.05 $, are tuned for optimal performance. The matrix $ \obLambda_k $ is computed in the same way as for the skewed-quartic function above. For better practical performance, the training data are normalized to the range $ [0,1] $.
Since all the inputs and the output are positive, the normalization is simply done by dividing the data by their corresponding maxima. Figure~\ref{fig:real_data_log_MSE_per_iteration} shows that the efficient 2SG converges much more quickly and attains a better terminal value. One explanation for this phenomenon is that the Hessian information improves the speed of convergence, analogous to the benefit of the Newton-Raphson method relative to gradient descent. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{plot_MSE_per_iteration_2019_05_29} \caption{ERF of training samples in SGD, ADAM, and the efficient 2SG under concurrent implementation.} \label{fig:real_data_log_MSE_per_iteration} \end{figure} Figure~\ref{fig:real_data_log_MSE_per_iteration_adjusted} compares the ERF of the algorithms in terms of the number of gradient evaluations. Note that each iteration of SGD and ADAM takes one gradient evaluation, while the efficient 2SG takes three gradient evaluations. This comparison is suitable for the non-concurrent implementation since one iteration of the efficient 2SG has roughly the cost of three iterations of SGD. Figure~\ref{fig:real_data_log_MSE_per_iteration_adjusted} shows that the efficient 2SG still outperforms SGD and ADAM even without any concurrent implementation. There is less than a 7\% difference in running time among SGD, ADAM, and the efficient 2SG under the concurrent implementation. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{plot_MSE_per_iteration_adjusted_2019_05_29} \caption{ERF of training samples in SGD, ADAM, and the efficient 2SG per gradient evaluation under \emph{serial} (non-concurrent) computing.
SGD and ADAM have 3 times the number of iterations of 2SG.} \label{fig:real_data_log_MSE_per_iteration_adjusted} \end{figure} \section{Conclusions} \label{sec:conclusion} To the best of our knowledge, 2SPSA, 2SG, E2SPSA and E2SG are the fastest possible second-order stochastic Newton-type algorithms based on estimating the Hessian matrix from either noisy loss measurements or noisy gradient measurements. The algorithms use only a small number of measurements, independent of $p$, at each iteration. This paper shows how symmetric indefinite matrix factorization may be used to reduce the per-iteration FLOPs of the algorithms from $ O(p^3) $ to $ O(p^2) $. The approach guarantees a positive definite estimate of the Hessian matrix (``preconditioned'') and a valid stochastic Newton-type update of the parameter vector, both in $ O(p^2) $. This implementation scheme serves to improve practical performance in high-dimensional problems, such as deep learning. In our proposed scheme, the formal convergence and convergence rates for $\hbtheta_k$ and $\obH_k$ are maintained, following the prior work \cite{spall2000adaptive,spall2009feedback}. Beyond the theoretical guarantees, numerical studies show that the efficient implementation of second-order SP methods provides a promising convergence rate at a tolerable computing cost, compared with the stochastic gradient descent method. Note that second-order methods do not provide global convergence in general, so they are best applied after the iterate reaches the vicinity of the optimizer. Overall, our proposed scheme for second-order SA methods has value in high-dimensional optimization and learning problems. 
Because a key step of this work is the symmetric indefinite factorization, the proposed scheme may also be useful in other algorithms that maintain an estimated Hessian matrix, such as second-order random directions stochastic approximation \cite{prashanth2017adaptive}, natural gradient descent \cite{amari2000adaptive}, and stochastic variants of the BFGS quasi-Newton methods \cite{schraudolph2007stochastic}. In all those methods, instead of directly updating the matrix of interest (usually the Hessian matrix), one might consider updating its symmetric indefinite factorization in the manner of this paper to speed up any matrix inversion or eigenvalue modification. Overall, the proposed approach provides a practical second-order method that can be used after first-order or other methods have placed the iterate at least in the vicinity of the solution. \bibliographystyle{IEEEtran} \bibliography{Fa2SPSA_reference} \end{document}
TITLE: How to calculate $\int\frac{1}{x + 1 + \sqrt{x^2 + 4x + 5}}\ dx$? QUESTION [8 upvotes]: How to calculate $$\int\frac{1}{x + 1 + \sqrt{x^2 + 4x + 5}}dx?$$ I really don't know how to attack this integral. I tried $u=x^2 + 4x + 5$ but failed miserably. Help please. REPLY [2 votes]: Another approach (the shortest one): Using the Euler substitution $x-t=\sqrt{x^2+4x+5}$, we obtain $x=\dfrac{t^2-5}{2t+4}$ and $dx=\dfrac{t^2+4t+5}{2(t+2)^2}\ dt$, and the integral becomes $$ -\int\dfrac{t^2+4t+5}{2(t+2)(t+3)}\ dt=\int\left[\frac1{t+3}-\frac1{2(t+2)}-\frac12\right]\ dt. $$ The last step uses partial fraction decomposition, and the rest is straightforward.
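For completeness, the integral of the decomposed form is elementary; resubstituting $t$ from the Euler substitution then gives the antiderivative in $x$:

```latex
\int\left[\frac{1}{t+3}-\frac{1}{2(t+2)}-\frac{1}{2}\right]dt
  = \ln|t+3|-\frac{1}{2}\ln|t+2|-\frac{t}{2}+C.
```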
TITLE: Generating functions in combinatorics QUESTION [1 upvotes]: I am not very familiar with how generating functions are used, but for something I needed I ran into the following constructions. Let $c_{nk}$ be the number of solutions in $\{1,2,3,4,\ldots\}$ of the equation $x_1+x_2+x_3+\cdots+x_k = n$. Then one can easily show that $c_{nk} = \binom{n-1}{k-1}$. But the following is not clear to me: one defines a "generating function" $c_k(x)$ as $c_k(x) = \sum_{n=k}^{\infty} c_{nk} x^n$. Then one claims that the following is true, $c_k(x) = (x+x^2+x^3+...)^k = x^k(1-x)^k$ I can't see from where the above came! From the above it follows that $\sum_{k=1}^\infty c_k(x) = \frac{x}{1-2x}$. The idea seems to be that from the above one can read off that $\sum_{k=1}^n c_{nk} = 2^{n-1}$. (This was obvious from the initial formula given for $c_{nk}$.) But this way of getting that result is not clear to me. Let $p(n)$ be the number of unordered partitions of $n$ (a positive integer). Then clearly $p(n)$ is the number of solutions of the equation $x_1+x_2+x_3+\cdots+x_n = n$ with $x_1\geq x_2 \geq \cdots \geq x_n \geq 0$. After an appropriate change of variables one can see that $p(n)$ is the number of solutions of $y_1 + 2y_2 + 3y_3 + \cdots + ny_n = n$ with $y_i \geq 0$ for all $i$. Now apparently $p(n)$ can be given through a generating function $P(x)$ defined as $P(x) = \sum _{n=0} ^\infty p(n)x^n$, and I can't see how it follows that $P(x) = \prod _{k=1} ^\infty \frac{1}{1-x^k}$. (Apparently the above can somehow be mapped to the familiar question of the number of ways in which $n$ balls can be arranged in a line where $r_i$ of them are of colour $i$ ($\sum_i r_i = n$). This is equal to the coefficient of $\prod _i x_{i}^{r_i}$ in $\prod _i (1+x_i + x_i^2 + x_i ^3 + ...)$. But the connection between this question and the generating function for unordered partitions of an integer is not clear to me.) 
REPLY [5 votes]: In both cases, the key is to recognize the power series identity (which should be familiar if you know about geometric series) $$ 1+y+y^2+y^3+ \ldots = \sum_{i=0}^\infty y^i = \frac{1}{1-y}. $$ In your formula for $c_k(x)$, the last exponent should certainly be negative, so we have $c_k(x) = x^k(1-x)^{-k}$. Then applying the power series identity shows that $\frac{x}{1-x} = x(1-x)^{-1} = x +x^2 +x^3 + \ldots$. For the partitions generating function, we use the same identity many times, with $y = x^k$ for all integers $k\geq 1$. Think of how to find the coefficient of $x^n$ in $$(1+x+x^2 + x^3 + \ldots) (1+x^2+x^4+x^6 +\ldots) (1+x^3+x^6+x^9+ \ldots)\cdots$$ and you'll see that it gives you precisely the number of partitions of $n$. EDIT: Okay, think about the case $k=2$ for $c_k(x)$. The claim is that $$c_2(x) = (x+x^2+x^3+\ldots)^2.$$ Just look at what happens when we expand this, using the distributive property: \begin{align*} c_2(x) &= (x+x^2+x^3+\ldots)(x+x^2+x^3+\ldots) \\ &=x(x+x^2+x^3+\ldots)+x^2(x+x^2+x^3+\ldots)+x^3(x+x^2+x^3+\ldots)+ \ldots \\ &=(x^{1+1}+x^{1+2}+x^{1+3}+\ldots)+(x^{2+1}+x^{2+2}+x^{2+3}+\ldots)+(x^{3+1}+x^{3+2}+x^{3+3}+\ldots)+ \ldots \\ &=(x^{1+1})+(x^{1+2}+x^{2+1})+(x^{1+3}+x^{2+2}+x^{3+1})+\ldots \\ &=x^2 + 2x^3 + 3x^4 + \ldots \end{align*} You see that to get each $x^n$ term, we simply need a solution to $a_1 + a_2 = n$ with $a_i \geq 1$. In the same way, if you have $c_k(x) = (x+x^2+x^3+\ldots)^k$, to get each $x^n$ term, we simply need a solution to $a_1 + a_2 + \ldots + a_k = n$ with $a_i \geq 1$. Do you see how it works? Try expanding the partition generating function in the same way-- it's basically the same idea.
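A quick numerical sanity check of the identities above (the truncation degree and function names are ours): expand $(x+x^2+x^3+\cdots)^k$ as a truncated coefficient list and compare against $c_{nk}=\binom{n-1}{k-1}$ and $\sum_{k=1}^n c_{nk}=2^{n-1}$.

```python
from math import comb

def poly_mul(p, q, trunc):
    # multiply two coefficient lists, keeping only terms of degree < trunc
    r = [0] * trunc
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < trunc:
                    r[i + j] += a * b
    return r

def c_k_series(k, trunc=12):
    # truncated coefficients of (x + x^2 + x^3 + ...)^k
    base = [0] + [1] * (trunc - 1)   # x + x^2 + ... + x^(trunc-1)
    out = [1] + [0] * (trunc - 1)    # start from the constant polynomial 1
    for _ in range(k):
        out = poly_mul(out, base, trunc)
    return out

# coefficient of x^n in c_k(x) equals C(n-1, k-1) ...
assert all(c_k_series(3)[n] == comb(n - 1, 2) for n in range(3, 12))
# ... and summing over k recovers the 2^(n-1) ordered compositions of n
assert sum(c_k_series(k)[5] for k in range(1, 6)) == 2 ** 4
```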
Floral Nude Nail Art Weddbook ♥ If you love simplicity, flowers and glitter, this nail design is just for you. The two flowers, with their fine detail and tiny scale, are the key to this look. The sparing use of golden glitter adds a hint of elegance. nails, nail art, nail design, manicure, flowers, floral, gold, glitter, pink, dusty pink, gray Source: Is this yours, or how do you know who sells it?
Jebin studied Business and Social Studies and gained an MA in Media Studies at Kingston University. She went on to earn an MA in Digital Design at the London School of Arts. Jebin has a unique approach to design and is a keen follower of the latest tech trends and gaming fads. Recently, Jebin completed a creative review for a “Learn Game Development” project we are working on and contributed a series of screen mock-ups for games built around an app we are creating that controls real-world robots.
After running an art house cinema in Amsterdam, it’s great to be back in the film industry, especially now that it’s on the other side of the globe. Wellington is absolutely lovely and I can’t wait for the festival to inundate the city with the best cinema has to offer. To me, the power of film lies in its ability to teleport you to an unknown setting, ready to be explored. The titles on my list invite you into the family life of 1950s Tokyo, the clever mind of philosopher Hannah Arendt and sensual, bull-riding Brazil. Listen to a world without sight, time travel to a future Hong Kong, or go back to Central Park in the era when running was still frowned upon, especially when done by women. Happy visual travelling!
Monday, Nov 07, 2011 Vlog: Marloes Coenen Gets Tattooed In Japan, Visits Shooto Fights Women’s superstar Marloes Coenen continues her visit to Japan, dining with some local folks and checking out the latest Shooto fights. Coenen also gets a brand new tattoo. posted by FCF Staff @ 1:52 pm 2 Responses to “Vlog: Marloes Coenen Gets Tattooed In Japan, Visits Shooto Fights” Very cool video. Coenen is such a grounded and well-cultured woman. I hope to see her back in competition soon. Big shout out to you, champ!! Marloes = Class
TITLE: Calculate GCD$(x^4+x+1,x^3+x^2)$ and a Bezout Identity in $\mathbb{F_2}$ QUESTION [3 upvotes]: A really short task: Calculate GCD$(x^4+x+1,x^3+x^2)$ and a Bezout Identity in $\mathbb{F_2}.$ I've tried it but my GCD is $1$ and I cannot see where my mistake is. $x^4+x+1= x \cdot (x^3+x^2) + x^3 +x + 1$ $x^3+x^2 = 1 \cdot (x^3 + x + 1) + x^2 + x + 1$ $x^3+x+1 = x \cdot (x^2 + 1 + x) + x^2 + 1$ $x^2+x+1 = 1 \cdot (x^2+1) + x$ $x^2 + 1 = x \cdot x + 1$ $x = 1 \cdot x + 0$ REPLY [4 votes]: This is easy when using the augmented-matrix form of the extended Euclidean algorithm. $\begin{eqnarray} (1)&& &&x^4\!+x+1 \,&=&\, \left<\,\color{#c00}1,\color{#0a0}0\,\right>\ \ \ {\rm i.e.}\,\ \ x^4\!+x+1 = \color{#c00}1\cdot (x^4\!+x+1) + \color{#0a0}0\cdot(x^3\!+x^2)\\ (2)&& && x^3\!+x^2 \,&=&\, \left<\,\color{#c00}0,\color{#0a0}1\,\right>\ \ \ {\rm i.e.}\,\quad\ \ \ x^3\!+x^2 = \color{#c00}0\cdot (x^4\!+x+1) + \color{#0a0}1\cdot(x^3\!+x^2)\\ (3)&=&(1)-x\cdot (2)\quad && x^3\!+x+1 \,&=&\, \left<\,1,\,x\,\right>\\ (4)&=&(2)-(3)\quad && x^2\!+x+1 \,&=&\, \left<\,1,\,x+1\,\right>\\ (5)&=&(2)-x\cdot(4)\quad &&\qquad\quad\ \ \ x \,&=&\, \left<x,\,x^2+x+1\right>\\ (6)&=&(4)-(x\!+\!1)\,(5)\!\!\!\! &&\qquad\quad\ \ \ 1 \,&=&\, \left<\color{#c00}{x^2+x+1},\ \color{#0a0}{x^3+x}\right> \end{eqnarray}$ The Bezout Identity is $\ 1\, =\, (\color{#c00}{x^2\!+x+1})(x^4\!+x+1) + (\color{#0a0}{x^3\!+x})(x^3\!+x^2)$
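The computation above can be double-checked mechanically. The sketch below (function names ours) encodes polynomials over $\mathbb{F}_2$ as integer bitmasks and runs the extended Euclidean algorithm, recovering the same Bezout coefficients:

```python
# Polynomials over F_2 encoded as integer bitmasks: bit i holds the
# coefficient of x^i, e.g. x^4+x+1 -> 0b10011 and x^3+x^2 -> 0b1100.
def f2_mul(a, b):
    # carry-less multiplication (addition in F_2[x] is XOR)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def f2_divmod(a, b):
    # polynomial long division over F_2: returns (quotient, remainder)
    q = 0
    db = b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def f2_ext_gcd(a, b):
    # extended Euclid: returns (g, s, t) with s*a + t*b = g in F_2[x]
    s0, t0, s1, t1 = 1, 0, 0, 1
    while b:
        q, r = f2_divmod(a, b)
        a, b = b, r
        s0, s1 = s1, s0 ^ f2_mul(q, s1)
        t0, t1 = t1, t0 ^ f2_mul(q, t1)
    return a, s0, t0

g, s, t = f2_ext_gcd(0b10011, 0b1100)  # x^4+x+1 and x^3+x^2
assert g == 1
assert s == 0b111    # x^2 + x + 1
assert t == 0b1010   # x^3 + x
```

The asserted values match the identity above: $1 = (x^2+x+1)(x^4+x+1) + (x^3+x)(x^3+x^2)$.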
Ethical fashion: what is it? Some answers from London. By: Anna Watkinson When I was asked to write about ethical fashion by my dear friend Kimberly, who is also one inspirational freedom fighter, I said yes without hesitation. However, I am by no means an expert. Yet, I do feel very passionately about the topic and I am at the beginning of a journey myself to find out what this means for me and my family. Fashion by definition is something that changes constantly: ‘a popular or the latest style of clothing […]’. (1) That does not sound that bad, does it really? Fashion is also synonymous to words, such as rage, mania, fad or passing fancy. That sounds a bit less appealing to me. Indeed, it has been this side of fashion, the disposability of it, that has really got me thinking. Whilst doing my research, the issues that came up time and time again were the consequences and perils of fast fashion to the environment and the people making the clothing. Fast fashion encapsulates the phenomenon of trends changing almost weekly and new items being delivered to the shops daily, before being bought and swiftly discarded. I am preaching to myself, as I have definitely contributed to the endless and rapid cycle of production and disposal of clothing. There are and have been items in my wardrobe that have only been worn once or twice; items that were purchased for one specific occasion; items that were bought really cheaply or bought from a sale just because the item was so inexpensive and it might not have even lasted the first wash. We are able to buy a piece of clothing in many stores just for a few pounds, dollars or rands. But how is this actually possible? The garment has not been made in a vacuum, but someone has made it. The yarn has been spun, the garment has been dyed and sewn by someone, and if the cost of the end product is so little, it is impossible for the labourer to get a fair wage. 
An expensive high end item by no means guarantees or even implies a fair wage for workers, but the disconnect between the price of the items and the value of the worker is poignantly highlighted in the cheap fast fashion items. Our craving for cheap fashion garments may not have a great monetary cost to us, but it comes with a great human cost - to the men, women and children who are forced to work with very little or no pay at all. Furthermore, the environmental impact of the fashion industry is huge from water pollution to the usage of toxic chemicals and textile waste.(2) This is why we, why I, need to rethink our buying habits and most importantly need a shift in thinking - in how we value other human beings and how we nurture the environment. So, how can we look good without buying into the fast fashion industry? These are brilliant suggestions, which I found as I researched the topic.(3) 1. Buy only what you need - Capsule wardrobes: A range of good quality essential items as per your own style that all go with each other. It will take time to build up a capsule wardrobe, but having one will ensure that your wardrobe will consist of quality pieces that are easy to mix and match. It might even induce creativity as your time is not taken up by trawling through the endless piles of clothes in your closet. - Before going shopping make a list of what you need - Research options 2. Ethical fashion Ethical fashion will be more expensive than items in a high street shop. However, ethical brands are working towards better working conditions, fairer wages and/or reducing the environmental impact. Thus you are more likely to pay for what an item is actually worth. Many ethical brands are championing those who make the clothes we wear providing dignity to workers. Accordingly, items from ethical brands would be great additions to your capsule wardrobe. 3. 
Vintage/ charity/ thrift stores or secondhand online stores + clothes swaps You can purchase clothing with minimum impact on the environment. 4. High street shops Only buy items that you will wear 30-50 times and so reduce the environmental impact. Also, do your research into the most ethical stores on your high street. I hope this has inspired you to start or continue your journey towards better and more ethical buying habits. Anna xx
TITLE: Norm of multiplication and multiplication of norms QUESTION [1 upvotes]: It is well known that $\|u \cdot v \|_2 \le \|u \|_2 \cdot \| v \|_2 $ for all $u, v \in \mathbb{R}^d$. Is the following true for all $p \in \mathbb{R}$: $$\|u \cdot v \|_p \le \|u \|_p \cdot \| v \|_p$$ for all $u, v \in \mathbb{R}^d$? REPLY [1 votes]: No, this is not true. Take $p=3$ and $u=v = (1,1)$. Then the left side is $2$, while the right side is $2^{1/3}\cdot 2^{1/3} = 2^{2/3} < 2$. For a result somewhat in that direction see Hölder's inequality.
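A quick numerical check of this counterexample, reading $u \cdot v$ as the dot product (the helper name is ours):

```python
def p_norm(v, p):
    # ||v||_p = (sum |v_i|^p)^(1/p)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

u = v = (1.0, 1.0)
dot = sum(a * b for a, b in zip(u, v))   # <u, v> = 2
rhs = p_norm(u, 3) * p_norm(v, 3)        # 2^(1/3) * 2^(1/3) = 2^(2/3), about 1.587
assert dot > rhs                          # so the claimed inequality fails for p = 3
```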
\begin{document} \maketitle \begin{abstract} Let $\ell_m$ be a sequence of $m$ points on a line with consecutive points of distance one. For every natural number $n$, we prove the existence of a red/blue-coloring of $\mathbb{E}^n$ containing no red copy of $\ell_2$ and no blue copy of $\ell_m$ for any $m \geq 2^{cn}$. This is best possible up to the constant $c$ in the exponent. It also answers a question of Erd\H{o}s, Graham, Montgomery, Rothschild, Spencer and Straus from 1973. They asked if, for every natural number $n$, there is a set $K \subset \mathbb{E}^1$ and a red/blue-coloring of $\mathbb{E}^n$ containing no red copy of $\ell_2$ and no blue copy of $K$. \end{abstract} \section{Introduction} Let $\mathbb{E}^n$ denote $n$-dimensional Euclidean space, that is, $\mathbb{R}^n$ equipped with the Euclidean distance. Following Erd\H{o}s, Graham, Montgomery, Rothschild, Spencer and Straus~\cite{EGMRSS2}, we study the following question. \begin{question} For which subsets $K \subset \mathbb{E}^n$ does every red/blue-coloring of $\mathbb{E}^n$ contain a red pair of points of distance one or a blue isometric copy of $K$? \end{question} In what follows, we will write $\ell_m$ for a sequence of $m$ points on a line with consecutive points of distance one and $\mathbb{E}^n \longrightarrow (\ell_2,K)$ if every red/blue-coloring of $\mathbb{E}^n$ contains either a red copy of $\ell_2$ or a blue copy of $K$, where a copy of a set will always mean an isometric copy. Conversely, $\mathbb{E}^n \centernot\longrightarrow (\ell_2,K)$ expresses the fact that there is some red/blue-coloring of $\mathbb{E}^n$ which contains neither a red copy of $\ell_2$ nor a blue copy of $K$. The problem of determining which $n$ and $K$ satisfy the relation $\mathbb{E}^n \longrightarrow (\ell_2,K)$ has received considerable attention, with a particular focus on small values of $n$. 
For example, Erd\H{o}s et al.~\cite{EGMRSS2} showed that $\mathbb{E}^2 \longrightarrow (\ell_2,\ell_4)$ and $\mathbb{E}^2 \longrightarrow (\ell_2,K)$ for any three-point set $K$. Juh\'asz~\cite{Juh} later improved the latter result to cover all four-point planar sets, while just recently Tsaturian~\cite{Tsat} improved the former result by showing that $\mathbb{E}^2 \longrightarrow (\ell_2,\ell_5)$. In three dimensions, Iv\'an~\cite{Ivan} showed that $\mathbb{E}^3 \longrightarrow (\ell_2,K)$ for any five-point set $K \subset \mathbb{E}^3$. The particular case where $K = \ell_5$ was recently improved by Arman and Tsaturian~\cite{ArmTsat}, who showed that $\mathbb{E}^3 \longrightarrow (\ell_2,\ell_6)$. On the other hand, Csizmadia and T\'oth~\cite{CsTo} identified a set $K$ of $8$ points in the plane, namely, a regular heptagon with its center, such that $\mathbb{E}^2 \centernot\longrightarrow (\ell_2,K)$. This improved a result of Juh\'asz~\cite{Juh}, who had previously identified a set $K$ of $12$ points with the same property. Our chief concern in this paper will be with extending these results to higher dimensions by studying the smallest possible size of a set $K \subset \mathbb{E}^n$ such that $\mathbb{E}^n \centernot\longrightarrow (\ell_2,K)$. In general, $|K|$ can be unbounded in terms of $n$ and still satisfy $\mathbb{E}^n \longrightarrow (\ell_2,K)$. For example, any subset $K$ of the unit sphere in $\mathbb{E}^n$ satisfies $\mathbb{E}^n \longrightarrow (\ell_2,K)$. Indeed, in a red/blue-coloring of $\mathbb{E}^n$, if there is no red point, then we clearly get a copy of $K$, while if there is a red point, then the sphere of radius one around that point must be blue, so we again get a blue copy of $K$. However, our main result shows that under some mild conditions a set $K \subset \mathbb{E}^n$ such that $\mathbb{E}^n \longrightarrow (\ell_2,K)$ can have size at most exponential in $n$. 
To state the result, we say that a point set $S \subset \mathbb{E}^n$ is {\it $t$-separated} if any two points in $S$ have distance at least $t$. Here and throughout, we use $\log$ to denote log base $2$. \begin{theorem} \label{thm:main} If $R>2$ and $K$ is a $1$-separated set of points in $\mathbb{E}^n$ with diameter at most $R - 1$ and $|K| > 10^{4n} \log R$, then $\mathbb{E}^n \centernot \longrightarrow (\ell_2,K)$. \end{theorem} In particular, for $m = 10^{5n}$, we see that $\mathbb{E}^n \centernot\longrightarrow (\ell_2, \ell_m)$. This simple corollary is already enough to answer a problem raised by Erd\H{o}s et al.~\cite{EGMRSS2}, namely, whether, for every natural number $d$, there is a natural number $n$ depending only on $d$ such that $\mathbb{E}^n \rightarrow (\ell_2, K)$ for every $K \subset \mathbb{E}^d$. Erd\H{o}s et al.~state that they expect the answer to this question to be negative and our result confirms this already for $d = 1$, a special case stressed in~\cite{EGMRSS2}, showing that $n$ must grow logarithmically in the size of $|K|$. The exponential dependence in Theorem~\ref{thm:main}, and hence the logarithmic dependence above, is also necessary. In fact, Szlam~\cite{Sz01} proved the stronger result that every red/blue-coloring of $\mathbb{E}^n$ contains either a red copy of $\ell_2$ or a blue \emph{translate} of any set $K$ of size at most $2^{c'n}$. For the sake of completeness, we include his short proof here. We will need the seminal result of Frankl and Wilson~\cite{FW} that there exists a positive constant $c'$ such that any coloring of $\mathbb{E}^n$ with at most $2^{c'n}$ colors contains a pair of points of distance one with the same color (see~\cite{R00} for the current best estimate on $c'$). Suppose now that $K = \{k_1, \dots, k_t\} \subset \mathbb{E}^n$ is a set of size at most $2^{c' n}$ and there is a red/blue-coloring of $\mathbb{E}^n$ with no blue copy of $K$. 
Then, for each $p \in \mathbb{E}^n$, there is at least one $i$ such that $p + k_i$ is red, since otherwise the set $p + K$ would be a blue translate of $K$. We may therefore color the points of $\mathbb{E}^n$ in $t \leq 2^{c'n}$ colors, giving the point $p$ the color $i$ for some $i$ such that $p + k_i$ is red, always choosing the minimum such $i$. By the result of Frankl and Wilson, there must then exist two points $p$ and $p'$ of distance one which receive the same color, say $j$. But then $p + k_j$ and $p' + k_j$ are two points of distance one both of which are colored red. This gives the required result. In particular, we have the following counterpart to Theorem~\ref{thm:main}, which we again stress is due to Szlam~\cite{Sz01}. \begin{theorem} \label{thm:FW} There exists a positive constant $c'$ such that $\mathbb{E}^n \longrightarrow (\ell_2,K)$ for any set $K \subset \mathbb{E}^n$ of size at most $2^{c'n}$. \end{theorem} \section{Proof of Theorem~\ref{thm:main}} We will prove the existence of a periodic red/blue-coloring of $\mathbb{E}^n$ (with period $R$ in the standard coordinates) such that no two red points have distance one and there is no blue copy of $K$. Let $\mathbb{T}_R^n=(\mathbb{E}/R \mathbb{Z})^n$ be the $n$-dimensional torus with period $R$ in each direction. Let $P$ be any maximal $1/3$-separated subset of $\mathbb{T}_R^n$. One can simply construct such a set $P$ greedily. Consider the Voronoi decomposition of $\mathbb{T}_R^n$ with respect to $P$. This partitions $\mathbb{T}_R^n$ into cells $V_p$, one for each point $p \in P$, where $V_p$ consists of the set of points closer to $p$ than any other point in $P$. From the maximality of $P$, every point in $V_p$ has distance at most $1/3$ from $p$. In particular, each $V_p$ has diameter at most $2/3$. \begin{lemma} $|P| \leq (4n^{1/2}R)^n$. \end{lemma} \begin{proof} Since each pair of points in $P$ have distance at least $1/3$, the balls of radius $r=1/6$ around each point are disjoint. 
A ball in $n$-space of radius $r$ has volume $r^n \pi^{n/2}/\Gamma(n/2 + 1)$, where the gamma function satisfies $\Gamma(n/2+1)=(n/2)!$ if $n$ is even and $\Gamma(n/2+1)=\sqrt{\pi} \cdot n!!/2^{(n+1)/2}$ if $n$ is odd. In either case, we have $\Gamma(n/2 + 1) \leq n^{n/2}$, so the volume of the $n$-dimensional ball is at least $(r^2\pi/n)^{n/2}$. The balls of radius $1/6$ around the points of $P$ are disjoint and the volume of the torus $\mathbb{T}_R^n$ is $R^n$, so the number of points in $P$ is at most $R^n/((1/6)^2\pi/n)^{n/2} = (36nR^2/\pi)^{n/2} < (4n^{1/2}R)^n$. \end{proof} \begin{lemma}\label{basic} If $S \subset \mathbb{E}^n$ is $t$-separated, then, for any point $p \in \mathbb{E}^n$ and any $s \geq 0$, the number of points of $S$ within distance $s$ of $p$ is at most $(2s/t + 1)^n$. \end{lemma} \begin{proof} The balls of radius $t/2$ around each point of $S$ are disjoint and, for each point $p' \in S$ with distance at most $s$ from $p$, the ball of radius $s+t/2$ around $p$ contains the ball of radius $t/2$ around $p'$. Hence, by a volume argument, there are at most $(\frac{s+t/2}{t/2})^n = (2s/t + 1)^n$ points of distance at most $s$ from $p$. \end{proof} \begin{lemma}\label{halfspace} Each copy in $\mathbb{E}^n$ of the Voronoi cell $V_p$ is a convex body defined by the intersection of at most $5^n$ half-spaces. \end{lemma} \begin{proof} A point $q$ on the boundary of $V_p$ is on the hyperplane equidistant to $p$ and some other point $p' \in P$, where this distance is at most $1/3$. This implies that $p'$ has distance at most 2/3 from $p$. Since the points in $P$ are $1/3$-separated, Lemma \ref{basic} implies that there are at most $5^n$ points of $P$ of distance at most $2/3$ from $p$. Therefore, since the Voronoi cell $V_p$ is the intersection of half-spaces that are defined by hyperplanes which are equidistant from $p$ and $p'$ for some $p'$ of distance at most $2/3$ from $p$, the result follows. 
\end{proof} \begin{lemma}\label{sep} If $K$ is a $1$-separated point set in $\mathbb{E}^n$ and $s \geq 1$, then there is a set $K' \subset K$ that is $s$-separated and has size at least $|K|/(2s+1)^n$. \end{lemma} \begin{proof} By Lemma \ref{basic} with $t=1$, for each point $p$, there are at most $(2s+1)^n$ points of $K$ within distance $s$ of $p$ (including $p$ itself). We can then greedily construct the set $K'$, getting at least one point in $K'$ for every $(2s+1)^n$ points from $K$, giving the desired bound. \end{proof} Let $Q$ be a random subset of $P$ formed by picking each point in $P$ with probability $x=20^{-n}$ independently of the other points. Let $S$ be the subset of $Q$ where $s \in S$ if and only if there is no other point $s' \in Q$ of distance at most $5/3$ from $s$. By Lemma \ref{basic}, there are at most $(2(5/3)/(1/3)+1)^n = 11^n$ points of $P$ of distance at most $5/3$ from any point. For a given point $p \in P$, the probability that $p \in S$ is therefore at least $x(1-x)^{11^n} > x/2$ as $x=20^{-n}$. Let $V_1,\ldots,V_m$ be the Voronoi cells of points in $S$. We will color the points in these Voronoi cells red, including the boundaries, and everything else blue. We consider the periodic coloring of $\mathbb{E}^n$ given by the coloring of $\mathbb{T}_R^n$. Observe that there is a pair of red points of distance one in $\mathbb{T}_R^n$ if and only if there is a pair of red points of distance one in the periodic coloring of $\mathbb{E}^n$ and there is a blue copy of $K$ in $\mathbb{T}_R^n$ if and only if there is a blue copy of $K$ in $\mathbb{E}^n$. We first claim that there are no two red points $q$ and $q'$ at distance one. Indeed, if $q$ and $q'$ are in the same Voronoi cell, then, as the diameter of each Voronoi cell is at most $2/3$, we have a contradiction. If $q$ and $q'$ are in copies of the same cell in the periodic tiling, then their distance is at least $R-2/3 > 1$. 
If $q$ and $q'$ are in different cells, with $q \in V_p$ and $q' \in V_{p'}$, then, since $q$ has distance at most $1/3$ from $p$ and $q'$ has distance at most $1/3$ from $p'$, $p$ and $p'$ have distance at most $5/3$. However, by construction, if $p \in S$, then $p'$ is not in $S$, so these Voronoi cells are not both red and $q$ and $q'$ cannot both be red. To finish the proof, we need to show that with positive probability, there is no blue copy of $K$. Observe that since the points of $K$ have distance at most $R - 1$ from each other, if there is a blue copy of $K$ in the coloring of $\mathbb{E}^n$, then we already have a blue copy in the axis-aligned box with one corner at the origin and side length $3R$. This box contains $3^n|P|$ Voronoi cells, which we label $U_1,\ldots,U_{3^n|P|}$. Let $K'$ be a maximum subset of $K$ which is $5$-separated, so $|K'| \geq 11^{-n} |K|$ by Lemma~\ref{sep}. Denote the points of $K'$ by $K'=\{k_0, k_1,\ldots,k_{|K'| - 1}\}$, where we may assume that $k_0$ is the origin and $k_1,\ldots,k_d$ with $d \leq n$ form a basis for the vector space spanned by $K'$, so each element of $K'$ is a linear combination of $k_1,\ldots,k_d$. It suffices to show that with positive probability there is no blue copy of $K'$. For a map $f: \{0,1, \dots, |K'|-1\} \rightarrow \{1, 2, \dots, 3^n|P|\}$, consider the bad event $B_f$ that there is a blue copy of $K'$ with the copy of $k_i$ in $U_{f(i)}$. As each pair of points from $K'$ have distance at least $5$, the Voronoi cells $V_p$ and $V_{p'}$ that they map to under an isometry have centers of distance at least $5-2/3 = 13/3> 2 \cdot 5/3$ apart. Moreover, since $K$ has diameter at most $R - 1$, the centers have distance at most $R - 1 + 2/3 < R$, so points from two copies of the same cell are never used. Hence, the events that $p$ and $p'$ are in $S$ are independent. Therefore, for any fixed $f$, the probability that $B_f$ happens is at most $(1-x/2)^{|K'|}<e^{-x|K'|/2}$. 
We next estimate the number of bad events $B_f$ that are realizable. That is, the number of $f$ for which there is a copy of $K'$ with the copy of $k_i$ in $U_{f(i)}$. Given a copy of $K'$ in $\mathbb{E}^n$ where $k_i$ maps to $g(i) \in \mathbb{E}^n$ for each $i$, we map the copy of $K'$ to the point $(g(0), g(1),\ldots,g(d)) \in \mathbb{E}^{(d+1)n}$. This is an injective map from the copies of $K'$ in $\mathbb{E}^n$ to $\mathbb{E}^{(d+1)n}$ since the copy of $K'$ is determined by which points $k_0,k_1,\ldots,k_d$ map to. Let $U$ be one of the Voronoi cells $U_1,\ldots,U_{3^n|P|}$, with center $p$. The Voronoi cell $U$ is given as the intersection of half-spaces $H_{pp'}$ which contain $p$ and whose boundary is the hyperplane equidistant from $p$ and $p'$. By Lemma \ref{halfspace}, there are at most $5^n$ such half-spaces. The linear inequality defining whether a point $(x_1,\ldots,x_n)$ is in a half-space $H$ is of the form $a_1x_1+\cdots+a_nx_n \leq b$ for some $a_1,\ldots,a_n,b \in \mathbb{E}^1$. As $k_i$ is a linear combination of $k_1, \dots, k_d$, these observations show that, for any copy of $K'$ in $\mathbb{E}^n$ and any $i$ and $j$, we can determine whether $k_i$ is mapped into $U_j$ by considering a system of at most $5^n$ linear inequalities in the $(d+1)n$ coordinates of the point $(g(0), g(1),\ldots,g(d)) \in \mathbb{E}^{(d+1)n}$ that $K'$ is mapped to. Since the number of pairs $(i,j)$ is $|K'| \cdot 3^n|P|$, we can therefore tell which $B_f$ are feasible (i.e., which mappings of the points of $K'$ to Voronoi cells are actually realizable by a copy of $K'$) by the sign patterns of a sequence of $5^n |K'| \cdot 3^n|P|$ linear equations. We can now bound the number of feasible $B_f$ by using an appropriate version of the Milnor--Thom theorem~\cite{Milnor,OP,Thom}. For a discussion of this theorem and its history, as well as the statement we present below, we refer the interested reader to Section 6.2 of Matou\v sek's book~\cite{Mat}. 
\begin{theorem} For $M \geq N \geq 2$, the number of sign patterns of $M$ polynomials in $N$ variables, each of degree at most $D$, is at most $\left(\frac{50DM}{N}\right)^N$. \end{theorem} Taking $D=1$, $M = 5^n |K'| \cdot 3^n|P| \leq |K'|(60n^{1/2}R)^n$ and $N = (d+1)n$, we see that the number of feasible bad events $B_f$ is at most $$\left(\frac{50DM}{(d+1)n}\right)^{(d+1)n}\leq \left(50|K'|(60n^{1/2}R)^n\right)^{2 n^2} \leq e^{2n^2\ln(50 |K'|) + 2n^3\ln(60 n^{1/2} R)}.$$ Therefore, since each event $B_f$ holds with probability at most $e^{-x|K'|/2}$, we see that as long as $x|K'|/2>2n^2\ln(50 |K'|) + 2n^3\ln(60 n^{1/2} R)$, then, with positive probability, the desired coloring exists. By using $x=20^{-n}$, $|K'| \geq 11^{-n}|K|$ and $|K| \geq 10^{4n} \log R$, one may verify that $x|K'|/4 > 2n^2\ln(50 |K'|)$ and $x|K'|/4 > 2n^3\ln(60 n^{1/2} R)$, completing the proof. \section{Concluding remarks} Let us say that a set $X \subset \mathbb{E}^d$ is \emph{$f$-Ramsey} for a function $f : \mathbb{N} \rightarrow \mathbb{N}$ if any coloring of $\mathbb{E}^n$, $n \geq d$, with at most $f(n)$ colors contains a monochromatic copy of $X$. For instance, the result of Frankl and Wilson~\cite{FW} used to prove Theorem~\ref{thm:FW} was the statement that $\ell_2$ is $2^{c'n}$-Ramsey. By substituting any $f$-Ramsey set $X$ for $\ell_2$ in the proof of that theorem, we can easily deduce the following result. \begin{theorem} \label{thm:super} For any $f$-Ramsey set $X$, $\mathbb{E}^n \longrightarrow (X,K)$ for any set $K \subset \mathbb{E}^n$ of size at most $f(n)$. \end{theorem} When $X$ is a rectangular parallelepiped or a non-degenerate simplex, results of Frankl and R\"odl~\cite{FR} show that one may take $f(n) = 2^{c'n}$, where $c'$ may depend on the given configuration $X$. For all such $X$, Theorem~\ref{thm:main} easily implies that the estimate on the size of $K$ in Theorem~\ref{thm:super} is best possible up to the constant $c'$ in the exponent. 
Following~\cite{EGMRSS1}, we say that a set $X \subset \mathbb{E}^d$ is \emph{Ramsey} if it is $f$-Ramsey for some function $f : \mathbb{N} \rightarrow \mathbb{N}$ with the property that $f(n) \rightarrow \infty$ as $n \rightarrow \infty$. Theorem~\ref{thm:super} then says that for any Ramsey set $X$ and any finite set $K \subset \mathbb{E}^m$, there exists $n$ such that $\mathbb{E}^n \longrightarrow (X,K)$. In particular, by a beautiful result of K\v r\'i\v z~\cite{K91}, this holds when $X$ is a regular polygon. The following result gives a converse. \begin{theorem} Assuming the axiom of choice, if $X \subset \mathbb{E}^d$ is a finite set which is not Ramsey, there exists a natural number $m$ and a finite set $K \subset \mathbb{E}^m$ such that $\mathbb{E}^n \centernot\longrightarrow (X, K)$ for all $n$. \end{theorem} \begin{proof} Since $X$ is not Ramsey, there exists a least natural number $r$ such that $\mathbb{E}^n \overset{r}{\centernot\longrightarrow} X$ for all $n$. By the minimality of $r$, there is an $m$ such that every $(r-1)$-coloring of $\mathbb{E}^m$ contains a monochromatic copy of $X$. But then, by the De Bruijn--Erd\H{o}s theorem (and it is here that we invoke the axiom of choice), there must be a finite set $K \subset \mathbb{E}^m$ such that every $(r-1)$-coloring of $K$ contains a monochromatic copy of $X$. Suppose now that $\chi : \mathbb{E}^n \rightarrow \{1, 2, \dots, r\}$ is an $r$-coloring of $\mathbb{E}^n$ containing no monochromatic copy of $X$. We claim that the red/blue-coloring of $\mathbb{E}^n$ where a point is colored red if it received color $1$ under $\chi$ and blue otherwise contains no red copy of $X$ and no blue copy of $K$. Indeed, a red copy of $X$ would yield a copy of $X$ in color $1$ under $\chi$, while a blue copy of $K$ would yield an $(r-1)$-colored copy of $K$ under $\chi$, which, by the choice of $K$, would contain a monochromatic copy of $X$. In either case, this would contradict the definition of $\chi$. 
\end{proof} Turning to a more particular choice of $K$, we were unable to decide whether, for every natural number $m$, there exists a natural number $n$ such that $\mathbb{E}^n \longrightarrow (\ell_3, \ell_m)$. It seems unlikely that this holds for large $m$, but we were at a loss to exhibit a coloring which confirms our suspicion. A first step in the right direction was made by Erd\H{o}s et al.~\cite{EGMRSS1}, who showed that $\mathbb{E}^n \centernot\longrightarrow (\ell_6, \ell_6)$ for all $n$. As a final note, we mention another problem of Erd\H{o}s et al.~\cite{EGMRSS2}: for any natural number $n$, does there exist a natural number $m$ depending only on $n$ such that for every set $K \subset \mathbb{E}^n$ of size $m$ there is a two-coloring of $\mathbb{E}^n$ with no monochromatic copy of $K$? By rescaling, we may assume that the smallest distance between any two points in $K$ is equal to one and, therefore, that $\ell_2 \subset K$ and $K$ is $1$-separated. Theorem~\ref{thm:main} then implies that if the diameter of $K$ is at most $R - 1$ and $m \geq 10^{4n} \log R$, there is indeed a two-coloring of $\mathbb{E}^n$ with no monochromatic copy of $K$. This partially answers the question of Erd\H{o}s et al.~and a complete answer would follow if we could remove the dependence on $R$ in Theorem~\ref{thm:main} (a problem which is also interesting in its own right). \vspace{3mm} {\bf Acknowledgements.} This paper was written while both authors were visiting the Simons Institute for the Theory of Computing in Berkeley and we are grateful for their generous support. The authors would also like to thank Noga Alon and Ben Green for helpful discussions. Finally, we wish to thank David Ellis, Ron Graham and an anonymous referee for a number of useful comments and corrections. In particular, the anonymous referee was the one to point us to the paper by Szlam~\cite{Sz01}, helping us to greatly improve the results in the concluding remarks.
Ways Not to Say No By Sanjena Sathian WHY YOU SHOULD CARE Because the dynamics of being female are subtle. The headlines get you first. Banaras: Russian tourist has acid thrown in her face in the home of the holy Ganges. Local boy feels “spurned.” Delhi: Uber driver sexually assaults female passenger. She was out late. She was there. Bangalore: Woman gang-raped. She met the guards on a sunny day in middle-class Cubbon Park; they attacked her that night. People guess they’re migrants, the guards, that they were far from home and wives. They were alone. There’s nothing like a full stomach and a happy house to calibrate your moral compass, right? You talk about the headlines at parties in air-conditioned apartments. You talk about them with men. Each time one comes up, it balloons bigger and more hideous. Each time one comes up, you ink a new contract with a new man: you agree to check the danger at the door and be safe and protected here, inside. Men are always telling you to take care. You are forever saying “thank you.” The neighbor is so helpful. He knows you’re new, that your Hindi is weak. He calls electricians and Internet providers for you. He stops by in the morning and in the evening, on his way to and from his job at the American pharmaceutical corporation, just to check in. “You have to be careful with all these guys,” he says, gesturing to the rest of the world. After the party where you saw the Bollywood star and glimpsed the Four Seasons sign lighting up the city, the college acquaintance from the States goes out of his way. It’s too late to take a cab safely, he counsels. The couch is yours if you want it. Or even the guest room, with the locking door. You agree, because it would be strange, rude and paranoid not to. 
Because even if you weren’t scared before, you feel a chill now, as he talks of the lecherous wolves out there, howling up at the Four Seasons sign, talks of the headlines. Of course you will stay, it’s so kind of him. During the interview, the guy your friend’s friend introduced you to offers you cigarettes and tea. “Americans,” he says of you, knowingly, beginning to discuss his French girlfriend. As you talk longer and later, your little table at the Mumbai bar acquires a portside view of the rest of the world. In a sentence you sweep from the Himalayas to London. And yet, when you stand up to go, after he has reached for your hand, after you have shaken it off, embarrassed, after you have stayed despite the grab because you weren’t done hearing what you came to hear, the world has never felt so small. “I don’t know this driver — anything could happen,” he says. He gets in the car with you to make sure you get home safe. As you drive across the water, you hear the echo of what he implied, and what you agreed to by letting him. It’s only late afternoon. A sunny day, in middle-class south Calcutta. There are filmmakers asking you to act in their next productions — to be shot at September Durga puja, when the goddess of power and strength reigns. People are showing you their draft black-and-white photography books, which capture history and telescope it out into modernity. They’re listening to ghazals and qawwalis and teaching you about the nation’s poetry. He is older, educated, bicontinental. Back in America, he gave you India, spoon-fed it to you, taught you the word “postcolonial” and told you that if you wanted to be a writer you had to do two things: fall in love, and move to your motherland. Here you are now, years later, thanking him for the advice. He congratulates you, and then he shields you from oncoming traffic, translates for you, encourages you to try street food, teaches you about your own ancestors. 
Each time he annotates simple sightings with rich history, you grow weaker. How would you ever know this place without him? You look away when he says something about your thick South Indian hair. He’s got money. He’s got women. He is not the struggling emigrant from Bihar living in squalor on the outskirts of Delhi, sexually repressed, furious at modern women for wearing jeans and holding jobs. He is not in any headlines; no one calls him a wolf. When he walks the two minutes from the artists’ squat to your guesthouse, or slides into the car, eyeing the driver powerfully, or knocks on the locked guest room door, or steps into your kitchen to investigate the lighting fixture, nothing happens. He could go back to his girlfriend or wife and say, “Nothing happened.” And you — if you were to look him in the eye and say, “Stop,” you would feel ashamed for presuming there was anything to reject. You go to wherever it is you are calling home at that moment: your new, foreign apartment or your hotel, where you will sit alone sweating under the ceiling fan and swatting mosquitoes. You could have stayed inside and said no when he invited you out. The thought of the white plaster walls and the gecko for company made you say sure. He hugs you too hard. He says he’s never had the opportunity to fall in love with someone like you. He tells you to be safe, to mind the wolves. You say good night and you say thank you.
Those who are no longer college students might not be so well-versed in the drinking traditions of partiers these days. Do you recognize the term “pre-funking?” If not, then you likely don’t participate in the norm of getting drunk before heading out to a bar for the night. A Swiss study looked into the behavior of what others call “pre-gaming” and compared the behavior to the rate of dangerous incidents that took place among the pool of subjects. Those who “pre-funk” are more likely to drive drunk, have unsafe sex and more. The study followed a group of 250 college students and tracked when and how much the subjects drank and whether they were involved in any risky behaviors after their drinking activities. According to the study’s findings, those who drank before going out drank more once they got to their destination. They were also more likely to have unprotected sex, be involved in violence, drive drunk, get injured and use drugs. Pre-funking is reportedly popular among Swiss students, and a sociology professor in the U.S. has the theory that the high-risk drinking behavior might be even more popular here. The drinking age is higher in the States, potentially leading to more underage drinkers who would pre-drink before going out. Law enforcement in Tennessee is serious about preventing drunk driving. In fact, the governor recently announced that he will designate more money and efforts toward targeting supposed drunk drivers. College-age drivers can be easy targets for the police because it’s assumed that they’ve been partying and didn’t know better than to drink and drive. It’s important for students to understand their rights, since it isn’t uncommon for there to be a heavy law enforcement presence around school campuses. As DUI defense attorneys, we are experienced in fighting drunk driving charges. Therefore, we know what anyone should know before facing a DUI charge. 
There are things to know about your DUI case in Tennessee that you can learn on our site. Source: Los Angeles Times, “‘Pre-drinking’ or ‘pre-funking’ common among young alcohol users,” Monte Morin, Nov. 8, 2012
Greek submarine HS POSEIDON takes part in Exercise Noble Justification, which tests the collective defense & crisis response capabilities of NATO's Response Force.
New U.S. navy base in Deveselu, Romania, scheduled to be operational in 2015, will be part of NATO's ballistic missile defense (BMD) system.
U.S. & Bulgaria Committed to Preserving European Security Amidst Recent Russian Aggression
"NATO: A Bedrock for Transatlantic Peace and Security": op-ed by U.S. Ambassador to NATO Lute
New rotation of U.S. forces in support of Operation Atlantic Resolve arrives in Estonia
U.S. 1st Brigade Combat Team, 1st Cavalry Division equipment arrives for Atlantic Resolve, a multinational exercise taking place across Estonia, Latvia, Lithuania and Poland.
Statement by the President on the Signing of the Bilateral Security Agreement and NATO Status of Forces Agreement in Afghanistan
Planes, Trains & Ferries: 1st Brigade Combat Team, 1st Cavalry Division’s Equipment Arrives in Europe
TITLE: Order of elements in non-abelian groups QUESTION [3 upvotes]: In my algebra class I had to prove that for all abelian groups, if $x$, $y$ have finite order then $xy$ also has finite order. Fair enough. However, I was wondering: what happens to this property for non-abelian groups? REPLY [2 votes]: $\newcommand{\Z}{\mathbb{Z}}$Just as a variation on the previous answers, let $A = \Z / n \Z$, for $n \ge 0$ (where for $n = 0$ we understand $A = \Z$). Then the two matrices with coefficients in $A$ $$ x = \begin{bmatrix} -1 & 1\\ 0 & 1\\ \end{bmatrix}, \quad y = \begin{bmatrix} -1 & 0\\ 0 & 1\\ \end{bmatrix}, $$ have order $2$, but their product $$ x y = \begin{bmatrix} 1 & 1\\ 0 & 1\\ \end{bmatrix} $$ has order $n$ if $n > 0$, and infinite order if $n = 0$. So even when the product of two elements of order $2$ has finite order, that order can be anything.
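As a quick sanity check of this example, here is a short Python sketch (the modulus n = 7 is an arbitrary choice, and `order` is a purely illustrative brute-force search):

```python
def matmul(a, b, n):
    """Product of 2x2 matrices over Z/nZ."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % n
             for j in range(2)] for i in range(2)]

def order(m, n, limit=1000):
    """Smallest k >= 1 with m^k = I over Z/nZ (None if beyond limit)."""
    ident = [[1 % n, 0], [0, 1 % n]]
    p = ident
    for k in range(1, limit + 1):
        p = matmul(p, m, n)
        if p == ident:
            return k
    return None

n = 7
x = [[-1, 1], [0, 1]]
y = [[-1, 0], [0, 1]]

assert order(x, n) == 2                 # x has order 2
assert order(y, n) == 2                 # y has order 2
assert order(matmul(x, y, n), n) == n   # xy = [[1,1],[0,1]] has order n
```

Varying `n` confirms that the order of the product can indeed be made arbitrary while the factors keep order 2.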
Artist's Bio - Brenda Reinertson I also like assembling found objects, mostly “rusty bits,” and have included them in several of my paintings. These can be seen as an homage to the environment, but in truth, I just love the beauty of these objects. (I have also assembled them into “sculptures.”) I adore color and depth. In my color field paintings, if you look closely you will realize that the color that results is made up of many layers of colors – I stop when I feel the colors that have emerged fit my “mood.” I have done a series of mixed-media works on paper, primarily ink, which I felt were what one would see if they magnified a minute section of an object or plant. They are extremely detailed and colorful. I stretch my own canvases and prep them with many coats of gesso to get the surface I desire. I sand, re-gesso and sand again. I continue my paintings around the edges because I find white unfinished edges distracting. Framing of my oil paintings is an option, but I feel that frames can sometimes distract from my vision. I hope that others enjoy the results of my efforts and that my work evokes mystery, inspiration, contentment, curiosity and/or pleasure. Brenda Reinertson
\begin{document} \title{Canonical Forms for Unitary Congruence and *Congruence } \author{Roger A. Horn\thanks{Mathematics Department, University of Utah, Salt Lake City, Utah, USA 84103, \texttt{rhorn@math.utah.edu}}\quad and Vladimir V. Sergeichuk\thanks{Institute of Mathematics, Tereshchenkivska 3, Kiev, Ukraine, \texttt{sergeich@imath.kiev.ua. } Partially supported by FAPESP (S\~{a}o Paulo), processo 05/59407-6\texttt{. }}} \date{} \maketitle \begin{abstract} We use methods of the general theory of congruence and *congruence for complex matrices---regularization and cosquares---to determine a unitary congruence canonical form (respectively, a unitary *congruence canonical form) for complex matrices $A$ such that $\bar{A}A$ (respectively, $A^{2}$) is normal. As special cases of our canonical forms, we obtain---in a coherent and systematic way---known canonical forms for conjugate normal, congruence normal, coninvolutory, involutory, projection, $\lambda$-projection, and unitary matrices. But we also obtain canonical forms for matrices whose squares are Hermitian or normal, and other cases that do not seem to have been investigated previously. We show that the classification problems under (a) unitary *congruence when $A^{3}$ is normal, and (b) unitary congruence when $A\bar{A}A$ is normal, are both unitarily wild, so there is no reasonable hope that a simple solution to them can be found. \end{abstract} \section{Introduction} We use methods of the general theory of congruence and *congruence for complex matrices---regularization and cosquares---to determine a unitary congruence canonical form (respectively, a unitary *congruence canonical form) for complex matrices $A$ such that $\bar{A}A$ (respectively, $A^{2}$) is normal. We prove a regularization algorithm that reduces any singular matrix by unitary congruence or unitary *congruence to a special block form. 
For matrices of the two special types under consideration, this special block form is a direct sum of a nonsingular matrix and a singular matrix; the singular summand is a direct sum of a zero matrix and some canonical singular 2-by-2 blocks. Analysis of the cosquare and *cosquare of the nonsingular direct summand reveals 1-by-1 and 2-by-2 nonsingular canonical blocks. As special cases of our canonical forms, we obtain---in a coherent and systematic way---known canonical forms for conjugate normal, congruence normal, coninvolutory, involutory, projection, and unitary matrices. But we also obtain canonical forms for matrices whose squares are Hermitian or normal, $\lambda$-projections, and other cases that do not seem to have been investigated previously. Moreover, the meaning of the parameters in the various canonical forms is revealed, along with an understanding of when two matrices in a given type are in the same equivalence class. Finally, we show that the classification problems under (a) unitary *congruence when $A^{3}$ is normal, and (b) unitary congruence when $A\bar {A}A$ is normal, are both unitarily wild, so there is no reasonable hope that a simple solution to them can be found. \section{Notation and definitions} All the matrices that we consider are complex. We denote the set of $n$-by-$n $ complex matrices by $M_{n}$. The \textit{transpose} of $A=[a_{ij}]\in M_{n} $ is $A^{T}=[a_{ji}]$ and the \textit{conjugate transpose} is $A^{\ast} =\bar{A}^{T}=[\bar{a}_{ji}]$; the \emph{trace} of $A$ is $\operatorname{tr} A=a_{11}+\cdots+a_{nn}$. We say that $A\in M_{n}$ is: \emph{unitary} if $A^{\ast}A=I$; \emph{coninvolutory} if $\bar{A}A=I$; a $\lambda$-\emph{projection} if $A^{2}=\lambda A$ for some $\lambda\in\mathbb{C}$ (\emph{involutory} if $\lambda=1$); \emph{normal} if $A^{\ast}A=AA^{\ast}$; \emph{conjugate normal} if $A^{\ast}A=\overline{AA^{\ast}}$; \emph{squared normal} if $A^{2}$ is normal; and \emph{congruence normal} if $\bar{A}A$ is normal. 
For example, a unitary matrix is both normal and conjugate normal; a Hermitian matrix is normal but need not be conjugate normal; a symmetric matrix is conjugate normal but need not be normal. If $A$ is nonsingular, it is convenient to write $A^{-T}=(A^{-1})^{T}$ and $A^{-\ast}=(A^{-1})^{\ast}$; the \emph{cosquare} of $A$ is $A^{-T}A$ and the \emph{*cosquare} is $A^{-\ast}A$. We consider the \emph{congruence} equivalence relation ($A=SBS^{T}$ for some nonsingular $S$) and the finer equivalence relation \emph{unitary congruence} ($A=UBU^{T}$ for some unitary $U$). We also consider the \emph{*congruence} equivalence relation ($A=SBS^{\ast}$ for some nonsingular $S$) and the finer equivalence relation \emph{unitary *congruence} ($A=UBU^{\ast}$ for some unitary $U$). Two pairs of square matrices of the same size $(A,B)$ and $(C,D)$ are said to be \emph{congruent}, and we write $(A,B)=S(C,D)S^{T}$, if there is a nonsingular $S$ such that $A=SBS^{T}$ and $C=SDS^{T}$; \emph{unitary congruence}, \emph{*}$\emph{congruence}$, and \emph{unitary *}$\emph{congruence}$ of two pairs of matrices are defined analogously. Our consistent point of view is that unitary *congruence is a special kind of *congruence (rather than a special kind of similarity) that is to be analyzed with methods from the general theory of *congruence. In a parallel development, we treat unitary congruence as a special kind of congruence, rather than as a special kind of consimilarity. \cite[Section 4.6]{HJ1} The \emph{null space} of a matrix $A$ is denoted by $N(A)=\{x\in\mathbb{C} ^{n}:Ax=0\}$; $\dim N(A)$, the dimension of $N(A)$, is the \emph{nullity} of $A$. 
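The three example claims above (unitary, Hermitian, symmetric) can be verified numerically. A minimal sketch, assuming NumPy is available; the particular matrices are our own arbitrary choices, not taken from the paper:

```python
import numpy as np

def is_normal(a):
    """A*A = AA*."""
    return np.allclose(a.conj().T @ a, a @ a.conj().T)

def is_conjugate_normal(a):
    """A*A = conj(AA*)."""
    return np.allclose(a.conj().T @ a, (a @ a.conj().T).conj())

u = np.array([[0, -1], [1, 0]], dtype=complex)   # unitary (a rotation)
h = np.array([[1, 1j], [-1j, 0]])                # Hermitian
s = np.array([[0, 1j], [1j, 1]])                 # complex symmetric

assert is_normal(u) and is_conjugate_normal(u)       # unitary: both
assert is_normal(h) and not is_conjugate_normal(h)   # Hermitian: normal only
assert is_conjugate_normal(s) and not is_normal(s)   # symmetric: conjugate normal only
```

Note that the Hermitian witness must have a square with non-real entries, since a Hermitian matrix whose square is real is also conjugate normal.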
The quantities $\dim N(A)$, $\dim N(A^{T})$, $\dim\left( N(A)\cap N(A^{T})\right) $, $\dim N(A^{\ast})$, and $\dim\left( N(A)\cap N(A^{\ast })\right) $ play an important role because of their invariance properties: $\dim N(A)$, $\dim N(A^{T})$, and $\dim\left( N(A)\cap N(A^{T})\right) $ are invariant under congruence; $\dim N(A)$, $\dim N(A^{\ast})$, and $\dim\left( N(A)\cap N(A^{\ast})\right) $ are invariant under *congruence. Suppose $A,U\in M_{n}$ and $U$ is unitary. A computation reveals that if $A$ is conjugate normal (respectively, congruence normal) then $UAU^{T}$ is conjugate normal (respectively, congruence normal); if $A$ is normal (respectively, squared normal) then $UAU^{\ast}$ is normal (respectively, squared normal). Moreover, if $A\in M_{n}$ and $B\in M_{m}$, one verifies that $A\oplus B$ is, respectively, conjugate normal, congruence normal, normal, or squared normal if and only if each of $A$ and $B$ has the respective property. Matrices $A,B$ of the same size (not necessarily square) are \emph{unitarily equivalent} if there are unitary matrices $V,W$ such that $A=VBW$. Two matrices are unitarily equivalent if and only if they have the same singular values, that is, the \emph{singular value decomposition} is a canonical form for unitary equivalence. Each $A\in M_{n}$ has a \emph{left} (respectively, \emph{right}) \emph{polar decomposition} $A=PW$ (respectively, $A$ $=WQ$) in which the Hermitian positive semidefinite factors $P=(AA^{\ast})^{1/2}$ and $Q=(A^{\ast}A)^{1/2}$ are uniquely determined, $W$ is unitary, and $W=AQ^{-1}=P^{-1}A$ is uniquely determined if $A$ is nonsingular. A matrix of the form \[ J_{k}(\lambda)=\left[ \begin{array} [c]{cccc} \lambda & 1 & & 0\\ & \ddots & \ddots & \\ & & \ddots & 1\\ & & & \lambda \end{array} \right] \in M_{k} \] is a \emph{Jordan block with eigenvalue} $\lambda$. The $n$-by-$n$ identity and zero matrices are denoted by $I_{n}$ and $0_{n}$, respectively. 
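The polar decompositions just described are easy to illustrate numerically. A minimal sketch, assuming NumPy; the matrix is a generic random choice, hence nonsingular in practice, so the unitary factor $W$ is uniquely determined:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def psd_sqrt(m):
    """Hermitian positive semidefinite square root via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

p = psd_sqrt(a @ a.conj().T)   # P = (AA*)^{1/2}
q = psd_sqrt(a.conj().T @ a)   # Q = (A*A)^{1/2}
w = np.linalg.inv(p) @ a       # W = P^{-1}A

assert np.allclose(w.conj().T @ w, np.eye(3))  # W is unitary
assert np.allclose(p @ w, a)                   # left polar decomposition A = PW
assert np.allclose(w @ q, a)                   # right polar A = WQ, with the same W
```

The last assertion reflects the uniqueness statement: for nonsingular $A$, the same $W = AQ^{-1} = P^{-1}A$ serves in both decompositions.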
The \emph{Frobenius norm} of a matrix $A$ is $\left\Vert A\right\Vert _{F}=\sqrt{\operatorname{tr}\left( A^{\ast}A\right) }$: the square root of the sum of the squares of the absolute values of the entries of $A$. The \textit{spectral norm} of $A$ is its largest singular value. In matters of notation and terminology, we follow the conventions in \cite{HJ1}. \section{Cosquares, *cosquares, and canonical forms for congruence and *congruence} The Jordan Canonical Form of a cosquare or a *cosquare has a very special structure. \begin{theorem} [{\cite{HScongruence}, \cite[Theorem 2.3.1]{Wall}}] \label{CosquareCharacterize}Let $\mathfrak{A}\in M_{n}$ be nonsingular. \medskip\medskip \newline (a) $\mathfrak{A}$ is a cosquare if and only if its Jordan Canonical Form is \begin{equation} {\displaystyle\bigoplus\limits_{k=1}^{\rho}} \left( J_{r_{k}}\left( \left( -1\right) ^{r_{k}+1}\right) \right) \oplus {\displaystyle\bigoplus\limits_{j=1}^{\sigma}} \left( J_{s_{j}}\left( \gamma_{j}\right) \oplus J_{s_{j}}\left( \gamma _{j}^{-1}\right) \right) \text{,\quad}\gamma_{j}\in\mathbb{C}\text{, } 0\neq\gamma_{j}\neq\left( -1\right) ^{s_{j}+1}\text{.}\label{JCFcosquare} \end{equation} $\mathfrak{A}$ is a cosquare that is diagonalizable by similarity if and only if its Jordan Canonical Form is \begin{equation} I\oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} \mu_{j}I_{n_{j}} & 0\\ 0 & \mu_{j}^{-1}I_{n_{j}} \end{array} \right] \text{,\quad}\mu_{j}\in\mathbb{C}\text{, }0\neq\mu_{j}\neq 1\text{,}\label{JCFdiagonalizableCosquare} \end{equation} in which $\mu_{1},\mu_{1}^{-1},\ldots,\mu_{q},\mu_{q}^{-1}$ are the distinct eigenvalues of $\mathfrak{A}$ such that each $\mu_{j}\neq1$; $n_{1} ,n_{1},\ldots,n_{q},n_{q}$ are their respective multiplicities; the parameters $\mu_{j}$ in (\ref{JCFdiagonalizableCosquare}) are determined by $\mathfrak{A}$ up to replacement by $\mu_{j}^{-1}$. 
\medskip \newline (b) $\mathfrak{A}$ is a *cosquare if and only if its Jordan Canonical Form is \begin{equation} \bigoplus_{k=1}^{\rho}J_{r_{k}}(\beta_{k})\oplus\bigoplus_{j=1}^{\sigma }\left( J_{s_{j}}(\gamma_{j})\oplus J_{s_{j}}(\bar{\gamma}_{j}^{-1})\right) \text{,}\quad\beta_{k},\gamma_{j}\in\mathbb{C}\text{, \ }|\beta_{k}|=1\text{, }0<\left\vert \gamma_{j}\right\vert <1\text{.}\label{JCF*cosquare} \end{equation} $\mathfrak{A}$ is a *cosquare that is diagonalizable by similarity if and only if its Jordan Canonical Form is \begin{equation} {\displaystyle\bigoplus\limits_{k=1}^{p}} \lambda_{k}I_{m_{k}}\oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} \mu_{j}I_{n_{j}} & 0\\ 0 & \bar{\mu}_{j}^{-1}I_{n_{j}} \end{array} \right] \text{,\quad}\lambda_{k},\mu_{j}\in\mathbb{C}\text{, }\left\vert \lambda_{k}\right\vert =1\text{, }0<\left\vert \mu_{j}\right\vert <1\text{,}\label{JCFdiagonalizable*Cosquare} \end{equation} in which $\mu_{1},\bar{\mu}_{1}^{-1},\ldots,\mu_{q},\bar{\mu}_{q}^{-1}$ are the distinct eigenvalues of $\mathfrak{A}$ such that each $\left\vert \mu _{j}\right\vert \in(0,1)$; $n_{1},n_{1},\ldots,n_{q},n_{q}$ are their respective multiplicities. The distinct unimodular eigenvalues of $\mathfrak{A}$ are $\lambda_{1},\ldots,\lambda_{p}$ and their respective multiplicities are $m_{1},\ldots,m_{p}$. 
\end{theorem} The following theorem involves three types of blocks \begin{equation} \Gamma_{k}= \begin{bmatrix} 0 & & & & (-1)^{k+1}\\ & & & \text{ \begin{picture}(12,8) \put(-2,-4){$\cdot$} \put(3,0){$\cdot$} \put (8,4){$\cdot$} \end{picture} } & (-1)^{k}\\ & & 1 & \text{ \begin{picture}(12,8) \put(-2,-4){$\cdot$} \put(3,0){$\cdot$} \put (8,4){$\cdot$} \end{picture} } & \\ & -1 & -1 & & \\ 1 & 1 & & & 0 \end{bmatrix} \in M_{k},\quad\text{(}\Gamma_{1}=[1]\text{),}\label{Gamman} \end{equation} \begin{equation} \Delta_{k}= \begin{bmatrix} 0 & & & 1\\ & & \text{ \begin{picture}(12,8) \put(-2,-4){$\cdot$} \put(3,0){$\cdot$} \put (8,4){$\cdot$} \end{picture} } & i\\ & 1 & \text{ \begin{picture}(12,8) \put(-2,-4){$\cdot$} \put(3,0){$\cdot$} \put (8,4){$\cdot$} \end{picture} } & \\ 1 & i & & 0 \end{bmatrix} \in M_{k},\quad\text{(}\Delta_{1}=[1]\text{),}\label{Deltan} \end{equation} and \begin{equation} H_{2k}(\mu)= \begin{bmatrix} 0 & I_{k}\\ J_{k}(\mu) & 0 \end{bmatrix} \in M_{2k},\quad\text{(}H_{2}(\mu)=\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{).}\label{Hn} \end{equation} \begin{theorem} [\cite{HScongruence}]\label{CongruenceCanonicalForms}Let $A\in M_{n}$ be nonsingular. \newline (a) $A$ is congruent to a direct sum, uniquely determined up to permutation of summands, of the form \begin{equation} {\displaystyle\bigoplus\limits_{k=1}^{\rho}} \Gamma_{r_{k}}\oplus {\displaystyle\bigoplus\limits_{j=1}^{\sigma}} H_{2s_{j}}\left( \gamma_{j}\right) ,\quad\gamma_{j}\in\mathbb{C}\text{, }0\neq\gamma_{j}\neq(-1)^{s_{j}+1}\text{,}\label{ccf} \end{equation} in which each $\gamma_{j}$ is determined up to replacement by $\gamma_{j} ^{-1}$. If (\ref{JCFcosquare}) is the Jordan Canonical Form of $A^{-T}A$, then the direct summands in (\ref{ccf}) can be arranged so that the parameters $\rho$, $\sigma$, $r_{k}$, $s_{j}$, and $\gamma_{j}$ in (\ref{ccf}) are identical to the same parameters in (\ref{JCFcosquare}). 
Two nonsingular matrices are congruent if and only if their cosquares are similar. \newline (b) $A$ is *congruent to a direct sum, uniquely determined up to permutation of summands, of the form \begin{equation} {\displaystyle\bigoplus\limits_{k=1}^{\rho}} \alpha_{k}\Delta_{n_{k}}\oplus {\displaystyle\bigoplus\limits_{j=1}^{\sigma}} H_{2m_{j}}\left( \gamma_{j}\right) \text{,\quad}\alpha_{k},\gamma_{j} \in\mathbb{C}\text{, }|\alpha_{k}|=1\text{, }0<|\gamma_{j}|<1\text{.}\label{*ccf} \end{equation} If (\ref{JCF*cosquare}) is the Jordan Canonical Form of $A^{-\ast}A$, then the direct summands in (\ref{*ccf}) can be arranged so that the parameters $n_{k}$, $m_{j}$, and $\gamma_{j}$ in (\ref{*ccf}) are identical to the parameters $r_{k}$, $s_{j}$, and $\gamma_{j}$ in (\ref{JCF*cosquare}), and the parameters $\alpha_{k}$ in (\ref{*ccf}) and $\beta_{k}$ in (\ref{JCF*cosquare}) satisfy $\alpha_{k}^{2}=\beta_{k}$ for each $k=1,\ldots,\rho$. \end{theorem} Among many applications of the canonical form (\ref{*ccf}), it follows that any complex square matrix is *congruent to its transpose, and the *congruence can be achieved via a coninvolutory matrix. This conclusion is actually valid for any square matrix over any field of characteristic not two with an involution (possibly the identity involution). \cite{HStranspose} If $A$ is nonsingular and $U$ is unitary, then \[ \left( UAU^{T}\right) ^{-T}\left( UAU^{T}\right) =\bar{U}\left( A^{-T}A\right) \bar{U}^{\ast} \] and \[ \left( UAU^{\ast}\right) ^{-\ast}\left( UAU^{\ast}\right) =U\left( A^{-\ast}A\right) U^{\ast}\text{,} \] so a unitary congruence (respectively, a unitary *congruence) of a nonsingular matrix corresponds to a unitary similarity of its cosquare (respectively, *cosquare), both via the same unitary matrix. 
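Both displayed identities can be confirmed numerically. A minimal sketch, assuming NumPy; here $A$ is a generic random matrix and $U$ is obtained from a QR factorization, so it is unitary:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
a = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
u, _ = np.linalg.qr(rng.standard_normal((dim, dim))
                    + 1j * rng.standard_normal((dim, dim)))

cosq = lambda m: np.linalg.inv(m.T) @ m            # cosquare  A^{-T} A
scosq = lambda m: np.linalg.inv(m.conj().T) @ m    # *cosquare A^{-*} A

# (U A U^T)^{-T} (U A U^T) = conj(U) (A^{-T} A) conj(U)*, and conj(U)* = U^T
assert np.allclose(cosq(u @ a @ u.T), u.conj() @ cosq(a) @ u.T)

# (U A U*)^{-*} (U A U*) = U (A^{-*} A) U*
assert np.allclose(scosq(u @ a @ u.conj().T), u @ scosq(a) @ u.conj().T)
```

In both checks the similarity is implemented by the same unitary matrix derived from $U$, matching the statement in the text.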
If the cosquare or *cosquare of $A\in M_{n}$ is diagonalizable by unitary similarity, what can be said about a canonical form for $A$ under unitary congruence or unitary *congruence?{} \section{Normal matrices, intertwining, and zero blocks} Intertwining identities involving normal matrices lead to characterizations and canonical forms for unitary congruences. \begin{lemma} \label{Fuglede}Let $A,L,P\in M_{n}$ and assume that $L$ and $P$ are normal. Then \medskip \newline (a) $AL=PA$ if and only if $AL^{\ast}=P^{\ast}A$. \medskip \newline (b) If $L$ and $P$ are nonsingular, then $AL=PA$ if and only if $AL^{-\ast }=P^{-\ast}A$. \end{lemma} \begin{proof} Let $L=U\Lambda U^{\ast}$ and $P=V\Pi V^{\ast}$ for some unitary $U,V$ and diagonal $\Lambda,\Pi$. The intertwining condition $AL=PA$ implies that $Ag(L)=g(P)A$ for any polynomial $g(t)$. \medskip \newline (a) Let $g(t)$ be any polynomial such that $g(\Lambda)=\bar{\Lambda}$ and $g(\Pi)=\bar{\Pi}$, that is, $g(t)$ interpolates the function $z\rightarrow \bar{z}$ on the spectra of $L$ and $P$. Then \[ AL^{\ast}=Ag(L)=g(P)A=P^{\ast}A\text{.} \] (b) Use the same argument, but let $g(t)$ interpolate the function $z\rightarrow\bar{z}^{-1}$ on the spectra of $L$ and $P$.\hfill \end{proof} \medskip The following lemma reveals fundamental patterns in the zero blocks of a partitioned matrix that is normal, conjugate normal, squared normal, or congruence normal. \begin{lemma} \label{Zero Blocks}Let $A\in M_{n}$ be given. \medskip \newline (a) Suppose \[ A=\left[ \begin{array} [c]{cc} A_{11} & A_{12}\\ 0 & A_{22} \end{array} \right] \text{,} \] in which $A_{11}$ and $A_{22}$ are square. If $A$ is normal or conjugate normal, then \[ A=\left[ \begin{array} [c]{cc} A_{11} & 0\\ 0 & A_{22} \end{array} \right] \text{.} \] If $A$ is normal, then $A_{11}$ and $A_{22}$ are normal; if $A$ is conjugate normal, then $A_{11}$ and $A_{22}$ are conjugate normal. 
\medskip \newline (b) Suppose \begin{equation} A=\left[ \begin{array} [c]{ccc} A_{11} & A_{12} & 0\\ A_{21} & A_{22} & A_{23}\\ 0 & 0 & 0_{k} \end{array} \right] \text{,}\label{generalReduction} \end{equation} in which $A_{11}$ and $A_{22}$ are square, and both $[A_{11}~A_{12}]$ and $A_{23}$ have full row rank. If $A$ is squared normal or congruence normal, then \[ A=\left[ \begin{array} [c]{ccc} A_{11} & 0 & 0\\ 0 & 0 & A_{23}\\ 0 & 0 & 0_{k} \end{array} \right] \] and $A_{11}$ is nonsingular. If $A$ is squared normal, then $A_{11}$ is squared normal; if $A$ is congruence normal, then $A_{11}$ is congruence normal. \end{lemma} \begin{proof} (a) If $A$ is normal, then \[ A^{\ast}A=\left[ \begin{array} [c]{cc} A_{11}^{\ast}A_{11} & \bigstar\\ \bigstar & \bigstar \end{array} \right] =\left[ \begin{array} [c]{cc} A_{11}A_{11}^{\ast}+A_{12}A_{12}^{\ast} & \bigstar\\ \bigstar & \bigstar \end{array} \right] =AA^{\ast}\text{.} \] We have $A_{11}^{\ast}A_{11}=A_{11}A_{11}^{\ast}+A_{12}A_{12}^{\ast}$, so $\operatorname{tr}$ $\left( A_{11}^{\ast}A_{11}\right) =\operatorname{tr} \left( A_{11}A_{11}^{\ast}\right) =\operatorname{tr}\left( A_{11} A_{11}^{\ast}\right) +\operatorname{tr}\left( A_{12}A_{12}^{\ast}\right) $. Then $\operatorname{tr}\left( A_{12}A_{12}^{\ast}\right) =\left\Vert A_{12}^{\ast}\right\Vert _{F}^{2}=0$, so $A_{12}=0$ and $A_{11}^{\ast} A_{11}=A_{11}A_{11}^{\ast}$. 
If $A$ is conjugate normal, then \[ \overline{A^{\ast}A}=\left[ \begin{array} [c]{cc} \overline{A_{11}^{\ast}A_{11}} & \bigstar\\ \bigstar & \bigstar \end{array} \right] =\left[ \begin{array} [c]{cc} A_{11}A_{11}^{\ast}+A_{12}A_{12}^{\ast} & \bigstar\\ \bigstar & \bigstar \end{array} \right] =AA^{\ast}\text{.} \] We have $\operatorname{tr}\left( \overline{A_{11}^{\ast}A_{11}}\right) =\operatorname{tr}\left( A_{11}A_{11}^{\ast}\right) =\operatorname{tr} \left( A_{11}A_{11}^{\ast}\right) +\operatorname{tr}\left( A_{12} A_{12}^{\ast}\right) $, so $\operatorname{tr}\left( A_{12}A_{12}^{\ast }\right) =\left\Vert A_{12}^{\ast}\right\Vert _{F}^{2}=0$. Then $A_{12}=0$ and $\overline{A_{11}^{\ast}A_{11}}=A_{11}A_{11}^{\ast}$. \medskip \newline (b) Compute \begin{equation} A^{2}=\left[ \begin{array} [c]{ccc} \bigstar & \bigstar & A_{12}A_{23}\\ \bigstar & \bigstar & A_{22}A_{23}\\ 0 & 0 & 0_{k} \end{array} \right] \text{ and }\bar{A}A=\left[ \begin{array} [c]{ccc} \bigstar & \bigstar & \overline{A_{12}}A_{23}\\ \bigstar & \bigstar & \overline{A_{22}}A_{23}\\ 0 & 0 & 0_{k} \end{array} \right] \text{.}\label{squares} \end{equation} If $A$ is squared normal (or congruence normal), then (a) ensures that both $A_{12}A_{23}$ and $A_{22}A_{23}$ (or both $\overline{A_{12}}A_{23}$ and $\overline{A_{22}}A_{23}$) are zero blocks; since $A_{23}$ has full row rank, it follows that both $A_{12}$ and $A_{22}$ are zero blocks and hence \[ A=\left[ \begin{array} [c]{ccc} A_{11} & 0 & 0\\ A_{21} & 0 & A_{23}\\ 0 & 0 & 0_{k} \end{array} \right] \text{,} \] in which $A_{11}$ is nonsingular. 
Now compute \[ A^{2}=\left[ \begin{array} [c]{cc} A_{11}^{2} & 0\\ A_{21}A_{11} & 0 \end{array} \right] \oplus0_{k}\text{ and }\bar{A}A=\left[ \begin{array} [c]{cc} \overline{A_{11}}A_{11} & 0\\ \overline{A_{21}}A_{11} & 0 \end{array} \right] \oplus0_{k}\text{.} \] If $A$ is squared normal (or if $A$ is congruence normal), then (a) ensures that $A_{21}A_{11}=0$ (or that $\overline{A_{21}}A_{11}=0$); since $A_{11}$ is nonsingular, it follows that $A_{21}=0$ and $A_{11}$ is squared normal (or congruence normal). \hfill \end{proof} \medskip A matrix $A\in M_{n}$ is said to be \emph{range Hermitian} if $A$ and $A^{\ast}$ have the same range. If $\operatorname{rank}A=r$ and there is a unitary $U$ and a nonsingular $C\in M_{r}$ such that $U^{\ast}AU=C\oplus 0_{n-r}$, then $A$ is range Hermitian; the converse assertion follows from Theorem \ref{UnitaryRegularization}(b). For example, every normal matrix is range Hermitian. The following lemma shows that, for a range Hermitian matrix and a normal matrix, commutativity follows from a generally weaker condition. \begin{lemma} \label{Commutivity Implication}Let $A,B\in M_{n}$. Suppose $A$ is range Hermitian and $B$ is normal. Then $ABA=A^{2}B$ if and only if $AB=BA$. \end{lemma} \begin{proof} If $AB=BA$, then $A(BA)=A(AB)=A^{2}B$. Conversely, suppose $ABA=A^{2}B$. Let $A=U(C\oplus0_{n-r})U^{\ast}$, in which $U\in M_{n}$ is unitary and $C\in M_{r}$ is nonsingular. Partition the normal matrix $U^{\ast}BU=[B_{ij} ]_{i,j=1}^{2}$ conformally to $C\oplus0_{n-r}$. 
Then \begin{align*} U^{\ast}(ABA)U & =(U^{\ast}AU)(U^{\ast}BU)(U^{\ast}AU)\\ & =\left[ \begin{array} [c]{cc} C & 0\\ 0 & 0 \end{array} \right] \left[ \begin{array} [c]{cc} B_{11} & B_{12}\\ B_{21} & B_{22} \end{array} \right] \left[ \begin{array} [c]{cc} C & 0\\ 0 & 0 \end{array} \right] =\left[ \begin{array} [c]{cc} CB_{11}C & 0\\ 0 & 0 \end{array} \right] \end{align*} and \begin{align*} U^{\ast}(A^{2}B)U & =(U^{\ast}AU)^{2}(U^{\ast}BU)\\ & =\left[ \begin{array} [c]{cc} C^{2} & 0\\ 0 & 0 \end{array} \right] \left[ \begin{array} [c]{cc} B_{11} & B_{12}\\ B_{21} & B_{22} \end{array} \right] =\left[ \begin{array} [c]{cc} C^{2}B_{11} & C^{2}B_{12}\\ 0 & 0 \end{array} \right] \text{,} \end{align*} so $C^{2}B_{12}=0$, which implies that $B_{12}=0$. Lemma \ref{Zero Blocks}(a) ensures that $B_{21}=0$ as well, so $U^{\ast}BU=B_{11}\oplus B_{22}$. Moreover, $CB_{11}C=C^{2}B_{11}$, so $B_{11}C=CB_{11}$. We conclude that $U^{\ast}AU$ commutes with $U^{\ast}BU$, and hence $A$ commutes with $B$.\hfill \end{proof} \section{Normal cosquares and *cosquares} A nonsingular matrix $A$ whose cosquare is normal (respectively, whose *cosquare is normal) has a simple canonical form under unitary congruence (respectively, under unitary *congruence). Moreover, normality of the cosquare or *cosquare of $A$ is equivalent to simple properties of $A$ itself that are the key---via regularization---to obtaining canonical forms under unitary congruence or unitary *congruence even when $A$ is singular. \subsection{Normal cosquares} If $A\in M_{n}$ is nonsingular and its cosquare $\mathfrak{A}$ is normal, then $\mathfrak{A}$ is unitarily diagonalizable and we may assume that its Jordan Canonical Form has the form (\ref{JCFdiagonalizableCosquare}). For our analysis it is convenient to separate the eigenvalue pairs $\{-1,-1\}$ of $\mathfrak{A}$ from the reciprocal pairs of its other eigenvalues in (\ref{JCFdiagonalizableCosquare}). 
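Before developing the theory, here is a quick numerical illustration (a sketch in numpy with arbitrarily chosen parameters $\sigma=2$, $\tau=3/2$, $\mu=0.3+0.4\mathrm{i}$; it is not part of the formal development): a direct sum of one block of each of the two types that appear later in (\ref{cnc0}) has a diagonal, hence normal, cosquare whose nonunit eigenvalues occur in the reciprocal pair $\{\mu,\mu^{-1}\}$, and $\bar{A}A$ is normal as well.

```python
import numpy as np

# Parameters are arbitrary illustrative choices.
mu, tau, sigma = 0.3 + 0.4j, 1.5, 2.0

# Direct sum of one block of each type: [sigma] and tau*[[0, 1], [mu, 0]].
A = np.zeros((3, 3), dtype=complex)
A[0, 0] = sigma
A[1:, 1:] = tau * np.array([[0, 1], [mu, 0]])

# The cosquare A^{-T} A, computed via a solve rather than an explicit inverse.
cosq = np.linalg.solve(A.T, A)

# The cosquare is diagonal (hence normal), with eigenvalues 1, mu, 1/mu.
assert np.allclose(cosq, np.diag([1.0, mu, 1 / mu]))
assert np.allclose(cosq @ cosq.conj().T, cosq.conj().T @ cosq)

# A is congruence normal: conj(A) @ A is normal (here it is even diagonal).
C = A.conj() @ A
assert np.allclose(C @ C.conj().T, C.conj().T @ C)
```

The same check with $\mu$ replaced by $\mu^{-1}$ produces a cosquare with the same spectrum, consistent with the uniqueness statement "up to replacement of any $\mu$ by $\mu^{-1}$" below.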
Any unitary similarity that puts $\mathfrak{A}$ in the diagonal form (\ref{JCFdiagonalizableCosquare}) induces a unitary congruence of $A$ that puts it into a special block diagonal form. \begin{theorem} \label{NormalCosquareDiagonalBlocks}Let $A\in M_{n}$ be nonsingular and suppose that its cosquare $\mathfrak{A}=A^{-T}A$ is normal. Let $\mu_{1} ,\mu_{1}^{-1},\ldots,\mu_{q},\mu_{q}^{-1}$ be the distinct eigenvalues of $\mathfrak{A}$ with $-1\neq\mu_{j}\neq1$ for each $j=1,\ldots,q$, and let $n_{1},n_{1},\ldots,n_{q},n_{q}$ be their respective multiplicities. Let $n_{+}$ and $2n_{-}$ be the multiplicities of $+1$ and $-1$, respectively, as eigenvalues of $\mathfrak{A}$. Let \begin{equation} \Lambda=I_{n_{+}}\oplus\left( -I_{2n_{-}}\right) \oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} \mu_{j}I_{n_{j}} & 0\\ 0 & \mu_{j}^{-1}I_{n_{j}} \end{array} \right] \text{,\quad}\mu_{j}\neq0\text{, }-1\neq\mu_{j}\neq1\text{,} \label{NormalCosquareLambda} \end{equation} let $U\in M_{n}$ be any unitary matrix such that $\mathfrak{A}=U\Lambda U^{\ast}$, and let $\mathcal{A}=U^{T}AU$. Then \begin{equation} \mathcal{A}=\mathcal{A}_{+}\oplus\mathcal{A}_{-}\oplus\mathcal{A}_{1} \oplus\cdots\oplus\mathcal{A}_{q}\text{,}\label{NormalCosquareBlocks} \end{equation} in which $\mathcal{A}_{+}\in M_{n_{+}}$ is symmetric, $\mathcal{A}_{-}\in M_{2n_{-}}$ is skew symmetric, and each $\mathcal{A}_{j}\in M_{2n_{j}}$ has the form \begin{equation} \mathcal{A}_{j}=\left[ \begin{array} [c]{cc} 0_{n_{j}} & Y_{j}\\ \mu_{j}Y_{j}^{T} & 0_{n_{j}} \end{array} \right] \text{,\quad}Y_{j}\in M_{n_{j}}\text{ is nonsingular.} \label{NormalCosquareThirdTypeBlock} \end{equation} The unitary congruence class of each of the $q+2$ blocks in (\ref{NormalCosquareBlocks}) is uniquely determined. 
\end{theorem} \begin{proof} The presentation (\ref{NormalCosquareLambda}) of the Jordan Canonical Form of $\mathfrak{A}$ differs from that in (\ref{JCFdiagonalizableCosquare}) only in the separate identification of the eigenvalue pairs $\{-1,-1\}$. We have $A=A^{T}\mathfrak{A}=A^{T}U\Lambda U^{\ast}$, which implies that \[ \mathcal{A}=U^{T}AU=U^{T}A^{T}U\Lambda=\mathcal{A}^{T}\Lambda \] and hence \[ \mathcal{A}=\mathcal{A}^{T}\Lambda=\left( \mathcal{A}^{T}\Lambda\right) ^{T}\Lambda=\Lambda\mathcal{A}\Lambda\text{,} \] that is, \begin{equation} \Lambda^{-1}\mathcal{A}=\mathcal{A}\Lambda\text{.}\label{CosquareCommute} \end{equation} Partition $\mathcal{A}=\left[ \mathcal{A}_{ij}\right] _{i,j=1}^{q+2}$ conformally to $\Lambda$. The $q+2$ diagonal blocks of $\Lambda$ have mutually distinct spectra; the spectra of corresponding diagonal blocks of $\Lambda$ and $\Lambda^{-1}$ are the same. The identity (\ref{CosquareCommute}) and Sylvester's Theorem on Linear Matrix Equations \cite[Section 2.4, Problems 9 and 13]{HJ1} ensure that $\mathcal{A}$ is block diagonal and conformal to $\Lambda$, that is, \[ \mathcal{A}=\mathcal{A}_{11}\oplus\cdots\oplus\mathcal{A}_{q+2,q+2} \] is block diagonal. 
Moreover, the identity $\mathcal{A}=\mathcal{A}^{T}\Lambda$ ensures that (a) $\mathcal{A}_{11}=\mathcal{A}_{11}^{T}$, so $\mathcal{A} _{+}:=\mathcal{A}_{11}$ is symmetric; (b) $\mathcal{A}_{22}=-\mathcal{A} _{22}^{T}$, so $\mathcal{A}_{-}:=\mathcal{A}_{22}$ is skew symmetric; and (c) for each $j=3,\ldots,q+2$ the nonsingular block $\mathcal{A}_{jj}$ has the form \[ \left[ \begin{array} [c]{cc} X & Y\\ Z & W \end{array} \right] \text{,\quad}X,Y,Z,W\in M_{n_{j}} \] and satisfies an identity of the form \[ \left[ \begin{array} [c]{cc} X & Y\\ Z & W \end{array} \right] =\left[ \begin{array} [c]{cc} X & Y\\ Z & W \end{array} \right] ^{T}\left[ \begin{array} [c]{cc} \mu I & 0\\ 0 & \mu^{-1}I \end{array} \right] =\left[ \begin{array} [c]{cc} \mu X^{T} & \mu^{-1}Z^{T}\\ \mu Y^{T} & \mu^{-1}W^{T} \end{array} \right] \] in which $\mu^{2}\neq1$. But $X=\mu X^{T}=\mu^{2}X$ and $W=\mu^{-1}W^{T} =\mu^{-2}W$, so $X=W=0$. Moreover, $Z=\mu Y^{T}$, so $\mathcal{A}_{jj}$ has the form (\ref{NormalCosquareThirdTypeBlock}). What can we say if $\mathfrak{A}$ can be put into the form (\ref{NormalCosquareLambda}) via unitary similarity with a different unitary matrix $V$? If $\mathfrak{A}=U\Lambda U^{\ast}=V\Lambda V^{\ast}$ and both $U$ and $V$ are unitary, then $\Lambda\left( U^{\ast}V\right) =\left( U^{\ast }V\right) \Lambda$, so another application of Sylvester's Theorem ensures that the unitary matrix $U^{\ast}V$ is block diagonal and conformal to $\Lambda$. Thus, in the respective presentations (\ref{NormalCosquareBlocks}) associated with $U$ and $V$, corresponding diagonal blocks are unitarily congruent.\hfill \end{proof} \begin{theorem} \label{NonsingularEquivalence}Let $A\in M_{n}$. The following are equivalent: \newline (a) $\bar{A}A$ is normal. \newline (b) $A\left( \overline{AA^{\ast}}\right) =\left( \overline{A^{\ast} A}\right) A$, that is, $A\bar{A}A^{T}=A^{T}\bar{A}A$. 
\medskip \newline If $A$ is nonsingular, then (a) and (b) are equivalent to \newline (c) $A^{-T}A$ is normal. \end{theorem} \begin{proof} (a) $\Rightarrow$ (b): Consider the identity \[ A\left( \bar{A}A\right) =A\bar{A}A=\left( A\bar{A}\right) A\text{.} \] Since $\bar{A}A$ is normal, $A\bar{A}=\overline{(\bar{A}A)}$ is normal and Lemma \ref{Fuglede}(a) ensures that \[ A\left( \bar{A}A\right) ^{\ast}=AA^{\ast}A^{T}=A^{T}A^{\ast}A=\left( A\bar{A}\right) ^{\ast}A\text{.} \] Taking the transpose of the middle identity, and using Hermiticity of $AA^{\ast }$ and $A^{\ast}A$, gives \[ A\left( AA^{\ast}\right) ^{T}=A\left( \overline{AA^{\ast}}\right) =\left( \overline{A^{\ast}A}\right) A=\left( A^{\ast}A\right) ^{T}A\text{.} \] (b) $\Rightarrow$ (a): We use (b) in the form $AA^{\ast}A^{T}=A^{T}A^{\ast}A$ to compute \[ \left( \bar{A}A\right) \left( \bar{A}A\right) ^{\ast}=\bar{A}\left( AA^{\ast}A^{T}\right) =\bar{A}\left( A^{T}A^{\ast}A\right) \text{,} \] so $\bar{A}A^{T}A^{\ast}A$ is Hermitian: \begin{align*} \left( \bar{A}A\right) \left( \bar{A}A\right) ^{\ast} & =\bar{A} A^{T}A^{\ast}A=\left( \bar{A}A^{T}A^{\ast}A\right) ^{\ast}\\ & =A^{\ast}\left( A\bar{A}A^{T}\right) =A^{\ast}\left( A^{T}\bar {A}A\right) =\left( \bar{A}A\right) ^{\ast}\left( \bar{A}A\right) \text{.} \end{align*} (c) $\Rightarrow$ (b): Consider the identity \[ A(A^{-T}A)=AA^{-T}A=(A^{-T}A)^{-T}A\text{.} \] Since $A^{-T}A$ is normal, Lemma \ref{Fuglede}(b) ensures that \[ A(A^{-T}A)^{-\ast}=\left( (A^{-T}A)^{-T}\right) ^{-\ast}A=\overline {(A^{-T}A)}A\text{,} \] so $A\bar{A}A^{-\ast}=A^{-\ast}\bar{A}A$, from which it follows that $A^{\ast }A\bar{A}=\bar{A}AA^{\ast}$, which is the conjugate of (b).\medskip \newline (b) $\Rightarrow$ (c): Since $A$ is nonsingular, the identity (b) is equivalent to \[ A^{T}\bar{A}A=\left( A^{T}A^{\ast}A\right) ^{T}=\left( AA^{\ast} A^{T}\right) ^{T}=A\bar{A}A^{T}\text{,} \] which in turn is equivalent to \[ AA^{-T}\bar{A}^{-1}=\bar{A}^{-1}A^{-T}A\text{.} \] Now compute 
\begin{align*} (A^{-T}A)(A^{-T}A)^{\ast} & =A^{-T}\left( AA^{\ast}\right) \bar{A} ^{-1}=A^{-T}\left( A^{T}A^{\ast}AA^{-T}\right) \bar{A}^{-1}\\ & =A^{\ast}\left( AA^{-T}\bar{A}^{-1}\right) =A^{\ast}\left( \bar{A} ^{-1}A^{-T}A\right) =(A^{-T}A)^{\ast}(A^{-T}A)\text{.} \end{align*} \hfill \end{proof} \begin{theorem} \label{nonsingularTheorem}Let $A\in M_{n}$ be nonsingular. If $\bar{A}A$ is normal, then $A$ is unitarily congruent to a direct sum of blocks, each of which is \begin{equation} \left[ \sigma\right] \text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{,\quad}\sigma>0\text{, }\tau>0\text{, }\mu\in\mathbb{C}\text{, }0\neq\mu\neq1\text{.}\label{cnc0} \end{equation} This direct sum is uniquely determined by $A$ up to permutation of its blocks and replacement of any $\mu$ by $\mu^{-1}$. Conversely, if $A$ is unitarily congruent to a direct sum of blocks of the two types in (\ref{cnc0}), then $\bar{A}A$ is normal. \end{theorem} \begin{proof} Normality of $\bar{A}A$ implies normality of the cosquare $A^{-T}A$. Theorem \ref{NormalCosquareDiagonalBlocks} ensures that $A$ is unitarily congruent to a direct sum of the form (\ref{NormalCosquareBlocks}), and the unitary congruence class of each summand is uniquely determined by $A$. It suffices to consider the three types of blocks that occur in (\ref{NormalCosquareBlocks}): (a) a symmetric block $\mathcal{A}_{+}$, (b) a skew-symmetric block $\mathcal{A}_{-}$, and (c) a block of the form (\ref{NormalCosquareThirdTypeBlock}). \medskip \newline (a) The special singular value decomposition available for a nonsingular symmetric matrix \cite[Corollary 4.4.4]{HJ1} ensures that there is a unitary $V$ and a positive diagonal matrix $\Sigma=\operatorname{diag}(\sigma _{1},\ldots,\sigma_{n_{+}})$ such that $\mathcal{A}_{+}=V\Sigma V^{T}$. The singular values $\sigma_{i}$ of $\mathcal{A}_{+}$ are the source of all of the 1-by-1 blocks in (\ref{cnc0}). 
They are unitary congruence invariants of $\mathcal{A}_{+}$, so they are uniquely determined by $A$. \medskip \newline (b) The special singular value decomposition available for a nonsingular skew-symmetric matrix \cite[Problem 26, Section 4.4]{HJ1} ensures that there is a unitary $V$ and a nonsingular block diagonal matrix \begin{equation} \Sigma=\tau_{1}\left[ \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right] \oplus\cdots\oplus\tau_{n_{-}}\left[ \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right] \label{skewsymmetric} \end{equation} such that $\mathcal{A}_{-}=V\Sigma V^{T}$. These blocks are the source of all of the 2-by-2 blocks in (\ref{cnc0}) in which $\mu=-1$. The parameters $\tau_{1},\tau_{1},\ldots,\tau_{n_{-}},\tau_{n_{-}}$ are the singular values of $\mathcal{A}_{-}$, which are unitary congruence invariants of $\mathcal{A}_{-}$, so they are uniquely determined by $A$. \medskip \newline (c) Consider a block of the form \[ \mathcal{A}_{j}=\left[ \begin{array} [c]{cc} 0 & Y_{j}\\ \mu_{j}Y_{j}^{T} & 0 \end{array} \right] \] in which $Y_{j}\in M_{n_{j}}$ is nonsingular. The singular value decomposition \cite[Theorem 7.3.5]{HJ1} ensures that there are unitary $V_{j},W_{j}\in M_{n_{j}}$ and a positive diagonal matrix $\Sigma_{j}=\operatorname{diag} (\tau_{1}^{(j)},\ldots,\tau_{n_{j}}^{(j)})$ such that $Y_{j}=V_{j}\Sigma _{j}W_{j}^{\ast}$. 
Then \[ \mathcal{A}_{j}=\left[ \begin{array} [c]{cc} 0 & V_{j}\Sigma_{j}W_{j}^{\ast}\\ \mu_{j}\bar{W}_{j}\Sigma_{j}V_{j}^{T} & 0 \end{array} \right] =\left[ \begin{array} [c]{cc} V_{j} & 0\\ 0 & \bar{W}_{j} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & \Sigma_{j}\\ \mu_{j}\Sigma_{j} & 0 \end{array} \right] \left[ \begin{array} [c]{cc} V_{j} & 0\\ 0 & \bar{W}_{j} \end{array} \right] ^{T} \] is unitarily congruent to \[ \left[ \begin{array} [c]{cc} 0 & \Sigma_{j}\\ \mu_{j}\Sigma_{j} & 0 \end{array} \right] \text{,} \] which is unitarily congruent (permutation similar) to \[ {\displaystyle\bigoplus\limits_{i=1}^{n_{j}}} \tau_{i}^{(j)}\left[ \begin{array} [c]{cc} 0 & 1\\ \mu_{j} & 0 \end{array} \right] \text{,\quad}\tau_{i}^{(j)}>0\text{.} \] These blocks contribute $n_{j}$ 2-by-2 blocks to (\ref{cnc0}), all with $\mu=\mu_{j}$. Given $\mu_{j}\neq0$, the parameters $\tau_{1}^{(j)} ,\ldots,\tau_{n_{j}}^{(j)}$ are determined by the eigenvalues of $\overline{\mathcal{A}_{j}}\mathcal{A}_{j}$, which are invariant under unitary congruence of $\mathcal{A}_{j}$. Conversely, if $A$ is unitarily congruent to a direct sum of blocks of the form (\ref{cnc0}), then $\bar{A}A$ is unitarily similar to a direct sum of blocks, each of which is \[ \lbrack\sigma^{2}]\text{ or }\tau^{2}\mu I_{2}\text{,\quad\ }0\neq\mu \neq1\text{,} \] so $\bar{A}A$ is normal.\hfill \end{proof} \subsection{Normal *cosquares} If $A\in M_{n}$ is nonsingular and its *cosquare $\mathfrak{A}$ is normal, we can deduce a unitary *congruence canonical form for $A$ with an argument largely parallel to that in the preceding section, starting with the Jordan Canonical Form (\ref{JCFdiagonalizable*Cosquare}). We find that any unitary similarity that diagonalizes $\mathfrak{A}$ induces a unitary *congruence of $A$ that puts it into a special block diagonal form. \begin{theorem} \label{Normal*CosquareDiagonalBlocks}Let $A\in M_{n}$ be nonsingular and suppose that its *cosquare $\mathfrak{A}=A^{-\ast}A$ is normal. 
Let $\mu _{1},\bar{\mu}_{1}^{-1},\ldots,\mu_{q},\bar{\mu}_{q}^{-1}$ be the distinct eigenvalues of $\mathfrak{A}$ with $0<|\mu_{j}|<1$ for each $j=1,\ldots,q$, and let $n_{1},n_{1},\ldots,n_{q},n_{q}$ be their respective multiplicities. Let $\lambda_{1},\ldots,\lambda_{p}$ be the distinct unimodular eigenvalues of $\mathfrak{A}$, with respective multiplicities $m_{1},\ldots,m_{p}$, and choose any unimodular parameters $\alpha_{1},\ldots,\alpha_{p}$ such that $\alpha_{k}^{2}=\lambda_{k}$ for each $k=1,\ldots,p$. Let \begin{equation} \Lambda= {\displaystyle\bigoplus\limits_{k=1}^{p}} \lambda_{k}I_{m_{k}}\oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} \mu_{j}I_{n_{j}} & 0\\ 0 & \overline{\mu_{j}}^{-1}I_{n_{j}} \end{array} \right] \text{,\quad}\lambda_{k},\mu_{j}\in\mathbb{C}\text{, }\left\vert \lambda_{k}\right\vert =1\text{, }0<\left\vert \mu_{j}\right\vert <1\text{,}\label{Normal*CosquareLambda} \end{equation} let $U\in M_{n}$ be any unitary matrix such that $\mathfrak{A}=U\Lambda U^{\ast}$, and let $\mathcal{A}=U^{\ast}AU$. Then $\mathcal{A}$ is block diagonal and has the form \begin{equation} \mathcal{A}=\alpha_{1}\mathcal{H}_{1}\oplus\cdots\oplus\alpha_{p} \mathcal{H}_{p}\oplus\mathcal{A}_{1}\oplus\cdots\oplus\mathcal{A}_{q} \text{,}\label{Normal*CosquareBlocks} \end{equation} in which $\mathcal{H}_{k}\in M_{m_{k}}$ is Hermitian for each $k=1,\ldots,p$, and each $\mathcal{A}_{j}\in M_{2n_{j}}$ has the form \begin{equation} \mathcal{A}_{j}=\left[ \begin{array} [c]{cc} 0 & Y_{j}\\ \mu_{j}Y_{j}^{\ast} & 0 \end{array} \right] \text{,\quad}Y_{j}\in M_{n_{j}}\text{ is nonsingular.} \label{Normal*CosquareThirdTypeBlock} \end{equation} For a given ordering of the blocks in (\ref{Normal*CosquareLambda}) the unitary *congruence class of each of the $p+q$ blocks in (\ref{Normal*CosquareBlocks}) is uniquely determined. 
\end{theorem} \begin{proof} We have $A=A^{\ast}\mathfrak{A}=A^{\ast}U\Lambda U^{\ast}$, which implies that \[ \mathcal{A}=U^{\ast}AU=U^{\ast}A^{\ast}U\Lambda=\mathcal{A}^{\ast}\Lambda \] and hence \[ \mathcal{A}=\mathcal{A}^{\ast}\Lambda=\left( \mathcal{A}^{\ast} \Lambda\right) ^{\ast}\Lambda=\bar{\Lambda}\mathcal{A}\Lambda\text{,} \] that is, \begin{equation} \bar{\Lambda}^{-1}\mathcal{A}=\mathcal{A}\Lambda\text{.} \label{*CosquareCommute} \end{equation} Partition $\mathcal{A}=\left[ \mathcal{A}_{ij}\right] _{i,j=1}^{p+q}$ conformally to $\Lambda$. The $p+q$ diagonal blocks of $\Lambda$ have mutually distinct spectra; the spectra of corresponding blocks of $\Lambda$ and $\bar{\Lambda}^{-1}$ are the same. The identity (\ref{*CosquareCommute}) and Sylvester's Theorem on Linear Matrix Equations ensure that $\mathcal{A}$ is block diagonal and conformal to $\Lambda$, that is, \[ \mathcal{A}=\mathcal{A}_{11}\oplus\cdots\oplus\mathcal{A}_{pp}\oplus \mathcal{A}_{p+1,p+1}\oplus\cdots\oplus\mathcal{A}_{p+q,p+q} \] is block diagonal. Moreover, the identity $\mathcal{A}=\mathcal{A}^{\ast }\Lambda$ ensures that $\mathcal{A}_{kk}=\lambda_{k}\mathcal{A}_{kk}^{\ast }=\alpha_{k}^{2}\mathcal{A}_{kk}^{\ast}$, so if we define $\mathcal{H} _{k}:=\overline{\alpha_{k}}\mathcal{A}_{kk}$, then \[ \mathcal{H}_{k}=\overline{\alpha_{k}}\mathcal{A}_{kk}=\overline{\alpha_{k} }\alpha_{k}^{2}\mathcal{A}_{kk}^{\ast}=\alpha_{k}\mathcal{A}_{kk}^{\ast }=\left( \overline{\alpha_{k}}\mathcal{A}_{kk}\right) ^{\ast}=\mathcal{H} _{k}^{\ast}\text{,} \] so $\mathcal{H}_{k}$ is Hermitian. 
For each $j=p+1,\ldots,p+q$ the block $\mathcal{A}_{jj}$ has the form \[ \left[ \begin{array} [c]{cc} X & Y\\ Z & W \end{array} \right] \] and satisfies an identity of the form \[ \left[ \begin{array} [c]{cc} X & Y\\ Z & W \end{array} \right] =\left[ \begin{array} [c]{cc} X & Y\\ Z & W \end{array} \right] ^{\ast}\left[ \begin{array} [c]{cc} \mu I & 0\\ 0 & \bar{\mu}^{-1}I \end{array} \right] =\left[ \begin{array} [c]{cc} \mu X^{\ast} & \bar{\mu}^{-1}Z^{\ast}\\ \mu Y^{\ast} & \bar{\mu}^{-1}W^{\ast} \end{array} \right] \] in which $\left\vert \mu\right\vert ^{2}\neq1$. But $X=\mu X^{\ast}=\left\vert \mu\right\vert ^{2}X$ and $W=\overline{\mu^{-1}}W^{\ast}=\left\vert \mu\right\vert ^{-2}W$, so $X=W=0$. Moreover, $Z=\mu Y^{\ast}$, so $\mathcal{A}_{jj}$ has the form (\ref{Normal*CosquareThirdTypeBlock}). If $\mathfrak{A}=U\Lambda U^{\ast}=V\Lambda V^{\ast}$ and both $U$ and $V$ are unitary, then $\Lambda\left( U^{\ast}V\right) =\left( U^{\ast}V\right) \Lambda$, so the unitary matrix $U^{\ast}V$ is block diagonal and conformal to $\Lambda$. Thus, in the presentations (\ref{Normal*CosquareBlocks}) corresponding to $U$ and to $V$, corresponding diagonal blocks are unitarily *congruent.\hfill \end{proof} \begin{theorem} \label{Nonsingular*equivalence}Let $A\in M_{n}$. The following are equivalent: \newline (a) $A^{2}$ is normal. \newline (b) $A\left( AA^{\ast}\right) =\left( A^{\ast}A\right) A$, that is, $A^{2}A^{\ast}=A^{\ast}A^{2}$. \medskip \newline If $A$ is nonsingular, then (a) and (b) are equivalent to \newline (c) $A^{-\ast}A$ is normal. \end{theorem} \begin{proof} (a) $\Rightarrow$ (b): Consider the identity \[ \left( A^{2}\right) A=A\left( A^{2}\right) \text{.} \] Since $A^{2}$ is normal, Lemma \ref{Fuglede}(a) ensures that \[ \left( A^{2}\right) ^{\ast}A=A\left( A^{2}\right) ^{\ast}\text{,} \] and hence \[ A^{2}A^{\ast}=A\left( AA^{\ast}\right) =\left( A^{\ast}A\right) A=A^{\ast }A^{2}. 
\] (b) $\Rightarrow$ (a): Assuming (b), we have \begin{align*} A^{2}\left( A^{2}\right) ^{\ast} & =\left( A^{2}A^{\ast}\right) A^{\ast }=\left( A^{\ast}A^{2}\right) A^{\ast}=A^{\ast}\left( A^{2}A^{\ast}\right) \\ & =A^{\ast}\left( A^{\ast}A^{2}\right) =\left( A^{\ast}\right) ^{2} A^{2}=\left( A^{2}\right) ^{\ast}A^{2}\text{.} \end{align*} (c) $\Rightarrow$ (b): The identity $A=(A^{-\ast}A)^{\ast}A(A^{-\ast}A)$ is equivalent to \begin{equation} (A^{-\ast}A)^{-\ast}A=A(A^{-\ast}A)\text{.}\label{squared1} \end{equation} Since $A^{-\ast}A$ is normal, so is $\left( A^{-\ast}A\right) ^{-\ast }=AA^{-\ast}$, and Lemma \ref{Fuglede}(b) ensures that \[ (A^{-\ast}A)A=\left( (A^{-\ast}A)^{-\ast}\right) ^{-\ast}A=A(A^{-\ast }A)^{-\ast}\text{,} \] which implies that $A^{-\ast}A^{2}=A^{2}A^{-\ast}$ and $A^{2}A^{\ast}=A^{\ast }A^{2}$.\medskip \newline (b) $\Rightarrow$ (c): Assuming (b), we have $AAA^{\ast}=A^{\ast}AA$, which is equivalent to $AA^{\ast}A^{\ast}=A^{\ast}A^{\ast}A$ and (since $A$ is nonsingular) to \[ A^{-\ast}AA^{\ast}=A^{\ast}AA^{-\ast}\text{.} \] The inverse of this identity is \[ A^{-\ast}A^{-1}A^{\ast}=A^{\ast}A^{-1}A^{-\ast}\text{.} \] Now compute \[ (A^{-\ast}A)(A^{-\ast}A)^{\ast}=\left( A^{-\ast}AA^{\ast}\right) A^{-1}=\left( A^{\ast}AA^{-\ast}\right) A^{-1}\text{,} \] which is Hermitian, so \begin{align*} (A^{-\ast}A)(A^{-\ast}A)^{\ast} & =\left( A^{\ast}AA^{-\ast}A^{-1}\right) ^{\ast}=\left( A^{-\ast}A^{-1}A^{\ast}\right) A\\ & =\left( A^{\ast}A^{-1}A^{-\ast}\right) A=(A^{-\ast}A)^{\ast}(A^{-\ast }A)\text{.} \end{align*} \hfill \end{proof} \begin{theorem} \label{*nonsingularTheorem}Let $A\in M_{n}$ be nonsingular. 
If $A^{2}$ is normal, then $A$ is unitarily *congruent to a direct sum of blocks, each of which is \begin{equation} \lbrack\lambda]\text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{,\quad}\lambda,\mu\in\mathbb{C}\text{, }\lambda\neq0\text{, }\tau>0\text{, }0<|\mu|<1\text{.}\label{*nonsingular} \end{equation} This direct sum is uniquely determined by $A$, up to permutation of its blocks. Conversely, if $A$ is unitarily *congruent to a direct sum of blocks of the form (\ref{*nonsingular}), then $A^{2}$ is normal. \end{theorem} \begin{proof} Normality of $A^{2}$ implies normality of its *cosquare $A^{-\ast}A$, so $A$ is unitarily *congruent to a direct sum of the form (\ref{Normal*CosquareBlocks}), and the unitary *congruence class of each summand is uniquely determined by $A$. It suffices to consider the two types of blocks that occur in (\ref{Normal*CosquareBlocks}): (a) a unimodular scalar multiple of a Hermitian matrix $\mathcal{H}_{k}$, and (b) a block of the form (\ref{Normal*CosquareThirdTypeBlock}). \medskip \newline (a) The spectral theorem ensures that there is a unitary $V_{k}\in M_{m_{k}}$ and a real nonsingular diagonal $L_{k}\in M_{m_{k}}$ such that $\mathcal{H} _{k}=V_{k}L_{k}V_{k}^{\ast}$. The diagonal entries of $\alpha_{k}L_{k}$ (that is, the eigenvalues of $\alpha_{k}\mathcal{H}_{k}=$ $\mathcal{A}_{k}$) are unitary *congruence invariants of $\mathcal{A}_{k}$, so they are uniquely determined by $A$; they all lie on the line $\{t\alpha_{k}:-\infty<t<\infty \}$. The diagonal entries of $\alpha_{1}L_{1},\ldots,\alpha_{p}L_{p}$ are the source of all the 1-by-1 blocks in (\ref{*nonsingular}). \medskip \newline (b) Consider a block of the form \[ \mathcal{A}_{j}=\left[ \begin{array} [c]{cc} 0 & Y_{j}\\ \mu_{j}Y_{j}^{\ast} & 0 \end{array} \right] \text{,\quad}0<\left\vert \mu_{j}\right\vert <1\text{,} \] in which $Y_{j}\in M_{n_{j}}$ is nonsingular. 
The singular value decomposition ensures that there are unitary $V_{j},W_{j}\in M_{n_{j}}$ and a positive diagonal matrix $\Sigma_{j}=\operatorname{diag}(\tau_{1}^{(j)},\ldots ,\tau_{n_{j}}^{(j)})$ such that $Y_{j}=V_{j}\Sigma_{j}W_{j}^{\ast}$. Then \[ \mathcal{A}_{j}=\left[ \begin{array} [c]{cc} 0 & V_{j}\Sigma_{j}W_{j}^{\ast}\\ \mu_{j}W_{j}\Sigma_{j}V_{j}^{\ast} & 0 \end{array} \right] =\left[ \begin{array} [c]{cc} V_{j} & 0\\ 0 & W_{j} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & \Sigma_{j}\\ \mu_{j}\Sigma_{j} & 0 \end{array} \right] \left[ \begin{array} [c]{cc} V_{j} & 0\\ 0 & W_{j} \end{array} \right] ^{\ast} \] is unitarily *congruent to \[ \left[ \begin{array} [c]{cc} 0 & \Sigma_{j}\\ \mu_{j}\Sigma_{j} & 0 \end{array} \right] \text{,} \] which is unitarily *congruent (permutation similar) to \[ {\displaystyle\bigoplus\limits_{i=1}^{n_{j}}} \tau_{i}^{(j)}\left[ \begin{array} [c]{cc} 0 & 1\\ \mu_{j} & 0 \end{array} \right] \text{,\quad}\tau_{i}^{(j)}>0\text{.} \] These blocks contribute $n_{j}$ 2-by-2 blocks to (\ref{*nonsingular}), all with $\mu=\mu_{j}$. Given $\mu_{j}\neq0$, the parameters $\tau_{1} ^{(j)},\ldots,\tau_{n_{j}}^{(j)}$ are determined by the eigenvalues of $\mathcal{A}_{j}^{2}$, which are invariant under unitary *congruence of $\mathcal{A}_{j}$. Conversely, if $A$ is unitarily *congruent to a direct sum of blocks of the two types in (\ref{*nonsingular}), then $A^{2}$ is normal since it is unitarily *congruent to a direct sum of diagonal blocks of the two types $[\lambda^{2}]$ and $\tau^{2}\mu I_{2}$.\hfill \end{proof} \section{Unitary regularization} The following theorem describes a reduced form that can be achieved for any singular nonzero matrix under both unitary congruence and unitary *congruence. It is the key to separating a singular nonzero matrix into a canonical direct sum of its regular and singular parts under unitary congruence or unitary *congruence. 
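The reduction described in the theorem below can also be carried out numerically. The following sketch (numpy, a randomly generated rank-deficient complex matrix, ad hoc rank tolerances; illustrative only, not part of the formal development) performs the first step in the *congruence case: it builds $V=[V_{1}~V_{2}]$ from the range of $A$ and the null space of $A^{\ast}$, verifies that $V^{\ast}AV$ has its bottom $m_{1}$ rows zero, and checks the count $m_{2}=m_{1}-\dim\left( N(A)\cap N(A^{\ast})\right) $ against $\operatorname{rank}V_{1}^{\ast}AV_{2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# A generic singular 5-by-5 complex matrix of rank 3.
X = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
Y = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
A = X @ Y

# From the SVD of A: the leading r left singular vectors span range(A),
# the trailing ones span the null space of A*.
U, s, Wh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))           # rank of A (ad hoc tolerance)
V1, V2 = U[:, :r], U[:, r:]
V = np.hstack([V1, V2])              # unitary by construction
m1 = A.shape[0] - r                  # nullity of A

# Unitary *congruence V* A V = [[M, N], [0, 0]]: the bottom m1 rows vanish
# because V2* A = 0.
T = V.conj().T @ A @ V
assert np.allclose(T[r:, :], 0)

# m2 = rank(V1* A V2) agrees with m1 - dim(N(A) intersect N(A*)).
N = V1.conj().T @ A @ V2
m2 = np.linalg.matrix_rank(N, tol=1e-10)
inter_dim = A.shape[0] - np.linalg.matrix_rank(np.vstack([A, A.conj().T]),
                                               tol=1e-10)
assert m2 == m1 - inter_dim
```

A subsequent singular value decomposition of the block $N$ would produce the $\left[ \Sigma~0\right] $ block of the reduced form, as in the proof below.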
\begin{theorem} \label{UnitaryRegularization}Let $A\in M_{n}$ be singular and nonzero, let $m_{1}$ be the nullity of $A$, let the columns of $V_{1}$ be any orthonormal basis for the range of $A$, let the columns of $V_{2}$ be any orthonormal basis for the null space of $A^{\ast}$, and form the unitary matrix $V=[V_{1}~V_{2}]$. Then \medskip \newline (a) $A$ is unitarily congruent to a reduced form \begin{equation} \left[ \begin{array} [c]{cc|c} A^{\prime} & B & 0\\ C & D & \left[ \Sigma~0\right] \\\hline \multicolumn{2}{c|}{0} & 0_{m_{1}} \end{array} \right] \hspace{-0.13in} \begin{array} [c]{l} \\ \}m_{2}\\ \}m_{1} \end{array} \label{Reduced Form} \end{equation} in which $m_{2}=m_{1}-\dim\left( N(A)\cap N(A^{T})\right) $; if $m_{2}>0$ then $D\in M_{m_{2}}$, $\Sigma=\operatorname{diag}(\sigma_{1},\ldots ,\sigma_{m_{2}})$, and all $\sigma_{i}>0$; if $m_{1}+m_{2}<n$, then $A^{\prime}$ is square and $[A^{\prime}~B]$ has linearly independent rows; the integers $m_{1}$, $m_{2}$ and the unitary congruence class of the block \begin{equation} \left[ \begin{array} [c]{cc} A^{\prime} & B\\ C & D \end{array} \right] \label{invariant block} \end{equation} are uniquely determined by $A$. The parameters $\sigma_{1},\ldots ,\sigma_{m_{2}}$ are the positive singular values of $V_{1}^{\ast} A\overline{V_{2}} $, so they are also uniquely determined by $A$.\medskip \newline (b) $A$ is unitarily *congruent to a reduced form (\ref{Reduced Form}) in which $m_{2}=m_{1}-\dim\left( N(A)\cap N(A^{\ast})\right) $; if $m_{2}>0$ then $D\in M_{m_{2}}$, $\Sigma=\operatorname{diag}(\sigma_{1},\ldots ,\sigma_{m_{2}})$, and all $\sigma_{i}>0$; if $m_{1}+m_{2}<n$, then $A^{\prime}$ is square and $[A^{\prime}~B]$ has linearly independent rows; the integers $m_{1}$, $m_{2}$ and the unitary *congruence class of the block (\ref{invariant block}) are uniquely determined by $A$. 
The parameters $\sigma_{1},\ldots,\sigma_{m_{2}}$ are the positive singular values of $V_{1}^{\ast}AV_{2}$, so they are also uniquely determined by $A$. \end{theorem} \begin{proof} We have \[ V^{\ast}A=\left[ \begin{array} [c]{c} V_{1}^{\ast}A\\ V_{2}^{\ast}A \end{array} \right] =\left[ \begin{array} [c]{c} V_{1}^{\ast}A\\ 0 \end{array} \right] \text{.} \] The next step depends on whether we want to perform a unitary congruence or a unitary *congruence. \medskip \newline (a) Let $N=V_{1}^{\ast}A\overline{V_{2}}$ and form the unitary congruence \[ V^{\ast}A\left( V^{\ast}\right) ^{T}=\left[ \begin{array} [c]{c} V_{1}^{\ast}A\\ 0 \end{array} \right] \left[ \begin{array} [c]{cc} \overline{V_{1}} & \overline{V_{2}} \end{array} \right] =\left[ \begin{array} [c]{cc} V_{1}^{\ast}A\overline{V_{1}} & V_{1}^{\ast}A\overline{V_{2}}\\ 0 & 0_{m_{1}} \end{array} \right] =\left[ \begin{array} [c]{cc} M & N\\ 0 & 0 \end{array} \right] \text{.} \] Now let $m_{2}=\operatorname{rank}N$. If $m_{2}=0$ then $N=0$ and the form (\ref{Reduced Form}) has been achieved with $A^{\prime}=M$. If $m_{2}>0$, use the singular value decomposition to write $N=X\Sigma_{2}Y^{\ast}$, in which $X$ and $Y$ are unitary, \begin{equation} \Sigma_{2}=\left[ \begin{array} [c]{cc} 0 & 0\\ \Sigma & 0 \end{array} \right] \text{ and }\Sigma=\operatorname{diag}(\sigma_{1},\ldots ,\sigma_{m_{2}})\text{,}\label{Sigma2} \end{equation} and the diagonal entries $\sigma_{i}$ are the positive singular values of $N $. Let $Z=X^{\ast}\oplus Y^{T}$ and form the unitary congruence \[ Z\left( V^{\ast}A\bar{V}\right) Z^{T}=\left[ \begin{array} [c]{cc} X^{\ast}M\bar{X} & X^{\ast}NY\\ 0 & 0_{m_{1}} \end{array} \right] =\left[ \begin{array} [c]{cc|c} A^{\prime} & B & 0\\ C & D & \left[ \Sigma~0\right] \\\hline \multicolumn{2}{c|}{0} & 0_{m_{1}} \end{array} \right] \hspace{-0.13in} \begin{array} [c]{l} \\ \}m_{2}\\ \}m_{1} \end{array} \text{.} \] The block $X^{\ast}M\bar{X}$ has been partitioned so that $D\in M_{m_{2}}$. 
Finally, inspection of (\ref{Reduced Form}) shows that $\dim(N(A)\cap N(A^{T}))=m_{1}-m_{2}$. Suppose that $R,\underline{R},U\in M_{n}$, $U$ is unitary, and $R=U\underline {R}U^{T}$, so $R$ and $\underline{R}$ have the same parameters $m_{1}$ and $m_{2}$. Suppose \[ R=\left[ \begin{array} [c]{cc|c} A^{\prime} & B & 0\\ C & D & \left[ \Sigma~0\right] \\\hline \multicolumn{2}{c|}{0} & 0_{m_{1}} \end{array} \right] \hspace{-0.13in} \begin{array} [c]{l} \\ \}m_{2}\\ \}m_{1} \end{array} \text{ and }\underline{R}=\left[ \begin{array} [c]{cc|c} \underline{A}^{\prime} & \underline{B} & 0\\ \underline{C} & \underline{D} & \left[ \underline{\Sigma}~0\right] \\\hline \multicolumn{2}{c|}{0} & 0_{m_{1}} \end{array} \right] \hspace{-0.13in} \begin{array} [c]{l} \\ \}m_{2}\\ \}m_{1} \end{array} \text{,} \] partition $U=[U_{ij}]_{i,j=1}^{2}$ so that $U_{11}\in M_{n-m_{1}}$ and $U_{22}\in M_{m_{1}}$, and partition \[ \underline{R}=\left[ \begin{array} [c]{c} Z\\ 0 \end{array} \right] \hspace{-0.13in} \begin{array} [c]{l} \\ \}m_{1} \end{array} \] in which \[ Z=\left[ \begin{array} [c]{ccc} \underline{A}^{\prime} & \underline{B} & 0\\ \underline{C} & \underline{D} & \left[ \underline{\Sigma}~0\right] \end{array} \right] \] has full row rank. Then \[ \left[ \begin{array} [c]{c} \bigstar\\ 0 \end{array} \right] =R\bar{U}=U\underline{R}=\left[ \begin{array} [c]{c} \bigstar\\ U_{21}Z \end{array} \right] \text{,} \] so $U_{21}=0$. Lemma \ref{Zero Blocks}(a) ensures that $U_{12}=0$ as well, so $U=U_{11}\oplus U_{22}$ and both direct summands are unitary. 
Then \begin{align*} R & =\left[ \begin{array} [c]{cc} \left[ \begin{array} [c]{cc} A^{\prime} & B\\ C & D \end{array} \right] & \left[ \begin{array} [c]{c} 0\\ \left[ \Sigma~0\right] \end{array} \right] \\ 0 & 0_{m_{1}} \end{array} \right] \\ & =U\underline{R}U^{T}=\left[ \begin{array} [c]{cc} U_{11}\left[ \begin{array} [c]{cc} \underline{A}^{\prime} & \underline{B}\\ \underline{C} & \underline{D} \end{array} \right] U_{11}^{T} & U_{11}\left[ \begin{array} [c]{c} 0\\ \left[ \underline{\Sigma}~0\right] \end{array} \right] U_{22}^{T}\\ 0 & 0_{m_{1}} \end{array} \right] \end{align*} and the uniqueness assertion follows. \medskip \newline (b) Let $N=V_{1}^{\ast}AV_{2}$ and form the unitary $^{\ast}$congruence \[ V^{\ast}A\left( V^{\ast}\right) ^{\ast}=\left[ \begin{array} [c]{c} V_{1}^{\ast}A\\ V_{2}^{\ast}A \end{array} \right] \left[ \begin{array} [c]{cc} V_{1} & V_{2} \end{array} \right] =\left[ \begin{array} [c]{cc} V_{1}^{\ast}AV_{1} & V_{1}^{\ast}AV_{2}\\ 0 & 0_{m_{1}} \end{array} \right] =\left[ \begin{array} [c]{cc} M & N\\ 0 & 0 \end{array} \right] \text{.} \] Let $m_{2}=\operatorname{rank}N$. If $m_{2}=0$ then $N=0$ and the form (\ref{Reduced Form}) has been achieved with $A^{\prime}=M$. If $m_{2}>0$, use the singular value decomposition to write $N=X\Sigma_{2}Y^{\ast}$, in which $X$ and $Y$ are unitary, $\Sigma_{2}$ has the form (\ref{Sigma2}), and the diagonal entries $\sigma_{i}$ are the positive singular values of $N$. Let $Z=X^{\ast}\oplus Y^{\ast}$ and form the unitary *congruence \[ Z\left( V^{\ast}AV\right) Z^{\ast}=\left[ \begin{array} [c]{cc} X^{\ast}MX & X^{\ast}NY\\ 0 & 0_{m_{1}} \end{array} \right] =\left[ \begin{array} [c]{cc|c} A^{\prime} & B & 0\\ C & D & \left[ \Sigma~0\right] \\\hline \multicolumn{2}{c|}{0} & 0_{m_{1}} \end{array} \right] \hspace{-0.13in} \begin{array} [c]{l} \\ \}m_{2}\\ \}m_{1} \end{array} \text{.} \] The block $X^{\ast}MX$ has been partitioned so that $D\in M_{m_{2}}$. 
The uniqueness assertion follows from an argument parallel to the one employed in (a).\hfill \end{proof} \medskip We are concerned here with only the simplest cases of unitary congruence and *congruence, and the preceding theorem suffices for our purpose; a general sparse form that can be achieved via unitary congruence and *congruence is given in \cite[Theorem 6(d)]{HSregularization}. \begin{corollary} \label{Simplest Regularizations}Let $A\in M_{n}$ be singular and nonzero. Let $m_{1}$ be the nullity of $A$, let the columns of $V_{1}$ be an orthonormal basis for the range of $A$, let the columns of $V_{2}$ be an orthonormal basis for the null space of $A^{\ast}$, and form the unitary matrix $V=[V_{1} ~V_{2}]$. \medskip \newline (a) Suppose $A$ is congruence normal. Then it is unitarily congruent to a direct sum of the form \begin{equation} A^{\prime}\oplus {\displaystyle\bigoplus\limits_{i=1}^{m_{2}}} \sigma_{i}\left[ \begin{array} [c]{cc} 0 & 1\\ 0 & 0 \end{array} \right] \oplus0_{m_{1}-m_{2}}\text{,\quad}\sigma_{i}>0\text{,} \label{regularization2} \end{equation} in which $A^{\prime}$ is either absent or it is nonsingular and congruence normal; $m_{2}=\operatorname{rank}A-\operatorname{rank}\bar{A} A=\operatorname{rank}V_{1}^{\ast}A\overline{V_{2}}$; and the parameters $\sigma_{1},\ldots,\sigma_{m_{2}}$ are the positive singular values of $V_{1}^{\ast}A\overline{V_{2}}$. The unitary congruence class of $A^{\prime}$, $m_{2}$, $\sigma_{1}$,$\ldots$, $\sigma_{m_{2}}$, and $m_{1}$ are uniquely determined by $A$. \medskip \newline (b) Suppose $A$ is squared normal. Then it is unitarily *congruent to a direct sum of the form (\ref{regularization2}), in which $A^{\prime}$ is either absent or it is nonsingular and squared normal; $m_{2}=\operatorname{rank} A-\operatorname{rank}A^{2}=\operatorname{rank}V_{1}^{\ast}AV_{2}$; and the parameters $\sigma_{1},\ldots,\sigma_{m_{2}}$ are the positive singular values of $V_{1}^{\ast}AV_{2}$. 
The unitary *congruence class of $A^{\prime}$, $m_{2}$, $\sigma_{1}$,$\ldots$, $\sigma_{m_{2}}$, and $m_{1}$ are uniquely determined by $A$. \end{corollary} \begin{proof} Combine Lemma \ref{Zero Blocks}(b) with Theorem \ref{UnitaryRegularization} .\hfill \end{proof} \medskip The matrix $A^{\prime}$ in (\ref{regularization2}) is the \emph{regular part} of $A$ under unitary congruence (respectively, unitary *congruence); the direct sum of the singular summands in (\ref{regularization2}) is the \emph{singular part} of $A$ under unitary congruence (respectively, unitary *congruence). \section{Canonical forms} We have now completed all the steps required to establish canonical forms for a conjugate normal matrix $A$ under unitary congruence, and a squared normal matrix $A$ under unitary *congruence: First apply the unitary regularization described in Corollary \ref{Simplest Regularizations} to obtain the regular and singular parts of $A$, then use Theorems \ref{nonsingularTheorem} and \ref{*nonsingularTheorem} to identify the canonical form of the regular part. \begin{theorem} \label{General Congruence Normal}Let $A\in M_{n}$. If $\bar{A}A$ is normal, then $A$ is unitarily congruent to a direct sum of blocks, each of which has the form \begin{equation} \left[ \sigma\right] \text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{, }\sigma,\tau\in\mathbb{R}\text{, }\sigma\geq0\text{, } \tau>0\text{, }\mu\in\mathbb{C}\text{, and }\mu\neq1\text{.}\label{gcon0} \end{equation} This direct sum is uniquely determined by $A$ up to permutation of its blocks and replacement of any parameter $\mu$ by $\mu^{-1}$. Conversely, if $A$ is unitarily congruent to a direct sum of blocks of the form (\ref{gcon0}), then $\bar{A}A$ is normal. 
\end{theorem} \begin{proof} The unitary congruence regularization (\ref{regularization2}) reveals two types of singular blocks \begin{equation} \lbrack0]\text{ and }\gamma\left[ \begin{array} [c]{cc} 0 & 1\\ 0 & 0 \end{array} \right] \text{, }\gamma>0\text{,}\label{gcon1} \end{equation} while Theorem \ref{nonsingularTheorem} reveals two types of nonsingular blocks \begin{equation} \left[ \sigma\right] \text{ and }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{, }\sigma>0\text{, }\tau>0\text{, and }0\neq\mu\neq 1\text{.}\label{gcon2} \end{equation} Together, the blocks (\ref{gcon1}) and (\ref{gcon2}) comprise precisely the blocks (\ref{gcon0}): the singular blocks (\ref{gcon1}) correspond to the parameter values $\sigma=0$ and $\mu=0$ in (\ref{gcon0}).\hfill \end{proof} \begin{theorem} \label{squared normal canonical 1}Let $A\in M_{n}$. If $A^{2}$ is normal, then $A$ is unitarily *congruent to a direct sum of blocks, each of which is \begin{equation} \lbrack\lambda]\text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{, }\tau\in\mathbb{R}\text{, }\lambda,\mu\in\mathbb{C}\text{, }\tau>0\text{, and }\left\vert \mu\right\vert <1\text{.}\label{g*0} \end{equation} This direct sum is uniquely determined by $A$, up to permutation of its blocks. Conversely, if $A$ is unitarily *congruent to a direct sum of blocks of the form (\ref{g*0}), then $A^{2}$ is normal. \end{theorem} \begin{proof} The unitary *congruence regularization (\ref{regularization2}) reveals the singular blocks and Theorem \ref{*nonsingularTheorem} reveals the nonsingular blocks.\hfill \end{proof} \medskip For some applications, it can be convenient to know that the $2$-by-$2$ blocks in (\ref{g*0}) may be replaced by canonical upper triangular blocks. The set \begin{equation} \mathcal{D}_{+}:=\{z\in\mathbb{C}:\operatorname{Re}z>0\}\cup\{it:t\in \mathbb{R}\text{ and }t\geq0\}\label{D+} \end{equation} has the useful property that every complex number has a unique square root in $\mathcal{D}_{+}$. We use the following criterion of Pearcy: \begin{lemma} [\cite{PearcyPaper}]\label{Pearcy}Let $X,Y\in M_{2}$. 
Then $X$ and $Y$ are unitarily *congruent if and only if $\operatorname{tr}X=\operatorname{tr}Y$, $\operatorname{tr}X^{2}=\operatorname{tr}Y^{2}$, and $\operatorname{tr} X^{\ast}X=\operatorname{tr}Y^{\ast}Y$. \end{lemma} \begin{theorem} \label{squared normal canonical 2}Let $A\in M_{n}$. If $A^{2}$ is normal, then $A$ is unitarily *congruent to a direct sum of blocks, each of which is \begin{equation} \lbrack\lambda]\text{ or }\left[ \begin{array} [c]{cc} \nu & r\\ 0 & -\nu \end{array} \right] \text{, }\lambda,\nu\in\mathbb{C}\text{, }r\in\mathbb{R}\text{, }r>0\text{, and }\nu\in\mathcal{D}_{+}\text{.}\label{t*0} \end{equation} This direct sum is uniquely determined by $A$, up to permutation of its blocks. Conversely, if $A$ is unitarily *congruent to a direct sum of blocks of the form (\ref{t*0}), then $A^{2}$ is normal. \end{theorem} \begin{proof} It suffices to show that if $\tau>0$ and $\left\vert \mu\right\vert <1$, and if we define \begin{equation} \nu:=\tau\sqrt{\mu}\in\mathcal{D}_{+}\label{t*1} \end{equation} and \begin{equation} r:=\tau\left( 1-\left\vert \mu\right\vert \right) \text{,}\label{t*2} \end{equation} then \[ C_{1}:=\left[ \begin{array} [c]{cc} \nu & r\\ 0 & -\nu \end{array} \right] \text{ and }C_{2}:=\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \] are unitarily *congruent. One checks that \[ \operatorname{tr}C_{1}=0=\operatorname{tr}C_{2}\text{,} \] \[ \operatorname{tr}C_{1}^{2}=2\nu^{2}=2\tau^{2}\mu=\operatorname{tr}C_{2} ^{2}\text{,} \] and \[ \operatorname{tr}C_{1}^{\ast}C_{1}=2\left\vert \nu\right\vert ^{2}+r^{2} =2\tau^{2}\left\vert \mu\right\vert +\tau^{2}\left( 1-\left\vert \mu\right\vert \right) ^{2}=\tau^{2}+\tau^{2}\left\vert \mu\right\vert ^{2}=\operatorname{tr} C_{2}^{\ast}C_{2}\text{,} \] so our assertion follows from Lemma \ref{Pearcy}.\hfill \end{proof} \section{Beyond normality} We conclude with several results involving unitary congruence and unitary *congruence. 
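The trace identities used in the preceding proof are easy to confirm numerically. The following sketch (Python with numpy; the specific values of $\tau$ and $\mu$ are arbitrary choices for illustration, not taken from the text) checks Pearcy's three invariants for the pair $C_{1}$, $C_{2}$ built from (\ref{t*1}) and (\ref{t*2}).

```python
import numpy as np

# Arbitrary sample parameters with tau > 0 and |mu| < 1.
tau, mu = 1.7, 0.3 - 0.4j      # |mu| = 0.5

nu = tau * np.sqrt(mu)         # principal square root, so nu lies in D_+
r = tau * (1 - abs(mu))

C1 = np.array([[nu, r], [0.0, -nu]])
C2 = tau * np.array([[0.0, 1.0], [mu, 0.0]])

# Pearcy's criterion: equality of tr X, tr X^2, and tr X*X
# characterizes unitary *congruence for 2-by-2 matrices.
assert np.isclose(np.trace(C1), np.trace(C2))
assert np.isclose(np.trace(C1 @ C1), np.trace(C2 @ C2))
assert np.isclose(np.trace(C1.conj().T @ C1), np.trace(C2.conj().T @ C2))
```

All three pairs of traces agree, matching the computation in the proof.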
\subsection{Criteria for unitary congruence and *congruence} To show that two matrices are unitarily congruent (or unitarily *congruent), in certain cases it suffices to show only that they are congruent (or *congruent). \begin{theorem} \label{Criterion}Let $A,B,S\in M_{n}$ be nonsingular and let $S=WQ$ be a right polar decomposition. Then \medskip \newline (a) $A$ and $B$ are unitarily congruent if and only if the pairs $(A,A^{-\ast })$ and $(B,B^{-\ast})$ are congruent. In fact, if $(A,A^{-\ast} )=S(B,B^{-\ast})S^{T}$, then $(A,A^{-\ast})=W(B,B^{-\ast})W^{T}$. \medskip \newline (b) $A$ and $B$ are unitarily *congruent if and only if the pairs $(A,A^{-\ast})$ and $(B,B^{-\ast})$ are *congruent. In fact, if $(A,A^{-\ast })=S(B,B^{-\ast})S^{\ast}$, then $(A,A^{-\ast})=W(B,B^{-\ast})W^{\ast}$. \end{theorem} \begin{proof} Let $\lambda_{1}>\cdots>\lambda_{d}>0$ be the distinct eigenvalues of $S^{\ast}S$ and let $p(t)$ be any polynomial such that $p(\lambda _{i})=+\lambda_{i}^{1/2}$ and $p(\lambda_{i}^{-1})=+\lambda_{i}^{-1/2}$ for each $i=1,\ldots,d$. Then $Q=p(S^{\ast}S)$ is Hermitian and positive definite, $Q^{2}=S^{\ast}S$, and $Q^{-1}=p\left( (S^{\ast}S)^{-1}\right) $. \newline (a) If there is a unitary $U$ such that $A=UBU^{T}$, then \[ (A,A^{-\ast})=(UBU^{T},UB^{-\ast}U^{T})=U(B,B^{-\ast})U^{T}\text{.} \] Conversely, if $(A,A^{-\ast})=S(B,B^{-\ast})S^{T}$, then \[ SBS^{T}=A=\left( A^{-\ast}\right) ^{-\ast}=\left( SB^{-\ast}S^{T}\right) ^{-\ast}=S^{-\ast}B\bar{S}^{-1}\text{,} \] so \[ \left( S^{\ast}S\right) B=B\left( S^{\ast}S\right) ^{-T}\text{.} \] It follows that \[ g(S^{\ast}S)B=Bg(\left( S^{\ast}S\right) ^{-1})^{T} \] for any polynomial $g(t)$. 
Choosing $g(t)=p(t)$, we have \[ QB=p(S^{\ast}S)B=Bp(\left( S^{\ast}S\right) ^{-1})^{T}=BQ^{-T}\text{,} \] so $QBQ^{T}=B$ and \[ A=SBS^{T}=WQBQ^{T}W^{T}=WBW^{T}\text{.} \] (b) If there is a unitary $U$ such that $A=UBU^{\ast}$, then \[ (A,A^{-\ast})=(UBU^{\ast},UB^{-\ast}U^{\ast})=U(B,B^{-\ast})U^{\ast}\text{.} \] Conversely, if $(A,A^{-\ast})=S(B,B^{-\ast})S^{\ast}$, then \[ SBS^{\ast}=A=\left( A^{-\ast}\right) ^{-\ast}=\left( SB^{-\ast}S^{\ast }\right) ^{-\ast}=S^{-\ast}BS^{-1}\text{,} \] so \[ \left( S^{\ast}S\right) B=B\left( S^{\ast}S\right) ^{-1} \] and hence \[ g(S^{\ast}S)B=Bg(\left( S^{\ast}S\right) ^{-1}) \] for any polynomial $g(t)$. Choosing $g(t)=p(t)$, we have \[ QB=p(S^{\ast}S)B=Bp(\left( S^{\ast}S\right) ^{-1})=BQ^{-1}\text{,} \] so $QBQ=QBQ^{\ast}=B$ and \[ A=SBS^{\ast}=WQBQ^{\ast}W^{\ast}=WQBQW^{\ast}=WBW^{\ast}\text{.} \] \hfill \end{proof} \begin{corollary} \label{CriterionCorollary}Let $A,B\in M_{n}$ be given. \medskip \newline (a) If $A$ and $B$ are either both unitary or both coninvolutory, then $A$ and $B$ are unitarily congruent if and only if they are congruent. \medskip \newline (b) If $A$ and $B$ are either both unitary or both involutory, then $A$ and $B$ are unitarily *congruent if and only if they are *congruent. \end{corollary} \begin{proof} The key observation is that $A^{-\ast}=A$ if $A$ is unitary, $A^{-\ast}=A^{T}$ if $A$ is coninvolutory, and $A^{-\ast}=A^{\ast}$ if $A$ is involutory. \medskip \newline (a) Suppose $A=SBS^{T}$. If $A$ and $B$ are unitary, then \[ (A,A^{-\ast})=(A,A)=(SBS^{T},SBS^{T})=S(B,B)S^{T}=S(B,B^{-\ast})S^{T}\text{,} \] so Theorem \ref{Criterion}(a) ensures that $A$ is unitarily congruent to $B$. If $A$ and $B$ are coninvolutory, then \[ (A,A^{-\ast})=(A,A^{T})=(SBS^{T},SB^{T}S^{T})=S(B,B^{T})S^{T}=S(B,B^{-\ast })S^{T}\text{,} \] so $A$ is again unitarily congruent to $B$. \medskip \newline (b) Suppose $A=SBS^{\ast}$. 
If $A$ and $B$ are unitary, then \[ (A,A^{-\ast})=(A,A)=(SBS^{\ast},SBS^{\ast})=S(B,B)S^{\ast}=S(B,B^{-\ast })S^{\ast}\text{,} \] so $A$ is unitarily *congruent to $B$. If $A$ and $B$ are involutory, then \[ (A,A^{-\ast})=(A,A^{\ast})=(SBS^{\ast},SB^{\ast}S^{\ast})=S(B,B^{\ast} )S^{\ast}=S(B,B^{-\ast})S^{\ast}\text{,} \] so $A$ is again unitarily *congruent to $B$.\hfill \end{proof} \subsection{Hermitian cosquares} \begin{theorem} \label{HermitianCosquaresTheorem}Suppose that $A\in M_{n}$ is nonsingular. The following are equivalent:\medskip \newline (a) $A^{-T}A$ is Hermitian. \medskip \newline (b) $\bar{A}A$ is Hermitian. \medskip \newline (c) $A$ is unitarily congruent to a direct sum of real blocks, each of which is \begin{equation} \left[ \sigma\right] \text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{,\quad}\sigma>0\text{, }\tau>0\text{, }\mu\in\mathbb{R}\text{, }0\neq\mu\neq1\text{.}\label{HermitianCongruence} \end{equation} This direct sum is uniquely determined by $A$, up to permutation of its blocks and replacement of any $\mu$ by $\mu^{-1}$. Conversely, if $A$ is unitarily congruent to a direct sum of blocks of the form (\ref{HermitianCongruence}), then $\bar{A}A$ is Hermitian. \end{theorem} \begin{proof} $A^{-T}A$ is Hermitian if and only if \[ A^{-T}A=\left( A^{-T}A\right) ^{\ast}=A^{\ast}\bar{A}^{-1} \] if and only if \[ A\bar{A}=A^{T}A^{\ast}=\left( A\bar{A}\right) ^{\ast} \] if and only if $\bar{A}A=\overline{A\bar{A}}$ is Hermitian. The canonical blocks (\ref{HermitianCongruence}) are the same as those in (\ref{cnc0}), with the restriction that $\mu$ must be real.\hfill \end{proof} \medskip Any coninvolution $A$ satisfies the hypotheses of the preceding theorem: $\bar{A}A=I$. \begin{corollary} Suppose that $A\in M_{n}$ and $\bar{A}A=I$. 
Then $A$ is unitarily congruent to \begin{equation} I_{n-2q}\oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} 0 & \sigma_{j}^{-1}\\ \sigma_{j} & 0 \end{array} \right] \text{,\quad}\sigma_{j}>1\text{,}\label{ConinvolutionCongruence} \end{equation} in which $\sigma_{1},\sigma_{1}^{-1},\ldots,\sigma_{q},\sigma_{q}^{-1}$ are the singular values of $A$ that are different from $1$, and each $\sigma_{j}>1$. Conversely, if $A$ is unitarily congruent to a direct sum of the form (\ref{ConinvolutionCongruence}), then $A$ is coninvolutory. Two coninvolutions of the same size are unitarily congruent if and only if they have the same singular values. \end{corollary} \begin{proof} $\bar{A}A$ is Hermitian, so $A$ is unitarily congruent to a direct sum of blocks of the two types (\ref{HermitianCongruence}). But $\bar{A}A=I$, so $\sigma=1$ and $\tau^{2}\mu=1$. Then $\tau=\left( \tau\mu\right) ^{-1}$, so \[ \tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] =\left[ \begin{array} [c]{cc} 0 & \left( \tau\mu\right) ^{-1}\\ \tau\mu & 0 \end{array} \right] \text{,} \] which has singular values $\tau\mu$ and $\left( \tau\mu\right) ^{-1}$.\hfill \end{proof} \medskip The general case is obtained by specializing Theorem \ref{General Congruence Normal}. \begin{theorem} Let $A\in M_{n}$ and suppose that $\bar{A}A$ is Hermitian. Then $A$ is unitarily congruent to a direct sum of blocks, each of which is \begin{equation} \left[ \sigma\right] \text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{, }\sigma,\tau,\mu\in\mathbb{R}\text{, }\sigma\geq0\text{, }\tau>0\text{, }\mu\neq1\text{.}\label{HermitianUnitaryCongruenceBlocks} \end{equation} This direct sum is uniquely determined by $A$ up to permutation of its blocks and replacement of any (real) parameter $\mu$ by $\mu^{-1}$. Conversely, if $A$ is unitarily congruent to a direct sum of blocks of the form (\ref{HermitianUnitaryCongruenceBlocks}), then $\bar{A}A$ is Hermitian. 
\end{theorem} \subsection{Unitary cosquares} \begin{theorem} \label{NonsingularConjugateNormal}Suppose that $A\in M_{n}$ is nonsingular. The following are equivalent:\medskip \newline (a) $A^{-T}A$ is unitary.\medskip \newline (b) $A$ is conjugate normal.\medskip \newline (c) $A$ is unitarily congruent to a direct sum of blocks, each of which is \begin{equation} \left[ \sigma\right] \text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ e^{i\theta} & 0 \end{array} \right] \text{, }\sigma,\tau,\theta\in\mathbb{R}\text{, }\sigma>0\text{, }\tau>0\text{, }0<\theta\leq\pi\text{.}\label{ConjugateNormalCongruence} \end{equation} This direct sum is uniquely determined by the eigenvalues of $\bar{A}A$, up to permutation of its summands. Conversely, if $A$ is unitarily congruent to a direct sum of blocks of the form (\ref{ConjugateNormalCongruence}), then $A$ is conjugate normal. \end{theorem} \begin{proof} $A^{-T}A$ is unitary if and only if \[ A^{-1}A^{T}=\left( A^{-T}A\right) ^{-1}=\left( A^{-T}A\right) ^{\ast }=A^{\ast}\bar{A}^{-1} \] if and only if \[ AA^{\ast}=A^{T}\bar{A}=\overline{A^{\ast}A}\text{.} \] The canonical blocks (\ref{ConjugateNormalCongruence}) follow from (\ref{cnc0}) by specialization. The eigenvalues of $\bar{A}A$ are of two types: positive eigenvalues that correspond to squares of blocks of the first type in (\ref{ConjugateNormalCongruence}), and conjugate pairs of non-real or negative eigenvalues $\{\tau^{2}e^{i\theta},\tau^{2}e^{-i\theta}\}$ that correspond to blocks of the second type with $0<\theta\leq\pi$. 
Thus, the parameters $\sigma$, $\tau$, and $e^{i\theta}$ of the blocks in (\ref{ConjugateNormalCongruence}) can be inferred from the eigenvalues of $\bar{A}A$.\hfill \end{proof} \medskip The unitary congruence canonical blocks (\ref{ConjugateNormalCongruence}) for a conjugate normal matrix are a subset of the canonical blocks (\ref{gcon0}) for a congruence normal matrix; the 2-by-2 singular blocks are omitted, and the 2-by-2 nonsingular blocks are required to be positive scalar multiples of a unitary block. This observation shows that every conjugate normal matrix is congruence normal. Moreover, examination of the canonical blocks of a conjugate normal matrix shows that it is unitarily congruent to a direct sum of a positive diagonal matrix, positive scalar multiples of unitary matrices (which the following corollary shows may be taken to be real), and a zero matrix. Thus, a conjugate normal matrix is unitarily congruent to a real normal matrix. If $A$ itself is unitary, then its cosquare $A^{-T}A=\bar{A}A$ is certainly unitary, so the unitary congruence canonical form of a unitary matrix follows from the preceding theorem. Of course, the eigenvalues of the cosquare of a unitary matrix are all unimodular and are constrained by the conditions in (\ref{JCFdiagonalizableCosquare}): any eigenvalue $\mu\neq1$ (even $\mu=-1$) occurs in a conjugate pair $\{\mu,\bar{\mu}\}$. \begin{corollary} \label{UnitarySpecialCase}Suppose that $U\in M_{n}$ is unitary. Then $U$ is unitarily congruent to \begin{equation} I_{n-2q}\oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} 0 & 1\\ \mu_{j} & 0 \end{array} \right] \text{,\quad}\mu_{j}\in\mathbb{C}\text{, }\left\vert \mu_{j}\right\vert =1\text{, }\mu_{j}\neq1\text{,}\label{UnitarySpecialCaseBlocks} \end{equation} in which $\mu_{1},\bar{\mu}_{1},\ldots,\mu_{q},\bar{\mu}_{q}$ are the eigenvalues of $\bar{U}U$ that are different from $1$. 
If $\mu_{j}=e^{i\theta_{j}}$, then each unitary 2-by-2 block $H_{2}(e^{i\theta_{j}})$ in (\ref{UnitarySpecialCaseBlocks}) can be replaced by a real orthogonal block \begin{equation} Q_{2}(\theta)=\left[ \begin{array} [c]{cc} \alpha & \beta\\ -\beta & \alpha \end{array} \right] \label{RealBlock} \end{equation} in which $\alpha=\cos(\theta/2)$ and $\beta=\sin(\theta/2)$, or by a Hermitian unitary block \begin{equation} \mathcal{H}_{2}(\theta)=\left[ \begin{array} [c]{cc} 0 & e^{-i\theta/2}\\ e^{i\theta/2} & 0 \end{array} \right] \text{.}\label{HermitianBlock} \end{equation} Thus, $U$ is unitarily congruent to a real orthogonal matrix as well as to a Hermitian unitary matrix. \end{corollary} \begin{proof} $U^{-T}U=\bar{U}U$, so the parameters $\mu$ in (\ref{ConjugateNormalCongruence}) correspond to the pairs of unimodular conjugate eigenvalues of $\bar{U}U$. Since each block in (\ref{ConjugateNormalCongruence}) must be unitary, the parameters $\sigma$ and $\tau$ must be $+1$. One checks that the cosquares of $H_{2}(e^{i\theta_{j}})$ and $Q_{2}(\theta_{j})$ in (\ref{RealBlock}) (both unitary) have the same eigenvalues (namely, $e^{\pm i\theta_{j}}$), so they are similar. Theorem \ref{CongruenceCanonicalForms}(a) ensures that $H_{2}(e^{i\theta_{j}})$ and $Q_{2}(\theta_{j})$ are congruent, and Corollary \ref{CriterionCorollary}(a) ensures that they are actually unitarily congruent. 
The unitary congruence \[ \left[ \begin{array} [c]{cc} e^{-i\theta/4} & 0\\ 0 & e^{-i\theta/4} \end{array} \right] \left[ \begin{array} [c]{cc} 0 & 1\\ e^{i\theta} & 0 \end{array} \right] \left[ \begin{array} [c]{cc} e^{-i\theta/4} & 0\\ 0 & e^{-i\theta/4} \end{array} \right] =\left[ \begin{array} [c]{cc} 0 & e^{-i\theta/2}\\ e^{i\theta/2} & 0 \end{array} \right] \] shows that the 2-by-2 blocks in (\ref{UnitarySpecialCaseBlocks}) may be replaced by Hermitian blocks of the form $\mathcal{H}_{2}(\theta)$.\hfill \end{proof} \medskip In order to state the general case of Theorem \ref{NonsingularConjugateNormal} , we need to know what the singular part of a conjugate normal matrix is, after regularization by unitary congruence. \begin{lemma} Let $A\in M_{n}$ be singular and conjugate normal; let $m_{1}$ be the nullity of $A$. Then \medskip \newline (a) The angle between $Ax$ and $Ay$ is the same as the angle between $A^{T}x$ and $A^{T}y$ for all $x,y\in\mathbb{C}^{n}$. \medskip \newline (b) $\left\Vert Ax\right\Vert =\left\Vert A^{T}x\right\Vert $ for all $x\in\mathbb{C}^{n}$; \medskip \newline (c) $N(A)=N(A^{T})$; and \medskip \newline (d) $A$ is unitarily congruent to $A^{\prime}\oplus0_{m_{1}}$ in which $A^{\prime}$ is nonsingular and conjugate normal. \end{lemma} \begin{proof} Compute \[ (Ax)^{\ast}(Ay)=x^{\ast}A^{\ast}Ay=x^{\ast}\overline{AA^{\ast}}y=(A^{T} x)^{\ast}(A^{T}y)\text{;} \] when $x=y$ we have $\left\Vert Ax\right\Vert ^{2}=\left\Vert A^{T}x\right\Vert ^{2}$. In particular, $Ax=0$ if and only if $A^{T}x=0$. 
In the reduced form (\ref{Reduced Form}) of $A$ we have \[ m_{2}=\dim N(A)-\dim(N(A)\cap N(A^{T}))=\dim N(A)-\dim N(A)=0\text{.} \] Thus, $A$ is unitarily congruent to $A^{\prime}\oplus0_{m_{1}}$; $A^{\prime}$ is nonsingular and its unitary congruence class is uniquely determined; and \[ \left( A^{\prime}\right) ^{\ast}A^{\prime}\oplus0_{m_{1}}=A^{\ast }A=\overline{AA^{\ast}}=\overline{A^{\prime}\left( A^{\prime}\right) ^{\ast }}\oplus0_{m_{1}}\text{,} \] so $A^{\prime}$ is conjugate normal.\hfill \end{proof} \medskip \begin{corollary} \label{ConjugateNormalUnitaryCongruenceCorollary}Let $A\in M_{n}$ and suppose that $A$ is conjugate normal. Then $A$ is unitarily congruent to a direct sum of blocks, each of which is \begin{equation} \left[ \sigma\right] \text{ or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ e^{i\theta} & 0 \end{array} \right] \text{, }\sigma,\tau,\theta\in\mathbb{R}\text{, }\sigma\geq0\text{, }\tau>0\text{, }0<\theta\leq\pi\text{.} \label{ConjugateNormalUnitaryCongruenceBlocks} \end{equation} This direct sum is uniquely determined by the eigenvalues of $\bar{A}A$, up to permutation of its blocks: there is one block $\sqrt{\rho}H_{2}(e^{i\theta})$ (with $\sqrt{\rho}>0$) corresponding to each conjugate eigenvalue pair $\{\rho e^{i\theta},\rho e^{-i\theta}\}$ of $\bar{A}A$ with $\rho>0$ and $0<\theta \leq\pi$; the number of blocks $[\sigma]$ with $\sigma>0$ equals the multiplicity of $\sigma^{2}$ as a (positive) eigenvalue of $\bar{A}A$, so the total number of blocks of this type equals the number of positive eigenvalues of $\bar{A}A$; the number of blocks $[0]$ equals the nullity of $A$. \medskip \newline If $B\in M_{n}$ is conjugate normal, then $A$ is unitarily congruent to $B$ if and only if $\bar{A}A$ and $\bar{B}B$ have the same eigenvalues. 
\medskip \newline Each unitary block $H_{2}(e^{i\theta})$ in (\ref{ConjugateNormalUnitaryCongruenceBlocks}) can be replaced by a real orthogonal block \[ \left[ \begin{array} [c]{cc} \alpha & \beta\\ -\beta & \alpha \end{array} \right] \text{, }\alpha=\cos(\theta/2)\text{ and }\beta=\sin(\theta /2)\text{.} \] If $A$ is unitarily congruent to a direct sum of blocks of the form (\ref{ConjugateNormalUnitaryCongruenceBlocks}), then $A$ is conjugate normal.\hfill \end{corollary} \subsection{Hermitian *cosquares} \begin{theorem} \label{Hermitian*cosquaresTheorem}Suppose that $A\in M_{n}$ is nonsingular. The following are equivalent:\medskip \newline (a) $A^{-\ast}A$ is Hermitian. \medskip \newline (b) $A^{2}$ is Hermitian. \medskip \newline (c) $A$ is unitarily *congruent to a direct sum (uniquely determined by $A$ up to permutation of summands) of blocks, each of which is \begin{equation} \left[ \lambda\right] \text{, }\left[ i\nu\right] \text{, or }\tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{,\quad}\lambda,\nu,\mu\in\mathbb{R}\text{, }\lambda\neq0\neq \nu\text{, }\tau>0\text{, }0<\left\vert \mu\right\vert <1\text{.} \label{Hermitian*Congruence} \end{equation} If $\mu_{1},\mu_{1}^{-1},\ldots,\mu_{q},\mu_{q}^{-1}$ are the (real) eigenvalues of $A^{-\ast}A$ that are not equal to $\pm1$ and satisfy $0<\left\vert \mu_{j}\right\vert <1$ for $j=1,\ldots,q$, and if $A^{-\ast}A $ has $p$ eigenvalues equal to $+1$, then the unitary *congruence canonical form of $A$ has $p$ blocks of the first type in (\ref{Hermitian*Congruence}), $n-2q-p$ blocks of the second type, and $q$ blocks of the third type. 
\end{theorem} \begin{proof} $A^{-\ast}A$ is Hermitian if and only if \[ A^{-\ast}A=\left( A^{-\ast}A\right) ^{\ast}=A^{\ast}A^{-1} \] if and only if \[ A^{2}=\left( A^{2}\right) ^{\ast}\text{.} \] The canonical blocks (\ref{Hermitian*Congruence}) follow from (\ref{*nonsingular}) by specialization.\hfill \end{proof} \medskip Any involutory matrix satisfies the hypotheses of the preceding theorem. \begin{corollary} Let $A\in M_{n}$, suppose that $A^{2}=I$, and suppose that $A$ has $p$ eigenvalues equal to $1$. The singular values of $A$ that are different from $1$ occur in reciprocal pairs: $\sigma_{1},\sigma_{1}^{-1},\ldots,\sigma _{q},\sigma_{q}^{-1}$, in which each $\sigma_{j}>1$. Then $A$ is unitarily *congruent to \begin{equation} I_{p-q}\oplus\left( -I_{n-p-q}\right) \oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} 0 & \sigma_{j}^{-1}\\ \sigma_{j} & 0 \end{array} \right] \text{,\quad}\sigma_{j}>1\text{,}\label{Involution*Congruence} \end{equation} as well as to \begin{equation} I_{p-q}\oplus\left( -I_{n-p-q}\right) \oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} 1 & \sigma_{j}-\sigma_{j}^{-1}\\ 0 & -1 \end{array} \right] \text{.}\label{Involution*Congruence2} \end{equation} Conversely, if $A$ is unitarily *congruent to a direct sum of either form (\ref{Involution*Congruence}) or (\ref{Involution*Congruence2}), then $A$ is an involution and has $p$ eigenvalues equal to $1$. Two involutions of the same size are unitarily *congruent if and only if they have the same singular values and $+1$ is an eigenvalue with the same multiplicity for each of them. \end{corollary} \begin{proof} Since $A=A^{-1}$, $A^{-\ast}A=A^{\ast}A$ and the eigenvalues of the *cosquare are just the squares of the singular values of $A$; the eigenvalues of the *cosquare $A^{\ast}A$ that are not equal to $1$ must occur in reciprocal pairs. Let $\sigma_{1},\ldots,\sigma_{q}$ be the singular values of $A$ that are greater than $1$. 
Each 2-by-2 block in (\ref{Hermitian*Congruence}) has the form \[ \tau_{j}\left[ \begin{array} [c]{cc} 0 & 1\\ \sigma_{j}^{2} & 0 \end{array} \right] \text{,} \] which has singular values $\tau_{j}\sigma_{j}^{2}$ and $\tau_{j}$; they are reciprocal if and only if $\tau_{j}=\sigma_{j}^{-1}$. Each 2-by-2 block contributes a pair of eigenvalues $\pm1$, which results in the asserted summands $I_{p-q}\oplus\left( -I_{n-p-q}\right) $ since all of the eigenvalues of $A$ are $\pm1$. To confirm that $A$ is unitarily *congruent to the direct sum (\ref{Involution*Congruence2}), it suffices to show that \[ C_{1}=\left[ \begin{array} [c]{cc} 1 & \sigma-\sigma^{-1}\\ 0 & -1 \end{array} \right] \quad\text{and}\quad C_{2}=\left[ \begin{array} [c]{cc} 0 & \sigma^{-1}\\ \sigma & 0 \end{array} \right] \] are unitarily *congruent. Using Lemma \ref{Pearcy}, it suffices to observe that \begin{align*} \operatorname{tr}C_{1} & =0=\operatorname{tr}C_{2}\text{,}\\ \operatorname{tr}C_{1}^{2} & =2=\operatorname{tr}C_{2}^{2}\text{, and}\\ \operatorname{tr}C_{1}^{\ast}C_{1} & =\sigma^{2}+\sigma^{-2} =\operatorname{tr}C_{2}^{\ast}C_{2}\text{.} \end{align*} \hfill \end{proof} \medskip The general case is obtained by specialization of Theorems \ref{squared normal canonical 1} and \ref{squared normal canonical 2}. \begin{theorem} Let $A\in M_{n}$ and suppose $A^{2}$ is Hermitian. 
Then $A$ is unitarily *congruent to a direct sum of blocks, each of which is \begin{equation} \left[ \lambda\right] \text{, }\left[ i\lambda\right] \text{, or } \tau\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{, }\lambda,\mu,\tau\in\mathbb{R}\text{, }\tau>0\text{, } -1<\mu<1\text{.}\label{sH1} \end{equation} Alternatively, $A$ is unitarily *congruent to a direct sum of blocks, each of which is \[ \left[ \lambda\right] \text{, }\left[ i\lambda\right] \text{, or }\left[ \begin{array} [c]{cc} \tau\sqrt{\mu} & \tau\left( 1-\left\vert \mu\right\vert \right) \\ 0 & -\tau\sqrt{\mu} \end{array} \right] \text{, }\sqrt{\mu}\in\mathcal{D}_{+}\text{,} \] in which the parameters $\tau$ and $\mu$ satisfy the conditions in (\ref{sH1}). \end{theorem} \subsection{Unitary *cosquares} \begin{theorem} \label{Unitary*cosquaresTheorem}Suppose $A\in M_{n}$ is nonsingular. The following are equivalent:\medskip \newline (a) $A^{-\ast}A$ is unitary. \medskip \newline (b) $A$ is normal. \medskip \newline (c) $A$ is unitarily *congruent to a direct sum of blocks, each of which is \begin{equation} \left[ \lambda\right] \text{,\quad}\lambda\in\mathbb{C}\text{, }\lambda \neq0\text{.}\label{Normal*Congruence} \end{equation} \end{theorem} \begin{proof} $A^{-\ast}A$ is unitary if and only if \[ A^{-1}A^{\ast}=\left( A^{-\ast}A\right) ^{-1}=\left( A^{-\ast}A\right) ^{\ast}=A^{\ast}A^{-1} \] if and only if \[ AA^{\ast}=A^{\ast}A\text{.} \] The canonical blocks (\ref{Normal*Congruence}) follow from (\ref{*nonsingular} ) by specialization.\hfill \end{proof} \subsection{Projections and $\lambda$-projections} The unitary *congruence regularization algorithm described in Theorem \ref{UnitaryRegularization}(b) permits us to identify a unitary *congruence canonical form for $\lambda$-projections, that is, matrices $A\in M_{n}$ such that $A^{2}=\lambda A$. A $1$-projection is an ordinary projection, while a nonzero $0$-projection is a nilpotent matrix with index $2$. 
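A small numerical illustration of these notions may be helpful. The sketch below (Python with numpy; the idempotent $P$ is a hypothetical example chosen for illustration, not taken from the text) uses the observation that if $P^{2}=P$, then $A=\lambda P$ satisfies $A^{2}=\lambda^{2}P^{2}=\lambda A$, so every scalar multiple of an idempotent is a $\lambda$-projection.

```python
import numpy as np

lam = 2.0 - 1.0j

# An oblique (non-orthogonal) idempotent: P @ P == P.
P = np.array([[1.0, 3.0],
              [0.0, 0.0]])
A = lam * P                      # a lambda-projection: A @ A == lam * A

assert np.allclose(P @ P, P)
assert np.allclose(A @ A, lam * A)

# Singular values of A: the obliqueness of P produces one singular value
# strictly greater than |lam|, and the nullity of A contributes a zero.
sv = np.linalg.svd(A, compute_uv=False)
```

Here $A$ has one singular value exceeding $\left\vert \lambda\right\vert$ and one zero singular value; these data, together with $\lambda$, determine its unitary *congruence class.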
A complex matrix whose minimal polynomial is quadratic is a translation of a $\lambda$-projection. \begin{theorem} \label{LambdaProjection}Let $A\in M_{n}$ be singular and nonzero, let $\lambda\in\mathbb{C}$ be given, and suppose that $A^{2}=\lambda A$. Let $m_{1}$ be the nullity of $A$ and let $\tau_{1},\ldots,\tau_{m_{2}}$ be the singular values of $A$ that are strictly greater than $|\lambda|$ ($m_{2}=0$ is possible). Then \medskip \newline (a) $A$ is unitarily *congruent to \begin{equation} \lambda I_{n-m_{1}-m_{2}}\oplus {\displaystyle\bigoplus\limits_{i=1}^{m_{2}}} \left[ \begin{array} [c]{cc} \lambda & \sqrt{\tau_{i}^{2}-\left\vert \lambda\right\vert ^{2}}\\ 0 & 0 \end{array} \right] \oplus0_{m_{1}-m_{2}}\text{.}\label{ProjectorBlocks} \end{equation} This direct sum is uniquely determined by $\lambda$ and the singular values of $A$, which are $\tau_{1},\ldots,\tau_{m_{2}}$, $\left\vert \lambda\right\vert $ with multiplicity $n-m_{1}-m_{2}$, and $0$ with multiplicity $m_{1}$. \medskip \newline (b) For a given $\lambda$, two $\lambda$-projections of the same size are unitarily *congruent if and only if they have the same rank and the same singular values. \medskip \newline (c) Suppose $0\neq A\neq\lambda I$ and let $\nu=\min\{m_{1},n-m_{1}\}$. Then $\nu>0$ and the $\nu$ largest singular values of $A$ and $A-\lambda I$ are the same. In particular, the spectral norms of $A$ and $A-\lambda I$ are equal. \end{theorem} \begin{proof} (a) Let $F$ denote a reduced form (\ref{Reduced Form}) for $A$ under unitary *congruence, which is also a $\lambda$-projection. 
Compute \[ F^{2}=\left[ \begin{array} [c]{ccc} A^{\prime} & B & 0\\ C & D & \left[ \Sigma~0\right] \\ 0 & 0 & 0_{m_{1}} \end{array} \right] ^{2}=\left[ \begin{array} [c]{ccc} \bigstar & \bigstar & B\left[ \Sigma~0\right] \\ \bigstar & \bigstar & D\left[ \Sigma~0\right] \\ 0 & 0 & 0_{m_{1}} \end{array} \right] \] and \[ \lambda F=\left[ \begin{array} [c]{ccc} \bigstar & \bigstar & 0\\ \bigstar & \bigstar & \lambda\left[ \Sigma~0\right] \\ 0 & 0 & 0_{m_{1}} \end{array} \right] \text{.} \] Since the block $\left[ \Sigma~0\right] $ has full row rank, we conclude that $B=0$ and $D=\lambda I_{m_{2}}$. Moreover, $A^{\prime}$ must be nonsingular because $[A^{\prime}~B]$ has full row rank. Now examine \[ F^{2}=\left[ \begin{array} [c]{ccc} A^{\prime} & 0 & 0\\ C & \lambda I_{m_{2}} & \left[ \Sigma~0\right] \\ 0 & 0 & 0_{m_{1}} \end{array} \right] ^{2}=\left[ \begin{array} [c]{ccc} \left( A^{\prime}\right) ^{2} & 0 & 0\\ CA^{\prime}+\lambda C & \lambda^{2}I_{m_{2}} & \lambda\left[ \Sigma~0\right] \\ 0 & 0 & 0_{m_{1}} \end{array} \right] \] and \[ \lambda F=\left[ \begin{array} [c]{ccc} \lambda A^{\prime} & 0 & 0\\ \lambda C & \lambda^{2}I_{m_{2}} & \lambda\left[ \Sigma~0\right] \\ 0 & 0 & 0_{m_{1}} \end{array} \right] \text{,} \] so that $\left( A^{\prime}\right) ^{2}=\lambda A^{\prime}$ (and $A^{\prime}$ is nonsingular), and $CA^{\prime}+\lambda C=\lambda C$. The first of these identities tells us that $A^{\prime}=\lambda I_{n-m_{1}-m_{2}}$, and the second tells us that $C=0$. 
Thus, $A$ is unitarily *congruent to
\[
\lambda I_{n-m_{2}-m_{1}}\oplus\left[
\begin{array}
[c]{cc}
\lambda I_{m_{2}} & \left[ \Sigma~0\right] \\
0 & 0_{m_{1}}
\end{array}
\right] \text{,\quad}\Sigma=\operatorname{diag}(\sigma_{1},\ldots
,\sigma_{m_{2}})\text{, all }\sigma_{i}>0\text{,}
\]
which is unitarily *congruent (permutation similar) to
\[
\lambda I_{n-m_{2}-m_{1}}\oplus
{\displaystyle\bigoplus\limits_{i=1}^{m_{2}}}
\left[
\begin{array}
[c]{cc}
\lambda & \sigma_{i}\\
0 & 0
\end{array}
\right] \oplus0_{m_{1}-m_{2}}\text{.}
\]
(b) The singular values of the 2-by-2 blocks are $0$ and $\tau_{i}
=\sqrt{\left\vert \lambda\right\vert ^{2}+\sigma_{i}^{2}}>\left\vert
\lambda\right\vert $, so $\sigma_{i}=\sqrt{\tau_{i}^{2}-\left\vert
\lambda\right\vert ^{2}}$. \medskip \newline (c) $A-\lambda I$ is unitarily *congruent to
\[
0_{n-m_{2}-m_{1}}\oplus
{\displaystyle\bigoplus\limits_{i=1}^{m_{2}}}
\left[
\begin{array}
[c]{cc}
0 & \sigma_{i}\\
0 & -\lambda
\end{array}
\right] \oplus(-\lambda)I_{m_{1}-m_{2}}\text{,}
\]
so its singular values are: $\tau_{1},\ldots,\tau_{m_{2}}$, $|\lambda|$ with multiplicity $m_{1}-m_{2}$, and $0$ with multiplicity $n-m_{1}$.\hfill
\end{proof}

\medskip

Let $q(t)=(t-\lambda_{1})(t-\lambda_{2})$ be a given quadratic polynomial (possibly $\lambda_{1}=\lambda_{2}$). If $q(t)$ is the minimal polynomial of a given $A\in M_{n}$, then $q(A)=0$, $A-\lambda_{1}I$ is a $\lambda$-projection with $\lambda:=\lambda_{2}-\lambda_{1}$, and $A$ is not a scalar matrix. Theorem \ref{LambdaProjection} gives a canonical form to which $A-\lambda_{1}I$ (and hence $A$ itself) can be reduced by unitary *congruence.

\begin{corollary}
\label{QuadraticMinimalPoly}Suppose the minimal polynomial of a given $A\in M_{n}$ has degree two, let $\lambda_{1},\lambda_{2}$ be the eigenvalues of $A$ with $|\lambda_{1}|\geq|\lambda_{2}|$, and let $\sigma_{1},\ldots,\sigma_{m}$ be the singular values of $A$ that are strictly greater than $|\lambda_{1}|$ ($m=0$ is possible). Let the respective multiplicities of $\lambda_{1}$ and $\lambda_{2}$ as eigenvalues of $A$ be $n-d$ and $d$; if $\lambda_{1}=\lambda_{2}$, let $d=m$. Then: \medskip \newline (a) $A$ is unitarily *congruent to
\begin{equation}
\lambda_{1}I_{n-d-m}\oplus
{\displaystyle\bigoplus\limits_{i=1}^{m}}
\left[
\begin{array}
[c]{cc}
\lambda_{1} & \gamma_{i}\\
0 & \lambda_{2}
\end{array}
\right] \oplus\lambda_{2}I_{d-m}\text{,}\label{QuadraticCanonical}
\end{equation}
in which each
\[
\gamma_{i}=\sqrt{\sigma_{i}^{2}+|\lambda_{1}\lambda_{2}|^{2}\sigma_{i}
^{-2}-|\lambda_{1}|^{2}-|\lambda_{2}|^{2}}>0\text{.}
\]
The direct sum (\ref{QuadraticCanonical}) is uniquely determined, up to permutation of summands, by the eigenvalues and singular values of $A$. The singular values of $A$ are $\sigma_{1},\ldots,\sigma_{m}$, $|\lambda
_{1}\lambda_{2}|\sigma_{1}^{-1},\ldots,|\lambda_{1}\lambda_{2}|\sigma_{m}
^{-1}$, $|\lambda_{1}|$ with multiplicity $n-d-m$, and $|\lambda_{2}|$ with multiplicity $d-m$. \medskip \newline (b) Two square complex matrices that have quadratic minimal polynomials are unitarily *congruent if and only if they have the same eigenvalues and the same singular values.
\end{corollary}

\begin{proof}
If $A$ is singular, then $\lambda_{2}=0$, $A$ is a $\lambda_{1}$-projection, and the validity of the assertions of the corollary is ensured by Theorem \ref{LambdaProjection}. Now assume that $A$ is nonsingular. The hypotheses ensure that $A-\lambda_{1}I$ is singular and nonzero, and that it is a $\lambda$-projection with $\lambda:=\lambda_{2}-\lambda_{1}$. It is therefore unitarily *congruent to a direct sum of the form (\ref{ProjectorBlocks}) with $m_{1}=n-d$ and $m=m_{2}$; after a translation by $\lambda_{1}I$, we find that $A$ is unitarily *congruent to a direct sum of the form (\ref{QuadraticCanonical}) in which each $\gamma_{i}=(\tau_{i}^{2}-\left\vert \lambda\right\vert ^{2})^{1/2}>0$.
Therefore, $A$ has $n-d-m$ singular values equal to $|\lambda_{1}|$ and $d-m$ singular values equal to $|\lambda_{2}|$. In addition, corresponding to each 2-by-2 block in (\ref{QuadraticCanonical}) is a pair of singular values $\left( \sigma_{i},\rho_{i}\right) $ of $A$ such that \begin{equation} \sigma_{i}\geq\rho_{i}>0\text{ and }\sigma_{i}\rho_{i}=|\lambda_{1}\lambda _{2}|\label{constraints} \end{equation} for each $i=1,\ldots,m$. Since the spectral norm always dominates the spectral radius, we have $\sigma_{i}\geq|\lambda_{1}|$ for each $i=1,\ldots,m$; calculating the Frobenius norm tells us that \[ \sigma_{i}^{2}+\rho_{i}^{2}=|\lambda_{1}|^{2}+|\lambda_{2}|^{2}+\gamma_{i} ^{2}\text{.} \] If $\sigma_{i}=|\lambda_{1}|$ then (\ref{constraints}) ensures that $\rho _{i}=|\lambda_{2}|$, which is impossible since $\gamma_{i}>0$. Thus, $A$ has $m$ singular values $\sigma_{1},\ldots,\sigma_{m}$ that are strictly greater than $|\lambda_{1}|$, and $m$ corresponding singular values $\rho_{1} ,\ldots,\rho_{m}$ that are strictly less than $|\lambda_{1}|$; each pair $\left( \sigma_{i},\rho_{i}\right) $ satisfies (\ref{constraints}). Thus, the parameters $\gamma_{i}$ in (\ref{QuadraticCanonical}) satisfy \[ \gamma_{i}^{2}=\sigma_{i}^{2}+\rho_{i}^{2}-|\lambda_{1}|^{2}-|\lambda_{2} |^{2}=\sigma_{i}^{2}+|\lambda_{1}\lambda_{2}|^{2}\sigma_{i}^{-2}-|\lambda _{1}|^{2}-|\lambda_{2}|^{2}\text{.} \] If two complex matrices of the same size have quadratic minimal polynomials, and if they have the same eigenvalues and singular values, then each is unitarily *congruent to a direct sum of the form (\ref{QuadraticCanonical}) in which the parameters $\lambda_{1},\lambda_{2},d,m$, and $\{\gamma_{1} ,\ldots,\gamma_{m}\}$ are the same; the two direct sums must be the same up to permutation of their summands. 
Conversely, \textit{any} two unitarily *congruent matrices have the same eigenvalues and singular values.\hfill
\end{proof}

\medskip

Let $p(t)=t^{2}-2at+b$ be a given monic polynomial of degree two. Corollary \ref{QuadraticMinimalPoly} tells us that if $p(A)=0$, then $A$ is unitarily *congruent to a direct sum of certain special 1-by-1 and 2-by-2 blocks. We can draw a similar conclusion under the weaker hypothesis that $p(A)$ is normal.

\begin{proposition}
\label{QuadraticPolynomialNormal}Let $A\in M_{n}$ and suppose there are $a,b\in\mathbb{C}$ such that $N=A^{2}-2aA+bI$ is normal. Then $A$ is unitarily *congruent to a direct sum of blocks, each of which is
\[
\left[ \lambda\right] \quad\text{or \quad}\left[
\begin{array}
[c]{cc}
a+\nu & r\\
0 & a-\nu
\end{array}
\right] \text{,\quad}\lambda\in\mathbb{C}\text{, }r\in\mathbb{R}\text{, }r>0\text{, and }\nu\in\mathcal{D}_{+}\text{.}
\]
\end{proposition}

\begin{proof}
A calculation reveals that $(A-aI)^{2}=N+(a^{2}-b)I$, which is normal. The conclusion follows from applying Theorem \ref{squared normal canonical 2} to the squared normal matrix $A-aI$.\hfill
\end{proof}

\subsection{Characterizations}

Corollary \ref{ConjugateNormalUnitaryCongruenceCorollary} tells us that a conjugate normal matrix is unitarily congruent to a direct sum of a zero matrix and positive scalar multiples of real orthogonal matrices; such a matrix is real and normal. The following theorem gives additional characterizations of conjugate normal matrices.

\begin{theorem}
\label{CconjugateNormal}Let $A\in M_{n}$ and let $A=PU=UQ$ be left and right polar decompositions. Let $\sigma_{1}>\sigma_{2}>\cdots>\sigma_{d}\geq0$ be the ordered distinct singular values of $A$ with respective multiplicities $n_{1},\ldots,n_{d}$ (if $A=0$ let $d=1$ and $\sigma_{1}=0$). Let $A=\mathcal{S}+\mathcal{C}$, in which $\mathcal{S}=\left( A+A^{T}\right) /2$ is symmetric and $\mathcal{C}=\left( A-A^{T}\right) /2$ is skew symmetric.
The following are equivalent: \medskip \newline (a) $\mathcal{S\bar{C}}=\mathcal{C\bar{S}}$. \medskip \newline (b) $A$ is conjugate normal. \medskip \newline (c) $Q=\bar{P}$, that is, $A=PU=U\bar{P}$. \medskip \newline (d) $PA=A\bar{P}$. \medskip \newline (e) There are unitary matrices $W_{1},\ldots,W_{d}$ with respective sizes $n_{1},\ldots,n_{d}$ such that $A$ is unitarily congruent to
\begin{equation}
\sigma_{1}W_{1}\oplus\cdots\oplus\sigma_{d}W_{d}.\label{e1e}
\end{equation}
(f) There are real orthogonal matrices $Q_{1},\ldots,Q_{d}$ with respective sizes $n_{1},\ldots,n_{d}$ such that $A$ is unitarily congruent to the real normal matrix
\begin{equation}
\sigma_{1}Q_{1}\oplus\cdots\oplus\sigma_{d}Q_{d}.\label{e2e}
\end{equation}
\end{theorem}

\begin{proof}
(a) $\Leftrightarrow$ (b): Compute
\begin{align*}
A^{\ast}A & =\left( \mathcal{\bar{S}}-\mathcal{\bar{C}}\right) \left(
\mathcal{S}+\mathcal{C}\right) =\mathcal{\bar{S}S+\bar{S}C-\bar{C}S-\bar{C}
C}\\
\overline{AA^{\ast}} & =\left( \mathcal{\bar{S}}+\mathcal{\bar{C}}\right)
\left( \mathcal{S}-\mathcal{C}\right) =\mathcal{\bar{S}S-\bar{S}C+\bar
{C}S-\bar{C}C}\\
A^{\ast}A-\overline{AA^{\ast}} & =2\left( \mathcal{\bar{S}C-\bar{C}
S}\right) \text{.}
\end{align*}
Thus, $A^{\ast}A=\overline{AA^{\ast}}$ if and only if $\mathcal{\bar{S}C}=\mathcal{\bar{C}S}$, that is (taking complex conjugates), if and only if $\mathcal{S\bar{C}}=\mathcal{C\bar{S}}$.\medskip \newline (b) $\Rightarrow$ (c): If $p(t)$ is any polynomial such that $p(\sigma_{i}^{2})=\sigma_{i}$ for each $i=1,\ldots,d$, then $Q=p\left( A^{\ast}A\right) $ and $P=p(AA^{\ast})$. If $A^{\ast}A=\overline{AA^{\ast}}$ then
\[
Q=p\left( A^{\ast}A\right) =p(\overline{AA^{\ast}})=p\left( \left(
AA^{\ast}\right) ^{T}\right) =p\left( AA^{\ast}\right) ^{T}=P^{T}=\bar
{P}\text{.}
\]
(c) $\Rightarrow$ (d): $A\bar{P}=P(U\bar{P})=P(PU)=PA$.\medskip \newline (d) $\Rightarrow$ (c): Let $P=V\Lambda V^{\ast}$ in which $V$ is unitary and $\Lambda$ is nonnegative diagonal. Let $W=V^{\ast}U\bar{V}$.
Then \[ A\bar{P}=PU\bar{P}=(V\Lambda V^{\ast})U(\bar{V}\Lambda V^{T})=V(\Lambda W\Lambda)V^{T} \] and \[ PA=P^{2}U=V\Lambda^{2}V^{\ast}U=V(\Lambda^{2}W)V^{T}\text{,} \] so $\Lambda W\Lambda=\Lambda^{2}W$. Lemma \ref{Commutivity Implication} ensures that $\Lambda W=W\Lambda$, so \[ PU=V\Lambda V^{\ast}U\bar{V}V^{T}=V\Lambda WV^{T}=VW\Lambda V^{T}=U\bar {V}\Lambda V^{T}=U\bar{P}\text{.} \] \medskip(c) $\Rightarrow$ (e): Suppose $P=V\Lambda V^{\ast}$, in which $\Lambda=\sigma_{1}I_{n_{1}}\oplus\cdots\oplus\sigma_{d}I_{n_{d}}$ and $V$ is unitary. If $Q=\bar{P}$ then \[ A=PU=V\Lambda V^{\ast}U=U\bar{V}\Lambda V^{T}=U\bar{P}=UQ=A \] and hence \[ \Lambda\left( V^{\ast}U\bar{V}\right) =\left( V^{\ast}U\bar{V}\right) \Lambda\text{,} \] which implies that the unitary matrix $V^{\ast}U\bar{V}=W_{1}\oplus \cdots\oplus W_{d}$ is block diagonal; each $W_{i}$ is unitary and has size $n_{i}$. Thus, \[ U=V\left( W_{1}\oplus\cdots\oplus W_{d}\right) V^{T} \] and \begin{align*} A & =PU=V\Lambda V^{\ast}U=V\Lambda V^{\ast}V\left( W_{1}\oplus\cdots\oplus W_{d}\right) V^{T}\\ & =V\left( \sigma_{1}W_{1}\oplus\cdots\oplus\sigma_{d}W_{d}\right) V^{T}\text{.} \end{align*} (e) $\Rightarrow$ (f): Corollary \ref{UnitarySpecialCase} ensures that each $W_{j}$ in (\ref{e1e}) is unitarily congruent to a real orthogonal matrix.\medskip \newline (f) $\Rightarrow$ (a): Let $Z$ denote the direct sum (\ref{e2e}) and suppose $A=UZU^{T}$ for some unitary $U$. Then $\mathcal{S}= \frac12 U(Z+Z^{T})U^{T}$ and $\mathcal{C}= \frac12 U(Z-Z^{T})U^{T}$, so it suffices to show that $Z$ commutes with $Z^{T}$. 
But each $Q_{i}$ is real orthogonal, so \[ ZZ^{T}=\sigma_{1}^{2}Q_{1}Q_{1}^{T}\oplus\cdots\oplus\sigma_{d}^{2}Q_{d} Q_{d}^{T}=\sigma_{1}^{2}I_{n_{1}}\oplus\cdots\oplus\sigma_{d}^{2}I_{n_{d} }=Z^{T}Z\text{.} \] \hfill \end{proof} \medskip For normal matrices, an analog of Theorem \ref{CconjugateNormal} is the following set of equivalent statements: \medskip \newline (a) $HK=KH$, in which $H=(A+A^{\ast})/2$ and $K=(A-A^{\ast})/(2i)$. \medskip \newline (b) $A$ is normal. \medskip \newline (c) $Q=P$, that is, $A=PU=UP$. \medskip \newline (d) $PA=AP$. \medskip \newline (e) $A$ is unitarily *congruent to a direct sum of the form (\ref{e1e}), in which $\sigma_{1}>\cdots>\sigma_{d}\geq0$ are the distinct singular values of $A$ and $W_{1},\ldots,W_{d}$ are unitary. \medskip The following theorem about conjugate normal matrices is an analog of a known result about *congruence of normal matrices \cite{Ikramov} (and, more generally, about unitoid matrices \cite[p. 289]{JF Sylvester}). \begin{theorem} (a) A nonsingular complex matrix is congruent to a conjugate normal matrix if and only if it is congruent to a unitary matrix. \medskip \newline (b) A singular complex matrix is congruent to a conjugate normal matrix if and only if it is congruent to a direct sum of a unitary matrix and a zero matrix.\medskip \newline (c) Each conjugate normal matrix $A\in M_{n}$ is congruent to a direct sum, uniquely determined up to permutation of summands, of the form \begin{equation} I_{r-2q}\oplus {\displaystyle\bigoplus\limits_{j=1}^{q}} \left[ \begin{array} [c]{cc} 0 & 1\\ e^{i\theta_{j}} & 0 \end{array} \right] \oplus0_{n-r}\text{,\quad}0<\theta_{j}\leq\pi\text{,} \label{conjugateNormalCanonical} \end{equation} in which $r=\operatorname{rank}A$ and there is one block $H_{2}(e^{i\theta _{j}})$ corresponding to each eigenvalue of $\bar{A}A$ that lies on the open ray $\{te^{i\theta_{j}}:t>0\}$. The summand $I_{r-2q}$ corresponds to the $r-2q$ positive eigenvalues of $\bar{A}A$. 
\medskip \newline (d) Two conjugate normal matrices $A$ and $B$ of the same size are congruent if and only if for each $\theta\in\lbrack0,\pi]$, $\bar{A}A$ and $\bar{B}B$ have the same number of eigenvalues on each open ray $\{te^{i\theta}:t>0\}$. \end{theorem} \begin{proof} Only assertion (d) requires comment. If $A$ is conjugate normal and nonsingular, the decomposition (\ref{e1e}) ensures that $\bar{A}A$ is unitarily similar to (and hence has the same eigenvalues as) \[ \mathcal{W}=\sigma_{1}^{2}\overline{W_{1}}W_{1}\oplus\cdots\oplus\sigma _{d}^{2}\overline{W_{d}}W_{d}\text{.} \] Of course, $\mathcal{W}$ and the unitary matrix \[ W_{1}\overline{W_{1}}\oplus\cdots\oplus W_{d}\overline{W_{d}} \] have the same number of eigenvalues on each open ray $\{te^{i\theta}:t>0\}$; this number is the same as the number of blocks $H_{2}(e^{i\theta_{j}})$ in the direct sum (\ref{conjugateNormalCanonical}). The argument is similar if $A$ is singular; just omit the last direct summand $\sigma_{d}W_{d}$.\hfill \end{proof} \medskip There is an analog of Theorem \ref{CconjugateNormal} for congruence normal matrices. \begin{theorem} \label{CongruenceNormalCharacterization}Let $A\in M_{n}$ and let $A=PU=UQ$ be left and right polar decompositions. Let $A=\mathcal{S}+\mathcal{C}$, in which $\mathcal{S}=(A+A^{T})/2$ is symmetric and $\mathcal{C}=(A-A^{T})/2$ is skew symmetric. The following are equivalent: \medskip \newline (a) $\mathcal{\bar{S}S}+\mathcal{\bar{C}C}$ commutes with $\mathcal{\bar{S} C}+\mathcal{\bar{C}S}$. \medskip \newline (b) $A$ is congruence normal. \medskip \newline (c) $A\bar{P}=\bar{Q}A$. \medskip \newline (d) $\{\bar{P},Q,\bar{U}U\}$ is a commuting family. \end{theorem} \begin{proof} (a) $\Leftrightarrow$ (b): A computation reveals that the Hermitian part of $\bar{A}A$ is $\mathcal{\bar{S}S}+\mathcal{\bar{C}C}$, while the skew-Hermitian part is $\mathcal{\bar{S}C}+\mathcal{\bar{C}S}$. 
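In detail, since $\mathcal{S}^{\ast}=\mathcal{\bar{S}}$ and $\mathcal{C}^{\ast}=-\mathcal{\bar{C}}$, we have
\[
\bar{A}A=\left( \mathcal{\bar{S}}+\mathcal{\bar{C}}\right) \left( \mathcal{S}+\mathcal{C}\right) =\left( \mathcal{\bar{S}S}+\mathcal{\bar{C}C}\right) +\left( \mathcal{\bar{S}C}+\mathcal{\bar{C}S}\right) \text{,}
\]
in which $\left( \mathcal{\bar{S}S}+\mathcal{\bar{C}C}\right) ^{\ast}=\mathcal{\bar{S}S}+\mathcal{\bar{C}C}$ is Hermitian and $\left( \mathcal{\bar{S}C}+\mathcal{\bar{C}S}\right) ^{\ast}=-\left( \mathcal{\bar{S}C}+\mathcal{\bar{C}S}\right) $ is skew-Hermitian.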
Of course, $\bar{A}A$ is normal if and only if its Hermitian and skew-Hermitian parts commute. \medskip \newline (b) $\Leftrightarrow$ (c): Theorem \ref{NonsingularEquivalence} tells us that if $A$ is congruence normal then $A\left( \bar{P}\right) ^{2}=\left( \bar{Q}\right) ^{2}A$, which is the same as $A\left( P^{2}\right) ^{T}=\left( Q^{2}\right) ^{T}A$, which implies that $Ap(P^{2})^{T}=p(Q^{2})^{T}A$ for any polynomial $p(t)$. Choose $p(t)$ such that $p(t)=+\sqrt{t}$ on the spectrum of $P^{2}$ (and hence also on the spectrum of $Q^{2}$), and conclude that $AP^{T}=Q^{T}A$, or $A\bar{P}=\bar{Q}A$. The converse implication is immediate:
\[
A\bar{P}=\bar{Q}A\Rightarrow A\left( \bar{P}\right) ^{2}=\left( \bar
{Q}\right) ^{2}A\text{.}
\]
(b) $\Rightarrow$ (d): Suppose $V$ is unitary, let $\mathcal{A}:=VAV^{T}$, and consider the factors of the left and right polar decompositions $\mathcal{A}=\mathcal{PU}=\mathcal{UQ}$. One checks that $\mathcal{P}=VPV^{\ast}$, $\mathcal{Q}=\bar{V}QV^{T}$, and $\mathcal{U}=VUV^{T}$. Moreover, $\{\bar{P},Q,\bar{U}U\}$ is a commuting family if and only if $\{\mathcal{\bar{P}},\mathcal{Q},\overline{\mathcal{U}}\mathcal{U}\}$ is a commuting family. Thus, if $A$ is congruence normal, there is no loss of generality in assuming that it is a direct sum of blocks of the form (\ref{gcon0}). Blocks of the first type in (\ref{gcon0}) are 1-by-1, so commutation is trivial. For blocks of the second type, the polar factors are
\[
P=\tau\left[
\begin{array}
[c]{cc}
1 & 0\\
0 & |\mu|
\end{array}
\right] \text{, }Q=\tau\left[
\begin{array}
[c]{cc}
|\mu| & 0\\
0 & 1
\end{array}
\right] \text{, and }U=\left[
\begin{array}
[c]{cc}
0 & 1\\
e^{i\theta} & 0
\end{array}
\right] \text{, so }\bar{U}U=\left[
\begin{array}
[c]{cc}
e^{i\theta} & 0\\
0 & e^{-i\theta}
\end{array}
\right] \text{.}
\]
For both types of blocks, $\{\bar{P},Q,\bar{U}U\}$ is a diagonal family, so it is a commuting family.
\medskip \newline (d) $\Rightarrow$ (b): If $\{\bar{P},Q,\bar{U}U\}$ is a commuting family, then
\begin{align*}
\bar{A}A & =\bar{P}\bar{U}UQ=\bar{P}\left( \bar{U}U\right) Q=\left(
\bar{U}U\right) \left( \bar{P}Q\right) \\
& =\left( \bar{P}Q\right) \left( \bar{U}U\right) \text{.}
\end{align*}
Since $\bar{P}$ and $Q$ are commuting positive semidefinite Hermitian matrices, $\bar{P}Q$ is positive semidefinite Hermitian. But $\bar{U}U$ is unitary without further assumptions, so we have a polar decomposition of $\bar{A}A$ in which the factors commute. This ensures that $\bar{A}A$ is normal.\hfill
\end{proof}

\medskip A calculation reveals that if $\mathcal{S\bar{C}}=\mathcal{C\bar{S}}$ then $\mathcal{\bar{S}S+\bar{C}C}$ commutes with $\mathcal{\bar{S}C+\bar{C}S}$, and that if $PU=U\bar{P}$ then $\{\bar{P},Q,\bar{U}U\}$ is a commuting family. Thus, Theorems \ref{CconjugateNormal} and \ref{CongruenceNormalCharacterization} permit us to conclude (again) that every conjugate normal matrix is congruence normal. For squared normal matrices, an analog of Theorem \ref{CongruenceNormalCharacterization} is the following set of equivalent statements: \medskip \newline (a) $H^{2}-K^{2}$ commutes with $HK+KH$, in which $H=(A+A^{\ast})/2$ and $K=(A-A^{\ast})/(2i)$. \medskip \newline (b) $A^{2}$ is normal. \medskip \newline (c) $AP=QA$. \medskip \newline (d) $\{P,Q,U^{2}\}$ is a commuting family. \medskip Our final characterization links the parallel expositions we have given for squared normality and congruence normality.

\begin{theorem}
\label{BarBlocks}Let $A\in M_{n}$ and let
\begin{equation}
\mathcal{A}=\left[
\begin{array}
[c]{cc}
0 & A\\
\bar{A} & 0
\end{array}
\right] \text{.}\label{BarBlockDef}
\end{equation}
Then: \newline (a) $A^{2}$ is normal if and only if $\mathcal{A}$ is congruence normal. \newline (b) $A$ is congruence normal if and only if $\mathcal{A}^{2}$ is normal.
\newline (c) $A$ is normal if and only if $\mathcal{A}$ is conjugate normal. \newline (d) $A$ is conjugate normal if and only if $\mathcal{A}$ is normal. \newline (e) $\mathcal{A}\overline{\mathcal{A}}\mathcal{A}^{T}=\mathcal{A}^{T} \overline{\mathcal{A}}\mathcal{A}$ if and only if $A^{\ast}A^{2}=A^{2}A^{\ast }$. \newline (f) $\mathcal{A}^{\ast}\mathcal{A}^{2}=\mathcal{A}^{2}\mathcal{A}^{\ast}$ if and only if $A\bar{A}A^{T}=A^{T}\bar{A}A$. \medskip \newline Now suppose that $A$ is nonsingular. Then: \newline (g) $A^{-T}A$ is normal (respectively, Hermitian, unitary) if and only if $\mathcal{A}^{-\ast}\mathcal{A}$ is normal (respectively, Hermitian, unitary). \newline (h) $A^{-\ast}A$ is normal (respectively, Hermitian, unitary) if and only if $\mathcal{A}^{-T}\mathcal{A}$ is normal (respectively, Hermitian, unitary). \end{theorem} \begin{proof} Each assertion follows from a computation. For example, (a) follows from \[ \overline{\mathcal{A}}\mathcal{A}=\left[ \begin{array} [c]{cc} \bar{A}^{2} & 0\\ 0 & A^{2} \end{array} \right] \text{,} \] (g) follows from \[ \mathcal{A}^{-\ast}\mathcal{A}=\left[ \begin{array} [c]{cc} \overline{A^{-T}A} & 0\\ 0 & A^{-T}A \end{array} \right] \text{,} \] and (h) follows from \[ \mathcal{A}^{-T}\mathcal{A}=\left[ \begin{array} [c]{cc} \overline{A^{-\ast}A} & 0\\ 0 & A^{-\ast}A \end{array} \right] \text{.} \] \hspace*{\fill} \end{proof} Using Theorem \ref{BarBlocks}, we can show that Theorems \ref{NonsingularEquivalence} and \ref{Nonsingular*equivalence} are actually equivalent: First apply Theorem \ref{Nonsingular*equivalence} to $\mathcal{A},$ which tells us that $\mathcal{A}^{2}$ is normal if and only if $\mathcal{A}^{\ast}\mathcal{A}^{2}=\mathcal{A}^{2}\mathcal{A}^{\ast}$ if and only if $\mathcal{A}^{-\ast}\mathcal{A}$ is normal (if $A$ is nonsingular). 
Theorem \ref{BarBlocks} (b), (f), and (g) now ensure that $A$ is congruence normal if and only if $A\bar{A}A^{T}=A^{T}\bar{A}A$ if and only if $A^{-T}A$ is normal (if $A$ is nonsingular). Thus, Theorem \ref{Nonsingular*equivalence} implies Theorem \ref{NonsingularEquivalence}. The reverse implication follows from applying Theorem \ref{NonsingularEquivalence} to $\mathcal{A}$ and using Theorem \ref{BarBlocks} (a), (e), and (h). A similar argument shows that the equivalence of Theorem \ref{HermitianCosquaresTheorem} (a) and (b) (respectively, Theorem \ref{NonsingularConjugateNormal} (a) and (b)) implies and is implied by the equivalence of Theorem \ref{Hermitian*cosquaresTheorem} (a) and (b) (respectively, Theorem \ref{Unitary*cosquaresTheorem} (a) and (b)). \subsection{The classification problem for cubed normals is unitarily wild} We have seen that there are simple canonical forms for squared normal matrices under unitary *congruence, and also for congruence normal matrices under unitary congruence. However, the situation for cubed normal matrices under unitary *congruence (and for matrices $A$ such that $A\bar{A}A$ is normal, under unitary congruence) is completely different; the classification problems in these cases are very difficult. A problem involving complex matrices is said to be \textit{unitarily wild }if it contains the problem of classifying arbitrary square complex matrices under unitary *congruence. Since the latter problem contains the problem of classifying an arbitrary system of linear mappings on unitary spaces \cite[Section 2.3]{VVSquiver}, it is reasonable to regard any unitarily wild problem as hopeless (by analogy with nonunitary matrix problems that contain the problem of classifying pairs of matrices under similarity \cite{B+S}). 
Two lemmas are useful in showing that the problems of classifying (a) cubed normal matrices up to unitary *congruence and (b) matrices $A$ such that $A\bar{A}A$ is normal up to unitary congruence are both unitarily wild.

\begin{lemma}
\label{Basic} Let $\lambda_{1},\ldots,\lambda_{d}$ be given distinct complex numbers and let $F,F^{^{\prime}}\in M_{n}$ be given conformally partitioned block upper triangular matrices
\[
F=\left[
\begin{array}
[c]{cccc}
\lambda_{1}I_{n_{1}} & F_{12} & \cdots & F_{1d}\\
& \lambda_{2}I_{n_{2}} & \cdots & F_{2d}\\
& & \ddots & \vdots\\
0 & & & \lambda_{d}I_{n_{d}}
\end{array}
\right] ,\qquad F^{^{\prime}}=\left[
\begin{array}
[c]{cccc}
\lambda_{1}I_{n_{1}} & F_{12}^{^{\prime}} & \cdots & F_{1d}^{^{\prime}}\\
& \lambda_{2}I_{n_{2}} & \cdots & F_{2d}^{^{\prime}}\\
& & \ddots & \vdots\\
0 & & & \lambda_{d}I_{n_{d}}
\end{array}
\right]
\]
in which $n_{1}+n_{2}+\cdots+n_{d}=n$. If $S\in M_{n}$ and $SF=F^{^{\prime}}S$, then $S$ is block upper triangular conformal to $F$. If, in addition, $S$ is normal, then $S$ is block diagonal conformal to $F$.
\end{lemma}

\begin{proof}
Partition $S=[S_{ij}]_{i,j=1}^{d}$ conformally to $F$. Compare corresponding $(i,j)$ blocks of $SF$ and $F^{^{\prime}}S$ in the order $(d,1),(d,2),\ldots,(d,d-1)$ to conclude that each of $S_{d,1},S_{d,2},\ldots,S_{d,d-1}$ is a zero block. Then continue by comparing the blocks in positions $(d-1,1),(d-1,2),\ldots,(d-1,d-2)$, etc.
If $S$ is normal and block upper triangular, then Lemma \ref{Zero Blocks}(a) ensures that it is block diagonal.\hspace*{\fill} \end{proof} \begin{lemma} \label{SVDunique}Let $\sigma_{1}>\sigma_{2}>\cdots>\sigma_{d}\geq0$ and $\sigma_{1}^{^{\prime}}>\sigma_{2}^{^{\prime}}>\cdots>\sigma_{d}^{^{\prime} }\geq0$ be given nonnegative real numbers, let $D,D^{^{\prime}}\in M_{n}$ be given conformally partitioned block diagonal matrices \[ D=\left[ \begin{array} [c]{cccc} \sigma_{1}I_{n_{1}} & & & \\ & \sigma_{2}I_{n_{2}} & & \\ & & \ddots & \\ & & & \sigma_{d}I_{n_{d}} \end{array} \right] ,\qquad D^{^{\prime}}=\left[ \begin{array} [c]{cccc} \sigma_{1}^{^{\prime}}I_{n_{1}} & & & \\ & \sigma_{2}^{^{\prime}}I_{n_{2}} & & \\ & & \ddots & \\ & & & \sigma_{d}^{^{\prime}}I_{n_{d}} \end{array} \right] \] in which $n_{1}+n_{2}+\cdots+n_{d}=n$. If $U,V\in M_{n}$ are unitary and $DU=VD^{^{\prime}}$, then $\sigma_{i}=\sigma_{i}^{^{\prime}}$ for each $i=1,\ldots,d$, and there are unitary matrices $W_{1}\in M_{n_{1}} ,\ldots,W_{d-1}\in M_{n_{d-1}}$ and $\tilde{U},\tilde{V}\in M_{n_{d}}$ such that $U=W_{1}\oplus\cdots\oplus W_{d-1}\oplus\tilde{U}$ and $V=W_{1} \oplus\cdots\oplus W_{d-1}\oplus\tilde{V}$; if $\sigma_{d}>0$ then $\tilde {U}=\tilde{V}$. \end{lemma} \begin{proof} Let $A=DU=VD^{^{\prime}}$. The eigenvalues of $AA^{\ast}=D^{2}$ and $A^{\ast }A=(D^{^{\prime}})^{2}$ are the same, so $D=D^{^{\prime}}$. Moreover, \[ AA^{\ast}=\left( DU\right) \left( DU\right) ^{\ast}=D^{2}=VD^{2}V^{\ast} \] and \[ A^{\ast}A=U^{\ast}D^{2}U=\left( VD\right) ^{\ast}\left( VD\right) =D^{2}, \] so $D^{2}$ commutes with both $U$ and $V$ and hence each of $U$ and $V$ is block diagonal conformal to $D$. 
The identity $DU=VD$ ensures that the diagonal blocks of $U$ and $V$ corresponding to each $\sigma_{i}>0$ are equal.\hfill
\end{proof}

\begin{theorem}
\label{WildCube}The problem of classifying square complex matrices $A$ up to unitary *congruence is unitarily wild in both of the following two cases: \medskip \newline (a) $A^{3}=0$. \medskip \newline (b) $A$ is nonsingular and $A^{3}$ is normal.
\end{theorem}

\begin{proof}
(a) Let $F,F^{^{\prime}}\in M_{k}$ be given. Define
\begin{equation}
A=\left[
\begin{array}
[c]{ccc}
0_{k} & I_{k} & F\\
0_{k} & 0_{k} & I_{k}\\
0_{k} & 0_{k} & 0_{k}
\end{array}
\right] \quad\text{and\quad}A^{^{\prime}}=\left[
\begin{array}
[c]{ccc}
0_{k} & I_{k} & F^{^{\prime}}\\
0_{k} & 0_{k} & I_{k}\\
0_{k} & 0_{k} & 0_{k}
\end{array}
\right] \text{,}\label{AandAprime}
\end{equation}
so that $A^{3}=(A^{^{\prime}})^{3}=0$ for any choices of $F$ and $F^{^{\prime}}$. Suppose $A$ and $A^{^{\prime}}$ are unitarily *congruent, that is, suppose there is a unitary $U=[U_{ij}]_{i,j=1}^{3}\in M_{3k}$, partitioned conformally to $A$, such that $AU=UA^{^{\prime}}$. Then $A^{2}U=U(A^{^{\prime}})^{2}$; the $1,3$ blocks of $A^{2}$ and $(A^{^{\prime}})^{2}$ are $I_{k}$ and all their other blocks are $0_{k}$. Comparison of the first block rows and the third block columns of both sides of $A^{2}U=U(A^{^{\prime}})^{2}$ reveals that $U_{11}=U_{33}$ and $U_{31}=U_{32}=U_{21}=0_{k}$. It follows that $U$ is block diagonal since it is normal and block upper triangular. Comparison of the $1,3$ blocks of both sides of $AU=UA^{^{\prime}}$ shows that $FU_{22}=U_{11}F^{^{\prime}}$; comparison of the $1,2$ blocks shows that $U_{11}=U_{22}$. Thus, $A$ and $A^{^{\prime}}$ are unitarily *congruent if and only if $F$ and $F^{^{\prime}}$ are unitarily *congruent. \medskip \newline (b) Let $F,F^{^{\prime}}\in M_{k}$ be given.
Define the two nonsingular matrices
\[
A=\left[
\begin{array}
[c]{ccc}
\lambda I_{2k} & 0 & A_{13}\\
0 & \mu I_{3k} & G\\
0 & 0 & I_{3k}
\end{array}
\right] \quad\text{and\quad}A^{^{\prime}}=\left[
\begin{array}
[c]{ccc}
\lambda I_{2k} & 0 & A_{13}^{^{\prime}}\\
0 & \mu I_{3k} & G\\
0 & 0 & I_{3k}
\end{array}
\right] \text{,}
\]
in which $\lambda=(-1+i\sqrt{3})/2$ and $\mu=\bar{\lambda}$ are the two distinct roots of $t^{2}+t+1=0$,
\[
G=\left[
\begin{array}
[c]{ccc}
3I_{k} & 0 & 0\\
0 & 2I_{k} & 0\\
0 & 0 & I_{k}
\end{array}
\right] \text{,\quad}A_{13}=\left[
\begin{array}
[c]{ccc}
I_{k} & I_{k} & F\\
0_{k} & I_{k} & I_{k}
\end{array}
\right] \text{,\quad and }A_{13}^{^{\prime}}=\left[
\begin{array}
[c]{ccc}
I_{k} & I_{k} & F^{^{\prime}}\\
0_{k} & I_{k} & I_{k}
\end{array}
\right] \text{.}
\]
A computation reveals that $A^{3}=(A^{^{\prime}})^{3}=\lambda^{3}I_{2k}\oplus\mu^{3}I_{3k}\oplus I_{3k}$ is diagonal (and hence normal) for any choices of $F$ and $F^{^{\prime}}$. Suppose $A$ and $A^{^{\prime}}$ are unitarily *congruent, that is, suppose there is a unitary $U=[U_{ij}]_{i,j=1}^{3}\in M_{8k}$, partitioned conformally to $A$, such that $AU=UA^{^{\prime}}$. Lemma \ref{Basic} ensures that $U$ is block diagonal. Since
\[
\left( AU\right) _{23}=GU_{33}=U_{22}G=\left( UA^{^{\prime}}\right) _{23}\text{,}
\]
Lemma \ref{SVDunique} ensures that $U_{33}=U_{22}$ and that $U_{33}=V_{1}\oplus V_{2}\oplus V_{3}$ is block diagonal conformal to $G$. Partition $U_{11}=[W_{ij}]_{i,j=1}^{2}$, in which $W_{11},W_{22}\in M_{k}$.
Equating the $1,3$ blocks of both sides of the identity $AU=UA^{^{\prime}}$ gives the identity \[ A_{13}U_{33}=\left[ \begin{array} [c]{ccc} V_{1} & V_{2} & FV_{3}\\ 0_{k} & V_{2} & V_{3} \end{array} \right] =\left[ \begin{array} [c]{cc} W_{11} & W_{12}\\ W_{21} & W_{22} \end{array} \right] \left[ \begin{array} [c]{ccc} I_{k} & I_{k} & F^{^{\prime}}\\ 0_{k} & I_{k} & I_{k} \end{array} \right] =U_{11}A_{13}^{^{\prime}}\text{.} \] Comparison of the $2,1$ blocks of both sides of this identity tells us that $W_{21}=0$, so $W$ is block upper triangular; since $W=U_{11}$ is unitary, it is block diagonal, and hence $W_{12}=0$ as well. Comparison of the $2,2$ and $2,3$ blocks tells us that $V_{2}=W_{22}=V_{3}$ and comparison of the $1,2$ blocks tells us that $V_{2}=W_{11}$. Finally, comparison of the $1,3$ blocks and using $V_{3}=W_{11}$ reveals that $FV_{3}=V_{3}F^{^{\prime}}$, so $A$ and $A^{^{\prime}}$ are unitarily *congruent if and only if $F$ and $F^{^{\prime}}$ are unitarily *congruent.\hfill \end{proof} \begin{theorem} (a) The problem of classifying square complex matrices $A$ such that $A\bar {A}A=0$ up to unitary congruence contains the problem of classifying arbitrary square matrices up to unitary congruence. \medskip \newline (b) The problem of classifying square complex matrices up to unitary congruence is unitarily wild. \end{theorem} \begin{proof} (a) Suppose the matrices $A$ and $A^{^{\prime}}$ in (\ref{AandAprime}) are unitarily congruent, that is, $AU=\bar{U}A^{^{\prime}}$ for some unitary $U=[U_{ij}]_{i,j=1}^{3}$ that is partitioned conformally to $A$.
Then \begin{equation} \left[ \begin{array} [c]{ccc} U_{21}+FU_{31} & U_{22}+FU_{32} & U_{23}+FU_{33}\\ U_{31} & U_{32} & U_{33}\\ 0 & 0 & 0 \end{array} \right] =\left[ \begin{array} [c]{ccc} 0 & \bar{U}_{11} & \bar{U}_{11}F^{^{\prime}}+\bar{U}_{12}\\ 0 & \bar{U}_{21} & \bar{U}_{21}F^{^{\prime}}+\bar{U}_{22}\\ 0 & \bar{U}_{31} & \bar{U}_{31}F^{^{\prime}}+\bar{U}_{32} \end{array} \right] \text{.}\label{3by3} \end{equation} Comparing the $2,1$ blocks of both sides of (\ref{3by3}) tells us that $U_{31}=0$, and then comparing the $1,1$ blocks as well as the $3,3$ blocks tells us that $U_{21}=U_{32}=0$. Since $U$ is block upper triangular and normal, it is block diagonal. Comparing the $1,2$ blocks and the $2,3$ blocks of (\ref{3by3}) now tells us that $\bar{U}_{11}=U_{22}=\bar{U}_{33}$, so $U_{11}=U_{33}$. Finally, comparing the $1,3$ blocks reveals that $FU_{11}=\bar{U}_{11}F^{^{\prime}}$, that is, $A$ and $A^{^{\prime}}$ are unitarily congruent if and only if $F$ and $F^{^{\prime}}$ are unitarily congruent. \medskip \newline (b) Let $F,F^{^{\prime}}\in M_{k}$ be given and suppose that \[ A=\left[ \begin{array} [c]{cccc} 0_{k} & I_{k} & 0 & F\\ 0 & 0_{k} & I_{k} & 0\\ 0 & 0 & 0_{k} & I_{k}\\ 0 & 0 & 0 & 0_{k} \end{array} \right] \quad\text{and\quad}A^{^{\prime}}=\left[ \begin{array} [c]{cccc} 0_{k} & I_{k} & 0 & F^{^{\prime}}\\ 0 & 0_{k} & I_{k} & 0\\ 0 & 0 & 0_{k} & I_{k}\\ 0 & 0 & 0 & 0_{k} \end{array} \right] \] are unitarily congruent, that is, $AU=\bar{U}A^{^{\prime}}$ for some unitary $U=[U_{ij}]_{i,j=1}^{4}$ that is partitioned conformally to $A$. An adaptation of the argument in part (a) shows that $U$ is block diagonal, $U_{22}=\bar {U}_{11}$, $U_{33}=U_{11}$, and $U_{44}=\bar{U}_{11}$. Hence, $F\bar{U}_{11}=\bar{U}_{11}F^{^{\prime}}$.
We conclude that $A$ and $A^{^{\prime}}$ are unitarily congruent if and only if $F$ and $F^{^{\prime}}$ are unitarily *congruent.\hfill \end{proof} \subsection{A bounded iteration\label{BoundedIteration}} Suppose $A\in M_{n}$ is nonsingular and let $x_{0}\in\mathbb{C}^{n}$ be given. Define $x_{1},x_{2},\ldots$ by \begin{equation} A^{T}x_{k+1}+Ax_{k}=0\text{, }k=0,1,2,\ldots.\label{TransposeIteration} \end{equation} Under what conditions on $A$ is the sequence $x_{1},x_{2},\ldots$ bounded for all choices of $x_{0}$? We have \[ x_{k+1}=-A^{-T}Ax_{k}=\cdots=(-1)^{k+1}\left( A^{-T}A\right) ^{k+1} x_{0}\text{,} \] so boundedness of the solution sequence for all choices of $x_{0}$ requires that no eigenvalue of the cosquare $A^{-T}A$ has modulus greater than 1. Moreover, every Jordan block of any eigenvalue of modulus 1 must be 1-by-1. Inspection of (\ref{JCFcosquare}) reveals that the Jordan Canonical Form of $A^{-T}A$ must have the form (\ref{JCFdiagonalizableCosquare}), in which each $\mu_{j}\neq1$ and $\left\vert \mu_{j}\right\vert =1$. Theorem \ref{CongruenceCanonicalForms}(a) ensures that $A$ is congruent to a direct sum of blocks of the two types \begin{equation} \lbrack1]\text{ and }\left[ \begin{array} [c]{cc} 0 & 1\\ \mu & 0 \end{array} \right] \text{,\quad}\left\vert \mu\right\vert =1\neq\mu\text{.} \label{BoundedBlocks} \end{equation} Corollary \ref{UnitarySpecialCase} ensures that the 2-by-2 blocks in (\ref{BoundedBlocks}) may be replaced by 2-by-2 real orthogonal blocks (\ref{RealBlock}) or by 2-by-2 Hermitian unitary blocks (\ref{HermitianBlock}). Conversely, if $A=SUS^{T}$ for some nonsingular $S$ and unitary $U$, then \[ 0=A^{T}x_{k+1}+Ax_{k}=SU^{T}S^{T}x_{k+1}+SUS^{T}x_{k}\text{,\quad}k=0,1,2,\ldots \] if and only if \[ \xi_{k+1}=\left( -1\right) ^{k+1}\left( \bar{U}U\right) ^{k+1}\xi _{0}\text{,\quad}\xi_{k}:=S^{T}x_{k}\text{, }k=0,1,2,\ldots\text{.} \] The sequence $\xi_{0},\xi_{1},\ldots$ is bounded since $\bar{U}U$ is unitary.
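This behavior is easy to observe numerically. The sketch below is our own illustration (the random instance and variable names are arbitrary, not from the paper): it builds $A=SUS^{T}$ from a random nonsingular $S$ and a random unitary $U$, runs the recurrence $A^{T}x_{k+1}=-Ax_{k}$, and checks that the iterates stay within the a-priori bound $\Vert x_{k}\Vert\leq\Vert S^{-T}\Vert\,\Vert S^{T}x_{0}\Vert$ that follows from $\Vert\xi_{k}\Vert=\Vert\xi_{0}\Vert$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A = S U S^T with S nonsingular and U unitary, so the iteration is bounded.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = S @ U @ S.T

# xi_k = S^T x_k has constant norm, so ||x_k|| <= ||S^{-T}|| * ||S^T x_0||.
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
bound = np.linalg.norm(np.linalg.inv(S.T), 2) * np.linalg.norm(S.T @ x0)

x, norms = x0, []
for _ in range(500):
    x = -np.linalg.solve(A.T, A @ x)   # A^T x_{k+1} = -A x_k
    norms.append(np.linalg.norm(x))

assert max(norms) <= 1.01 * bound      # bounded, as the theorem predicts
```

Replacing $U$ by a non-unitary matrix with an eigenvalue of $\bar{U}U$-analogue off the unit circle makes the same loop blow up, which is the content of the equivalence in the theorem below.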
In summary, we have the following \begin{theorem} \label{BoundedSequenceTheorem}Let $A\in M_{n}$ be nonsingular. The following are equivalent: \medskip \newline (a) The sequence $x_{1},x_{2},\ldots$ defined by \[ A^{T}x_{k+1}+Ax_{k}=0\text{,\quad}k=0,1,2,\ldots \] is bounded for each given $x_{0}\in\mathbb{C}^{n}$. \medskip \newline (b) $A$ is congruent to a unitary matrix. \medskip \newline (c) $A$ is congruent to a real orthogonal matrix. \medskip \newline (d) $A$ is congruent to a Hermitian unitary matrix. \medskip \newline (e) $A$ is congruent to a nonsingular conjugate normal matrix. \end{theorem} Parallel reasoning using Theorems \ref{CosquareCharacterize}(b) and \ref{CongruenceCanonicalForms}(b) leads to similar conclusions about the conjugate transpose version of (\ref{TransposeIteration}). \begin{theorem} Let $A\in M_{n}$ be nonsingular. The following are equivalent: \medskip \newline (a) The sequence $x_{1},x_{2},\ldots$ defined by \[ A^{\ast}x_{k+1}+Ax_{k}=0\text{,\quad}k=0,1,2,\ldots \] is bounded for each given $x_{0}\in\mathbb{C}^{n}$. \medskip \newline (b) $A$ is *congruent to a unitary matrix. \medskip \newline (c) $A$ is diagonalizable by *congruence. \medskip \newline (d) $A$ is *congruent to a nonsingular normal matrix. \end{theorem} \section{Some comments about previous work} Lemma \ref{Fuglede}(a) is often called the Fuglede-Putnam Theorem. The assertion in Corollary \ref{CriterionCorollary}(b) that two unitary matrices are *congruent if and only if they are unitarily *congruent was proved in \cite{JF Sylvester} with an elegant use of uniqueness of the polar decomposition. The unitary congruence canonical form (\ref{ConinvolutionCongruence}) for a coninvolutory matrix was proved in \cite[Theorem 1.5]{HM1}. Wigner \cite{Wigner} obtained a unitary congruence canonical form (\ref{UnitarySpecialCaseBlocks}) for unitary matrices in which the 2-by-2 blocks are the Hermitian unitary blocks (\ref{HermitianBlock}).
In \cite{Autonne}, Autonne used a careful study of uniqueness of the unitary factors in the singular value decomposition to prove many basic results, for example: a nonsingular complex symmetric matrix is diagonalizable under unitary congruence; a complex normal matrix is unitarily similar to a diagonal matrix; a real normal matrix is real orthogonally similar to a real block diagonal matrix with 1-by-1 and 2-by-2 blocks, in which the latter are scalar multiples of real orthogonal matrices; similar unitary matrices are unitarily similar. Lemma \ref{SVDunique} is a special case of Autonne's uniqueness theorem; for an exposition see \cite[Theorem 3.1.1$^{^{\prime}}$]{HJ2}. Hua proved the canonical form (\ref{skewsymmetric}) for a nonsingular skew symmetric matrix under unitary congruence in \cite[Theorem 7]{Hua}; Theorem 5 in the same paper is the corresponding canonical form for a nonsingular symmetric matrix. The first studies of conjugate normal and congruence normal matrices seem to be \cite{VujicicI} and \cite{HerbutII}. The canonical form (\ref{t*0}) for a squared normal matrix (and hence the canonical form (\ref{g*0})) can be deduced from Lemma 2.2 of \cite{VVSquiver}). Each squared normal matrix can be reduced to the form (\ref{t*0}) by employing the key ideas in Littlewood's algorithm \cite{Littlewood} for reducing matrices to canonical form by unitary similarity. An exposition of this alternative approach to Theorem \ref{squared normal canonical 2}, as well as a canonical form for real squared normal matrices under real orthogonal congruences, is in \cite{FHS}. D.\v{Z}. \raisebox{1.5pt}{-}\!\!Dokovi\'{c} proved the canonical form (\ref{ProjectorBlocks}) for ordinary projections ($\lambda=1$) in \cite{Dj}; for a different proof see \cite[p. 46]{VVSquiver}. George and Ikramov \cite{GI} used D.\v{Z}. 
\raisebox{1.5pt}{-}\!\!Dokovi\'{c}'s canonical form to derive a decomposition of the form (\ref{Involution*Congruence2}) for an involution; in addition, they used Specht's Criterion to prove Corollary \ref{QuadraticMinimalPoly}(b). For an ordinary projection $P$, and without employing any canonical form for $P$, Lewkowicz \cite{Lewkowicz} identified all of the singular values of $P$ and $I-P$. The block matrix (\ref{BarBlockDef}) and the characterization of conjugate normal matrices in Theorem \ref{BarBlocks}(d) was studied in \cite[Proposition 2]{F+I}. The characterization of conjugate normal matrices via the criterion in Theorem \ref{CconjugateNormal}(a) is in \cite[Proposition 3]{F+I}. Theorem \ref{WildCube}(a) was proved in \cite[p. 45]{VVSquiver}. In \cite{Ikr1997}, Ikramov proved that any matrix with a quadratic minimal polynomial is unitarily *congruent to a direct sum of the form (\ref{QuadraticCanonical}). His characterization of the positive parameters $\gamma_{i}$ is different from ours: If $\lambda_{1}\neq\lambda_{2}$, he found that $\gamma=\left\vert \lambda_{1}-\lambda_{2}\right\vert \tan\alpha$, in which $\alpha$ is the angle between any pair of left and right $\lambda_{1} $-eigenvectors of the block \[ \left[ \begin{array} [c]{cc} \lambda_{1} & \gamma_{i}\\ 0 & \lambda_{2} \end{array} \right] \text{.} \] This pleasant characterization fails if $\lambda_{1}=\lambda_{2}$; our characterization (using eigenvalues and singular values) is valid for all $\lambda_{1},\lambda_{2}$. The authors learned about the bounded iteration problem in Section \ref{BoundedIteration} from Leiba Rodman and Peter Lancaster, who solved it using canonical pairs.
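Ikramov's angle characterization is easy to check directly. The following sketch is our own illustration (the 2-by-2 data are arbitrary): it computes the Hermitian angle $\alpha$ between the right and left $\lambda_{1}$-eigenvectors of the block above and recovers $\gamma=\left\vert \lambda_{1}-\lambda_{2}\right\vert \tan\alpha$.

```python
import numpy as np

lam1, lam2, gamma = 2.0, 1.0 + 1.0j, 0.7
B = np.array([[lam1, gamma], [0.0, lam2]])

x = np.array([1.0, 0.0])                    # right lam1-eigenvector: B x = lam1 x
y = np.array([1.0, gamma / (lam1 - lam2)])  # left lam1-eigenvector: y^T B = lam1 y^T
assert np.allclose(B @ x, lam1 * x)
assert np.allclose(B.T @ y, lam1 * y)

# Hermitian angle between x and y.
cos_alpha = abs(np.vdot(y, x)) / (np.linalg.norm(x) * np.linalg.norm(y))
alpha = np.arccos(cos_alpha)
assert np.isclose(abs(lam1 - lam2) * np.tan(alpha), gamma)  # gamma recovered
```

When $\lambda_{1}=\lambda_{2}$ the left and right eigenvector directions coincide, $\alpha=0$, and the formula degenerates, which is exactly the failure noted above.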
\begin{document} \begin{abstract} We show that for $n \ge 3$ there are $3 \cdot 2^{n-1}$ complex common tangent lines to $2n-2$ general spheres in $\mathbb{R}^n$ and that there is a choice of spheres with all common tangents real. \end{abstract} \maketitle \section{Introduction} We study the following problem from (real) enumerative geometry. \medskip \begin{description} \item[Given] $2n-2$ (not necessarily disjoint) spheres with centers $c_i \in \mathbb{R}^n$ and radii $r_i$, $1 \le i \le 2n-2$. \smallskip \item[Question] In the case of finitely many common tangent lines, what is their maximum number? \end{description} The number $2n-2$ of spheres guarantees that in the generic case there is indeed a finite number of common tangent lines. In particular, for $n=2$ the answer is~4 since two disjoint circles have 4 common tangents. The reason for studying this question---which, of course, is an appealing and fundamental geometric question in itself---came from different motivations. An essential task in statistical analysis is to find the line that best fits the data in the sense of minimizing the maximal distance to the points~(see, e.g., \cite{chan-2000}). More precisely, the decision variant of this problem asks: Given $m,n \in \mathbb{N}$, $r > 0$, and a set of points $y_1, \ldots, y_m \in \mathbb{R}^n$, does there exist a line $l$ in $\mathbb{R}^n$ such that every point $y_i$ has Euclidean distance at most $r$ from~$l$? From the complexity-theoretical point of view, for fixed dimension the problem can be solved in polynomial time via quantifier elimination over the reals~\cite{fks-96}. However, currently no practical algorithms focusing on exact computation are known for $n > 3$ (for approximation algorithms, see~\cite{chan-2000}).
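To make the decision variant concrete, here is a minimal sketch of the test for a single *candidate* line (our own illustration; the helper names and sample data are hypothetical, and it checks one line rather than searching for the best one):

```python
import numpy as np

def max_distance_to_line(points, p, v):
    """Largest Euclidean distance from the points to the line {p + t v}."""
    pts = np.asarray(points, dtype=float)
    d = pts - p                               # vectors from p to the points
    proj = np.outer(d @ v, v) / (v @ v)       # components along the line
    return np.sqrt(((d - proj) ** 2).sum(axis=1)).max()

def fits_within(points, p, v, r):
    """Decision variant: does every point lie within distance r of the line?"""
    return max_distance_to_line(points, p, v) <= r

points = [(0.0, 0.0, 1.0), (1.0, 0.0, -1.0), (2.0, 0.0, 0.5)]
p, v = np.zeros(3), np.array([1.0, 0.0, 0.0])   # candidate line: the x-axis
assert fits_within(points, p, v, r=1.0)         # all points within distance 1
assert not fits_within(points, p, v, r=0.4)
```

The hard part of the problem, of course, is quantifying over all lines $(p,v)$, which is what leads to the algebraic-geometric formulation below.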
From the algebraic perspective, for dimension~3 it was shown in~\cite{aas-99,ssty-2000} how to reduce the algorithmic problem to an algebraic-geometric core problem: finding the real lines which all have the same prescribed distance from 4~given points; or, equivalently, finding the real common tangent lines to 4~given unit spheres in $\mathbb{R}^3$. This problem in dimension 3 was treated in~\cite{MPT01}. \begin{prop}\label{prop:MPTh} Four unit spheres in $\mathbb{R}^3$ have at most $12$ common tangent lines unless their centers are collinear. Furthermore, there exists a configuration with $12$ different real tangent lines. \end{prop} The same reduction idea to the algebraic-geometric core problem also applies to arbitrary dimensions, in this case leading to the general problem stated at the beginning. {}From the purely algebraic-geometric point of view, this tangent problem is interesting for the following reason. In dimension~3, the formulation of the problem in terms of Pl\"ucker coordinates gives 5 quadratic equations in projective space $\mathbb{P}^5_{\mathbb R}$, whose common zeroes in $\mathbb{P}^5_{\mathbb C}$ include a 1-dimensional component at infinity (accounting for the ``missing'' $2^5-12 = 20$ solutions). Quite remarkably, as observed in \cite{aluffi-fulton-2001}, this excess component cannot be resolved by a single blow-up. Experimental results in~\cite{sottile-macaulay-2001} for $n=4,5$, and $6$, indicate that for higher dimensions the generic number of solutions differs from the B\'{e}zout number of the straightforward polynomial formulation even more. We discuss this further in Section~5. Our main result can be stated as follows. \begin{thm} \label{th:ndimnumber} Suppose $n\geq 3$. \begin{enumerate} \item[(a)] Let $c_1,\ldots,c_{2n-2}\in\mathbb{R}^n$ affinely span ${\mathbb R}^n$, and let $r_1,\ldots,r_{2n-2}>0$. 
If the $2n-2$ spheres with centers $c_i$ and radii $r_i$ have only a finite number of complex common tangent lines, then that number is bounded by $3 \cdot 2^{n-1}$. \item[(b)] There exists a configuration with $3 \cdot 2^{n-1}$ different real common tangent lines. Moreover, this configuration can be achieved with unit spheres. \end{enumerate} \end{thm} Thus the bound for real common tangents equals the ({\it a priori greater}) bound for complex common tangents; so this problem of common tangents to spheres is fully real in the sense of enumerative real algebraic geometry~\cite{So97c,So-DIMACS}. We prove Statement (a) in Section 2 and Statement (b) in Section 3, where we explicitly describe configurations with $3 \cdot 2^{n-1}$ common real tangents. Figure~\ref{Fig1} shows a configuration of 4 spheres in ${\mathbb R}^3$ with 12 common tangents (as given in~\cite{MPT01}). \ifpictures \begin{figure}[htb] $$ \epsfxsize=3.025in \epsfbox{figures/12lines.eps} $$ \caption{Spheres with 12 real common tangents}\label{Fig1} \end{figure} \fi In Section~4, we show that there are configurations of spheres with affinely dependent centers having $3 \cdot 2^{n-1}$ \emph{complex} common tangents; thus, the upper bound of Theorem~\ref{th:ndimnumber} also holds for spheres in this special position. Megyesi~\cite{Me02} has recently shown that all $3 \cdot 2^{n-1}$ may be real. We also show that if the centers of the spheres are the vertices of the crosspolytope in ${\mathbb R}^{n-1}$, there will be at most $2^n$ common tangents, and if the spheres overlap but do not contain the centroid of the crosspolytope, then all $2^n$ common tangents will be real. We conjecture that when the centers are affinely dependent and all spheres have the same radius, then there will be at most $2^n$ real common tangents. 
Strong evidence for this conjecture is provided by Megyesi~\cite{Me01}, who showed that there are at most 8 real common tangents to 4 unit spheres in ${\mathbb R}^3$ whose centers are coplanar but not collinear. In Section 5, we put the tangent problem into the perspective of common tangents to general quadric hypersurfaces. In particular, we discuss the problem of common tangents to $2n-2$ smooth quadrics in \emph{projective} $n$-space, and describe the excess component at infinity for this problem of spheres. In this setting, Theorem~\ref{th:ndimnumber}(a) implies that there will be at most $3 \cdot 2^{n-1}$ isolated common tangents to $2n-2$ quadrics in projective $n$-space, when the quadrics all contain the same (smooth) quadric in a given hyperplane. In particular, the problem of the spheres can be seen as the case when the common quadric is at infinity and contains no real points. We conclude with the question of how many of these common tangents may be real when the shared quadric has real points. For $n=3$, there are 5 cases to consider, and for each, all 12 lines can be real~\cite{sottile-macaulay-2001}. Megyesi~\cite{Me02} has recently shown that all common tangents may be real, for many cases of the shared quadric. \section{Polynomial Formulation with Affinely Independent Centers}\label{sec:indep} For $x, y \in \mathbb{C}^n$, let $x \cdot y := \sum_{i=1}^n x_i y_i$ denote their Euclidean dot product. We write $x^2$ for $x \cdot x$. We represent a line in ${\mathbb C}^n$ by a point $\bp\in{\mathbb C}^n$ lying on the line and a direction vector $\bv\in{\mathbb P}_\mathbb{C}^{n-1}$ of that line. (For notational convenience we typically work with a representative of the direction vector in $\mathbb{C}^n \setminus \{0\}$.) If $v^2 \neq 0$ we can make $p$ unique by requiring that $p \cdot v = 0$. 
By definition, a line $\ell = (\bp,\bv)$ is tangent to the sphere with center $\bc\in{\mathbb R}^n$ and radius $r$ if and only if it is tangent to the quadratic hypersurface $(x-c)^2 = r^2$, i.e., if and only if the quadratic equation $(p +tv - c)^2 = r^2$ has a solution of multiplicity two. When $\ell$ is real then this is equivalent to the metric property that $\ell$ has Euclidean distance $r$ from $c$. \ifpictures $$ \setlength{\unitlength}{.9167pt} \begin{picture}(106,56) \put(0,0){\epsfxsize=91.67pt\epsfbox{figures/perp.eps}} \put(28,24){$\ell$} \put(87,3){$\bc$} \put(65,16){$r$} \end{picture} $$ \fi For any line $\ell \subset \mathbb{C}^n$, the algebraic tangent condition on $\ell$ gives the equation $$ \frac{[\bv\cdot(\bp-\bc)]^2}{\bv^2} - (\bp-\bc)^2\ + r^2\ =\ 0 \,. $$ For $v^2 \neq 0$ this is equivalent to \begin{equation}\label{eq:tanSphere} \bv^2\bp^2-2\bv^2 \bp\cdot\bc + \bv^2\bc^2 - [\bv\cdot\bc]^2-r^2\bv^2\ =\ 0\,. \end{equation} To prove part (a) of Theorem~\ref{th:ndimnumber}, we can choose $\bc_{2n-2}$ to be the origin and set $r:=r_{2n-2}$. Then the remaining centers span ${\mathbb R}^n$. Subtracting the equation for the sphere centered at the origin from the equations for the spheres $1,\ldots,2n-3$ gives the system \begin{equation}\label{eq:origintransform} \begin{array}{rcl} \bp\cdot\bv &=& 0 \, ,\\ \rule{0pt}{20pt} \bp^2 &=& r^2 \, , \quad\mbox{and} \\ \rule{0pt}{20pt} 2 \bv^2\bp\cdot\bc_i & = & \bv^2 \bc_i^2 - [\bv \cdot \bc_i]^2 - \bv^2 (r_i^2 - r^2) \, , \qquad i=1,2,\ldots,2n{-}3\,. \end{array} \end{equation} \begin{rem} Note that this system of equations does not have a solution with $v^2 = 0$. Namely, if we had $v^2 = 0$, then $\bv \cdot \bc_i=0$ for all $i$. Since the centers span ${\mathbb R}^n$, this would imply $\bv=0$, contradicting $\bv\in{\mathbb P}^{n-1}_{\mathbb C}$. This validates our assumption that $v^2\neq 0$ prior to~(\ref{eq:tanSphere}). 
\end{rem} Since $n \ge 3$, the bottom line of~(\ref{eq:origintransform}) contains at least $n$ equations. We can assume $\bc_1,\ldots,\bc_n$ are linearly independent. Then the matrix $M := (\bc_1, \ldots,\bc_n)^{\mathrm{T}}$ is invertible, and we can solve the equations with indices $1, \ldots, n$ for~$p$: \begin{equation}\label{eq:pequation} p\ =\ \frac{1}{2\bv^2} M^{-1} \left( \begin{array}{c} \bv^2 \bc_1^2 - [\bv \cdot \bc_1]^2 -\bv^2(r_1^2-r^2)\\ \vdots \\ \bv^2 \bc_n^2 - [\bv \cdot \bc_n]^2-\bv^2(r_n^2-r^2)\rule{0pt}{12.5pt} \end{array} \right). \end{equation} Now substitute this expression for $p$ into the first and second equation of the system~(\ref{eq:origintransform}), as well as into the equations for $i=n+1,\ldots,2n-3$, and then clear the denominators. This gives $n-1$ homogeneous equations in the coordinate $\bv$, namely one cubic, one quartic, and $n-3$ quadrics. By B\'{e}zout's Theorem, this means that if the system has only finitely many solutions, then the number of solutions is bounded by $3 \cdot 4 \cdot 2^{n-3} = 3 \cdot 2^{n-1}$, for $n \ge 3$. For small values of $n$, these values are shown in Table~$1$. The value 12 for $n=3$ was computed in~\cite{MPT01}, and the values for $n=4,5,6$ were computed experimentally in~\cite{sottile-macaulay-2001}. \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline $n$ & 3 & 4 & 5 & 6 & 7\\ \hline maximum \# tangents & 12 & 24 & 48 & 96 & 192\\ \hline \end{tabular}\smallskip \end{center} \label{ta:ndimvalues} \caption{Maximum number of tangents in small dimensions} \end{table} We simplify the cubic equation obtained by substituting~(\ref{eq:pequation}) into the equation $p\cdot v=0$ by expressing it in the basis $c_1, \ldots, c_n$. Let the representation of $v$ in the basis $c_1, \ldots, c_n$ be \[ v\ =\ \sum_{i=1}^n t_i c_i \] with homogeneous coordinates $t_1, \ldots, t_n$. 
Further, let $c_1', \ldots, c_n'$ be a dual basis to $c_1, \ldots, c_n$; i.e., let $c_1', \ldots, c_n'$ be defined by $c_i' \cdot c_j = \delta_{ij}$, where $\delta_{ij}$ denotes Kronecker's delta function. By elementary linear algebra, we have $t_i = c_i' \cdot v$. When expressing $p$ in this dual basis, $p = \sum p_i' c_i'$, the third equation of~(\ref{eq:origintransform}) gives \[ p_i'\ =\ \frac{1}{2v^2} \left(v^2 c_i^2 - [v \cdot c_i]^2 - v^2 (r_i^2 - r^2)\right) \, . \] Substituting this representation of $p$ into the equation \[ 0\ =\ 2 v^2 (p \cdot v) \ =\ 2 v^2 \left(\sum_{i=1}^n p_i' c_i'\right) \cdot v \ =\ 2 v^2 \sum_{i=1}^n p_i' t_i \, , \] we obtain the cubic equation \[ \sum_{i=1}^n (v^2 c_i^2 - [v \cdot c_i]^2 - v^2 (r_i^2 - r^2)) t_i\ =\ 0 \, . \] In the case that all radii are equal, expressing $\bv^2$ in terms of the $t$-variables yields \[ \sum_{1 \le i \neq j \le n} \alpha_{ij} t_i^2 t_j + \sum_{1 \le i < j < k \le n} 2 \beta_{ijk} t_i t_j t_k\ =\ 0 \, , \] where \begin{eqnarray*} \alpha_{ij} & = & (\text{vol}_2(\bc_i,\bc_j))^2\ =\ \det \left( \begin{array}{cc} \bc_i\cdot \bc_i & \bc_i\cdot \bc_j \\ \bc_j\cdot \bc_i & \bc_j\cdot \bc_j \end{array} \right), \\ \beta_{ijk} & = & \det \left( \begin{array}{cc} \bc_i\cdot \bc_j & \bc_i\cdot \bc_k \\ \bc_k\cdot \bc_j & \bc_k\cdot \bc_k \end{array} \right) + \det \left( \begin{array}{cc} \bc_i\cdot \bc_k & \bc_i\cdot \bc_j \\ \bc_j\cdot \bc_k & \bc_j\cdot \bc_j \end{array} \right) \\ & & + \det \left( \begin{array}{cc} \bc_j\cdot \bc_k & \bc_j\cdot \bc_i \\ \bc_i\cdot \bc_k & \bc_i\cdot \bc_i \end{array} \right), \end{eqnarray*} and $\text{vol}_2(\bc_i,\bc_j)$ denotes the oriented area of the parallelogram spanned by $\bc_i$ and $\bc_j$. In particular, if $0c_1 \ldots c_n$ constitutes a regular simplex in $\mathbb{R}^n$, then we obtain the following characterization. \smallskip \begin{thm} Let $n \ge 3$.
If\/ $0\bc_1\ldots \bc_n$ is a regular simplex and all spheres have the same radius, then the cubic equation expressed in the basis $\bc_1, \ldots, \bc_n$ is equivalent to \begin{equation} \label{eq:cubicndimsimplex} \sum_{1 \le i \neq j \le n} t_i^2 t_j + 2 \sum_{1 \le i < j < k \le n} t_i t_j t_k\ =\ 0. \end{equation} For $n=3$, this cubic equation factors into three linear terms; for $n \ge 4$ it is irreducible. \end{thm} \smallskip \begin{proof} Let $e$ denote the edge length of the regular simplex. Then the form of the cubic equation follows from computing $\alpha_{ij} = e^4 ( 1 \cdot 1 - 1/2 \cdot 1/2) = 3e^4/4$, $\beta_{ijk} = 3 e^4 (1/2 \cdot 1 - 1/2 \cdot 1/2) = 3e^4/4$. Obviously, for $n=3$ the cubic polynomial factors into $(t_1 + t_2)(t_1 + t_3)(t_2 + t_3)$ (cf.~\cite{schaal-85,MPT01}). For $n \ge 4$, assume that there exists a factorization of the form \[ \left(t_1 + \sum_{i=2}^n \rho_i t_i\right) \left(\sum_{1 \le i \le j \le n} \sigma_{ij} t_i t_j\right) \] with $\sigma_{12} = 1$. Since~(\ref{eq:cubicndimsimplex}) does not contain a monomial $t_i^3$, we have either $\rho_i = 0$ or $\sigma_{ii} = 0$ for $1 \le i \le n$. If there were more than one vanishing coefficient $\rho_i$, say $\rho_i = \rho_j = 0$, then the monomials $t_i^2 t_j$ could not be generated. So only two cases have to be investigated. \smallskip \noindent \emph{Case~1}: $\rho_i \neq 0$ for $2 \le i \le n$. Then $\sigma_{ii} = 0$ for $1 \le i \le n$. Furthermore, $\sigma_{ij} = 1$ for $i \neq j$ and $\rho_i = 1$ for all $i$. Hence, the coefficient of the monomial $t_1 t_2 t_3$ is 3, which contradicts~(\ref{eq:cubicndimsimplex}). \smallskip \noindent \emph{Case~2}: There exists exactly one coefficient $\rho_i = 0$, say, $\rho_4 = 0$. Then $\sigma_{11} = \sigma_{22} = \sigma_{33} = 0$, $\sigma_{44} = 1$. Further, $\sigma_{ij} = 1$ for $1 \le i < j \le 3$ and $\rho_i = 1$ for $1 \le i \le 3$. Hence, the coefficient of the monomial $t_1 t_2 t_3$ is 3, which is again a contradiction.
\end{proof} \section{Real Lines} In the previous section, we have given the upper bound of $3 \cdot 2^{n-1}$ for the number of complex solutions to the tangent problem. Now we complement this result by providing a class of configurations leading to $3 \cdot 2^{n-1}$ real common tangents. Hence, the upper bound is tight, and is achieved by real tangents. There are no general techniques known to find and prove configurations with a maximum number of real solutions in enumerative geometry problems like the one studied here. For example, for the classical enumerative geometry problem of 3264 conics tangent to five given conics (dating back to Steiner in 1848~\cite{St1848} and solved by Chasles in 1864~\cite{Ch1864}) the existence of five real conics with all 3264 \emph{real} was only recently established (\cite{rtv-97} and \cite[{\S}7.2]{fulton-b96}). Our construction is based on the following geometric idea. For 4 spheres in $\mathbb{R}^3$ centered at the vertices $(1,1,1)^{\mathrm{T}}$, $(1,-1,-1)^{\mathrm{T}}$, $(-1,1,-1)^{\mathrm{T}}$, $(-1,-1,1)^{\mathrm{T}}$ of a regular tetrahedron, there are~\cite{MPT01} \begin{itemize} \item 3 different real tangents (of multiplicity~4) for radius $r=\sqrt{2}$; \item 12 different real tangents for $\sqrt{2} < r < 3/2$; \item 6 different real tangents (of multiplicity~2) for $r=3/2$. \end{itemize} Furthermore, based on the explicit calculations in~\cite{MPT01}, it can be easily seen that the symmetry group of the tetrahedron acts transitively on the tangents. By this symmetry argument, all 12 tangents have the same distance $d$ from the origin. In order to construct a configuration of spheres with many common tangents, say, in $\mathbb{R}^4$, we embed the centers via \[ (x_1,x_2,x_3)^{\mathrm{T}}\ \longmapsto\ (x_1,x_2,x_3,0)^{\mathrm{T}} \] into $\mathbb{R}^4$ and place additional spheres with radius $r$ at $(0,0,0,a)^{\mathrm{T}}$ and $(0,0,0,-a)^{\mathrm{T}}$ for some appropriate value of $a$. 
If $a$ is chosen in such a way that the centers of the two additional spheres have distance $r$ from the above tangents, then, intuitively, all common tangents to the six four-dimensional spheres are located in the hyperplane $x_4 = 0$ and have multiplicity~2 (because of the two different possibilities of signs when perturbing the situation). By perturbing this configuration slightly, the tangents are no longer located in the hyperplane $x_4 = 0$, and therefore the double tangents are forced to split. The idea also generalizes to dimension $n \ge 5$. Formally, suppose that the $2n-2$ spheres in $\mathbb{R}^n$ all have the same radius, $r$, and the first four have centers \begin{eqnarray*} \bc_1 &:=& (\hh 1,\hh 1,\hh 1,\ 0,\ldots,0)^{\mathrm{T}}, \\ \bc_2 &:=& (\hh 1,-1,-1,\ 0,\ldots,0)^{\mathrm{T}}, \\ \bc_3 &:=& (-1,\hh 1,-1,\ 0,\ldots,0)^{\mathrm{T}}, \mbox{\quad and}\\ \bc_4 &:=& (-1,-1,\hh 1,\ 0,\ldots,0)^{\mathrm{T}} \end{eqnarray*} at the vertices of a regular tetrahedron inscribed in the 3-cube $(\pm1,\pm1,\pm1,0,\ldots,0)^{\mathrm{T}}$. We place the subsequent centers at the points $\pm a\be_j$ for $j=4,5,\ldots,n$, where $\be_1,\ldots,\be_n$ are the standard unit vectors in ${\mathbb R}^n$. \begin{thm}\label{thm:real-cmplx} Let $n \ge 4$, $r > 0$, $a > 0$, and $\gamma:=a^2(n-1)/(a^2+n-3)$. If \begin{equation}\label{eq:discrim} (r^2-3)\,(3-\gamma)\,(a^2-2)\,(r^2-\gamma)\, \left((3-\gamma)^2 +4\gamma - 4r^2\right)\ \neq\ 0\,, \end{equation} then there are exactly $3\cdot 2^{n-1}$ different lines tangent to the $2n-2$ spheres. If \begin{equation}\label{eq:ineqs} a^2 > 2,\quad \gamma < 3,\quad \mbox{and}\quad \gamma\ <\ r^2\ <\ \gamma + {\textstyle\frac{1}{4}}\left(3-\gamma\right)^2 \,, \end{equation} then all these $3\cdot 2^{n-1}$ lines are real. Furthermore, this system of inequalities defines a nonempty subset of the $(a,r)$-plane. 
\end{thm} Given values of $a$ and $r$ satisfying~(\ref{eq:ineqs}), we may scale the centers and parameters by $1/r$ to obtain a configuration with unit spheres, proving Theorem~\ref{th:ndimnumber}~(b). \begin{rem} The set of values of $a$ and $r$ which give all solutions real is nonempty. To show this, we calculate \begin{equation}\label{eq:gamma} \gamma\ =\ \frac{a^2(n-1)}{a^2+n-3}\ =\ (n-1)\left(1-\frac{n-3}{a^2+n-3}\right)\,, \end{equation} which implies that $\gamma$ is an increasing function of $a^2$. Similarly, set $\delta:=\gamma+(3-\gamma)^2/4$, the upper bound for $r^2$. Then $$ \frac{d}{d\gamma}\;\delta\ =\ \frac{d}{d\gamma}\left(\gamma + \frac{(3-\gamma)^2}{4}\right)\ =\ 1 + \frac{\gamma-3}{2}\,, $$ and so $\delta$ is an increasing function of $\gamma$ when $\gamma>1$. When $a^2=2$, we have $\gamma=2$; so $\delta$ is an increasing function of $a$ in the region $a^2>2$. Since when $a=\sqrt{2}$, we have $\delta=\frac{9}{4}>\gamma$, the region defined by~(\ref{eq:ineqs}) is nonempty. Moreover, we remark that the region is qualitatively different in the cases $n=4$ and $n \ge 5$. For $n=4$, $\gamma$ satisfies $\gamma<3$ for any $a>\sqrt{2}$. Hence, $\delta<3$ and $r<\sqrt{3}$. Thus the maximum value of 24 real lines may be obtained for arbitrarily large $a$. In particular, we may choose the two spheres with centers $\pm ae_4$ disjoint from the first four spheres. Note, however, that the first four spheres do meet, since we have $\sqrt{2}<r<\sqrt{3}$. For $n\geq 5$, there is an upper bound to $a$. The upper and lower bounds for $r^2$ coincide when $\gamma=3$; so we always have $r^2<3$. Solving the inequality $\gamma<3$ for $a^2$, we obtain $a^2<3(n-3)/(n-4)$. When $n=5$, Figure~\ref{fig:realShade} displays the discriminant locus (defined by~(\ref{eq:discrim})) and shades the region consisting of values of $a$ and $r$ for which all solutions are real.
\ifpictures \begin{figure}[htb] $$ \setlength{\unitlength}{1.32pt} \begin{picture}(308,145)(3,0) \put(0,0){\epsfxsize=403pt\epsffile{figures/5.eps}} \put(265,10){$a$} \put(8,115){$r$} \put(8, 8){$0$} \put(3,74){$1$} \put( 88,5){$1$} \put(159,5){$2$} \put(230,5){$3$} \put(50,123){$r=\sqrt{3}$} \put(32, 90){$r=\sqrt{\delta}$} \put(57, 50){$r=\sqrt{\gamma}$} \put(123,60){$a=\sqrt{2}$} \put(197,75){$\gamma=3$} \put(197,60){$(a=\sqrt{6})$} \put(125,134){all solutions real} \end{picture} $$ \caption{Discriminant locus and values of $a,r$ giving all solutions real\label{fig:realShade}} \end{figure} \fi \end{rem} \noindent \emph{Proof of Theorem~\ref{thm:real-cmplx}}. We prove Theorem~\ref{thm:real-cmplx} by treating $a$ and $r$ as parameters and explicitly solving the resulting system of polynomials in the coordinates $(\bp,\bv)\in{\mathbb C}^n\times{\mathbb P}_{\mathbb C}^{n-1}$ for lines in ${\mathbb C}^n$. This shows that there are $3\cdot 2^{n-1}$ {\it complex} lines tangent to the given spheres, for the values of the parameters $(a,r)$ given in Theorem~\ref{thm:real-cmplx}. The inequalities~(\ref{eq:ineqs}) describe the parameters for which all solutions are real. \medskip First consider the equations~(\ref{eq:tanSphere}) for the line to be tangent to the spheres with centers $\pm a \be_j$ and radius $r$: \begin{eqnarray*} \bv^2\bp^2-2 a \bv^2p_j + a^2\bv^2 - a^2v_j^2-r^2\bv^2&=&0,\\ \bv^2\bp^2+2 a \bv^2p_j + a^2\bv^2 - a^2v_j^2-r^2\bv^2&=&0. \end{eqnarray*} Taking their sum and difference (and using $a\bv^2\neq0$), we obtain \begin{eqnarray} p_j&=&0, \hspace{2.64cm}\qquad 4 \le j \le n, \label{eq:pj=0}\\ a^2v_j^2 &=&(\bp^2+a^2-r^2)\bv^2, \qquad\ 4 \le j \le n. \label{eq:vj} \end{eqnarray} Subtracting the equations~(\ref{eq:tanSphere}) for the centers $c_1, \ldots, c_4$ pairwise gives \[ 4 v^2 (p_2 + p_3)\ =\ -4 (v_1 v_3 + v_1 v_2) \] (for indices 1,2) and analogous equations. 
Hence, \[ p_1\ =\ - \frac{v_2 v_3}{v^2}, \qquad p_2\ =\ - \frac{v_1 v_3}{v^2}, \qquad p_3\ =\ - \frac{v_1 v_2}{v^2}. \] Further, $p \cdot v = 0$ implies $v_1 v_2 v_3 = 0$. Thus we have 3 symmetric cases. We treat one, assuming that $v_1 = 0$. Then we obtain \[ p_1\ =\ - \frac{v_2 v_3}{v^2}, \qquad p_2 = p_3 = 0. \] Hence, the tangent equation~(\ref{eq:tanSphere}) for the first sphere becomes $$ \bv^2p_1^2-2\bv^2p_1 + 3\bv^2-(v_2+v_3)^2-r^2\bv^2\ =\ 0\,. $$ Using $0 = \bv^2p_1+v_2v_3$, we obtain \begin{equation}\label{eq:redE1} v_2^2+v_3^2\ =\ \bv^2(p_1^2+3 -r^2)\,. \end{equation} The case $j=4$ of (\ref{eq:vj}) gives $a^2v_4^2=\bv^2(p_1^2+a^2-r^2)$, since $p_2=p_3=0$. Combining these, we obtain $$ v_2^2+v_3^2\ =\ a^2 v_4^2 + \bv^2(3-a^2)\,. $$ Using $\bv^2=v_2^2+v_3^2+(n-3)v_4^2$ yields \[ (a^2-2)(v_2^2+v_3^2)\ =\ v_4^2(3(a^2+n-3)-a^2(n-1)). \] We obtain \begin{equation}\label{eq:v234} (a^2-2)(v_2^2+v_3^2)\ =\ v_4^2(a^2+n-3)(3-\gamma)\,, \end{equation} where $\gamma=a^2(n-1)/(a^2+n-3)$. Note that $a^2+n-3>0$ since $n>3$. If neither $3-\gamma$ nor $a^2-2$ is zero, then we may use this to compute \begin{eqnarray*} (a^2+n-3)(3-\gamma)\bv^2 &=& [(a^2+n-3)(3-\gamma)+(n-3)(a^2-2)](v_2^2+v_3^2)\\ &=& (a^2+n-3)(v_2^2+v_3^2)\, , \end{eqnarray*} and so \begin{equation}\label{eq:RealOne} (3-\gamma)\bv^2\ =\ v_2^2+v_3^2\,. \end{equation} Substituting~(\ref{eq:RealOne}) into~(\ref{eq:redE1}) and dividing by $\bv^2$ gives \begin{equation}\label{eq:RealTwo} p_1^2 \ =\ r^2-\gamma\,. \end{equation} Combining this with $\bv^2p_1+v_2v_3=0$, we obtain \begin{equation}\label{eq:RealThree} p_1(v_2^2+v_3^2) + (3-\gamma)v_2v_3\ =\ 0\,. \end{equation}\medskip Summarizing, we have $n$ linear equations $$ v_1\ =\ p_2\ =\ p_3\ =\ p_4\ =\ \cdots\ =\ p_n\ =\ 0\,, $$ and $n-4$ simple quadratic equations $$ v_4^2\ =\ v_5^2\ =\ \cdots\ =\ v_n^2\,, $$ and the three more complicated quadratic equations,~(\ref{eq:v234}),~(\ref{eq:RealTwo}), and~(\ref{eq:RealThree}).
\smallskip We now solve these last three equations. We solve~(\ref{eq:RealTwo}) for $p_1$, obtaining $$ p_1\ =\ \pm\sqrt{r^2-\gamma}\,. $$ Then we solve~(\ref{eq:RealThree}) for $v_2$ and use~(\ref{eq:RealTwo}), obtaining $$ v_2\ =\ -\frac{3-\gamma\pm\sqrt{(3-\gamma)^2-4(r^2-\gamma)}}{2 p_1}\, v_3\,. $$ Finally,~(\ref{eq:v234}) gives $$ v_4\sqrt{a^2+n-3}\ =\ \pm\sqrt{\frac{a^2-2}{3-\gamma}(v_2^2+v_3^2)}\,. $$ Since $v_3 = 0$ would imply $v=0$ and hence contradict $v \in {\mathbb P}^{n-1}_{\mathbb C}$, we see that $v_3 \neq 0$. Thus we can conclude that when none of the following expressions $$ r^2-3\,,\;\ 3-\gamma\,,\;\ a^2-2\,,\;\ r^2-\gamma\,,\; \ (3-\gamma)^2 +4\gamma - 4r^2 $$ vanish, there are $8=2^3$ different solutions to the last 3 equations. For each of these, the simple quadratic equations give $2^{n-4}$ solutions; so we see that the case $v_1 = 0$ contributes $2^{n-1}$ different solutions, each of them satisfying $v_2 \neq 0$, $v_3 \neq 0$. Since there are three symmetric cases, we obtain $3\cdot2^{n-1}$ solutions in all, as claimed. \bigskip We complete the proof of Theorem~\ref{thm:real-cmplx} and determine which values of the parameters $a$ and $r$ give all these lines real. We see that \begin{enumerate} \item[(1)] $p_1$ is real if $r^2-\gamma >0$. \item[(2)] Given that $p_1$ is real, $v_2/v_3$ is real if $(3-\gamma)^2+4\gamma-4r^2>0$. \item[(3)] Given this, $v_4/v_3$ is real if $(a^2-2)/(3-\gamma)>0$. \end{enumerate} Suppose the three inequalities above are satisfied. Then all solutions are real, and~(\ref{eq:RealOne}) implies that $3-\gamma>0$, and so we also have $a^2-2>0$. This completes the proof of Theorem~\ref{thm:real-cmplx}. \qed \section{Affinely Dependent Centers} In our derivation of the B\'ezout number $3\cdot 2^{n-1}$ of common tangents for Theorem~\ref{th:ndimnumber}, it was crucial that the centers of the spheres affinely spanned ${\mathbb R}^n$. 
Also, the construction in Section 3 of configurations with $3\cdot 2^{n-1}$ real common tangents had centers affinely spanning ${\mathbb R}^n$. When the centers are affinely dependent, we prove the following result. \begin{thm}\label{thm:af-dep} For $n \ge 4$, there are $3\cdot 2^{n-1}$ complex common tangent lines to $2n-2$ spheres whose centers are affinely dependent, but otherwise general. There is a choice of such spheres with $2^n$ real common tangent lines. \end{thm} \begin{rem} Theorem~\ref{thm:af-dep} extends the results of~\cite[Section 4]{MPT01}, where it is shown that when $n=3$, there are 12 complex common tangents. Megyesi~\cite{Me01} has shown that there is a configuration with 12 real common tangents, but that the number of tangents is bounded by 8 for the case of unit spheres. For $n \ge 4$, we are unable either to find a configuration of spheres with affinely dependent centers and equal radii having more than $2^n$ real common tangents, or to show that the maximum number of real common tangents is less than $3\cdot 2^{n-1}$. As in the case $n=3$, the case of unit spheres and the case of spheres with general radii might give different maximum numbers. \end{rem} \begin{rem} Megyesi~\cite{Me02} recently showed that there are $2n-2$ spheres with affinely dependent centers having all $3\cdot 2^{n-1}$ common tangents real. Furthermore, all but one of the spheres in his construction have equal radii. \end{rem} By Theorem~\ref{th:ndimnumber}, $3\cdot 2^{n-1}$ is the upper bound for the number of complex common tangents to spheres with affinely dependent centers. Indeed, if there were a configuration with more common tangents, then---since the system is a complete intersection---perturbing the centers would give a configuration with affinely independent centers and more common tangent lines than allowed by Theorem~\ref{th:ndimnumber}.
By this discussion, to prove Theorem~\ref{thm:af-dep} it suffices to give $2n-2$ spheres with affinely dependent centers having $3\cdot 2^{n-1}$ complex common tangents and also such a configuration of $2n-2$ spheres with $2^n$ real common tangents. For this, we use spheres with equal radii whose centers are the vertices of a perturbed crosspolytope in a hyperplane. We work with the notation of Sections 2 and 3. Let $a\neq -1$ and suppose we have spheres with equal radii $r$ and centers at the points $$ ae_2,\ \,-e_2,\quad\mbox{and}\quad \pm e_j, \quad \mbox{for}\ 3\leq j\leq n\,. $$ Then we have the equations \begin{eqnarray} p\cdot v&=&0,\label{eq:first}\\ f\ :=\ v^2(p^2-2ap_2+a^2-r^2)-a^2v_2^2&=&0,\\ g\ :=\hspace{2.24em} v^2(p^2+2p_2+1-r^2)-v_2^2&=&0,\label{eq:XX}\\ v^2(p^2\pm2p_j+1-r^2)-v_j^2&=&0, \qquad 3\leq j\leq n\,.\label{eq:j} \end{eqnarray} As in Section 3, the sum and difference of the equations~(\ref{eq:j}) for the spheres with centers $\pm e_j$ give $$ \begin{array}{rcl} p_j&=&0,\\ v^2(p^2+1-r^2)&=&v_j^2.\rule{0pt}{14pt} \end{array}\qquad 3\leq j\leq n\,. $$ Thus we have the equations \begin{equation}\label{eq:jj} \begin{array}{c} p_3\ =\ p_4\ =\ \cdots\ =\ p_n\ =\ 0,\\ v_3^2\ =\ v_4^2\ =\ \cdots\ =\ v_n^2. \rule{0pt}{15pt} \end{array} \end{equation} Similarly, we have \begin{eqnarray*} f+ag&=& (1+a)\left[v^2(p^2-r^2+a) - av_2^2\right] = 0,\\ f-a^2g&=& (1+a)v^2\left[ (1-a)(p^2-r^2) - 2ap_2\right] = 0. \end{eqnarray*} As before, $v^2\neq 0$: If $v^2=0$, then~(\ref{eq:XX}) and~(\ref{eq:j}) imply that $v_2=\cdots=v_n=0$. With $v^2=0$, this implies that $v_1=0$ and hence $v=0$, contradicting $\bv\in{\mathbb P}^{n-1}_{\mathbb C}$. 
By~(\ref{eq:jj}), we have $p^2=p_1^2+p_2^2$, and so we obtain the system of equations in the variables $p_1,p_2,v_1,v_2,v_3$: \begin{equation}\label{eq:sm-sys} \begin{array}{r} p_1v_1+p_2v_2\ =\ 0,\\ (1-a)(p_1^2+p_2^2-r^2) - 2ap_2\ =\ 0\rule{0pt}{15pt},\\ v^2(p_1^2+p_2^2-r^2+a) - av_2^2\ =\ 0\rule{0pt}{15pt},\\ v^2(p_1^2+p_2^2-r^2+1) - v_3^2\ =\ 0\rule{0pt}{15pt}. \end{array} \end{equation} (For notational sanity, we do not yet make the substitution $v^2=v_1^2+v_2^2+(n-2)v_3^2$.) We assume that $a\neq1$ and will treat the case $a=1$ at the end of this section. Using the second equation of~(\ref{eq:sm-sys}) to cancel the terms $v^2(p_1^2+p_2^2)$ from the third equation and dividing the result by $a$, we can solve for $p_2$: $$ p_2\ =\ -\frac{(1-a)(v^2-v_2^2)}{2v^2}\,. $$ If we substitute this into the first equation of~(\ref{eq:sm-sys}), we may solve for $p_1$: $$ p_1\ =\ \frac{(1-a)(v^2-v_2^2)v_2}{2v^2v_1}\,. $$ Substitute these into the second equation of~(\ref{eq:sm-sys}), clear the denominator $(4v_1^2v^4)$, and remove the common factor $(1-a)$ to obtain the sextic \begin{equation}\label{eq:sextic} (1-a)^2(v_1^2+v_2^2)(v^2-v_2^2)^2 \,-\, 4r^2v_1^2v^4 \,+\, 4av_1^2v^2(v^2-v_2^2)\ =\ 0\,. \end{equation} Subtracting the third equation of~(\ref{eq:sm-sys}) from the fourth equation and recalling that $v^2=v_1^2+v_2^2+(n-2)v_3^2$, we obtain the quadratic equation \begin{equation}\label{eq:quadric} (1-a)v_1^2+v_2^2+ \left[(n-3)-a(n-2)\right]v_3^2\ =\ 0\,. \end{equation} Consider the system consisting of the two equations~(\ref{eq:sextic}) and~(\ref{eq:quadric}) in the homogeneous coordinates $v_1,v_2,v_3$. Any solution to this system gives a solution to the system~(\ref{eq:sm-sys}), and thus gives $2^{n-3}$ solutions to the original system~(\ref{eq:first})--(\ref{eq:j}). These last two equations~(\ref{eq:sextic}) and~(\ref{eq:quadric}) are polynomials in the squares of the variables $v_1^2, v_2^2, v_3^2$.
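The eliminations of this section can be checked symbolically, with $p^2$ and $v^2$ written out after~(\ref{eq:jj}); the following script is ours, not part of the paper:

```python
import sympy as sp

# Sanity checks for the combinations f+ag, f-a^2g and for (eq:quadric).
a, r, p1, p2, v1, v2, v3, n = sp.symbols('a r p1 p2 v1 v2 v3 n')
V = v1**2 + v2**2 + (n - 2)*v3**2          # v^2 after (eq:jj)
P = p1**2 + p2**2                          # p^2 after (eq:jj)
f = V*(P - 2*a*p2 + a**2 - r**2) - a**2*v2**2
g = V*(P + 2*p2 + 1 - r**2) - v2**2
assert sp.expand(f + a*g - (1 + a)*(V*(P - r**2 + a) - a*v2**2)) == 0
assert sp.expand(f - a**2*g - (1 + a)*V*((1 - a)*(P - r**2) - 2*a*p2)) == 0

# (eq:quadric): the third equation of (eq:sm-sys) subtracted from the fourth
e3 = V*(P - r**2 + a) - a*v2**2
e4 = V*(P - r**2 + 1) - v3**2
quad = (1 - a)*v1**2 + v2**2 + ((n - 3) - a*(n - 2))*v3**2
assert sp.expand(e4 - e3 - quad) == 0
```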
If we substitute $\alpha=v_1^2, \beta=v_2^2$, and $\gamma=v_3^2$, then we have a cubic and a linear equation, and any solution $\alpha,\beta,\gamma$ to these with nonvanishing coordinates gives 4 solutions to the system~(\ref{eq:sextic}) and~(\ref{eq:quadric}): $(v_1,v_2,v_3)^{\mathrm{T}}:=(\alpha^{1/2},\pm\beta^{1/2},\pm\gamma^{1/2})^{\mathrm{T}}$, as $v_1,v_2,v_3$ are homogeneous coordinates. Solving the linear equation in $\alpha,\beta,\gamma$ for $\beta$ and substituting into the cubic equation gives a homogeneous cubic in $\alpha$ and $\gamma$ whose coefficients are polynomials in $a,n,r$ \silentfootnote{Maple V.5 code verifying this and other explicit calculations presented in this manuscript is available at {\tt www.math.umass.edu/\~{}sottile/pages/spheres}.}. The discriminant of this cubic is a polynomial with integral coefficients of degree 16 in the variables $a,n,r$ having 116 terms. Using a computer algebra system, it can be verified that this discriminant is irreducible over the rational numbers. Thus, for any fixed integer $n\geq 3$, the discriminant is a nonzero polynomial in $a,r$. This implies that the cubic has 3 solutions for general $a,r$ and any integer $n$. Since the coefficients of this cubic similarly are nonzero polynomials for any $n$, the solutions $\alpha,\beta,\gamma$ will be nonzero for general $a,r$ and any $n$. We conclude: $$ \mbox{\begin{minipage}[c]{4.3in} For any integer $n\geq 3$ and general $a,r$, there will be $3\cdot 2^{n-1}$ complex common tangents to spheres of radius $r$ with centers $$ ae_2,\ \,-e_2,\quad\mbox{and}\quad \pm e_j, \ \quad \mbox{for}\ 3\leq j\leq n\,. $$ \end{minipage}} $$ We return to the case when $a=1$, i.e., the centers are the vertices of the crosspolytope $\pm e_j$ for $j=2,\ldots,n$. 
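The elimination described above can be carried out explicitly for sample values; in this sketch (ours, with the sample parameters $n=4$, $a=1/3$, $r=6/5$ chosen arbitrarily), substituting $\beta$ from the linear equation leaves a cubic in $(\alpha,\gamma)$:

```python
import sympy as sp

# Substitute beta from the linear equation into the cubic in alpha, beta, gamma
# obtained from (eq:sextic) and (eq:quadric) in the squared variables.
al, be, ga = sp.symbols('alpha beta gamma')
n, a, r = 4, sp.Rational(1, 3), sp.Rational(6, 5)
V = al + be + (n - 2)*ga                   # v^2 in the squared variables
sextic = (1 - a)**2*(al + be)*(V - be)**2 - 4*r**2*al*V**2 + 4*a*al*V*(V - be)
linear = (1 - a)*al + be + ((n - 3) - a*(n - 2))*ga
beta_sol = sp.solve(linear, be)[0]
cubic = sp.Poly(sp.expand(sextic.subs(be, beta_sol)).subs(ga, 1), al)
assert cubic.degree() == 3                 # three solutions (alpha : gamma)
roots = cubic.nroots()                     # each root gives 4 sign choices
```

Each of the 3 roots yields $4$ sign choices for $(v_1,v_2,v_3)$ and $2^{n-3}$ lifts, for a total of $3\cdot 2^{n-1}$ solutions.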
Then our equations~(\ref{eq:jj}) and~(\ref{eq:sm-sys}) become \begin{equation}\label{eq:2n} \begin{array}{r} p_2\ =\ p_3\ =\ \cdots\ =\ p_n\ =\ 0,\\ v_2^2\ =\ v_3^2\ =\ \cdots\ =\ v_n^2, \rule{0pt}{15pt}\\ p_1v_1\ =\ 0,\rule{0pt}{15pt}\\ v^2(p_1^2-r^2+1) - v_2^2\ =\ 0.\rule{0pt}{15pt} \end{array} \end{equation} As before, $v^2=v_1^2+(n-1)v_2^2$. We solve the last two equations. Any solution they have (in ${\mathbb C}^1\times{\mathbb P}^1_{\mathbb C}$) gives rise to $2^{n-2}$ solutions, by the second list of equations $v_3^2=\cdots=v_n^2$. By the penultimate equation $p_1v_1=0$, one of $p_1$ or $v_1$ vanishes. If $v_1=0$, then the last equation becomes $$ (n-1)v_2^2(p_1^2-r^2+1)\ =\ v_2^2\,. $$ Since $v_2=0$ implies $v^2=0$, we have $v_2\neq 0$ and so we may divide by $v_2^2$ and solve for $p_1$ to obtain $$ p_1\ =\ \pm\sqrt{r^2-1+\frac{1}{n-1}}\,. $$ If instead $p_1=0$, then we solve the last equation to obtain $$ \frac{v_1}{v_2}\ =\ \pm\sqrt{\frac{1}{1-r^2}+1-n}\,. $$ Thus for general $r$, there will be $2^n$ common tangents to the spheres with radius $r$ and centers $\pm e_j$ for $j=2,\ldots,n$. We investigate when these are real. We will have $p_1$ real when $r^2\ >\ 1-1/(n-1)$. Similarly, $v_1/v_2$ will be real when $1/(1-r^2)\ >\ n-1$. In particular, $1-r^2>0$ and so $1>r^2$. Using this we get $$ 1-r^2\ <\ \frac{1}{n-1}\qquad\mbox{so that}\qquad r^2\ >\ 1-\frac{1}{n-1}\,, $$ which we previously obtained. We conclude that there will be $2^n$ real common tangents to the spheres with centers $\pm e_j$ for $j=2,\ldots,n$ and radius $r$ when $$ \sqrt{1-\frac{1}{n-1}}\ <\ r\ <\ 1 \,. $$ This concludes the proof of Theorem~\ref{thm:af-dep}. \section{Lines Tangent to Quadrics} Suppose that in our original question we ask for common tangents to ellipsoids, or to more general quadric hypersurfaces. 
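Returning to the crosspolytope configuration ($a=1$) just treated: the $2^n$ real common tangents can be confirmed numerically by checking that each line has distance $r$ from all $2n-2$ centers. The following sketch (helper names are our own) does this for $n=4$ and $r=0.9$:

```python
import itertools, math

n, r = 4, 0.9
assert 1 - 1/(n - 1) < r**2 < 1            # the realness window derived above

def line_center_dist(p, v, c):
    """Distance from the center c to the line x = p + t*v."""
    diff = [ci - pi for ci, pi in zip(c, p)]
    nv = math.sqrt(sum(x*x for x in v))
    proj = sum(di*vi for di, vi in zip(diff, v)) / nv
    return math.sqrt(sum(x*x for x in diff) - proj**2)

centers = []
for j in range(1, n):                      # centers +-e_j for j = 2, ..., n
    for s in (1.0, -1.0):
        c = [0.0]*n; c[j] = s; centers.append(c)

p1 = math.sqrt(r**2 - 1 + 1/(n - 1))       # the v_1 = 0 family
w1 = math.sqrt(1/(1 - r**2) + 1 - n)       # the p_1 = 0 family
lines = []
for signs in itertools.product((1.0, -1.0), repeat=n - 2):
    for s in (1.0, -1.0):
        lines.append(([s*p1] + [0.0]*(n - 1), [0.0, 1.0, *signs]))
        lines.append(([0.0]*n, [s*w1, 1.0, *signs]))

assert len(lines) == 2**n
for p, v in lines:
    for c in centers:
        assert abs(line_center_dist(p, v, c) - r) < 1e-9
```

Both families together give $2^n=16$ tangent lines to all $2n-2=6$ spheres, matching the count above.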
Since all smooth quadric hypersurfaces are projectively equivalent, a natural setting for this question is the following: ``How many common tangents are there to $2n-2$ general quadric hypersurfaces in (complex) projective space ${\mathbb P}^n_{\mathbb C}$?'' \begin{thm}\label{thm:bezout} There are at most $$ 2^{2n-2}\cdot\frac{1}{n}\binom{2n-2}{n-1} $$ isolated common tangent lines to $2n-2$ quadric hypersurfaces in ${\mathbb P}^n_{\mathbb C}$. \end{thm} \begin{proof} The space of lines in ${\mathbb P}^n_{\mathbb C}$ is the Grassmannian of 2-planes in ${\mathbb C}^{n+1}$. The Pl\"ucker embedding~\cite{MR48:2152} realizes this as a projective subvariety of ${\mathbb P}_{\mathbb C}^{\binom{n+1}{2}-1}$ of degree $$ \frac{1}{n}\binom{2n-2}{n-1}\,. $$ The theorem follows from the refined B\'ezout theorem~\cite[\S 12.3]{Fu84a} and from the fact that the condition for a line to be tangent to a quadric hypersurface is a homogeneous quadratic equation in the Pl\"ucker coordinates for lines~\cite[\S 5.4]{sottile-macaulay-2001}. \end{proof} In Table~\ref{ta:quadvalues}, we compare the upper bound of Theorem~\ref{thm:bezout} for the number of lines tangent to $2n-2$ quadrics to the number of lines tangent to $2n-2$ spheres of Theorem~\ref{th:ndimnumber}, for small values of $n$. \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline $n$ & 3 & 4 & 5 & 6 & 7\\ \hline \# for spheres & 12 & 24 & 48 & 96 & 192\\ \hline \# for quadrics &32 &320 &3584 &43008&540672\\ \hline \end{tabular}\smallskip \end{center} \caption{Maximum number of tangents in small dimensions}\label{ta:quadvalues} \end{table} The bound of 32 tangent lines to 4 quadrics in ${\mathbb P}^3_{\mathbb C}$ is sharp, even under the restriction to real quadrics and real tangents \cite{sottile-theobald-progress}.
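The quadric row of Table~\ref{ta:quadvalues} can be recomputed directly from Theorem~\ref{thm:bezout}; a one-line check (ours):

```python
from math import comb

# 2^(2n-2) times the degree of the Grassmannian of lines in P^n,
# which is the Catalan number C(2n-2, n-1)/n.
def bezout_bound(n):
    return 2**(2*n - 2) * comb(2*n - 2, n - 1) // n

print([bezout_bound(n) for n in range(3, 8)])
# [32, 320, 3584, 43008, 540672]
```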
In a computer calculation, we found 320 lines in ${\mathbb P}^4_{\mathbb C}$ tangent to 6 general quadrics; thus the upper bound of Theorem~\ref{thm:bezout} is sharp also for $n=4$, indicating that it is likely sharp for $n > 4$. The question arises: what is the source of the huge discrepancy between the second and third rows of Table~\ref{ta:quadvalues}? Consider a sphere in affine $n$-space $$ (x_1-c_1)^2 + (x_2-c_2)^2 + \cdots + (x_n-c_n)^2\ =\ r^2\,. $$ Homogenizing this with respect to the new variable $x_0$, we obtain $$ (x_1-c_1x_0)^2 + (x_2-c_2x_0)^2 + \cdots + (x_n-c_nx_0)^2\ =\ r^2x_0^2\,. $$ If we restrict this sphere to the hyperplane at infinity, setting $x_0=0$, we obtain \begin{equation}\label{eq:imaginary} x_1^2+x_2^2+\cdots+x_n^2\ =\ 0\,, \end{equation} the equation for an imaginary quadric at infinity. We invite the reader to check that every line at infinity tangent to this quadric is tangent to the original sphere. Thus the equations for lines in ${\mathbb P}^n_{\mathbb C}$ tangent to $2n-2$ spheres define the $3\cdot 2^{n-1}$ lines we computed in Theorem~\ref{th:ndimnumber}, as well as this excess component of lines at infinity tangent to the imaginary quadric~(\ref{eq:imaginary}). Thus, this excess component contributes some portion of the B\'ezout number of Theorem~\ref{thm:bezout} to the total number of lines. Indeed, when $n=3$, Aluffi and Fulton~\cite{aluffi-fulton-2001} have given a careful argument that this excess component contributes 20, which implies that there are $32-20=12$ isolated common tangent lines to 4 spheres in 3-space, recovering the result of~\cite{MPT01}. The geometry of that calculation is quite interesting.
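The reader's check can also be done symbolically: for two points spanning a line in the hyperplane at infinity, the tangency polynomial of the homogenized sphere reduces to that of the quadric~(\ref{eq:imaginary}), independently of the center $c$ and radius $r$. A short script (ours, for $n=3$):

```python
import sympy as sp

# A line spanned by points a, b is tangent to the quadric with matrix S
# exactly when (a^T S b)^2 = (a^T S a)(b^T S b).
n = 3
x = sp.symbols(f'x0:{n+1}')                # homogeneous coordinates x0..xn
c = sp.symbols(f'c1:{n+1}')
r = sp.Symbol('r')
sphere = sum((x[i+1] - c[i]*x[0])**2 for i in range(n)) - r**2*x[0]**2
S = sp.Matrix(n+1, n+1, lambda i, j: sp.diff(sphere, x[i], x[j])/2)
a = sp.Matrix([0] + list(sp.symbols(f'a1:{n+1}')))   # two points spanning a
b = sp.Matrix([0] + list(sp.symbols(f'b1:{n+1}')))   # line in {x0 = 0}
tangent_sphere = (a.T*S*b)[0]**2 - (a.T*S*a)[0]*(b.T*S*b)[0]
dot = sum(a[i]*b[i] for i in range(1, n + 1))
na = sum(a[i]**2 for i in range(1, n + 1))
nb = sum(b[i]**2 for i in range(1, n + 1))
# ... equals the tangency polynomial of x1^2 + ... + xn^2 = 0:
assert sp.expand(tangent_sphere - (dot**2 - na*nb)) == 0
```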
Given a system of equations on a space (say the Grassmannian) whose set of zeroes has a positive-dimensional excess component, one method to compute the number of isolated solutions is to first modify the underlying space by blowing up the excess component and then compute the number of solutions on this new space. In many cases, the equations on this new space have only isolated solutions. However, for this problem of lines tangent to spheres, the equations on the blown-up space will \emph{still} have an excess intersection and a further blow-up is required. This problem of lines tangent to 4 spheres in projective 3-space is by far the simplest enumerative geometric problem with an excess component of zeroes which requires two blow-ups (technically speaking, blow-ups along smooth centers) to resolve the excess zeroes. It would be interesting to understand the geometry also when $n > 3$. For example, how many blow-ups are needed to resolve the excess component? \bigskip Since all smooth quadrics are projectively equivalent, Theorem~\ref{th:ndimnumber} has the following implication for this problem of common tangents to projective quadrics. \begin{thm}\label{thm:UB} Given $2n-2$ quadrics in ${\mathbb P}^n_{\mathbb C}$ whose intersections with a fixed hyperplane all equal a given smooth quadric $Q$, but which are otherwise general, there will be at most $3\cdot 2^{n-1}$ isolated lines in ${\mathbb P}^n_{\mathbb C}$ tangent to each quadric. \end{thm} When the quadrics are all real, we ask: how many of these $3\cdot 2^{n-1}$ common isolated tangents can be real? This question is only partially answered by Theorem~\ref{th:ndimnumber}. The point is that projective real quadrics are classified up to real projective transformations by the absolute value of the signature of the quadratic forms on ${\mathbb R}^{n+1}$ defining them. Theorem~\ref{th:ndimnumber} implies that all lines can be real when the shared quadric $Q$ has no real points (signature is $\pm n$).
In~\cite{sottile-macaulay-2001}, it is shown that when $n=3$, each of the five additional cases concerning nonempty quadrics can have all 12 lines real. \smallskip Recently, Megyesi~\cite{Me02} has largely answered this question. Specifically, he showed that, for any nonzero real numbers $\lambda_3,\ldots,\lambda_n$, there are $2n-2$ quadrics of the form $$ (x_1-c_1)^2 + (x_2-c_2)^2 + \sum_{j=3}^n \lambda_j(x_j-c_j)^2 \ =\ R $$ having all $3\cdot 2^{n-1}$ tangents real. These all share the same quadric at infinity $$ x_1^2 + x_2^2 + \lambda_3 x_3^2 + \cdots + \lambda_n x_n^2\ =\ 0\,, $$ and thus the upper bound of Theorem~\ref{thm:UB} is attained when the shared quadric is this one. \bigskip \noindent {\bf Acknowledgments:} The authors would like to thank I.~G.~Macdonald for pointing out a simplification in Section~2, as well as Gabor Megyesi and an anonymous referee for their useful suggestions. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\begin{document} \title{A representation formula of the viscosity solution of the contact Hamilton-Jacobi equation and its applications} \author{Panrui Ni \and Lin Wang \and Jun Yan} \address{Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China} \email{prni18@fudan.edu.cn} \address{School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China} \email{lwang@bit.edu.cn} \address{School of Mathematical Sciences, Fudan University, Shanghai 200433, China} \email{yanjun@fudan.edu.cn} \subjclass[2010]{37J50; 35F21; 35D40.} \keywords{weak KAM theory, Hamilton-Jacobi equations, viscosity solutions} \begin{abstract} \noindent Assume $M$ is a closed, connected and smooth Riemannian manifold. We consider the following two forms of Hamilton-Jacobi equations \begin{equation*} \left\{ \begin{aligned} &\partial_t u(x,t)+H(x,u(x,t),\partial_xu(x,t))=0,\quad (x,t)\in M\times(0,+\infty).\\ &u(x,0)=\varphi(x),\quad x\in M,\ \varphi\in C(M,\mathbb R).\\ \end{aligned} \right. \end{equation*} and \begin{equation*} H(x,u(x),\partial_x u(x))=0, \end{equation*} where $H(x,u,p)$ is continuous, convex and coercive in $p$, and uniformly Lipschitz in $u$. By introducing a solution semigroup, we provide a {\it representation formula} of the viscosity solution of the evolutionary equation. As applications, we obtain a necessary and sufficient condition for the existence of the viscosity solutions of the stationary equations. In particular, we prove a new comparison result with a {\it necessary} neighborhood of the projected Aubry set for a special class of Hamilton-Jacobi equations that do not satisfy the ``proper'' condition introduced in the seminal paper \cite{CHL2}.
\end{abstract} \date{\today} \maketitle \tableofcontents \section{Introduction and main results} The study of the theory of viscosity solutions of the following two forms of Hamilton-Jacobi equations \begin{equation}\label{evll} \partial_t u(x,t)+H(x,u(x,t),\partial_xu(x,t))=0, \end{equation} and \begin{equation}\label{hj} H(x,u(x),\partial_x u(x))=0 \end{equation} has a long history. There are numerous results on the existence, uniqueness, stability and large-time behavior problems for the viscosity solutions of the above first-order nonlinear partial differential equations (see \cite{Intr,CEL,CHL2,Vis} for instance), especially for the cases in which the Hamiltonian is independent of the argument $u$, whose characteristic equations are the classical Hamilton equations. For Hamilton-Jacobi equations depending on the unknown function, the corresponding characteristic equations are called the contact Hamilton equations. In \cite{Wa1}, the authors introduced an implicit variational principle for the contact Hamilton equations. Based on the implicit variational principle, \cite{Wa2} gave the representation formula for the unique viscosity solution of the evolutionary equation by introducing a solution semigroup. The existence of the solutions for the ergodic problem was also proved. \cite{Wa3} developed the Aubry-Mather theory for contact Hamiltonian systems with moderate increase in $u$. In \cite{Wa4}, the authors further studied the strictly decreasing case, and discussed the large-time behavior of the solution in the evolutionary case. In order to get the $C^1$-regularity of the minimizers, \cite{Wa1} assumes $H(x,u,p)$ to be at least $C^3$. Thus, the results in \cite{Wa2,Wa3,Wa4} based on the implicit variational principle all require the contact Hamiltonian to be at least $C^3$.
This paper is devoted to reducing the dynamical assumptions on the Hamiltonian: {\it $C^3$, strictly convex} and {\it superlinear}, to the standard PDE assumptions: {\it continuous, convex} and {\it coercive}. In this general case, the contact Hamiltonian equations cannot be defined. Nevertheless, the dynamical idea is still useful. One can refer to \cite{gen,Fa} for the pioneering works in the classical time-independent Hamiltonian case. Different from the previous works \cite{gen,Fa,Wa2,Wa3,Wa4}, one has to face certain new difficulties due to the lack of compactness of minimizers and the appearance of the Lavrentiev phenomenon caused by time-dependence. By combining and developing the PDE and dynamical approaches, we provide a {\it representation formula} of the viscosity solution of the evolutionary equation. As applications, we obtain a necessary and sufficient condition for the existence of the viscosity solutions of the stationary equations. It is well known that the comparison theorem plays a central role in the viscosity solution theory. We prove a new comparison result with a neighborhood of the projected Aubry set. An example is constructed to show that the requirement of the neighborhood is {\it necessary} for a special class of Hamilton-Jacobi equations that do not satisfy the ``proper'' condition introduced in the seminal paper \cite{CHL2}. By comparison, the viscosity solution is determined completely by the projected Aubry set itself in the ``proper'' cases (\cite[Theorem 1.6]{WWY4}). Throughout this paper, we assume $M$ is a closed, connected and smooth Riemannian manifold and $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies \begin{itemize} \item [\textbf{(C):}] $H(x,u,p)$ is continuous; \item [\textbf{(CON):}] $H(x,u,p)$ is convex in $p$, for any $(x,u)\in M\times \mathbb R$; \item [\textbf{(CER):}] $H(x,u,p)$ is coercive in $p$, i.e.
$\lim_{\|p\|\rightarrow +\infty}(\inf_{x\in M}H(x,0,p))=+\infty$; \item [\textbf{(LIP):}] $H(x,u,p)$ is Lipschitz in $u$, uniformly with respect to $(x,p)$, i.e., there exists $\lambda>0$ such that $|H(x,u,p)-H(x,v,p)|\leq \lambda|u-v|$, for all $(x,p)\in\ T^*M$ and all $u,v\in\mathbb R$. \end{itemize} Correspondingly, one has the Lagrangian associated to $H$: \begin{equation*} L(x,u,\dot x):=\sup_{p\in T^*_xM}\{\langle \dot x,p\rangle-H(x,u,p)\}, \end{equation*} which satisfies the following properties (see \cite[Proposition 2.7]{gen} for instance) \begin{itemize} \item [\textbf{(LSC):}] $L(x,u,\dot x)$ is lower semicontinuous in $\dot x$; \item [\textbf{(CON):}] $L(x,u,\dot x)$ is convex in $\dot x$, for any $(x,u)\in M\times \mathbb R$; \item [\textbf{(LIP):}] $L(x,u,\dot x)$ is Lipschitz in $u$, uniformly with respect to $(x,\dot x)$, i.e., there exists $\lambda>0$ such that $|L(x,u,\dot x)-L(x,v,\dot x)|\leq \lambda|u-v|$, for all $(x,\dot x)\in\ TM$ and all $u,v\in\mathbb R$. \end{itemize} We list notations which will be used later in the present paper. \begin{itemize} \item [$\centerdot$] $\textrm{diam}(M)$ denotes the diameter of $M$. \item [$\centerdot$] $d(x,y)$ denotes the distance between $x$ and $y$ induced by the Riemannian metric $g$ on $M$. \item [$\centerdot$] $\|\cdot\|$ denotes the norms induced by $g$ on both tangent and cotangent spaces of $M$. \item[{$\centerdot$}] $B(v,r)$ stands for the open norm ball on $T_xM$ centered at $v\in T_xM$ with radius $r$, and $\bar B(v,r)$ stands for its closure. \item [$\centerdot$] $C(M)$ stands for the space of continuous functions on $M$. $Lip(M)$ stands for the space of Lipschitz continuous functions on $M$. \item [$\centerdot$] $\|\cdot\|_\infty$ stands for the supremum norm of the vector valued functions on its domain. 
\end{itemize} \subsection{A representation of the solution of the evolutionary equation} Consider the viscosity solution of the Cauchy problem \begin{equation}\tag{$CP_H$}\label{C} \left\{ \begin{aligned} &\partial_t u(x,t)+H(x,u(x,t),\partial_xu(x,t))=0,\quad (x,t)\in M\times(0,+\infty). \\ &u(x,0)=\varphi(x),\quad x\in M. \\ \end{aligned} \right. \end{equation} We have the following results. \begin{result}\label{m1} Assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(LIP). The following backward solution semigroup \begin{equation}\tag{T-}\label{T-} T^-_t\varphi(x)=\inf_{\gamma(t)=x} \left\{\varphi(\gamma(0))+\int_0^tL(\gamma(\tau),T^-_\tau\varphi(\gamma(\tau)),\dot{\gamma}(\tau)){d}\tau\right\} \end{equation} is well-defined. The infimum is taken among absolutely continuous curves $\gamma:[0,t]\rightarrow M$ with $\gamma(t)=x$. Moreover, if the initial condition $\varphi$ is continuous, then $ u(x,t):=T^-_t\varphi(x)$ represents the unique continuous viscosity solution of (\ref{C}). If $\varphi$ is Lipschitz continuous, then $u(x,t):=T^-_t\varphi(x)$ is also locally Lipschitz continuous on $M\times [0,+\infty)$. \end{result} The main difficulties in proving Main Result \ref{m1} are as follows. \begin{itemize} \item Compared to contact HJ equations under the Tonelli conditions, the contact Hamilton flow cannot be defined here. Consequently, we do not have the compactness of the minimizing orbit set, which plays a crucial role in the authors' previous work (see \cite[Lemma 2.1]{Wa2}). \item Compared to classical HJ equations in less regular cases (see \cite{gen,Fa}), the backward semigroup is implicitly defined, which causes $t$-dependence in the Lagrangians. Due to the Lavrentiev phenomenon, it is not straightforward to prove the Lipschitz continuity of the minimizers of $T^-_t\varphi(x)$ (see \cite{ball} for various counterexamples).
\end{itemize} Consequently, we have to make more effort to obtain the {\it Lipschitz continuity} both of the minimizers of $T^-_t\varphi(x)$ and of $T^-_t\varphi$ itself under the general assumptions (C) (CON) (CER) and (LIP). This is achieved by combining and developing the PDE and dynamical approaches, together with a new variational inequality introduced in \cite{Bet}. Define $F(x,u,p):=H(x,-u,-p)$. According to Main Result \ref{m1}, there exists a backward solution semigroup $\bar T^-_t$ corresponding to $F$. Then the forward solution semigroup of $H$ can be defined by $T^+_t\varphi:=-\bar T^-_t(-\varphi)$. Equivalently, \begin{equation}\label{T+eq}\tag{T+} T^+_t\varphi(x)=\sup_{\gamma(0)=x}\left\{\varphi(\gamma(t))-\int_0^tL(\gamma(\tau),T^+_{t-\tau}\varphi(\gamma(\tau)),\dot{\gamma}(\tau))d\tau\right\}. \end{equation} By Main Result \ref{m1}, the fixed points of $T^-_t$ are viscosity solutions of \begin{equation}\label{E}\tag{$E_H$} H(x,u(x),\partial_x u(x))=0. \end{equation} It is worth mentioning that an alternative variational formulation was provided in \cite{CCWY,CCJWY,LTW} in light of G. Herglotz's work \cite{her}, which is related to nonholonomic constraints. By using the Herglotz variational principle, various kinds of representation formulas for the viscosity solutions of (\ref{evll}) were also obtained in \cite{HCHZ}. \subsection{The existence of the solutions of the stationary equation} \begin{remark}\label{Perron} Assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CER)(LIP); according to the classical Perron method \cite{Ish} (see also \cite[Theorem 1.1]{erg}), if (\ref{E}) has a subsolution $\psi$ and a supersolution $\Psi$, both Lipschitz continuous and satisfying $\psi\leq \Psi$, then the above equation admits a Lipschitz viscosity solution. \end{remark} In light of \cite{Ish}, we introduce another necessary and sufficient condition for (\ref{E}) to admit solutions.
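To illustrate the backward semigroup (\ref{T-}) before turning to existence, here is a crude semi-Lagrangian discretization on the circle. It is entirely our own toy sketch, not the paper's construction: the Lagrangian, the grid, and the freezing of the implicit $u$-dependence at the previous step are all our choices. It exhibits the comparison property $\varphi\leq\psi\Rightarrow T^-_t\varphi\leq T^-_t\psi$:

```python
import numpy as np

# Toy Lagrangian L(x,u,v) = v^2/2 + lam*sin(u) on M = R/2piZ; it satisfies
# (LSC)(CON)(LIP). One step approximates T^-_dt with u frozen one step back.
N, dt, lam, steps = 200, 0.01, 0.5, 50
xs = np.linspace(0, 2*np.pi, N, endpoint=False)

def step(u):
    unew = np.empty_like(u)
    for i, x in enumerate(xs):
        d = (xs - x + np.pi) % (2*np.pi) - np.pi     # y - x on the circle
        cost = d**2/(2*dt) + dt*lam*np.sin(u[i])     # dt*L with v = (x-y)/dt
        unew[i] = np.min(u + cost)
    return unew

phi, psi = np.sin(xs), np.sin(xs) + 1.0
u1, u2 = phi.copy(), psi.copy()
for _ in range(steps):
    u1, u2 = step(u1), step(u2)
# comparison is preserved along the iteration: phi <= psi gives u1 <= u2
assert np.all(u1 <= u2 + 1e-12)
```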
\begin{result}\label{S} Assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(LIP). Given $\varphi\in Lip(M)$, if $T^-_t\varphi(x)$ has a bound independent of $t$, then \begin{equation*} \check{\varphi}=\liminf_{t\rightarrow +\infty}T^-_t\varphi(x) \end{equation*} exists and is a backward weak KAM solution of (\ref{E}). More generally, the following statements are equivalent: \begin{itemize} \item [(1)] (\ref{E}) admits viscosity solutions; \item [(2)] There exist two continuous functions $\varphi$ and $\psi$ such that $T^-_t\varphi$ has a lower bound independent of $t$, and $T^-_t\psi$ has an upper bound independent of $t$; \item [(3)] There exist two continuous functions $\varphi$ and $\psi$, and two constants $t_1$, $t_2>0$ such that $T^-_{t_1}\varphi\geq \varphi$ and $T^-_{t_2}\psi\leq \psi$. \end{itemize} \end{result} If (\ref{E}) admits a solution $u_-$, then taking $u_-$ as the initial function, statements (2) and (3) obviously hold. Thus, we only need to show the opposite direction, which will be proved in Section \ref{secS}. It is worth noting that the upper bound of $T^-_t\psi$ may be less than the lower bound of $T^-_t\varphi$. We denote by $\mathcal S_-$ and $\mathcal S_+$ the set of all backward weak KAM solutions and the set of all forward weak KAM solutions respectively. In the discussion below, we need to introduce the following assumption: \begin{itemize} \item [\textbf{(S):}] The set $\mathcal S_-$ is nonempty. Namely, (\ref{E}) admits a viscosity solution $u_-$. \end{itemize} \begin{result}\label{m2'} Assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(LIP) and the condition (S). Then $T^+_tu_-$ converges uniformly, as $t\rightarrow +\infty$, to some $u_+$, where $u_+\in\mathcal S_+$ and $-u_+$ is a viscosity solution of \begin{equation}\label{E'}\tag{$E_F$} F(x,u(x),\partial_x u(x))=0, \end{equation}where $F(x,u,p):=H(x,-u,-p)$.
Moreover, the existence of the viscosity solutions of (\ref{E}) is equivalent to the existence of the viscosity solutions of (\ref{E'}). \end{result} \begin{definition} Let $u_-\in \mathcal{S}_-$, $u_+\in \mathcal{S}_+$. Define the projected Aubry set \begin{equation*} \mathcal I_{(u_-,u_+)}:=\{x\in M:\ u_-(x)=u_+(x)\}. \end{equation*} \end{definition} If $u_+=\lim_{t\rightarrow +\infty}T^+_tu_-$ or $u_-=\lim_{t\rightarrow +\infty}T^-_tu_+$, then Corollary \ref{xt} below shows that $\mathcal I_{(u_-,u_+)}$ is nonempty. \subsection{Strictly increasing cases} We assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C), (CON), (CER), (LIP) and \begin{itemize} \item [\textbf{(STI):}] $H(x,u,p)$ is strictly increasing in $u$. \end{itemize} The corresponding Lagrangian satisfies (LSC)(CON)(LIP) and \begin{itemize} \item [\textbf{(STD):}] $L(x,u,\dot x)$ is strictly decreasing in $u$. \end{itemize} \begin{result}\label{m3} Assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(STI) and the condition (S). Then $u_-$ is the unique viscosity solution of (\ref{E}), and \begin{itemize} \item [(1)] the limit function $\lim_{t\rightarrow +\infty} T^+_tu_-$ exists; denoting it by $u_+$, we have \[u_-=\lim_{t\rightarrow +\infty} T^-_tu_+;\] \item [(2)] $u_+$ is the maximal element in $\mathcal S_+$; \item [(3)] for $v_+\in\mathcal S_+$, define \begin{equation*} \mathcal I_{v_+}:=\{x\in M:\ u_-(x)=v_+(x)\}, \end{equation*} then $\mathcal I_{v_+}$ is nonempty, and $\mathcal I_{v_+}\subseteq \mathcal I_{(u_-,u_+)}$. \end{itemize} \end{result} From the weak KAM point of view, we call $(u_-,u_+)$ a conjugate pair. \subsection{Strictly decreasing cases} In this part, we are concerned with further properties of viscosity solutions for a special class of Hamilton-Jacobi equations that do not satisfy the ``proper'' condition introduced in the seminal paper \cite{CHL2}, \[ H(x,r,p)\leqslant H(x,s,p)\ \ \text{whenever}\ r\leqslant s.
\] If $H(x,u,p)$ satisfies (STI), then $F(x,u,p)=H(x,-u,-p)$ satisfies \begin{itemize} \item [\textbf{(STD):}] $F(x,u,p)$ is strictly decreasing in $u$. \end{itemize} We denote by $\mathcal{VS}(F)$ the set of the viscosity solutions of (\ref{E'}). We also define a partial order $\preceq$ on this set: \centerline{$v_1\preceq v_2$ if and only if $v_1(x)\leq v_2(x)$ for all $x\in M$.} \vspace{1ex} Under the assumptions of Main Result \ref{m2'}, the existence of $u_-\in\mathcal S_-$ guarantees the existence of $u_+\in\mathcal S_+$. Note that $-u_+\in\mathcal{VS}(F)$. In view of \cite[Example 1.1]{Wa3} (see also Example \ref{ex} below), this set may not be a singleton. Let $\bar u_+$ be the unique forward weak KAM solution of (\ref{E'}), namely the unique fixed point of the forward semigroup $\bar T^+_t$ corresponding to $F$. \begin{result}\label{m4} Assume $F:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(STD). Then \begin{itemize} \item [(1)] The set $\mathcal{VS}(F)$ is compact in the topology induced by the $\|\cdot\|_\infty$-norm; \item [(2)] Let $v_-:=\min_{v\in\mathcal{VS}(F)}v$, then $v_-\in\mathcal{VS}(F)$. The maximal elements in $\mathcal{VS}(F)$ exist in the sense of the partial order; \item [(3)] For $v\in\mathcal{VS}(F)$, define $\mathcal I_v:=\{x\in M:\ v(x)=\bar u_+(x)\}$. Let $v_1$ and $v_2\in\mathcal{VS}(F)$, then \begin{itemize}\item [(i)] If $v_1\leq v_2$, then $\emptyset\neq \mathcal I_{v_2}\subseteq \mathcal I_{v_1}\subseteq \mathcal I_{v_-}$; \item [(ii)] If there exists a neighbourhood $\mathcal O$ of $\mathcal I_{v_1}$ such that $v_2|_{\mathcal O}\leq v_1 |_{\mathcal O}$, then $v_2\leq v_1$ everywhere; \item [(iii)] If $\mathcal I_{v_2}=\mathcal I_{v_1}$ and $v_2|_{\mathcal O}=v_1|_{\mathcal O}$ for some neighbourhood $\mathcal O$ of $\mathcal I_{v_1}$, then $v_1=v_2$ everywhere.
\end{itemize} \end{itemize} \end{result} \begin{example}\label{ex} In order to explain the necessity of the neighbourhood condition above, we consider the following example \begin{equation}\label{E0} -\lambda u(x)+\frac{1}{2}|\partial_x u(x)|^2+V(x)=0,\quad x\in\mathbb S\simeq(-1,1], \end{equation} where $V(x)$ is the restriction of $x^2/2$ on $\mathbb S$. Then $F(x,u,p)=-\lambda u+|p|^2/2+V(x)$ defined on $T^*\mathbb S\times \mathbb R$ is Lipschitz continuous and smooth for $x\in (-1,1]$. Assume $\lambda>2$; then two viscosity solutions of the above equation are \begin{equation*} u_1(x)=\frac{\lambda+\sqrt{\lambda^2-4}}{2}V(x),\quad u_2(x)=\frac{\lambda-\sqrt{\lambda^2-4}}{2}V(x). \end{equation*} Indeed, substituting the ansatz $u=cV$ into (\ref{E0}) yields $c^2-\lambda c+1=0$, whose two roots give $u_1$ and $u_2$. One can prove that the unique forward weak KAM solution $\bar u_+$ satisfies $\bar u_+(0)=u_1(0)=u_2(0)$ and $\bar u_+(x)<u_2(x)$ on $(0,1]$. This means that although $\mathcal I_{u_1}=\mathcal I_{u_2}=\{0\}$, one cannot conclude $u_1=u_2$ everywhere. \end{example} A detailed analysis of Example \ref{ex} is given in Section \ref{sdec} below. Next, we consider the large-time behavior of the viscosity solution of the following Cauchy problem \begin{equation}\tag{$CP_F$}\label{C'} \left\{ \begin{aligned} &\partial_t u(x,t)+F(x,u(x,t),\partial_xu(x,t))=0,\quad (x,t)\in M\times(0,+\infty). \\ &u(x,0)=\varphi(x),\quad x\in M. \\ \end{aligned} \right. \end{equation} In the following, we assume that (\ref{E'}) admits viscosity solutions. There holds \begin{result}\label{m5} Assume $F:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(STD). For given $\varphi\in Lip(M)$, let $w(x,t)$ be the unique Lipschitz viscosity solution of (\ref{C'}), then \begin{itemize} \item [(1)] If $\varphi$ satisfies the following condition \begin{itemize} \item [($\ast$)] $\varphi\geq \bar u_+$ and there exists a point $x_0$ such that $\varphi(x_0)=\bar u_+(x_0)$.
\end{itemize} \noindent Then $w(x,t)$ is bounded by a constant independent of $t$ and $\varphi$; \item [(2)] If the condition ($\ast$) does not hold, then we have the following: \begin{itemize} \item [(a)] if there exists a point $x_0$ such that $\varphi(x_0)<\bar u_+(x_0)$, then $w(x,t)$ tends to $-\infty$ uniformly in $x$ as $t$ tends to $+\infty$; \item [(b)] if $\varphi>\bar u_+$, then $w(x,t)$ tends to $+\infty$ uniformly in $x$ as $t$ tends to $+\infty$. \end{itemize} \end{itemize} \end{result} \subsection{Higher regularity cases} In the final part of this paper, we require the Hamiltonian $H:T^*M\times\mathbb R\rightarrow\mathbb R$ to have higher regularity. More precisely, we assume $H(x,u,p)$ satisfies (C)(CER)(LIP) and \begin{itemize} \item [\textbf{(STC):}] $H(x,u,p)$ is strictly convex in $p$, for any $(x,u)\in M\times \mathbb R$; \item [\textbf{(LL):}] $(x,p)\mapsto H(x,u,p)$ is locally Lipschitz continuous, for all $u\in\mathbb R$. \end{itemize} \vspace{1ex} \begin{result}\label{tilde} If $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (LL)(STC)(CER)(LIP) and (S), then for a conjugate pair $(u_-,u_+)$, we have \begin{itemize} \item [(1)] $u_-$ and $u_+$ are differentiable on $\mathcal I_{(u_-,u_+)}$ with the same derivative. We then define \begin{equation*} \tilde{\mathcal I}_{(u_-,u_+)}:=\{(x,u,p):\ x\in\mathcal I_{(u_-,u_+)},\ u=u_-(x),\ p=\partial_xu_-(x)\}. \end{equation*} \item [(2)] $u_-$ and $u_+$ are of class $C^1$ on $\mathcal I_{(u_-,u_+)}$, or equivalently, the lift from $\mathcal I_{(u_-,u_+)}$ to $\tilde{\mathcal I}_{(u_-,u_+)}$ is continuous. \end{itemize} We then require higher regularity on $H(x,u,p)$. \begin{itemize} \item [(1)] If $H(x,u,p)$ is of class $C^1$, then the contact Hamilton equations can be defined.
For $x\in\mathcal I_{(u_-,u_+)}$, there exists a $C^1$ curve $\gamma:(-\infty,\infty)\rightarrow M$ with $\gamma(0)=x$ such that $u_-(\gamma(t))=u_+(\gamma(t))$ for all $t\in\mathbb R$, and \[x(t):=\gamma(t),\quad u(t):=u_\pm(\gamma(t)),\quad p(t):=\partial_{\gamma(t)}u_\pm(\gamma(t))\] satisfies the contact Hamilton equations \begin{equation}\label{CH} \left\{ \begin{aligned} &\dot{x}= \partial_pH(x,u,p),\\ &\dot{p}= -\partial_xH(x,u,p)-\partial_u H(x,u,p)p,\quad a.e.\\ &\dot{u}= \partial_pH(x,u,p)\cdot p. \end{aligned} \right. \end{equation} \item [(2)] If $H(x,u,p)$ is in addition of class $C^{1,1}$, then $u_-$ and $u_+$ are of class $C^{1,1}$ on $\mathcal I_{(u_-,u_+)}$. Equivalently, the projection $\pi:T^*M\times\mathbb R\rightarrow M$ induces a bi-Lipschitz map between $\mathcal I_{(u_-,u_+)}$ and $\tilde{\mathcal I}_{(u_-,u_+)}$. \end{itemize} \end{result} Here we note that the condition (STC) is necessary to verify Main Result \ref{tilde}. On the one hand, the local semiconcavity of the viscosity solutions of (\ref{E}) and (\ref{E'}) requires (STC). On the other hand, the proof of the differentiability of $u_\pm$ on the calibrated curves also depends on this condition. The rest of this paper is organized as follows. In Section \ref{main1}, we prove Main Result \ref{m1}. To achieve that, we need some technical lemmas whose proofs are given in Appendices \ref{apbb} and \ref{apcc}. Main Result \ref{S} is proved in Section \ref{secS}. Main Result \ref{m2'} is proved in Section \ref{pm2'}. In Sections \ref{sinc} and \ref{sdec}, we consider the monotone cases. Main Result \ref{m3} is proved in Section \ref{sinc}. Main Results \ref{m4} and \ref{m5} are proved in Section \ref{sdec}. The regularity of weak KAM solutions is considered in Section \ref{Regu}, where Main Result \ref{tilde} is proved.
In addition, we give the basic results on the existence and regularity of the minimizers of the one-dimensional variational problem in Appendix \ref{preli}, and we also provide some basic properties of the solution semigroup, weak KAM solutions and viscosity solutions in Appendix \ref{swv} for the reader's convenience. \section{Representation of the evolutionary solution}\label{main1} In this part, we are devoted to proving \begin{itemize} \item [(1)] if the initial condition $\varphi$ is Lipschitz continuous, then $ u(x,t):=T^-_t\varphi(x)$ is the Lipschitz viscosity solution of (\ref{C}); \item [(2)] if $\varphi$ is continuous, then $u(x,t):=T^-_t\varphi(x)$ is the continuous viscosity solution of (\ref{C}). \end{itemize} \subsection{On Item (1): Lipschitz initial conditions} As a preparation, we need the following \begin{lemma}\label{begin} Let $u_0=\varphi\in C(M)$. For $k\in\mathbb N$, consider the following iteration procedure \begin{equation}\label{k} u_k(x,t)=\inf_{\gamma(t)=x}\left\{\varphi(\gamma(0))+\int_0^tL(\gamma(\tau),u_{k-1}(\gamma(\tau),\tau),\dot{\gamma}(\tau)){d}\tau\right\}. \end{equation} There hold \begin{itemize} \item [(i)] the sequence $\{u_k(x,t)\}_{k\in \mathbb{N}}$ has a subsequence uniformly converging to $u(x,t):=T_t^-\varphi(x)$; \item [(ii)] let $\varphi\in Lip(M)$. Once we prove that each $u_{k}(x,t)$ is locally Lipschitz continuous on $M\times (0,T]$, and is the viscosity solution of \begin{equation}\label{uk} \left\{ \begin{aligned} &\partial_t u(x,t)+H(x,u_{k-1}(x,t),\partial_xu(x,t))=0, \\ &u(x,0)=\varphi(x). \\ \end{aligned} \right. \end{equation} on $M\times [0,T]$, then $u_k(x,t)$ is Lipschitz on $M\times [0,T]$ and the Lipschitz constant of $u_k(x,t)$ depends only on $\|u_{k-1}\|_\infty$ and $\|\partial_x\varphi(x)\|_\infty$. \item [(iii)] if the assumption in (ii) is satisfied, then the Lipschitz constants of the elements of the converging subsequence are uniform, so the limit function $u(x,t)$ is Lipschitz continuous.
\end{itemize} \end{lemma} \begin{proof} \noindent \textbf{Item (i).} Given $T>0$, there exists a constant $k_0$ large enough such that $(\lambda T)^{k_0}/(k_0)!<1$. Denote $\alpha:=(\lambda T)^{k_0}/(k_0)!$. According to \cite[Lemma 4.1]{Wa2} and (LIP), we have \begin{equation*} \|u_{(k+1)k_0}-u_{kk_0}\|_\infty\leq \alpha\|u_{kk_0}-u_{(k-1)k_0}\|_\infty\leq \dots \leq \alpha^k\|u_{k_0}-\varphi\|_\infty,\quad \forall k\in\mathbb N. \end{equation*} Thus for $k_1>k_2$, we have \begin{equation*} \|u_{k_1k_0}(x,t)-u_{k_2k_0}(x,t)\|_\infty\leq \sum_{k=k_2}^{k_1-1}\alpha^k\|u_{k_0}(x,t)-\varphi(x)\|_\infty\leq \frac{\alpha^{k_2}-\alpha^{k_1}}{1-\alpha}\|u_{k_0}(x,t)-\varphi(x)\|_\infty. \end{equation*} Therefore, the sequence $\{u_{kk_0}(x,t)\}_{k\in\mathbb N}$ is a Cauchy sequence in $(C(M\times[0,T]),\|\cdot\|_\infty)$ and it converges uniformly to a continuous function $u(x,t)$. This function is the unique fixed point of the operator $\mathcal A^{k_0}_\varphi$, where we define the operator \begin{equation*} \mathcal A_\varphi[u](x,t)=\inf_{\gamma(t)=x}\left\{\varphi(\gamma(0))+\int_0^tL(\gamma(\tau),u(\gamma(\tau),\tau),\dot{\gamma}(\tau)){d}\tau\right\}. \end{equation*} Since $\mathcal A^{k_0+1}_\varphi[u](x,t)=\mathcal A_\varphi[u](x,t)$, it follows that $\mathcal A_\varphi[u](x,t)$ is also a fixed point of $\mathcal A^{k_0}_\varphi$. Therefore $u(x,t)=\mathcal A_\varphi[u](x,t)$, i.e., $u$ is the unique fixed point of $\mathcal A_\varphi$; equivalently, it satisfies (\ref{T-}). Thus, we have shown that $u_k(x,t)$ has a subsequence that converges uniformly to the semigroup given by (\ref{T-}). \vspace{1ex} \noindent \textbf{Item (ii).} Define \begin{equation*} M:=\sup \{|H(x,u,p)|:\ x\in M,\ |u|\leq \|{u_{k-1}}(x,t)\|_\infty,\ \|p\|\leq \|\partial_x\varphi(x)\|_\infty\}, \end{equation*} then the Lipschitz function $w(x,t)=\varphi(x)-Mt$ satisfies $\partial_t w+H(x,{u_{k-1}}(x,t),\partial_x w)\leq 0$ almost everywhere.
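For the reader's convenience, here is the direct verification: at every point of differentiability of $w$ one has $\partial_t w=-M$ and $\partial_x w=\partial_x\varphi$, so

```latex
\[
\partial_t w(x,t)+H\big(x,u_{k-1}(x,t),\partial_x w(x,t)\big)
 =-M+H\big(x,u_{k-1}(x,t),\partial_x\varphi(x)\big)\leq -M+M=0,
\]
% since $|H(x,u,p)|\leq M$ whenever $|u|\leq\|u_{k-1}\|_\infty$ and
% $\|p\|\leq\|\partial_x\varphi\|_\infty$, by the definition of $M$ above.
```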
According to \cite[Corollary 8.3.4]{Fa08}, it is a viscosity subsolution of (\ref{uk}). For any $h>0$, we define a continuous subsolution \begin{equation}\label{barw} \bar w(x,t)=\left\{ \begin{aligned} &\varphi(x)-Mt,\quad t\leq h. \\ &u_k(x,t-h)-Mh,\quad t>h. \\ \end{aligned} \right. \end{equation} By \cite[Theorem 5.1]{Intr}, since $\varphi(x)-Mt$ is Lipschitz in $x$, we have the comparison result $\bar w(x,h)=\varphi(x)-Mh\leq u_k(x,h)$. Since $u_k(x,t)$ is Lipschitz on $M\times[h,T]$, we have the comparison result \begin{equation*} u_k(x,t)-Mh=\bar w(x,t+h)\leq u_k(x,t+h),\quad \forall t\geq 0,\ h>0. \end{equation*} Therefore $\partial_t u_k(x,t)\geq -M$. Plugging into (\ref{cu}), one obtains $H(x,{u_{k-1}}(x,t),\partial_x u_k(x,t))\leq M$, so \begin{equation*} H(x,0,\partial_xu_k(x,t))\leq M+\lambda \|{u_{k-1}}(x,t)\|_\infty. \end{equation*} Thus $\|\partial_x u_k(x,t)\|_\infty$ is bounded on $M\times[0,T]$. Plugging into (\ref{cu}) again, one obtains that $\|\partial_t u_k(x,t)\|_\infty$ is bounded on $M\times[0,T]$. We have thus proved that $u_k(x,t)$ is Lipschitz on $M\times[0,T]$, with a Lipschitz constant depending only on $\|{u_{k-1}}(x,t)\|_\infty$ and $\|\partial_x\varphi(x)\|_\infty$. Since the converging subsequence of $\{u_k(x,t)\}_{k\in\mathbb N}$ is uniformly bounded, the proof of item (iii) is obvious. \end{proof} \begin{remark}\label{Lipx} If $u(x,t)$ has a bound independent of $t$, using the discussion in the proof of Lemma \ref{begin} item (ii), for any $t>0$, there exists a constant $K(t)>0$ such that for $k\geq K(t)$, we have $\|u_k(x,t)-u(x,t)\|_\infty\leq 1$. Define $l:=\|u(x,t)\|_\infty+1$ and \begin{equation*} M_l:=\sup \{|H(x,u,p)|:\ x\in M,\ |u|\leq \|u(x,t)\|_\infty+1,\ \|p\|\leq \|\partial_x\varphi(x)\|_\infty\}. \end{equation*} One can obtain that \[H(x,0,\partial_x u_k)\leq M_l+\lambda(\|u(x,t)\|_\infty+1)\] for all $k\geq K(t)$. Thus, the uniform limit $u(x,t)$ admits a Lipschitz constant in $x$, which is independent of $t$.
Similarly, if $T^+_t\varphi(x)$ has a bound independent of $t$, then it has a Lipschitz constant in $x$ independent of $t$. \end{remark} The key point for the proof of Item (1) is to show that for each $k\in \mathbb{N}$, the function $u_k(x,t)$ defined by (\ref{k}) is the Lipschitz continuous viscosity solution of (\ref{uk}). This will be verified by Lemma \ref{ukLip} below. We divide the remaining proof into two steps. In Step 1, we prove Item (1) for the Hamiltonian $H(x,u,p)$ depending on $p$ superlinearly. In Step 2, the superlinearity is relaxed to (CER). \subsubsection{Step 1: Proof under the superlinear condition}\label{H3'} In this part, we assume the Hamiltonian $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(LIP) and \begin{itemize} \item [\textbf{(SL):}] For every $(x,u)\in M\times \mathbb{R}$, $H(x,u,p)$ is superlinear in $p$, i.e. there exists a function $\Theta:\mathbb R\rightarrow\mathbb R$ satisfying \begin{equation*} \lim_{r \rightarrow+\infty}\frac{\Theta(r)}{r}=+\infty,\quad \textrm{and}\quad H(x,u,p)\geq \Theta(\|p\|)\quad \textrm{for\ every}\ (x,u,p)\in T^*M\times\mathbb R. \end{equation*} \end{itemize} One can prove that the corresponding Lagrangian satisfies (CON)(LIP) and \begin{itemize} \item [\textbf{(C):}] $L(x,u,\dot x)$ is continuous; \item [\textbf{(SL):}] For every $(x,u)\in M\times \mathbb{R}$, $L(x,u,\dot x)$ is superlinear in $\dot x$, i.e. there exists a function $\Theta:\mathbb R\rightarrow\mathbb R$ satisfying \begin{equation*} \lim_{r \rightarrow+\infty}\frac{\Theta(r)}{r}=+\infty,\quad \textrm{and}\quad L(x,u,\dot x)\geq \Theta(\|\dot x\|)\quad \textrm{for\ every}\ (x,u,\dot x)\in TM\times\mathbb R. \end{equation*} \end{itemize} To begin with, we need some technical results.
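Before that, let us indicate why (SL) for $H$ implies (SL) for $L$. The following Legendre-transform computation is a standard sketch; the constant $C_{r,R}$ is introduced only for this argument. Fix $R>0$, $r>0$ and set $C_{r,R}:=\sup\{H(x,u,p):\ x\in M,\ |u|\leq R,\ \|p\|\leq r\}$, which is finite by (C) and the compactness of $M$. Then for $|u|\leq R$,

```latex
\[
L(x,u,\dot x)=\sup_{p}\big\{\langle \dot x,p\rangle-H(x,u,p)\big\}
 \;\geq\;\Big\langle \dot x,\,r\frac{\dot x}{\|\dot x\|}\Big\rangle-C_{r,R}
 \;=\;r\|\dot x\|-C_{r,R},\qquad \dot x\neq 0.
\]
% Since r>0 is arbitrary, L(x,u,\dot x)/\|\dot x\| tends to +\infty as \|\dot x\|\to\infty,
% uniformly for x\in M and |u|\le R; this is the superlinearity (SL) of L.
```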
\begin{lemma}\label{3.1} For given $T>0$ and $\varphi\in C(M)$, if $v(x,t)$ is a Lipschitz continuous function on $M\times[0,T]$, then \begin{itemize} \item [(1)] for any $(x,t)\in M\times[0,T]$, the minimizers of \begin{equation}\label{u} u(x,t)=\inf_{\gamma(t)=x}\left\{\varphi(\gamma(0))+\int_0^tL(\gamma(\tau),v(\gamma(\tau),\tau),\dot{\gamma}(\tau)){d}\tau\right\} \end{equation} are Lipschitz continuous. For any $r>0$, if $d(x,x')\leq r$ and $|t-t'|\leq r/2$, where $t\geq r> 0$, then the Lipschitz constant of the minimizers of $u(x',t')$ is independent of $(x',t')$, and only depends on $(x,t)$ and $r$. \item [(2)] the value function $u(x,t)$ defined in (\ref{u}) is locally Lipschitz continuous on $M\times(0,T]$. \item [(3)] the value function $u(x,t)$ defined by (\ref{u}) is the viscosity solution of \begin{equation}\label{cu} \left\{ \begin{aligned} &\partial_t u(x,t)+H(x,v(x,t),\partial_xu(x,t))=0, \\ &u(x,0)=\varphi(x). \\ \end{aligned} \right. \end{equation} on $M\times[0,T]$. \end{itemize} \end{lemma} For the sake of consistency, the proof of the above lemma is given in the appendix. Based on that, we verify Item (1) under the assumption (SL). Let $u_0=\varphi\in Lip(M)$ in the iteration procedure given by (\ref{k}). By Lemma \ref{begin} (i), $u_k(x,t)$ has a subsequence that converges uniformly to $u(x,t):=T_t^-\varphi(x)$. We still denote this subsequence by $\{u_k(x,t)\}$. By Lemma \ref{3.1} (2) and (3), $u_1(x,t)$ satisfies the condition stated in Lemma \ref{begin} (ii), which implies that $u_1(x,t)$ is Lipschitz on $M\times [0,T]$. Repeating the argument, one can obtain that $u_k(x,t)$ is the Lipschitz continuous viscosity solution of (\ref{uk}) on $M\times[0,T]$. Since $\{u_k(x,t)\}_{k\in\mathbb N}$ converges uniformly, it is uniformly bounded. By Lemma \ref{begin} (iii), the Lipschitz constant of $u_k(x,t)$ is uniform with respect to $k$ on $M\times[0,T]$.
Since $H_k(t,x,p):=H(x,u_k(x,t),p)$ converges uniformly on compact subsets of $\mathbb R\times T^*M$, and $u_k(x,t)$ converges uniformly on compact subsets of $M\times(0,+\infty)$, the limit $u(x,t):=T^-_t\varphi(x)$ of $u_k(x,t)$ (up to a subsequence), namely the backward semigroup, is the Lipschitz viscosity solution of (\ref{C}) by the stability of viscosity solutions. \subsubsection{Step 2: Relaxation to the coercive condition}\label{H3} In this part, we assume the Hamiltonian $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (C)(CON)(CER)(LIP). Since the Lagrangian is no longer necessarily real-valued, one cannot use the results in Appendix \ref{A.2} to obtain the Lipschitz regularity of the minimizers of $T^-_t\varphi(x)$. In fact, for two arbitrary points, a minimizer between them may not exist. A counterexample can be given via the Weierstrass case (see \cite[Section 3.2]{One}). However, it can be proved that the minimizers for the Bolza problem exist, see Lemma \ref{Bol} below. We make a modification as in \cite[Appendix A]{gen} \begin{equation*} H_n(x,u,p):=H(x,u,p)+\max\{\|p\|^2-n^2,0\},\quad n\in\mathbb N. \end{equation*} Obviously $H_n$ is superlinear in $p$. The sequence $H_n$ is decreasing, and converges uniformly to $H$ on compact subsets of $T^*M\times\mathbb R$. The sequence of the corresponding Lagrangians $L_n$ is increasing, and converges to $L$ pointwise. Denote by $u_{n,k}(x,t)$ the viscosity solution of (\ref{uk}) with $H$ replaced by $H_n$. \begin{lemma}\label{Bol} For given $x\in M$, $T>0$ and $\varphi\in C(M,\mathbb R)$, for any $v\in C(M\times [0,T],\mathbb R)$ and any fixed $t\in [0,T]$, the functional \begin{equation*} \mathbb L^t(\gamma):=\varphi(\gamma(0))+\int_0^t L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s))ds \end{equation*} attains its infimum in the class of curves \begin{equation*} X_t(x)=\{\gamma\in W^{1,1}([0,t],M):\ \gamma(t)=x\}.
\end{equation*} \end{lemma} The proof is similar to \cite[Proposition A.6]{gen}, and we provide it in the appendix for the sake of completeness. \begin{lemma}\label{ukLip}Let $L$ be the Lagrangian associated to $H$ satisfying (C)(CON)(CER)(LIP) and $\varphi\in Lip(M)$. Then for each $k\in\mathbb N$, the function $u_k(x,t)$ defined by (\ref{k}) is the Lipschitz continuous viscosity solution of (\ref{uk}). \end{lemma} \begin{proof} Given $n\in \mathbb{N}$, let \begin{equation}\label{k2.4x} u_{n,k}(x,t)=\inf_{\gamma(t)=x}\left\{\varphi(\gamma(0))+\int_0^tL_n(\gamma(\tau),u_{n,k-1}(\gamma(\tau),\tau),\dot{\gamma}(\tau)){d}\tau\right\}, \end{equation} with $u_{n,0}=\varphi\in Lip(M)$. We first prove the following assertion for each $k\in\mathbb N$ by induction. \begin{itemize} \item [\textbf{A[k]}] The sequence $u_{n,k}(x,t)$ is uniformly bounded and equi-Lipschitz continuous with respect to $n$, and converges uniformly to $u_k(x,t)$ on $M\times[0,T]$. Thus, the limit function $u_k(x,t)$ is Lipschitz continuous. \end{itemize} By \cite[Theorem 4.10]{meas}, the assertion A[1] holds. Assume the assertion A[k-1] holds. Then $u_{k-1}(x,t)$ is continuous, and $l_{k-1}:=\sup_{n\in\mathbb N} \|u_{n,k-1}(x,t)\|_\infty$ is finite. Plugging the continuous function $u_{k-1}(x,t)$ into (\ref{k}) and using Lemma \ref{Bol}, the minimizers of \begin{equation*} u_k(x,t)=\inf_{\gamma(t)=x}\left\{\varphi(\gamma(0))+\int_0^tL(\gamma(\tau),u_{k-1}(\gamma(\tau),\tau),\dot{\gamma}(\tau)){d}\tau\right\} \end{equation*} exist in the class of absolutely continuous curves. Define \begin{equation*} M_k:=\sup\{|H(x,u,p)|:\ x\in M,\ |u|\leq l_{k-1},\ \|p\|\leq \|\partial_x\varphi(x)\|_\infty\}. \end{equation*} By definition, for $n\geq \|\partial_x\varphi(x)\|_\infty$, we have \begin{equation*} M_k =\sup\{|H_n(x,u,p)|:\ x\in M,\ |u|\leq l_{k-1},\ \|p\|\leq\|\partial_x\varphi(x)\|_\infty\}.
\end{equation*} In Step 1, we have proved that each $u_{n,k}(x,t)$ is the viscosity solution of \begin{equation}\label{unk} \left\{ \begin{aligned} &\partial_t u(x,t)+H_n(x,u_{n,k-1}(x,t),\partial_xu(x,t))=0, \\ &u(x,0)=\varphi(x). \\ \end{aligned} \right. \end{equation} Replacing $M$ by $M_k$ and $u_k$ by $u_{n,k}$ in (\ref{barw}), one can construct a subsolution of (\ref{unk}). By the comparison theorem, we obtain $u_{n,k}(x,t)-M_k h\leq u_{n,k}(x,t+h)$, which implies that $\partial_tu_{n,k}(x,t)\geq -M_k$. Combining (\ref{unk}) and the definition of $H_n$, we have \begin{equation*} H(x,0,\partial_xu_{n,k}(x,t))\leq H_n(x,0,\partial_xu_{n,k}(x,t))\leq M_k+\lambda l_{k-1},\quad \forall n\geq \|\partial_x\varphi(x)\|_\infty. \end{equation*} Therefore, the Lipschitz constant of $u_{n,k}(x,t)$ is uniform with respect to $n$. Note that \[u_{n,k}(x,0)=\varphi(x),\] so the sequence $\{u_{n,k}(x,t)\}_{n\in \mathbb{N}}$ is uniformly bounded, and hence it has a converging subsequence. According to Lemma \ref{inf}, $u_{n,k}(x,t)$ converges to $u_{k}(x,t)$ pointwise, so \[\lim_{n\to+\infty}u_{n,k}(x,t)=u_{k}(x,t),\quad \text{uniformly},\] which implies that $u_k(x,t)$ is Lipschitz continuous. Thus, the assertion A[k] holds. Note that the Lipschitz constant may depend on $k$. Since $H_n$ converges uniformly to $H$ on compact subsets of $T^*M\times\mathbb R$, and $u_{n,k}(x,t)$ converges uniformly to $u_k(x,t)$ on $M\times[0,T]$, by the stability of the viscosity solutions, we conclude that $u_k(x,t)$ is the Lipschitz continuous viscosity solution of (\ref{uk}). \end{proof} By Lemma \ref{begin}(i), $u_k(x,t)$ has a subsequence converging to $u(x,t)$. We still denote this subsequence by $u_k(x,t)$. By Lemma \ref{begin}(ii), the Lipschitz constants of the sequence $\{u_k(x,t)\}_{k\in \mathbb{N}}$ are uniform with respect to $k$. Therefore the limit function $u(x,t)=T^-_t\varphi(x)$ of the sequence $\{u_k(x,t)\}_{k\in\mathbb N}$ is the Lipschitz continuous viscosity solution of (\ref{C}).
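The monotonicity of the approximating Lagrangians used in Step 2 can be read off directly from Legendre duality; we record the one-line computation for the reader's convenience. Since $H_n\geq H_{n+1}\geq H$ pointwise,

```latex
\[
L_n(x,u,\dot x)=\sup_{p}\big\{\langle\dot x,p\rangle-H_n(x,u,p)\big\}
 \;\leq\;\sup_{p}\big\{\langle\dot x,p\rangle-H_{n+1}(x,u,p)\big\}
 \;=\;L_{n+1}(x,u,\dot x)\;\leq\; L(x,u,\dot x),
\]
% and on any compact subset of $T^*M\times\mathbb R$ the decreasing convergence
% $H_n\downarrow H$ is uniform by Dini's theorem, since each $H_n$ and $H$ are continuous.
```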
Main Result \ref{m1} has thus been proved when $\varphi$ is Lipschitz continuous. \subsection{On Item (2): Continuous initial conditions} We first prove that given $T>0$ and $\varphi\in C(M)$, the value function $u_k(x,t)$ defined in (\ref{k}) is a continuous function on $M\times[0,T]$. In fact, for any $\varphi\in C(M)$, there exists a sequence of Lipschitz functions $\{\varphi_m\}_{m\in \mathbb{N}}$ converging uniformly to $\varphi$. We have already proven in Lemma \ref{ukLip} that, for initial functions $\varphi_m$, the solutions of (\ref{uk}), which are denoted by $u^m_k(x,t)$, are Lipschitz continuous. We then argue by induction. It is obvious that $u^m_0$ uniformly converges to $u_0$. Assume that $u^m_{k-1}$ converges uniformly to $u_{k-1}$; then $u_{k-1}$ is continuous. By Lemma \ref{Bol}, $u_k(x,t)$ admits a minimizer $\gamma$. By definition \begin{equation*} u^m_k(x,t)-u_k(x,t) \leq \varphi_m(\gamma(0))-\varphi(\gamma(0))+\lambda\|u^m_{k-1}(x,t)-u_{k-1}(x,t)\|_\infty T. \end{equation*} Exchanging the roles of $u^m_k(x,t)$ and $u_k(x,t)$, we obtain $|u^m_k(x,t)-u_k(x,t)|\rightarrow 0$. \vspace{1ex} By Lemma \ref{begin}, $u_k(x,t)$ has a subsequence converging to $u(x,t)$, and the limit function satisfies (\ref{T-}). We have proven in Item (1) that for $\varphi\in Lip(M)$, $T^-_t\varphi(x)$ is the Lipschitz continuous viscosity solution of (\ref{C}). Using Proposition \ref{psg} (2) below, for $t\in [0,T]$, $T^-_t\varphi_m$ converges uniformly to $T^-_t\varphi$. According to the stability of viscosity solutions, we conclude that $T^-_t\varphi$ is the continuous viscosity solution of (\ref{C}) under the initial condition $u(x,0)=\varphi(x)$. \section{Existence of stationary solutions}\label{secS} The proof of Main Result \ref{S} is organized as follows. Firstly, we give two basic properties of the solution semigroups in Proposition \ref{psg}.
Secondly, in Lemmas \ref{Tin} and \ref{S1}, we prove that for given $\varphi\in Lip(M)$, if $T^-_t\varphi(x)$ has a bound independent of $t$, then the limit inferior of the semigroup is a backward weak KAM solution. Then we finish the proof of the existence of the stationary solutions under the conditions mentioned in Main Result \ref{S}. \begin{proposition}\label{psg} The solution semigroups have the following properties: \begin{itemize} \item [(1)] For $\varphi_1$ and $\varphi_2 \in C(M)$, if $\varphi_1(x)<\varphi_2(x)$ for all $x\in M$, we have $T^-_t\varphi_1(x)< T^-_t\varphi_2(x)$ and $T^+_t\varphi_1(x)< T^+_t\varphi_2(x)$ for all $(x,t)\in M\times(0,+\infty)$. \item [(2)] Given any $\varphi$ and $\psi\in C(M)$, we have $\|T^-_t\varphi-T^-_t\psi\|_\infty\leq e^{\lambda t}\|\varphi-\psi\|_\infty$ and $\|T^+_t\varphi-T^+_t\psi\|_\infty\leq e^{\lambda t}\|\varphi-\psi\|_\infty$ for all $t>0$. \end{itemize} \end{proposition} \noindent \textbf{Proof of Item (1).} We argue by contradiction. Assume there exists $(x,t)\in M\times(0,+\infty)$ such that $T^-_t\varphi_1(x)\geq T^-_t\varphi_2(x)$. Let $\gamma:[0,t]\rightarrow M$ be a minimizer of $T^-_t\varphi_2(x)$ with $\gamma(t)=x$. Define \begin{equation*} F(s)=T^-_s\varphi_2(\gamma(s))-T^-_s\varphi_1(\gamma(s)),\quad s\in [0,t]. \end{equation*} Then $F(s)$ is a continuous function defined on $[0,t]$, and $F(0)> 0$. By assumption $F(t)\leq 0$. Then there is $s_0\in[0,t)$ such that $F(s_0)=0$ and $F(s)>0$ for all $s\in [0,s_0)$. Since $\gamma$ is a minimizer of $T^-_t\varphi_2(x)$, we have \begin{equation*} T^-_{s_0}\varphi_2(\gamma(s_0))=T^-_{s}\varphi_2(\gamma(s))+\int_{s}^{s_0}L(\gamma(\tau),T^-_\tau\varphi_2(\gamma(\tau)),\dot{\gamma}(\tau))d\tau, \end{equation*} and \begin{equation*} T^-_{s_0}\varphi_1(\gamma(s_0))\leq T^-_{s}\varphi_1(\gamma(s))+\int_{s}^{s_0}L(\gamma(\tau),T^-_\tau\varphi_1(\gamma(\tau)),\dot{\gamma}(\tau))d\tau, \end{equation*} which implies $F(s_0)\geq F(s)-\lambda \int_{s}^{s_0}F(\tau)d\tau$.
Since $F(s_0)=0$, we obtain \begin{equation*} F(s)\leq \lambda \int_{s}^{s_0}F(\tau)d\tau. \end{equation*} By the Gronwall inequality, we conclude $F(s)\leq 0$ for all $s\in [0,s_0)$, which contradicts $F(0)>0$. \vspace{1ex} \noindent \textbf{Proof of Item (2).} For given $x\in M$ and $t>0$, if $T^-_t\varphi(x)=T^-_t\psi(x)$, then the proof is finished. We then consider $T^-_t\varphi(x)>T^-_t\psi(x)$. Let $\gamma$ be a minimizer of $T^-_t\psi(x)$, define \begin{equation*} F(s):=T^-_s\varphi(\gamma(s))-T^-_s\psi(\gamma(s)),\quad \forall s\in [0,t]. \end{equation*} By assumption $F(t)>0$. If there is $\sigma\in[0,t)$ such that $F(\sigma)=0$ and $F(s)>0$ for all $s\in (\sigma,t]$, by definition \begin{equation*} T^-_s\varphi(\gamma(s))\leq T^-_\sigma\varphi(\gamma(\sigma))+\int_\sigma^sL(\gamma(\tau),T^-_\tau\varphi(\gamma(\tau)),\dot\gamma(\tau))d\tau, \end{equation*} and \begin{equation*} T^-_s\psi(\gamma(s))=T^-_\sigma\psi(\gamma(\sigma))+\int_\sigma^sL(\gamma(\tau),T^-_\tau\psi(\gamma(\tau)),\dot\gamma(\tau))d\tau, \end{equation*} which implies \begin{equation*} F(s)\leq F(\sigma)+\lambda\int_\sigma^sF(\tau)d\tau, \end{equation*} where $F(\sigma)=0$. By the Gronwall inequality we conclude $F(s)\leq 0$ for all $s\in[\sigma,t]$, which contradicts $F(t)>0$. Therefore, for all $\sigma\in [0,t]$, we have $F(\sigma)>0$. Here $0<F(0)\leq \|\varphi-\psi\|_\infty$. By definition \begin{equation*} T^-_\sigma\varphi(\gamma(\sigma))\leq \varphi(\gamma(0))+\int_0^\sigma L(\gamma(\tau),T^-_\tau\varphi(\gamma(\tau)),\dot\gamma(\tau))d\tau, \end{equation*} and \begin{equation*} T^-_\sigma\psi(\gamma(\sigma))=\psi(\gamma(0))+\int_0^\sigma L(\gamma(\tau),T^-_\tau\psi(\gamma(\tau)),\dot\gamma(\tau))d\tau, \end{equation*} which implies \begin{equation*} F(\sigma)\leq F(0)+\lambda\int_0^\sigma F(\tau)d\tau.
\end{equation*} By the Gronwall inequality we get $F(\sigma)\leq \|\varphi-\psi\|_\infty e^{\lambda\sigma}$, which implies $T^-_t\varphi(x)-T^-_t\psi(x)\leq \|\varphi-\psi\|_\infty e^{\lambda t}$ by taking $\sigma=t$. Exchanging the roles of $\varphi$ and $\psi$, we finally obtain that $|T^-_t\varphi(x)-T^-_t\psi(x)|\leq \|\varphi-\psi\|_\infty e^{\lambda t}$. By definition, the corresponding properties of $T^+_t$ follow easily. \qed \begin{lemma}\label{Tin} For any given $\varphi\in C(M)$, $\sigma>0$ and $t>0$, we have \begin{equation*} \inf_{s\geq \sigma} T^-_{t+s} \varphi(x)=T^-_t(\inf_{s\geq \sigma}T^-_s \varphi(x)),\quad \sup_{s\geq \sigma} T^+_{t+s} \varphi(x)=T^+_t(\sup_{s\geq \sigma}T^+_s \varphi(x)). \end{equation*} \end{lemma} \begin{proof} We only prove that $T^-_t$ commutes with $\inf$; the proof that $T^+_t$ commutes with $\sup$ is similar. On one hand, by definition we have \begin{equation*} \inf_{s\geq \sigma} T^-_s \varphi(x)\leq T^-_s \varphi(x),\quad \forall x\in M. \end{equation*} By Proposition \ref{psg} (1) we get \begin{equation*} T^-_t(\inf_{s\geq \sigma} T^-_s \varphi(x))\leq T^-_t\circ T^-_s \varphi(x),\quad \forall x\in M. \end{equation*} Therefore \begin{equation*} T^-_t(\inf_{s\geq \sigma}T^-_s \varphi(x))\leq \inf_{s\geq \sigma} T^-_{t+s} \varphi(x),\quad \forall x\in M. \end{equation*} On the other hand, we argue by contradiction. Assume that there is a point $x\in M$ such that \[\inf_{s\geq \sigma}T^-_{t+s}\varphi(x)>T^-_t(\inf_{s\geq \sigma} T^-_s\varphi(x)).\] Let $\gamma:[0,t]\rightarrow M$ with $\gamma(t)=x$ be a minimizer of $T^-_t(\inf_{s\geq \sigma} T^-_s\varphi(x))$. For any $\varepsilon>0$, there exists $s_0\geq \sigma$ depending on $\gamma(0)$ and $\varepsilon$ such that \begin{equation}\label{Tinf} T^-_{s_0}\varphi(\gamma(0))\leq \inf_{s\geq \sigma}T^-_s \varphi(\gamma(0))+\varepsilon.
\end{equation} By assumption we have \[T^-_{t+s_0}\varphi(x)\geq \inf_{s\geq \sigma}T^-_{t+s}\varphi(x)>T^-_t(\inf_{s\geq \sigma} T^-_s\varphi(x)).\] Define \[F(\tau)=T^-_{\tau+s_0}\varphi(\gamma(\tau))-T^-_\tau(\inf_{s\geq \sigma}T^-_s\varphi(\gamma(\tau))),\quad \tau\in[0,t].\] Then $F(t)>0$. If there exists $\sigma_0\in[0,t)$ such that $F(\sigma_0)\leq 0$, by continuity there is $\tau_0\in[\sigma_0,t)$ such that $F(\tau_0)=0$ and $F(\tau)>0$ for all $\tau\in (\tau_0,t]$. By definition, for $\sigma_1\in(\tau_0,t]$ we have \begin{equation*} T^-_{\sigma_1+s_0}\varphi(\gamma(\sigma_1))\leq T^-_{\tau_0+s_0}\varphi(\gamma(\tau_0))+\int_{\tau_0}^{\sigma_1}L(\gamma(\tau),T^-_{\tau+s_0}\varphi(\gamma(\tau)),\dot{\gamma}(\tau))d\tau, \end{equation*} and \begin{equation*} T^-_{\sigma_1}(\inf_{s\geq \sigma}T^-_s\varphi(\gamma(\sigma_1)))= T^-_{\tau_0}(\inf_{s\geq \sigma}T^-_s\varphi(\gamma(\tau_0)))+\int_{\tau_0}^{\sigma_1}L(\gamma(\tau),T^-_{\tau}(\inf_{s\geq \sigma}T^-_s\varphi(\gamma(\tau))),\dot{\gamma}(\tau))d\tau, \end{equation*} which implies $F(\sigma_1)\leq F(\tau_0)+\lambda \int_{\tau_0}^{\sigma_1}F(\tau)d\tau$. Here $F(\tau_0)=0$, thus \begin{equation*} F(\sigma_1)\leq \lambda \int_{\tau_0}^{\sigma_1}F(\tau)d\tau. \end{equation*} By the Gronwall inequality, we conclude $F(\sigma_1)\equiv 0$ for all $\sigma_1\in [\tau_0,t]$, which contradicts $F(t)>0$. Therefore, $F(\tau)>0$ for all $\tau\in[0,t]$. 
By definition, for $\sigma_1\in(0,t]$ we have \begin{equation*} T^-_{\sigma_1+s_0}\varphi(\gamma(\sigma_1))\leq T^-_{s_0}\varphi(\gamma(0))+\int_{0}^{\sigma_1}L(\gamma(\tau),T^-_{\tau+s_0}\varphi(\gamma(\tau)),\dot{\gamma}(\tau))d\tau, \end{equation*} and \begin{equation*} T^-_{\sigma_1}(\inf_{s\geq \sigma}T^-_s\varphi(\gamma(\sigma_1)))= \inf_{s\geq \sigma}T^-_s\varphi(\gamma(0))+\int_{0}^{\sigma_1}L(\gamma(\tau),T^-_{\tau}(\inf_{s\geq \sigma}T^-_s\varphi(\gamma(\tau))),\dot{\gamma}(\tau))d\tau, \end{equation*} which implies \[F(\sigma_1)\leq F(0)+\lambda \int_{0}^{\sigma_1}F(\tau)d\tau.\] By the Gronwall inequality, we conclude $F(\sigma_1)\leq F(0)e^{\lambda \sigma_1}$ for all $\sigma_1\in (0,t]$. Taking $\sigma_1=t$ and using (\ref{Tinf}), we have \[T^-_{t+s_0}\varphi(x)-T^-_t(\inf_{s\geq \sigma} T^-_s\varphi(x))\leq (T^-_{s_0}\varphi(\gamma(0))-\inf_{s\geq \sigma}T^-_s\varphi(\gamma(0)))e^{\lambda t}\leq \varepsilon e^{\lambda t}.\] Since $\inf_{s\geq \sigma}T^-_{t+s}\varphi(x)\leq T^-_{t+s_0}\varphi(x)$, letting $\varepsilon\rightarrow 0^+$ we conclude \begin{equation*} \inf_{s\geq \sigma}T^-_{t+s}\varphi(x)\leq T^-_t(\inf_{s\geq \sigma} T^-_s\varphi(x)), \end{equation*} which contradicts the assumption. Therefore \[\inf_{s\geq \sigma}T^-_{t+s}\varphi(x)\leq T^-_t(\inf_{s\geq \sigma} T^-_s\varphi(x)),\quad \forall x\in M.\] The proof is now complete. \end{proof} \begin{lemma}\label{S1} Given $\varphi\in Lip(M)$, if $T^-_t\varphi(x)$ has a bound independent of $t$, then \begin{equation*} \check{\varphi}(x)=\liminf_{t\rightarrow +\infty}T^-_t\varphi(x) \end{equation*} exists and is a backward weak KAM solution. Similarly, if $T^+_t\varphi(x)$ has a bound independent of $t$, then \begin{equation*} \hat{\varphi}(x)=\limsup_{t\rightarrow+\infty}T^+_t\varphi(x) \end{equation*} exists and is a forward weak KAM solution. \end{lemma} \begin{proof} According to Remark \ref{Lipx}, the function $T^-_t\varphi(x)$ has a Lipschitz constant in $x$ independent of $t$. We denote it by $\kappa$. Therefore $\check{\varphi}(x)$ is well-defined.
We then prove that $\check{\varphi}(x)$ is a fixed point of $T^-_t$. Similarly, one can prove that $\hat{\varphi}(x)$ is a fixed point of $T^+_t$. Note that \begin{equation*} |\inf_{s\geq t} T^-_s\varphi(x)-\inf _{s\geq t} T^-_s\varphi(y)|\leq \sup_{s\geq t}|T^-_s\varphi(x)-T^-_s\varphi(y)|\leq \kappa d(x,y), \end{equation*} so the limit procedure \begin{equation*} \check{\varphi}(x):=\lim_{t\rightarrow+\infty}\inf_{s\geq t} T^-_s\varphi(x) \end{equation*} is uniform in $x$. For given $t>0$, combining Proposition \ref{psg} (2) with Lemma \ref{Tin}, we have \begin{equation*} \|\inf_{s\geq \sigma}T^-_{t+s}\varphi(x)-T^-_t\check \varphi(x)\|_\infty\leq e^{\lambda t}\|\inf_{s\geq \sigma}T^-_s\varphi(x)-\check{\varphi}(x)\|_\infty \end{equation*} for all $\sigma>0$. Letting $\sigma\rightarrow +\infty$, the right-hand side tends to zero and $\inf_{s\geq \sigma}T^-_{t+s}\varphi$ tends to $\check \varphi$; therefore $T^-_t\check \varphi=\check \varphi$. \end{proof} \begin{remark}\label{limsup} In fact, if $T^-_t\varphi(x)$ has a bound independent of $t$, then \begin{equation*} \tilde{\varphi}(x)=\limsup_{t\rightarrow +\infty}T^-_t\varphi(x) \end{equation*} exists, but may not be a backward weak KAM solution. Here is an example. Let $\mathbb S\simeq (-1/2,1/2]$ be the unit circle. Consider the contact Hamiltonian $H:T^*\mathbb S\times\mathbb R\rightarrow \mathbb R$ defined by \[H(x,u,p)=p^2-p-2u.\] The corresponding evolutionary equation is \begin{equation}\label{lsup} \partial_tu(x,t)+(\partial_x u(x,t))^2-\partial_x u(x,t)-2u(x,t)=0. \end{equation} One can check that \[u(x,t):=\min_{k\in\mathbb Z}\left\{\frac{1}{2}(x+t-k)^2\right\}\] is a viscosity solution of (\ref{lsup}) which is 1-periodic. Here, $u(x,0)$ is the restriction of $x^2/2$ to $\mathbb S$. When $t>0$, $u(x,t)$ can be regarded as the propagation of $u(x,0)$ along the $x$-axis. Notice that $\limsup_{t\rightarrow+\infty}u(x,t)=1/8$, but $v(x)=1/8$ is not a viscosity solution of $(\partial_x u)^2-\partial_x u-2u=0$.
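Indeed, on each smooth branch where the minimum is attained by a fixed $k\in\mathbb Z$, one computes $\partial_t u=\partial_x u=x+t-k$, hence \begin{equation*} \partial_t u+(\partial_x u)^2-\partial_x u-2u=(x+t-k)+(x+t-k)^2-(x+t-k)-(x+t-k)^2=0. \end{equation*} Moreover, since $u(x,t)=\frac{1}{2}\,\mathrm{dist}(x+t,\mathbb Z)^2$ attains its maximal value $\frac{1}{2}\left(\frac{1}{2}\right)^2=\frac{1}{8}$ whenever $x+t$ is a half-integer, for each fixed $x$ the value $u(x,t)$ reaches $1/8$ for arbitrarily large $t$, whereas substituting the constant function $v\equiv 1/8$ into the stationary equation gives $0-0-2\cdot\frac{1}{8}=-\frac{1}{4}\neq 0$.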
\end{remark} Generally speaking, the local boundedness of $L(x,u,\dot x)$ does not hold if $H(x,u,p)$ satisfies the assumption (CER). Fortunately, similar to \cite[Lemma 2.3]{Ish2}, one can prove the local boundedness of $L(x,u,\dot x)$ restricted to a certain region. \begin{lemma}\label{CL} For $H(x,0,p)$ satisfying (C)(CON)(CER), there exist constants $\delta>0$ and $C_L>0$ such that the Lagrangian $L(x,0,\dot x)$ corresponding to $H(x,0,p)$ satisfies \begin{equation*} L(x,0,\xi)\leq C_L,\quad \forall (x,\xi)\in M\times \bar B(0,\delta). \end{equation*} \end{lemma} In the following part of this paper, we define $\mu:=\textrm{diam}(M)/\delta$. \begin{lemma}\label{S2} Let $\varphi\in C(M)$ be given. \begin{itemize} \item [(1)] If $T^-_t\varphi(x)$ does not have an upper bound as $t$ tends to infinity, then for any $c\in\mathbb R$, there exists $t_c>0$ such that $T^-_{t_c}\varphi(x)>\varphi(x)+c$ for all $x\in M$. \item [(2)] If $T^-_t\varphi(x)$ does not have a lower bound as $t$ tends to infinity, then for any $c\in\mathbb R$, there exists $t_c>0$ such that $T^-_{t_c}\varphi(x)<\varphi(x)+c$ for all $x\in M$. \end{itemize} \end{lemma} \begin{proof} (1) We argue by contradiction. Assume that there exists $c_0\in\mathbb R$ such that for any $t>0$, we have a point $x_t\in M$ satisfying $T^-_t\varphi(x_t)\leq \varphi(x_t)+c_0$. Let $\alpha:[0,\mu]\rightarrow M$ be a geodesic connecting $x_t$ and $x$ with constant speed, where the constant $\mu$ was defined after Lemma \ref{CL}; then $\|\dot \alpha\|\leq \delta$. If $T^-_{t+\mu}\varphi(x)>\varphi(x_t)+c_0$, since $T^-_t\varphi(x_t)\leq \varphi(x_t)+c_0$, there exists $\sigma\in[0,\mu)$ such that $T^-_{t+\sigma}\varphi(\alpha(\sigma))=\varphi(x_t)+c_0$ and $T^-_{t+s}\varphi(\alpha(s))>\varphi(x_t)+c_0$ for all $s\in (\sigma,\mu]$.
By definition \begin{equation*} \begin{aligned} T^-_{t+s}\varphi(\alpha(s))&\leq T^-_{t+\sigma}\varphi(\alpha(\sigma))+\int_\sigma^s L(\alpha(\tau),T^-_{t+\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau \\ &=\varphi(x_t)+c_0+\int_\sigma^s L(\alpha(\tau),T^-_{t+\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau, \end{aligned} \end{equation*} which implies \begin{equation*} \begin{aligned} &T^-_{t+s}\varphi(\alpha(s))-(\varphi(x_t)+c_0) \leq \int_\sigma^s L(\alpha(\tau),T^-_{t+\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau \\ &\leq \int_\sigma^s L(\alpha(\tau),\varphi(x_t)+c_0,\dot \alpha(\tau))d\tau +\lambda\int_\sigma^s(T^-_{t+\tau}\varphi(\alpha(\tau))-(\varphi(x_t)+c_0))d\tau \\ &\leq L_0\mu+\lambda\int_\sigma^s(T^-_{t+\tau}\varphi(\alpha(\tau))-(\varphi(x_t)+c_0))d\tau, \end{aligned} \end{equation*} where \begin{equation*} L_0:=C_L+\lambda \|\varphi+c_0\|_\infty. \end{equation*} By the Gronwall inequality, we have \begin{equation*} T^-_{t+s}\varphi(\alpha(s))-(\varphi(x_t)+c_0)\leq L_0\mu e^{\lambda(s-\sigma)}\leq L_0\mu e^{\lambda \mu},\quad \forall s\in(\sigma,\mu]. \end{equation*} Taking $s=\mu$, we have $T^-_{t+\mu}\varphi(x)\leq \varphi(x_t)+c_0+L_0\mu e^{\lambda \mu}$. We conclude that $T^-_{t+\mu}\varphi(x)$ has an upper bound independent of $t$, which contradicts the assumption. \noindent (2) We argue by contradiction. Assume that there exists $c_1\in\mathbb R$ such that for any $s>0$, we have a point $x_s\in M$ satisfying $T^-_s \varphi(x_s)\geq \varphi(x_s)+c_1$. Taking $s=t+\mu$, we have $T^-_{t+\mu} \varphi(x_{t+\mu})\geq \varphi(x_{t+\mu})+c_1$. Let $\alpha:[0,\mu]\rightarrow M$ be a geodesic connecting $x$ and $x_{t+\mu}$ with constant speed, then $\|\dot \alpha\|\leq \delta$.
If $T^-_t\varphi(x)<\varphi(x_{t+\mu})+c_1$, since $T^-_{t+\mu} \varphi(x_{t+\mu})\geq \varphi(x_{t+\mu})+c_1$, there exists $\sigma\in(0,\mu]$ such that $T^-_{t+\sigma}\varphi(\alpha(\sigma))=\varphi(x_{t+\mu})+c_1$ and $T^-_{t+s}\varphi(\alpha(s))<\varphi(x_{t+\mu})+c_1$ for all $s\in [0,\sigma)$. By definition \begin{equation*} \begin{aligned} \varphi(x_{t+\mu})+c_1=T^-_{t+\sigma}\varphi(\alpha(\sigma))&\leq T^-_{t+s}\varphi(\alpha(s))+\int_s^\sigma L(\alpha(\tau),T^-_{t+\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau, \end{aligned} \end{equation*} which implies \begin{equation*} \begin{aligned} &\varphi(x_{t+\mu})+c_1-T^-_{t+s}\varphi(\alpha(s)) \leq \int_s^\sigma L(\alpha(\tau),T^-_{t+\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau \\ &\leq \int_s^\sigma L(\alpha(\tau),\varphi(x_{t+\mu})+c_1,\dot \alpha(\tau))d\tau +\lambda\int_s^\sigma (\varphi(x_{t+\mu})+c_1-T^-_{t+\tau}\varphi(\alpha(\tau)))d\tau \\ &\leq L_1\mu+\lambda\int_s^\sigma (\varphi(x_{t+\mu})+c_1-T^-_{t+\tau}\varphi(\alpha(\tau)))d\tau, \end{aligned} \end{equation*} where \begin{equation*} L_1:=C_L+\lambda \|\varphi+c_1\|_\infty. \end{equation*} Let $G(\sigma-s)=\varphi(x_{t+\mu})+c_1-T^-_{t+s}\varphi(\alpha(s))$, then \begin{equation*} G(\sigma-s)\leq L_1\mu+\lambda\int_0^{\sigma-s}G(\tau)d\tau. \end{equation*} By the Gronwall inequality, we have \begin{equation*} \varphi(x_{t+\mu})+c_1-T^-_{t+s}\varphi(\alpha(s))\leq L_1\mu e^{\lambda(\sigma-s)}\leq L_1\mu e^{\lambda \mu},\quad \forall s\in[0,\sigma). \end{equation*} Taking $s=0$, we have $T^-_t\varphi(x)\geq \varphi(x_{t+\mu})+c_1-L_1\mu e^{\lambda \mu}$. We conclude that $T^-_t\varphi(x)$ has a lower bound independent of $t$, which contradicts the assumption.
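We remark that, here and in the preceding proofs, the Gronwall inequality is always invoked in the following standard integral form: if $G:[0,R]\rightarrow\mathbb R$ is continuous and satisfies \begin{equation*} G(r)\leq C+\lambda\int_0^r G(\tau)d\tau,\quad \forall r\in[0,R], \end{equation*} for some constants $C\geq 0$ and $\lambda>0$, then $G(r)\leq Ce^{\lambda r}$ on $[0,R]$; in particular, if $C=0$ and $G\geq 0$, then $G\equiv 0$.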
\end{proof} \begin{lemma}\label{S3} If there exist two continuous functions $\varphi_1$ and $\varphi_2$ on $M$ such that $T^-_t\varphi_1$ has a lower bound independent of $t$, and $T^-_t\varphi_2$ has an upper bound independent of $t$, then there is a constant $A$ such that $T^-_tA(x)$ is uniformly bounded. \end{lemma} \begin{proof} Define $A_1:=\|\varphi_1\|_\infty$ and $A_2:=-\|\varphi_2\|_\infty$, then $A_2\leq A_1$ and $T^-_tA_1(x)\geq T^-_t\varphi_1(x)$, $T^-_tA_2(x)\leq T^-_t\varphi_2(x)$ for all $x\in M$. If $T^-_t A_1(x)$ has an upper bound independent of $t$, then $A_1$ is the constant $A$ we are looking for. If $T^-_tA_1(x)$ does not have an upper bound independent of $t$, we define \begin{equation*} A^*:=\inf\{A:\ \exists t_A>0\ \textrm{such}\ \textrm{that}\ T^-_{t_A}A(x)\geq A,\ \forall x\in M\}. \end{equation*} Applying Lemma \ref{S2} (1) with $c=0$: since $T^-_tA_1(x)$ is unbounded from above, we get $A^*\leq A_1<+\infty$. The discussion is divided into two situations. \noindent Case (1): $A^*>-\infty$. We prove that $A^*$ is the constant $A$ we are looking for. We first show that $T^-_tA^*(x)$ has an upper bound independent of $t$; since $T^-_tA_1(x)$ is unbounded from above, this implies $A^*<A_1$. We argue by contradiction. If $T^-_tA^*(x)$ does not have an upper bound, by Lemma \ref{S2} (1), for $c=1$, there is $t_1>0$ such that $T^-_{t_1} A^*(x)>A^*+1$ for all $x\in M$. By Proposition \ref{psg} (2), for any $\varepsilon >0$, we have $T^-_{t_1}(A^*-\varepsilon)(x)\geq T^-_{t_1}A^*(x)-e^{\lambda t_1}\varepsilon>A^*+1-e^{\lambda t_1}\varepsilon$. For every $0<\varepsilon<(e^{\lambda t_1}-1)^{-1}$, we have $T^-_{t_1}(A^*-\varepsilon)(x)>A^*-\varepsilon$. This means that we have found a smaller constant $A^*-\varepsilon$ such that, taking $t_{A^*-\varepsilon}=t_1$, we have $T^-_{t_{A^*-\varepsilon}}(A^*-\varepsilon)(x)>A^*-\varepsilon$, which contradicts the definition of $A^*$. We then prove that $T^-_tA^*$ has a lower bound independent of $t$. We argue by contradiction.
If $T^-_tA^*(x)$ does not have a lower bound, by Lemma \ref{S2} (2), for $c=-1$, there is $t_1>0$ such that $T^-_{t_1} A^*(x)<A^*-1$ for all $x\in M$. By Proposition \ref{psg} (2) and $A^*<A_1$, there is a constant $\delta_0>0$ such that $A^*+\delta<A_1$ and \begin{equation}\label{s1} T^-_{t_1}(A^*+\delta)(x)<A^*-\frac{1}{2}+\delta<A^*+\delta, \end{equation} for all $\delta\in[0,\delta_0)$. By the definition of $A^*$, there is $\bar A\in [A^*,A^*+\delta_0)$ and $t_2:=t_{\bar A}>0$ such that \begin{equation}\label{s2} T^-_{t_2}\bar A(x)\geq \bar A. \end{equation} Define $B^*:=\bar A-\frac{1}{2}$. According to the continuity of $T^-_t\varphi(x)$ at $t=0$ (see Lemma \ref{3.1} (3)), there exists $\varepsilon_0>0$ such that for $0\leq\sigma<\varepsilon_0$, we have \begin{equation}\label{s3} T^-_\sigma B^*(x)\leq \bar A-\frac{1}{4}. \end{equation} For $t_1$ and $t_2>0$, there exist $n_1$ and $n_2\in\mathbb N$, and $\varepsilon\in[0,\varepsilon_0)$ such that $n_1t_1+\varepsilon=n_2t_2$. By Proposition \ref{psg} (1) and (\ref{s1}), we have \begin{equation}\label{s4} T^-_{n_1t_1}\bar A(x)\leq T^-_{t_1}\bar A(x)<B^*. \end{equation} Taking $\sigma=\varepsilon$ in (\ref{s3}), by Proposition \ref{psg} (1) and the second inequality in (\ref{s4}), we get \begin{equation}\label{s6} T^-_\varepsilon\circ T^-_{n_1t_1}\bar A(x)\leq T^-_\varepsilon B^*(x)\leq \bar A-\frac{1}{4}. \end{equation} By (\ref{s2}) one can easily deduce that $T^-_{n_2t_2}\bar A(x)\geq \bar A$, thus \begin{equation}\label{s7} T^-_\varepsilon\circ T^-_{n_1t_1}\bar A(x)=T^-_{n_2t_2}\bar A(x)\geq \bar A, \end{equation} which contradicts (\ref{s6}). \noindent Case (2): $A^*=-\infty$. In this case, for any $A<A_2$, the function $T^-_tA(x)$ is uniformly bounded. Since $T^-_tA(x)\leq T^-_tA_2(x)$, the function $T^-_tA(x)$ has an upper bound. The proof of the existence of the lower bound of $T^-_tA(x)$ is similar to Case (1).
\end{proof} If there is $\varphi\in C(M)$ and $t_a>0$ such that $T^-_{t_a}\varphi\geq \varphi$, then for any $t>0$, there is $n\in\mathbb N$ and $r\in[0,t_a)$ such that $t=nt_a+r$. By Proposition \ref{psg} (1) we have $T^-_t\varphi\geq T^-_r\varphi$, i.e. $T^-_t\varphi$ has a lower bound independent of $t$. Similarly, if there is $\psi\in C(M)$ and $t_b>0$ such that $T^-_{t_b}\psi\leq \psi$, then $T^-_t\psi$ has an upper bound independent of $t$. By Lemma \ref{S3}, there exists a constant $A$ such that $T^-_tA$ is uniformly bounded. By Lemma \ref{S1}, the condition (S) holds. \section{Existence of forward weak KAM solutions}\label{pm2'} In this section, we prove the convergence of $T^+_tu_-$. Corollaries \ref{<u-} and \ref{up} guarantee the boundedness of $T^+_tu_-$. We prove the equi-Lipschitz property of $T^+_tu_-(x)$ in $x$ with respect to $t>0$ in Lemma \ref{u<LLip}. The monotonicity of $T^+_tu_-$ in $t$, which together with the boundedness guarantees convergence, is given by Corollary \ref{ti}. \begin{proposition}\label{*'} Given $\varphi\in C(M)$, if $\varphi$ satisfies the following condition \begin{itemize} \item [($\ast$')] $\varphi\leq u_-$ and there exists a point $x_0$ such that $\varphi(x_0)=u_-(x_0)$, \end{itemize} \noindent then $T^+_t\varphi(x)$ has a bound independent of $t$ and $\varphi$. \end{proposition} We divide the proof into three parts, that is, Lemmas \ref{51}, \ref{52} and \ref{53}. \begin{lemma}\label{51} Suppose $\varphi$ satisfies the condition ($\ast$'), then $T^+_t\varphi(x)\leq u_-(x)$ for all $t>0$. \end{lemma} \begin{proof} We argue by contradiction. Assume there exists $(x,t)$ such that $T^+_t\varphi(x)>u_-(x)$. Let $\gamma:[0,t]\rightarrow M$ with $\gamma(0)=x$ be a minimizer of $T^+_t\varphi(x)$. Define \begin{equation*} F(s)=T^+_{t-s}\varphi(\gamma(s))-u_-(\gamma(s)),\quad s\in [0,t]. \end{equation*} Then $F(s)$ is continuous and $F(t)=\varphi(\gamma(t))-u_-(\gamma(t))\leq 0$. By assumption $F(0)>0$.
Then there is $\tau_0\in(0,t]$ such that $F(\tau_0)=0$ and $F(\tau)>0$ for all $\tau\in [0,\tau_0)$. For each $\tau\in[0,\tau_0]$, we have \begin{equation*} T^+_{t-\tau}\varphi(\gamma(\tau))=T^+_{t-\tau_0}\varphi(\gamma(\tau_0))-\int_{\tau}^{\tau_0}L(\gamma(s),T^+_{t-s}\varphi(\gamma(s)),\dot{\gamma}(s))ds. \end{equation*} Since $u_-=T^-_tu_-$ for all $t>0$, we have \begin{equation*} u_-(\gamma(\tau_0))\leq u_-(\gamma(\tau))+\int_{\tau}^{\tau_0}L(\gamma(s),u_-(\gamma(s)),\dot{\gamma}(s))ds. \end{equation*} Thus $F(\tau)\leq F(\tau_0)+\lambda \int_{\tau}^{\tau_0} F(s)ds$, where $F(\tau_0)=0$. Setting $G(\tau_0-s):=F(s)$, we get \begin{equation*} G(\tau_0-\tau)\leq \lambda\int_0^{\tau_0-\tau}G(\sigma)d\sigma. \end{equation*} By the Gronwall inequality, we conclude $F(\tau)=G(\tau_0-\tau)\equiv 0$ for all $\tau\in [0,\tau_0]$, which contradicts $F(0)>0$. \end{proof} \begin{corollary}\label{<u-} Let $u_-\in\mathcal S_-$, then $T^+_tu_-\leq u_-$ for each $t>0$. \end{corollary} Combining Corollary \ref{<u-} with Proposition \ref{psg} (1), one can easily obtain that $T^+_tu_-=T^+_s\circ T^+_{t-s}u_-\leq T^+_su_-$ for all $t>s$, then we have \begin{corollary}\label{ti} $T^+_tu_-$ is decreasing in $t$. \end{corollary} \begin{lemma}\label{52} Suppose $\varphi$ satisfies the condition ($\ast$'), then for each $t>0$, there exists a point $x_t\in M$ such that $T^+_t\varphi(x_t)=u_-(x_t)$. \end{lemma} \begin{proof} Let $x_0\in M$ be the point given in condition ($\ast$'), so that $\varphi(x_0)=u_-(x_0)$. Since $u_-(x_0)=T^-_tu_-(x_0)$, let $\gamma_0:[0,t]\rightarrow M$ with $\gamma_0(t)=x_0$ be a minimizer of $T^-_tu_-(x_0)$. By Lemma \ref{51}, for each $s\in[0,t]$, we have $u_-(\gamma_0(s))\geq T^+_{t-s}\varphi(\gamma_0(s))$. Define \begin{equation*} F(s)=u_-(\gamma_0(s))-T^+_{t-s}\varphi(\gamma_0(s)), \end{equation*} then $F(s)\geq 0$ and $F(t)=0$. If $F(0)>0$, then there is $s_0\in(0,t]$ such that $F(s_0)=0$ and $F(s)>0$ for all $s\in [0,s_0)$.
By definition, for $s_1\in[0,s_0)$, we have \begin{equation*} u_-(\gamma_0(s_0))=u_-(\gamma_0(s_1))+\int_{s_1}^{s_0}L(\gamma_0(s),u_-(\gamma_0(s)),\dot \gamma_0(s))ds, \end{equation*} and \begin{equation*} T^+_{t-s_1}\varphi(\gamma_0(s_1))\geq T^+_{t-s_0}\varphi(\gamma_0(s_0))-\int_{s_1}^{s_0}L(\gamma_0(s),T^+_{t-s}\varphi(\gamma_0(s)),\dot \gamma_0(s))ds, \end{equation*} which implies \begin{equation*} F(s_1)\leq F(s_0)+\lambda \int_{s_1}^{s_0} F(s)ds. \end{equation*} By the Gronwall inequality, we conclude $F(s)\equiv 0$ for all $s\in[0,s_0]$, which contradicts $F(0)>0$. Therefore $T^+_t\varphi(\gamma_0(0))=u_-(\gamma_0(0))$. \end{proof} \begin{corollary}\label{xt} If $u_+=\lim_{t\rightarrow +\infty}T^+_tu_-$ or $u_-=\lim_{t\rightarrow +\infty}T^-_tu_+$, then $\mathcal I_{(u_-,u_+)}$ is nonempty. \end{corollary} \begin{proof} We only prove the non-emptiness of $\mathcal I_{(u_-,u_+)}$ by assuming $u_+=\lim_{t\rightarrow +\infty}T^+_tu_-$; the other case is similar. By Lemma \ref{52}, for each $t>0$, there exists $x_t\in M$ such that $T^+_tu_-(x_t)=u_-(x_t)$. Since $M$ is compact, there is a sequence $x_{t_n}$ tending to some point $x^*$ as $t_n$ tends to infinity. The following inequality holds \begin{equation*} |T^+_{t_n}u_-(x_{t_n})-u_+(x^*)|\leq |T^+_{t_n}u_-(x_{t_n})-T^+_{t_n}u_-(x^*)|+|T^+_{t_n}u_-(x^*)-u_+(x^*)|. \end{equation*} By Corollary \ref{ti}, $u_+\leq T^+_tu_-\leq u_-$. By Remark \ref{Lipx}, $T^+_t u_-(x)$ has a Lipschitz constant in $x$ independent of $t$. So the first term on the right-hand side tends to zero as $t_n$ tends to infinity. Since the limit function of $T^+_t u_-$ is $u_+$, the second term also tends to zero. Therefore, the limit of $T^+_{t_n}u_-(x_{t_n})$ is $u_+(x^*)$. On the other hand, by definition we have $T^+_{t_n}u_-(x_{t_n})=u_-(x_{t_n})$, which tends to $u_-(x^*)$ by the continuity of $u_-$. We finally conclude that $u_+(x^*)=u_-(x^*)$. Therefore, $\mathcal I_{(u_-,u_+)}$ is nonempty.
\end{proof} \begin{lemma}\label{53} Suppose $\varphi$ satisfies the condition ($\ast$'), then $T^+_t\varphi(x)$ has a lower bound independent of $t$. \end{lemma} \begin{proof} For $t>\mu$, let $x_{t-\mu}\in M$ be the point given by Lemma \ref{52}, so that $T^+_{t-\mu}\varphi(x_{t-\mu})=u_-(x_{t-\mu})$, and let $\alpha:[0,\mu]\rightarrow M$ be a geodesic connecting $x$ and $x_{t-\mu}$ with constant speed, then $\|\dot \alpha\|\leq \delta$. If $T^+_t\varphi(x)\geq u_-(x_{t-\mu})$, then the proof is finished. If $T^+_t\varphi(x)<u_-(x_{t-\mu})$, since $T^+_{t-\mu}\varphi(x_{t-\mu})=u_-(x_{t-\mu})$, there is $\sigma\in(0,\mu]$ such that $T^+_{t-\sigma}\varphi(\alpha(\sigma))=u_-(x_{t-\mu})$ and $T^+_{t-s}\varphi(\alpha(s))<u_-(x_{t-\mu})$ for all $s\in [0,\sigma)$. By definition \begin{equation*} \begin{aligned} T^+_{t-s}\varphi(\alpha(s))&\geq T^+_{t-\sigma}\varphi(\alpha(\sigma))-\int_s^\sigma L(\alpha(\tau),T^+_{t-\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau \\ &=u_-(x_{t-\mu})-\int_s^\sigma L(\alpha(\tau),T^+_{t-\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau, \end{aligned} \end{equation*} which implies \begin{equation*} \begin{aligned} &u_-(x_{t-\mu})-T^+_{t-s}\varphi(\alpha(s)) \leq \int_s^\sigma L(\alpha(\tau),T^+_{t-\tau}\varphi(\alpha(\tau)),\dot \alpha(\tau))d\tau \\ &\leq \int_s^\sigma L(\alpha(\tau),u_-(x_{t-\mu}),\dot \alpha(\tau))d\tau +\lambda\int_s^\sigma(u_-(x_{t-\mu})-T^+_{t-\tau}\varphi(\alpha(\tau)))d\tau \\ &\leq L_0\mu+\lambda\int_s^\sigma(u_-(x_{t-\mu})-T^+_{t-\tau}\varphi(\alpha(\tau)))d\tau, \end{aligned} \end{equation*} where \begin{equation*} L_0:=C_L+\lambda \|u_-\|_\infty. \end{equation*} Let $G(\sigma-s)=u_-(x_{t-\mu})-T^+_{t-s}\varphi(\alpha(s))$, then \begin{equation*} G(\sigma-s)\leq L_0\mu+\lambda\int_0^{\sigma-s}G(\tau)d\tau. \end{equation*} By the Gronwall inequality, we have \begin{equation*} u_-(x_{t-\mu})-T^+_{t-s}\varphi(\alpha(s))=G(\sigma-s)\leq L_0\mu e^{\lambda(\sigma-s)}\leq L_0\mu e^{\lambda \mu},\quad \forall s\in[0,\sigma). \end{equation*} Thus $T^+_t\varphi(x)\geq u_-(x_{t-\mu})-L_0\mu e^{\lambda \mu}$.
We finally get a lower bound of $T^+_t\varphi(x)$ independent of $t$ and $\varphi$. \end{proof} \begin{corollary}\label{up} $T^+_tu_-$ has a lower bound independent of $t$. \end{corollary} \begin{lemma}\label{u<LLip} If $u\prec L$, then $u$ is a Lipschitz continuous function defined on $M$. \end{lemma} \begin{proof} For each $x,y\in M$, let $\alpha:[0,d(x,y)/\delta]\rightarrow M$ be a geodesic of length $d(x,y)$, with constant speed $\|\dot \alpha\|=\delta$ and connecting $x$ and $y$. Then \begin{equation*} L(\alpha(s),u(\alpha(s)),\dot{\alpha}(s))\leq C_L+\lambda \|u\|_\infty,\quad \forall s\in [0,d(x,y)/\delta]. \end{equation*} Then by $u\prec L$ we have \begin{equation*} u(y)-u(x)\leq \int_0^{d(x,y)/\delta}L(\alpha(s),u(\alpha(s)),\dot{\alpha}(s))ds\leq \frac{1}{\delta}(C_L+\lambda \|u\|_\infty) d(x,y). \end{equation*} Exchanging the roles of $x$ and $y$, we get the Lipschitz continuity of $u$. \end{proof} Note that $u_-\prec L$, so it is Lipschitz. By Remark \ref{Lipx}, the bound and the Lipschitz constant in $x$ of $T^+_tu_-$ are independent of $t$. By the Arzelà-Ascoli theorem, $T^+_tu_-$ has a converging subsequence as $t\rightarrow +\infty$. Since $T^+_tu_-$ is decreasing in $t$, the limit function $\lim_{t\rightarrow +\infty} T^+_tu_-=u_+$ exists. By Lemma \ref{S1}, $u_+$ is a fixed point of $T^+_t$. Thus, $-u_+$ is a viscosity solution of (\ref{E'}). \section{Strictly increasing cases}\label{sinc} The proof of Main Result \ref{m3} is organized as follows. We prove the maximality of $u_+$ in the set of forward weak KAM solutions in Proposition \ref{u+max}. We then prove that the projected Aubry set of each forward weak KAM solution is nonempty and contained in $\mathcal I_{(u_-,u_+)}$ in Proposition \ref{Iv+}. According to \cite[Theorem II.2]{Vis}, if the set $\mathcal{S}_-$ is nonempty, it must be a singleton.
Using Main Result \ref{m2'}, the limit function $\lim_{t\rightarrow +\infty} T^+_tu_-$ exists, denoted by $u_+$, and $T^-_tu_+$ converges to the unique element of $\mathcal{S}_-$. Namely, \[u_-=\lim_{t\rightarrow +\infty} T^-_tu_+.\] From the weak KAM point of view \cite{Fa08}, we call $(u_-,u_+)$ a conjugate pair. \begin{proposition}\label{u+max} $u_+$ is the maximal element in $\mathcal S_+$. \end{proposition} \begin{proof} For each $v_+\in\mathcal S_+$, we first prove that $v_+\leq T^-_tv_+$ for all $t>0$. We argue by contradiction. Assume there exist $x\in M$ and $t>0$ such that $v_+(x)>T^-_tv_+(x)$. Let $\gamma:[0,t]\rightarrow M$ with $\gamma(t)=x$ be a minimizer of $T^-_tv_+(x)$. Let $F(s):=v_+(\gamma(s))-T^-_sv_+(\gamma(s))$, then $F(0)=0$. By assumption $F(t)>0$. Then there exists $s_0\in[0,t)$ such that $F(s_0)=0$ and $F(s)>0$ for all $s\in(s_0,t]$. By definition, we have \begin{equation*} T^-_sv_+(\gamma(s))=T^-_{s_0}v_+(\gamma(s_0))+\int_{s_0}^sL(\gamma(\sigma),T^-_\sigma v_+(\gamma(\sigma)),\dot \gamma(\sigma))d\sigma, \end{equation*} and \begin{equation*} v_+(\gamma(s_0))\geq v_+(\gamma(s))-\int_{s_0}^sL(\gamma(\sigma),v_+(\gamma(\sigma)),\dot \gamma(\sigma))d\sigma. \end{equation*} Thus \begin{equation*} F(s)\leq F(s_0)+\lambda\int_{s_0}^sF(\sigma)d\sigma, \end{equation*} and by the Gronwall inequality we get $F(s)\equiv 0$ for $s\in[s_0,t]$, which contradicts $F(t)>0$. Therefore $v_+\leq T^-_tv_+$; by a limit procedure we get $v_+\leq u_-$, applying $T^+_t$ we get $v_+\leq T^+_tu_-$, and by a limit procedure again we finally conclude that $v_+\leq u_+$ for all $v_+\in\mathcal S_+$. \end{proof} \begin{proposition}\label{Iv+} For each $v_+\in\mathcal S_+$, the corresponding projected Aubry set $\mathcal I_{v_+}$ is nonempty and $\mathcal I_{v_+}\subseteq \mathcal I_{(u_-,u_+)}$. \end{proposition} \begin{proof} For given $v_+\in\mathcal S_+$, we define the barrier function \begin{equation*} B_{v_+}(x):=u_-(x)-v_+(x). \end{equation*} According to the discussion above, we have $B_{v_+}(x)\geq 0$.
We first show $B_{v_+}(\gamma_+(t))$ is nonnegative and nonincreasing along a $(v_+,L,0)$-calibrated curve $\gamma_+$. By definition \begin{equation*} v_+(\gamma_+(t'))-v_+(\gamma_+(t))=\int_t^{t'}L(\gamma_+(s),v_+(\gamma_+(s)),\dot{\gamma}_+(s))ds, \end{equation*} and \begin{equation*} u_-(\gamma_+(t'))-u_-(\gamma_+(t))\leq \int_t^{t'}L(\gamma_+(s),u_-(\gamma_+(s)),\dot{\gamma}_+(s))ds. \end{equation*} Since $L$ satisfies (STD) and $u_-\geq v_+$, we have \begin{equation*} u_-(\gamma_+(t'))-u_-(\gamma_+(t))\leq v_+(\gamma_+(t'))-v_+(\gamma_+(t)), \end{equation*} which implies \begin{equation*} B_{v_+}(\gamma_+(t'))\leq B_{v_+}(\gamma_+(t)),\quad \textrm{for}\ t'>t\geq 0. \end{equation*} Therefore the limit $\lim_{t\rightarrow +\infty}B_{v_+}(\gamma_+(t))=\delta\geq 0$ exists. We then prove $\delta=0$, which implies that $\mathcal I_{v_+}$ is nonempty. If not, assume $\delta>0$. By Lemma \ref{u<LLip}, each $v_+\in\mathcal S_+$ is Lipschitz continuous. Taking \begin{equation*} R\geq \max\{\|\partial_x u_-\|_\infty,\|\partial_x v_+\|_\infty\}, \end{equation*} we modify $H$ by \begin{equation*} H_R(x,u,p):=H(x,u,p)+\max\{\|p\|^2-R^2,0\}, \end{equation*} then $u_-$ and $v_+$ are also the viscosity solutions of \begin{equation*} H_R(x,u(x),\partial_x u(x))=0,\quad F_R(x,u(x),\partial_x u(x))=0 \end{equation*} respectively. It is obvious that $H_R$ satisfies (SL). Therefore, without loss of generality, we can assume that $H(x,u,p)$ satisfies (SL). By the definition of the calibrated curves, for any $s_2>s_1\geq 0$, we have \begin{equation}\label{a1} v_+(\gamma_+(s_2))-v_+(\gamma_+(s_1))=\int_{s_1}^{s_2}L(\gamma_+(s),v_+(\gamma_+(s)),\dot{\gamma}_+(s))ds.
\end{equation} Since the forward weak KAM solution $v_+$ is Lipschitz continuous, working in local coordinates we have \begin{equation}\label{a2} \frac{v_+(\gamma_+(s_2))-v_+(\gamma_+(s_1))}{s_2-s_1}\leq \|\partial_x v_+\|_\infty\left\|\frac{\gamma_+(s_2)-\gamma_+(s_1)}{s_2-s_1}\right\|. \end{equation} By continuity, there is $T_1>0$ such that for $s_2-s_1\leq T_1$ we have \begin{equation*} L(\gamma_+(s),v_+(\gamma_+(s)),\dot \gamma_+(s))\geq L(\gamma_+(s_1),v_+(\gamma_+(s_1)),\dot \gamma_+(s))-1,\quad s\in[s_1,s_2]. \end{equation*} The Jensen inequality implies \begin{equation}\label{a3} \begin{aligned} &\frac{1}{s_2-s_1}\int_{s_1}^{s_2}L(\gamma_+(s),v_+(\gamma_+(s)),\dot{\gamma}_+(s))ds \\ &\geq \frac{1}{s_2-s_1}\int_{s_1}^{s_2}L(\gamma_+(s_1),v_+(\gamma_+(s_1)),\dot{\gamma}_+(s))ds-1 \\ &\geq L\left(\gamma_+(s_1),v_+(\gamma_+(s_1)),\frac{1}{s_2-s_1}\int_{s_1}^{s_2}\dot{\gamma}_+(s)ds\right)-1 \\ &=L\left(\gamma_+(s_1),v_+(\gamma_+(s_1)),\frac{\gamma_+(s_2)-\gamma_+(s_1)}{s_2-s_1}\right)-1 \geq \Theta\left(\left\|\frac{\gamma_+(s_2)-\gamma_+(s_1)}{s_2-s_1}\right\|\right) -1. \end{aligned} \end{equation} From (\ref{a1}), (\ref{a2}) and (\ref{a3}), we conclude that $\gamma_+(t)$ has a Lipschitz constant independent of $t$. Since $M$ is compact, we can take a converging subsequence $\gamma_+(t_n)\rightarrow \bar x$ with $t_n\rightarrow +\infty$. For any $T>0$, the sequence of curves $\gamma_n(s):=\gamma_+(t_n+s)$ is uniformly bounded and equi-Lipschitz on $s\in[0,T]$. Therefore it admits a uniformly converging subsequence, which we still denote by $\gamma_n$ for simplicity. We denote the uniform limit by $\bar \gamma$. We then prove that $\bar \gamma$ is a $(v_+,L,0)$-calibrated curve on $[0,T]$. By definition \begin{equation*} \begin{aligned} 2\|v_+\|_\infty&\geq v_+(\gamma_n(s))-v_+(\gamma_n(0)) \\ &=\int_0^sL(\gamma_n(\tau),v_+(\gamma_n(\tau)),\dot \gamma_n(\tau))d\tau \geq \int_0^s\Theta(\|\dot \gamma_n(\tau)\|)d\tau,\quad \forall s\in[0,T].
\end{aligned} \end{equation*} Therefore the sequence $\gamma_n$ is weakly compact in $W^{1,1}([0,T],M)$. Since $\gamma_n$ is a uniformly converging sequence, \begin{equation*} \begin{aligned} v_+(\bar \gamma(s))-v_+(\bar x)&=\lim_{n\rightarrow +\infty}(v_+(\gamma_n(s))-v_+(\gamma_n(0))) \\ &=\lim_{n\rightarrow +\infty}\int_0^sL(\gamma_n(\tau),v_+(\gamma_n(\tau)),\dot\gamma_n(\tau))d\tau\\ &\geq \int_0^sL(\bar\gamma(\tau),v_+(\bar\gamma(\tau)),\dot {\bar \gamma}(\tau))d\tau, \end{aligned} \end{equation*} where the last inequality comes from Lemma \ref{TM}. Together with $v_+\prec L$, this yields equality, so $\bar \gamma$ is a $(v_+,L,0)$-calibrated curve. For any $s\in[0,T]$, by definition we have \begin{equation*} B_{v_+}(\bar \gamma(s))=\lim_{n\rightarrow +\infty}B_{v_+}(\gamma_n(s))=\lim_{n\rightarrow +\infty}B_{v_+}(\gamma_+(t_n+s))\equiv\delta. \end{equation*} Since the Lagrangian satisfies (STD) and $u_-(\bar\gamma(s))-v_+(\bar\gamma(s))\equiv\delta>0$ by assumption, \begin{equation*} \begin{aligned} v_+(\bar \gamma(T))-v_+(\bar \gamma(0))&=\int_{0}^{T}L(\bar \gamma(s),v_+(\bar \gamma(s)),\dot{\bar \gamma}(s))ds \\ &>\int_{0}^{T}L(\bar \gamma(s),u_-(\bar \gamma(s)),\dot{\bar \gamma}(s))ds \geq u_-(\bar \gamma(T))-u_-(\bar \gamma(0)). \end{aligned} \end{equation*} We conclude $B_{v_+}(\bar \gamma(0))>B_{v_+}(\bar \gamma(T))$, which gives a contradiction. We then prove $\mathcal I_{v_+}\subseteq \mathcal I_{(u_-,u_+)}$. It is obvious that for $x\in\mathcal I_{v_+}$, we have \[u_-(x)=v_+(x)\leq u_+(x)\leq u_-(x),\] then $u_-(x)=v_+(x)=u_+(x)$, that is, $x\in\mathcal I_{(u_-,u_+)}$. \end{proof} \section{Strictly decreasing cases}\label{sdec} In this section, we consider the strictly decreasing cases. In order to study viscosity solutions of equation (\ref{E'}), we only need to study the forward weak KAM solutions of equation (\ref{E}). In Proposition \ref{W1}, we prove the uniform boundedness of forward weak KAM solutions of (\ref{E}) with respect to the $W^{1,\infty}$-norm.
In Proposition \ref{minS+}, we show the existence of minimal forward weak KAM solutions of (\ref{E}). A comparison result of forward weak KAM solutions of (\ref{E}) is considered in Proposition \ref{comp}. The long-time behavior of the forward solution semigroup $T^+_t\varphi$ is studied in Proposition \ref{long1}. \begin{proposition}\label{W1} The set $\mathcal S_+$ is compact in the topology induced by the $\|\cdot\|_\infty$-norm. \end{proposition} \begin{proof} Since $v_+\leq u_-$, we only need to show that $v_+\in\mathcal S_+$ has a uniform lower bound. Take $y\in \mathcal I_{v_+}$, then $v_+(y)=u_-(y)$. Let $\alpha:[0,\mu]\rightarrow M$ be a geodesic connecting $x$ and $y$ with constant speed, then $\|\dot \alpha\|\leq \delta$. If $v_+(x)\geq u_-(y)$, then the proof is finished. If $T^+_\mu v_+(x)=v_+(x)<u_-(y)$, since $v_+(y)=u_-(y)$, there is $\sigma\in(0,\mu]$ such that $v_+(\alpha(\sigma))=u_-(y)$ and $v_+(\alpha(s))<u_-(y)$ for all $s\in [0,\sigma)$. By definition we have \begin{equation*} \begin{aligned} v_+(\alpha(s))&\geq v_+(\alpha(\sigma))-\int_s^\sigma L(\alpha(\tau),v_+(\alpha(\tau)),\dot \alpha(\tau))d\tau\\ &=u_-(y)-\int_s^\sigma L(\alpha(\tau),v_+(\alpha(\tau)),\dot \alpha(\tau))d\tau, \end{aligned} \end{equation*} which implies \begin{equation*} \begin{aligned} &u_-(y)-v_+(\alpha(s)) \leq \int_s^\sigma L(\alpha(\tau),v_+(\alpha(\tau)),\dot \alpha(\tau))d\tau \\ &\leq \int_s^\sigma L(\alpha(\tau),u_-(y),\dot \alpha(\tau))d\tau +\lambda\int_s^\sigma(u_-(y)-v_+(\alpha(\tau)))d\tau \\ &\leq L_0\mu+\lambda\int_s^\sigma(u_-(y)-v_+(\alpha(\tau)))d\tau, \end{aligned} \end{equation*} where \begin{equation*} L_0:=C_L+\lambda \|u_-\|_\infty. \end{equation*} Let $G(\sigma-s)=u_-(y)-v_+(\alpha(s))$, then \begin{equation*} G(\sigma-s)\leq L_0\mu+\lambda\int_0^{\sigma-s}G(\tau)d\tau.
\end{equation*}
By the Gronwall inequality we get
\begin{equation*}
u_-(y)-v_+(\alpha(s))=G(\sigma-s)\leq L_0\mu e^{\lambda(\sigma-s)}\leq L_0\mu e^{\lambda\mu},\quad \forall s\in[0,\sigma).
\end{equation*}
Therefore $v_+(x)\geq u_-(y)-L_0\mu e^{\lambda \mu}$, i.e. $v_+$ is uniformly bounded from below. Then there exists a constant $K>0$ such that $\|v_+\|_\infty\leq K$.
We then prove that the solutions $v_+\in\mathcal S_+$ are equi-Lipschitz continuous. For each $x,y\in M$, let $\alpha:[0,d(x,y)/\delta]\rightarrow M$ be a geodesic of length $d(x,y)$, with constant speed $\|\dot{\alpha}\|=\delta$, connecting $x$ and $y$. Then
\begin{equation*}
L(\alpha(s),v_+(\alpha(s)),\dot{\alpha}(s))\leq C_L+\lambda K,\quad \forall s\in [0,d(x,y)/\delta].
\end{equation*}
Since $v_+\prec L$, we have
\begin{equation*}
v_+(y)-v_+(x)\leq \int_0^{d(x,y)/\delta}L(\alpha(s),v_+(\alpha(s)),\dot{\alpha}(s))ds\leq \frac{1}{\delta}(C_L+\lambda K)d(x,y).
\end{equation*}
Exchanging the roles of $x$ and $y$, we finally prove the uniform boundedness of $\|v_+\|_{W^{1,\infty}}$.
\end{proof}
We will show the existence of minimal forward weak KAM solutions of equation \eqref{E}. Note that $(\mathcal{S}_+,\leq)$ is a partially ordered set. In view of Zorn's lemma, if every chain in $\mathcal{S}_+$ has a lower bound in $\mathcal{S}_+$, then $\mathcal{S}_+$ contains at least one minimal element.
\begin{proposition}\label{minS+}
Let $A$ be a totally ordered subset of $\mathcal{S}_+$. Let $\bar{u}(x):=\inf_{u\in A}u(x)$ for each $x\in M$. Then $\bar{u}\in \mathcal{S}_+$.
\end{proposition}
\begin{proof}
Since every $v_+\in\mathcal S_+$ is bounded from below, the function $\bar u:=\inf_{u\in A}u(x)$ is well-defined for each $x\in M$. If $A$ is a finite set, the proof is finished. We then consider the case where $A$ is an infinite set. By Proposition \ref{W1}, the pointwise convergence of a sequence $u_n$ in $\mathcal S_+$ implies its uniform convergence. We first prove that the limit function of such a sequence is contained in $\mathcal S_+$.
By Proposition \ref{psg} (2), we have
\begin{equation*}
\|T^+_tu_n-T^+_t\bar u\|_\infty\leq e^{\lambda t}\|u_n-\bar u\|_\infty.
\end{equation*}
The right-hand side tends to zero as $n\rightarrow+\infty$, so
\begin{equation*}
T^+_t\bar u=\lim_{n\rightarrow+\infty}T^+_tu_n=\lim_{n\rightarrow+\infty}u_n=\bar u.
\end{equation*}
We then prove the existence of a sequence in $A$ converging pointwise to $\bar u$. Then, by the discussion above, we have $\bar u\in\mathcal S_+$. According to Proposition \ref{W1}, every $u\in\mathcal S_+$ is $\kappa$-Lipschitz, hence
\begin{equation}\label{baru}
\bar u(x)-\bar u(y)\leq \sup_{u\in\mathcal S_+}|u(x)-u(y)|\leq \kappa d(x,y).
\end{equation}
Since $M$ is compact, it is also separable. Namely, one can find a countable dense subset denoted by $U:=\{x_1,x_2,\dots,x_n,\dots\}$.
\bigskip
\noindent \textbf{Assertion.} There exists a sequence $\{u_n\}_{n\in\mathbb N}\subset A$ such that for each given $n\in\mathbb N$ and each $i\in\{1,2,\dots,n\}$, we have
\begin{equation}\label{un}
0\leq u_n(x_i)-\bar u(x_i)<\frac{1}{n}.
\end{equation}
\bigskip
\noindent Using the assertion above, one can prove that $u_n$ converges pointwise to $\bar u$. In fact, for each $x\in M$, there exists a subsequence $V:=\{x_m\}_{m\in\mathbb N}\subseteq U$ such that $d(x_m,x)<\frac{1}{m}$. Given $x\in M$ and $n\in \mathbb N$, for each $x_i\in\{x_1,x_2,\dots,x_n\}\cap V$, up to a rearrangement of $i$, using (\ref{baru}) we have
\begin{equation*}
\begin{aligned}
|u_n(x)-\bar u(x)|&\leq |u_n(x)-u_n(x_i)|+|u_n(x_i)-\bar u(x_i)|+|\bar u(x)-\bar u(x_i)| \\
&\leq 2\kappa d(x_i,x)+\frac{1}{n}\leq \frac{2\kappa}{i}+\frac{1}{n}.
\end{aligned}
\end{equation*}
Letting $n$ and $i$ tend to infinity, we get the pointwise convergence of $u_n$ to $\bar u$.
\bigskip
It remains to prove the assertion above. By the definition of $\bar u$, we have $u\geq \bar u$ for every $u\in A$.
In the following, we construct a sequence $\{u_n\}_{n\in\mathbb N}\subset A$ such that for each given $n\in\mathbb N$ and each $i\in\{1,2,\dots,n\}$, (\ref{un}) holds. First of all, for $x_1\in U$, we take $v_1\in A$ such that $v_1(x_1)-\bar u(x_1)<1/n$. Let $x_j\in U$ with $j\leq n$ be such that $v_1(x_j)-\bar u(x_j)\geq 1/n$ and $v_1(x_i)-\bar u(x_i)<1/n$ for each $i\leq j-1$. For $x_j$, we take $v_2\in A$ satisfying $v_2(x_j)-\bar u(x_j)<1/n$. Then
\begin{equation*}
v_2(x_j)<\bar u(x_j)+\frac{1}{n}\leq v_1(x_j).
\end{equation*}
Since $A$ is totally ordered, it yields $v_2\leq v_1$. Therefore, for each $i\leq j-1$ we have
\begin{equation*}
v_2(x_i)-\bar u(x_i)\leq v_1(x_i)-\bar u(x_i)<\frac{1}{n}.
\end{equation*}
Thus, we have found $v_2\in A$ satisfying $v_2(x_i)-\bar u(x_i)<1/n$ for all $i\in\{1,2,\dots,j\}$. Repeating the process above, we finally obtain $v_k\in A$ with $k\leq n$ such that for each $i\in\{1,2,\dots,n\}$,
\begin{equation*}
v_k(x_i)-\bar u(x_i)<\frac{1}{n}.
\end{equation*}
Take $u_n=v_k$; the proof is then completed.
\end{proof}
\begin{proposition}\label{comp}
Let $v_+$ and $v_+'$ be two forward weak KAM solutions of (\ref{E}). Then the following statements hold:
\begin{itemize}
\item [(1)] If $v_+\leq v_+'$, then $\emptyset\neq \mathcal I_{v_+}\subseteq \mathcal I_{v_+'}\subseteq \mathcal I_{u_+}$;
\item [(2)] If there is a neighborhood $\mathcal O$ of $\mathcal I_{v_+}$ such that $v_+'|_{\mathcal O}\geq v_+|_{\mathcal O}$, then $v_+'\geq v_+$ everywhere;
\item [(3)] If $\mathcal I_{v_+'}=\mathcal I_{v_+}$ and $v_+'|_{\mathcal O}=v_+|_{\mathcal O}$ for some neighborhood $\mathcal O$ of $\mathcal I_{v_+}$, then $v_+'= v_+$ everywhere.
\end{itemize}
\end{proposition}
\begin{proof}
The result (1) comes from Proposition \ref{Iv+}, and the result (3) comes from (2). It remains to prove the result (2). By Proposition \ref{Iv+}, for a $(v_+,L,0)$-calibrated curve $\gamma_+:[0,+\infty)\rightarrow M$ with $\gamma_+(0)=x$, we have $\lim_{t\rightarrow +\infty}B_{v_+}(\gamma_+(t))=0$.
Then there is a $t_0$ large enough such that $\gamma_+(t_0)\in\mathcal O$. Define
\begin{equation*}
F(s)=v_+(\gamma_+(s))-v_+'(\gamma_+(s)),\quad s\in[0,t_0].
\end{equation*}
If $v_+(x)>v_+'(x)$, then $F(0)=v_+(x)-v_+'(x)>0$ and $F(t_0)=v_+(\gamma_+(t_0))-v_+'(\gamma_+(t_0))\leq 0$. Then there is $\sigma\in (0,t_0]$ such that $F(\sigma)=0$ and $F(s)>0$ for all $s\in[0,\sigma)$. By definition we have
\begin{equation*}
v_+(\gamma_+(\sigma))-v_+(\gamma_+(s))=\int_s^\sigma L(\gamma_+(\tau),v_+(\gamma_+(\tau)),\dot \gamma_+(\tau))d\tau,
\end{equation*}
and
\begin{equation*}
v_+'(\gamma_+(\sigma))-v_+'(\gamma_+(s))\leq \int_s^\sigma L(\gamma_+(\tau),v_+'(\gamma_+(\tau)),\dot \gamma_+(\tau))d\tau,
\end{equation*}
which implies
\begin{equation*}
F(s)\leq F(\sigma)+\lambda \int_s^\sigma F(\tau)d\tau.
\end{equation*}
By the Gronwall inequality we conclude $F(s)\equiv 0$ for all $s\in[0,\sigma]$, which contradicts $F(0)>0$. We finally have $v_+'\geq v_+$.
\end{proof}
\noindent \textbf{On Example \ref{ex}.} We already know that $\bar u_+\leq u_-$ for all $u_-\in\mathcal S_-$. It is sufficient to show $\bar u_+(x)<u_2(x)$ for $x\in(-1,1]\backslash \{0\}$. By symmetry we only consider $x\in (0,1]$. Since $\bar u_+$ is a semiconvex function and satisfies $\bar u_+(x)\leq u_2(x)$, $\bar u_+$ cannot coincide with $u_2$ at $x=1$. We then assume that there exists $x_0\in (0,1)$ such that $\bar u_+(x_0)=u_2(x_0)$. Since we assume $\lambda>2$, we have $\lambda u_2(x)>V(x)$ for all $x\in (0,1)$. For $z>V(x)$, we set
\begin{equation*}
f(x,z)=\lambda \sqrt{2(z-V(x))},
\end{equation*}
then $f(x,z)$ is of class $C^1$ on $(0,1)\times \{z\in\mathbb R:\ z>V(x)\}$. By the classical theory of ordinary differential equations, for $x\in (0,1)$, $\lambda u_2(x)$ is the unique solution of
\begin{equation}\label{ode}
\frac{dz}{dx}=f(x,z),\quad z(x_0)=\lambda u_2(x_0).
\end{equation}
If $\bar u_+$ is differentiable on $(0,1)$, then $\bar u_+$ satisfies (\ref{E0}) in the classical sense.
Since $\bar u_+\leq u_2$ and $\bar u_+(x_0)=u_2(x_0)$, $\lambda\bar u_+$ is also a solution of (\ref{ode}). By uniqueness, $\bar u_+=u_2$ on $(0,1)$, which contradicts the semiconvexity of $\bar u_+$. Therefore, for all $x\in (0,1]$, we have $\bar u_+<u_2$.
It remains to show that $\bar u_+$ is differentiable on $(0,1)$. Assume there exists $y_0\in (0,1)$ such that $\bar u_+$ is not differentiable at $y_0$. Since $\bar u_+$ is the unique forward weak KAM solution of (\ref{E0}), there is $l>0$ such that $\pm l\in D^* \bar u_+(y_0)$, where $D^*$ stands for the set of all reachable gradients. By the semiconvexity of $\bar u_+$, it is decreasing on the left side of $y_0$. Since $\bar u_+(0)=0$ and $\bar u_+(y_0)\geq 0$, there is $z_0\in (0,y_0)$ which is a local maximum point of $\bar u_+$. By the semiconvexity of $\bar u_+$, it is differentiable at $z_0$, and then $\bar u'_+(z_0)=0$. By (\ref{E0}) we have $-\lambda \bar u_+(z_0)+V(z_0)=0$. Since $\bar u'_+(x)$ exists for almost all $x$, there is $z_1\in (z_0,y_0)$ such that $\bar u'_+(z_1)$ exists, $|\bar u'_+(z_1)|>0$ and $\bar u_+(z_0)\geq \bar u_+(z_1)\geq 0$. We also have $V(z_1)>V(z_0)$. Therefore
\begin{equation*}
-\lambda \bar u_+(z_1)+\frac{1}{2}|\bar u'_+(z_1)|^2+V(z_1)>-\lambda \bar u_+(z_0)+V(z_0)=0,
\end{equation*}
which contradicts the fact that $\bar u_+$ satisfies (\ref{E0}) at $z_1$ in the classical sense. \qed
Before proving Proposition \ref{long1}, we need the following result.
\begin{proposition}\label{Ttu-}
For each $\varphi\in C(M)$, the limit $\lim_{t\rightarrow+\infty} T^-_t\varphi(x)$ exists, and equals the unique viscosity solution $u_-$ of (\ref{E}).
\end{proposition}
\begin{proof}
We first deal with $\varphi\in Lip(M)$. We assert that for each $\varphi\in Lip(M)$, $T^-_t\varphi(x)$ has a bound independent of $t$. In fact, since $H(x,u,p)$ satisfies (STI), $u_-(x)+\|u_--\varphi\|_\infty$ and $u_-(x)-\|u_--\varphi\|_\infty$ are a viscosity supersolution and a viscosity subsolution of (\ref{E}), respectively.
By the comparison theorem we have $u_-(x)-\|u_--\varphi\|_\infty\leq T^-_t\varphi(x)\leq u_-(x)+\|u_--\varphi\|_\infty$ for all $x\in M$. Thus $T^-_t\varphi(x)$ has a bound independent of $t$. By Remark \ref{Lipx}, both $\liminf_{t\rightarrow+\infty} T^-_t\varphi(x)$ and $\limsup_{t\rightarrow+\infty} T^-_t\varphi(x)$ are well-defined. By Lemma \ref{S1}, $\liminf_{t\rightarrow+\infty} T^-_t\varphi(x)=u_-(x)$. It remains to prove
\[\limsup_{t\rightarrow+\infty} T^-_t\varphi(x)\leq u_-(x).\]
Let
\[\bar u(x):=\limsup_{t\rightarrow+\infty} T^-_t\varphi(x).\]
We claim that for every $\varepsilon>0$, there exists a constant $s_0>0$ independent of $x$ such that for any $s\geq s_0$ we have
\[T^-_s\varphi(x)\leq \bar u(x)+\varepsilon.\]
Fixing $x\in M$, by the definition of $\limsup$, for every $\varepsilon>0$, there is $s_0(x)>0$ such that for any $s\geq s_0(x)$ we have
\[T^-_s\varphi(x)\leq \bar u(x)+\frac{\varepsilon}{3}.\]
Let $\kappa$ be the Lipschitz constant in $x$ of $T^-_t\varphi(x)$, which is independent of $t$ by Remark \ref{Lipx}. Take $r:=\frac{\varepsilon}{3\kappa}$; for $s\geq s_0(x)$ we have
\begin{equation*}
\begin{aligned}
T^-_s\varphi(y)&\leq T^-_s\varphi(x)+\kappa d(x,y)\leq \bar u(x)+\frac{\varepsilon}{3}+\kappa d(x,y) \\
&\leq \bar u(y)+\frac{\varepsilon}{3}+2\kappa d(x,y)\leq \bar u(y)+\varepsilon,\quad \forall y\in B_r(x).
\end{aligned}
\end{equation*}
Since $M$ is compact, there are finitely many points $x_i\in M$ such that the balls $B_r(x_i)$ cover $M$. Let $s_0:=\max_i s_0(x_i)$, and the claim is proved.
By Proposition \ref{psg} we have
\[T^-_t(T^-_s\varphi(x))\leq T^-_t(\bar u(x)+\varepsilon)\leq T^-_t\bar u(x)+\varepsilon e^{\lambda t}.\]
Taking the limit superior as $s\rightarrow+\infty$ we have
\[\bar u(x)=\limsup_{s\rightarrow +\infty}T^-_t(T^-_s\varphi(x))\leq T^-_t\bar u(x)+\varepsilon e^{\lambda t}.\]
Letting $\varepsilon\rightarrow 0^+$ we get $\bar u(x)\leq T^-_t \bar u(x)$, which means that $T^-_t\bar u(x)$ is non-decreasing in $t$. Since $H(x,u,p)$ satisfies (STI), the function $u_-(x)+\|\bar u-u_-\|_\infty$ is a supersolution of $\partial_t u+H(x,u,\partial_x u)=0$. By the comparison result, $T^-_t\bar u(x)\leq u_-(x)+\|\bar u-u_-\|_\infty$, which implies that $T^-_t\bar u$ has an upper bound independent of $t$. Thus, the limit $\lim_{t\rightarrow +\infty} T^-_t\bar u(x)$ exists, and equals $u_-$. We conclude that
\[\limsup_{t\rightarrow+\infty}T^-_t\varphi(x)=\bar u(x)\leq \lim_{t\rightarrow +\infty} T^-_t\bar u(x)=u_-(x).\]
We now assume $\varphi\in C(M)$. Consider the constant function $K:=\|\varphi\|_\infty$. We have proven $\lim_{t\rightarrow +\infty}T^-_t(\pm K)=u_-$. By Proposition \ref{psg} (1), we have
\[T^-_t(-K)\leq T^-_t\varphi\leq T^-_tK.\]
We conclude that
\[\lim_{t\rightarrow+\infty}T^-_t\varphi=u_-.\]
The proof is completed.
\end{proof}
\begin{proposition}\label{long1}
Given $\varphi\in C(M)$, we have the following results.
\begin{itemize}
\item [(1)] If $\varphi$ satisfies the condition ($\ast$') stated in Proposition \ref{*'}, then $T^+_t\varphi(x)$ has a bound independent of $t$ and $\varphi$;
\item [(2)] If ($\ast$') does not hold, then there are two possible cases:
\begin{itemize}
\item [(a)] If there is $x_0$ such that $\varphi(x_0)>u_-(x_0)$, then $T^+_t\varphi(x)$ tends to $+\infty$ uniformly as $t$ tends to infinity;
\item [(b)] If $\varphi<u_-$, then $T^+_t\varphi(x)$ tends to $-\infty$ uniformly as $t$ tends to infinity.
\end{itemize}
\end{itemize}
\end{proposition}
\begin{proof}
The proof of (1) was given in Proposition \ref{*'}. It remains to prove (2).
We only show (a) here; the proof of (b) is similar. We first show that $T^+_t\varphi$ is bounded from below. Let $y_0\in M$ be a maximum point of $\varphi-u_-$. Define $\varphi_0(x):=\varphi(x)-(\varphi(y_0)-u_-(y_0))$, then $\varphi_0(x)$ satisfies the condition ($\ast$'), thus $T^+_t\varphi_0$ is uniformly bounded. Note that $\varphi_0<\varphi$. By Proposition \ref{psg} (1), we have $T^+_t\varphi\geq T^+_t\varphi_0$.
We argue by contradiction. Assume there is a sequence $t_n\rightarrow+\infty$ such that $T^+_{t_n}\varphi(x)$ is bounded by a constant $C$. For any given $t_n$, the function $v_n(x):=T^+_{t_n}\varphi(x)$ is a bounded continuous function defined on $M$. We are going to show $\varphi(x_0)\leq T^-_{t_n} v_n(x_0)$. If not, then $\varphi(x_0)>T^-_{t_n} v_n(x_0)$ holds. Let $\gamma:[0,t_n]\rightarrow M$ with $\gamma(t_n)=x_0$ be a minimizer of $T^-_{t_n}v_n(x_0)$. Define
\begin{equation*}
F(s):=T^+_{t_n-s}\varphi(\gamma(s))-T^-_sv_n(\gamma(s)),\quad s\in [0,t_n].
\end{equation*}
By assumption $F(t_n)>0$. There are two possible cases:
\noindent Case (1): There is $\sigma\in [0,t_n)$ such that $F(\sigma)=0$ and $F(s)>0$ for all $s\in (\sigma,t_n]$. By definition
\begin{equation*}
T^-_s v_n(\gamma(s))=T^-_\sigma v_n(\gamma(\sigma))+\int_\sigma^s L(\gamma(\tau),T^-_\tau v_n(\gamma(\tau)),\dot \gamma(\tau))d\tau,
\end{equation*}
and
\begin{equation*}
T^+_{t_n-\sigma}\varphi(\gamma(\sigma))\geq T^+_{t_n-s}\varphi(\gamma(s))-\int_\sigma^s L(\gamma(\tau),T^+_{t_n-\tau} \varphi(\gamma(\tau)),\dot \gamma(\tau))d\tau,
\end{equation*}
which implies
\begin{equation*}
F(s)\leq F(\sigma)+\lambda\int_\sigma^s F(\tau)d\tau.
\end{equation*}
By the Gronwall inequality we conclude $F(s)\equiv 0$ for all $s\in[\sigma,t_n]$, which contradicts $F(t_n)>0$.
\noindent Case (2): For each $s\in[0,t_n]$, we have $F(s)>0$.
By definition
\begin{equation*}
\begin{aligned}
T^-_sv_n(\gamma(s))=&v_n(\gamma(0))+\int_0^s L(\gamma(\tau),T^-_\tau v_n(\gamma(\tau)),\dot \gamma(\tau))d\tau \\
=&T^+_{t_n}\varphi(\gamma(0))+\int_0^s L(\gamma(\tau),T^-_\tau v_n(\gamma(\tau)),\dot \gamma(\tau))d\tau \\
\geq & T^+_{t_n-s}\varphi(\gamma(s))-\int_0^s L(\gamma(\tau),T^+_{t_n-\tau} \varphi(\gamma(\tau)),\dot \gamma(\tau))d\tau\\
&+\int_0^s L(\gamma(\tau),T^-_\tau v_n(\gamma(\tau)),\dot \gamma(\tau))d\tau,
\end{aligned}
\end{equation*}
which implies that
\begin{equation*}
F(s)\leq \lambda\int_0^s F(\tau)d\tau,
\end{equation*}
and thus $F(s)\equiv 0$, which contradicts $F(t_n)>0$. Therefore $\varphi(x_0)\leq T^-_{t_n} v_n(x_0)$.
By Proposition \ref{psg} (1) we have
\begin{equation*}
T^-_{t_n}(-C)(x)\leq T^-_{t_n} v_n(x)\leq T^-_{t_n} C(x),\quad \forall x\in M.
\end{equation*}
By Proposition \ref{Ttu-}, the limit $\lim_{t\rightarrow +\infty}T^-_tC(x)$ (resp. $\lim_{t\rightarrow +\infty}T^-_t(-C)(x)$) exists, and equals $u_-$. We conclude that $\lim_{n\rightarrow +\infty}T^-_{t_n} v_n(x)=\lim_{t\rightarrow +\infty}T^-_t (\pm C)(x)=u_-(x)$. Therefore
\begin{equation*}
u_-(x_0)=\lim_{n\rightarrow +\infty}T^-_{t_n} v_n(x_0)\geq \varphi(x_0)>u_-(x_0),
\end{equation*}
which gives a contradiction.
\end{proof}
\section{Higher regularity cases}\label{Regu}
In this section, we consider cases in which the Hamiltonian enjoys higher regularity. In Lemmas \ref{u-=u+} and \ref{u-C1}, we prove that both functions of the conjugate pair $(u_-,u_+)$ are $C^1$ on the corresponding projected Aubry set $\mathcal I_{(u_-,u_+)}$ with the same derivative when $H$ is strictly convex in $p$ and locally Lipschitz. In Lemma \ref{HC1}, we prove further properties of the calibrated curve and the conjugate pair when $H$ is of class $C^1$ and $C^{1,1}$. We first replace the coercivity of the Hamiltonian $H$ in $p$ by superlinearity in $p$. For a given conjugate pair $(u_-,u_+)$, by Proposition \ref{u<LLip}, both of them are Lipschitz continuous.
Choose a constant $R$ with
\begin{equation*}
R\geq \max\{\|\partial_x u_-\|_\infty,\|\partial_x u_+\|_\infty\},
\end{equation*}
and modify $H$ via
\begin{equation*}
H_R(x,u,p):=H(x,u,p)+\max\{\|p\|^2-R^2,0\}.
\end{equation*}
Then $u_-$ and $u_+$ are viscosity solutions of
\begin{equation*}
H_R(x,u(x),\partial_x u(x))=0,\quad F_R(x,u(x),\partial_x u(x))=0
\end{equation*}
respectively. It is obvious that $H_R$ satisfies (SL). Thus, without any loss of generality, we assume $H(x,u,p)$ satisfies (SL). By \cite[Proposition 2.7]{gen}, if $H(x,u,p)$ satisfies (C)(STC)(SL)(LIP), then the corresponding Lagrangian satisfies (CON)(SL)(LIP) and
\begin{itemize}
\item [\textbf{(CD):}] $L(x,u,\dot x)$ is continuous and differentiable in $\dot x$. The map $(x,u,\dot x)\mapsto \partial_{\dot x}L(x,u,\dot x)$ is continuous.
\end{itemize}
In addition, if $H(x,u,p)$ satisfies (LL), by \cite[Proposition 7.2 (iv)]{Fa}, the corresponding Lagrangian satisfies (CON)(SL)(LIP) and
\begin{itemize}
\item [\textbf{(LLD):}] the map $(x,\dot x)\mapsto L(x,u,\dot x)$ is locally Lipschitz continuous for any $u$. $L(x,u,\dot x)$ is differentiable in $\dot x$, and the map $(x,u,\dot x)\mapsto \partial_{\dot x}L(x,u,\dot x)$ is continuous.
\end{itemize}
\begin{lemma}\label{til1}
Given $a>0$, assume $u\prec L$ and let $\gamma:[-a,a]\rightarrow M$ be a $(u,L,0)$-calibrated curve. Then $\gamma$ is of class $C^1$ and $u$ is differentiable at $\gamma(0)$.
\end{lemma}
The proof of Lemma \ref{til1} is similar to that of \cite[Lemma 4.3]{Wa3}. We give it in the appendix for the reader's convenience.
\begin{lemma}\label{u-=u+}
Given a conjugate pair $(u_-,u_+)$, for $x\in\mathcal I_{(u_-,u_+)}$, there exists a $C^1$ curve $\gamma:(-\infty,\infty)\rightarrow M$ with $\gamma(0)=x$ such that $u_-(\gamma(t))=u_+(\gamma(t))$, and
\begin{equation}\label{upm}
u_\pm(\gamma(t'))-u_\pm(\gamma(t))=\int_t^{t'}L(\gamma(s),u_\pm(\gamma(s)),\dot \gamma(s))ds,\quad \forall t\leq t'\in\mathbb R.
\end{equation}
In addition, $u_\pm$ are differentiable at $x$ with the same derivative.
\end{lemma}
\begin{proof}
For $x\in\mathcal I_{(u_-,u_+)}$, there is a $(u_-,L,0)$-calibrated curve $\gamma_-:(-\infty,0]\rightarrow M$ with $\gamma_-(0)=x$ and a $(u_+,L,0)$-calibrated curve $\gamma_+:[0,+\infty)\rightarrow M$ with $\gamma_+(0)=x$. Concatenating these two curves, we obtain a curve $\gamma:(-\infty,\infty)\rightarrow M$ with $\gamma(0)=x$. We then prove $u_+(\gamma_+(s))=u_-(\gamma_+(s))$; the proof of $u_+(\gamma_-(s))=u_-(\gamma_-(s))$ is similar. We argue by contradiction. Assume there exists $s_0\in (0,+\infty)$ such that $u_+(\gamma_+(s_0))<u_-(\gamma_+(s_0))$. Define
\begin{equation*}
F(\tau):=u_-(\gamma_+(\tau))-u_+(\gamma_+(\tau)).
\end{equation*}
Then $F(0)=u_-(x)-u_+(x)=0$. By assumption $F(s_0)>0$. Then there is $\tau_0\in [0,s_0)$ such that $F(\tau_0)=0$ and $F(\tau)>0$ for all $\tau\in (\tau_0,s_0]$. By definition we have
\begin{equation*}
u_+(\gamma_+(\tau))-u_+(\gamma_+(\tau_0))=\int_{\tau_0}^\tau L(\gamma_+(s),u_+(\gamma_+(s)),\dot \gamma_+(s))ds,
\end{equation*}
\begin{equation*}
u_-(\gamma_+(\tau))-u_-(\gamma_+(\tau_0))\leq \int_{\tau_0}^\tau L(\gamma_+(s),u_-(\gamma_+(s)),\dot \gamma_+(s))ds.
\end{equation*}
Therefore $F(\tau)\leq \lambda\int _{\tau_0}^\tau F(s)ds$. By the Gronwall inequality we get $F(\tau)\equiv 0$ for $\tau\in [\tau_0,s_0]$, which contradicts $F(s_0)>0$. Therefore $\gamma$ is a $(u_\pm,L,0)$-calibrated curve defined on the whole $\mathbb R$, i.e. (\ref{upm}) holds. By Lemma \ref{til1}, $\gamma$ is a $C^1$ curve and $\partial_xu_\pm(x)=\partial_{\dot x}L(x,u_\pm(x),\dot \gamma(0))$.
\end{proof}
\begin{lemma}\label{u-C1}
The conjugate pair $u_-$ and $u_+$ are both of class $C^1$ on $\mathcal I_{(u_-,u_+)}$.
\end{lemma}
\begin{proof}
By Proposition \ref{u<LLip}, $u_-$ is Lipschitz continuous. By \cite[Theorem 5.3.7]{cann}, if $H(x,u,p)$ satisfies (LL)(STC), then $u_-$ is locally semiconcave.
Similarly, since $-u_+$ is a viscosity solution of (\ref{E'}), it is also locally semiconcave; equivalently, $u_+$ is locally semiconvex. Then by \cite[Theorem 3.3.7]{cann}, for $x\in\mathcal I_{(u_-,u_+)}$, the conjugate pair $u_-$ and $u_+$ are both of class $C^1$.
\end{proof}
\begin{lemma}\label{HC1}
Assume $H:T^*M\times\mathbb R\rightarrow\mathbb R$ satisfies (STC)(SL)(LIP). Then for a given conjugate pair $(u_-,u_+)$, the following properties hold:
\begin{itemize}
\item [(1)] If $H(x,u,p)$ is of class $C^1$, for $x\in\mathcal I_{(u_-,u_+)}$, there exists a $C^1$ curve $\gamma:(-\infty,\infty)\rightarrow M$ with $\gamma(0)=x$ such that $u_-(\gamma(t))=u_+(\gamma(t))$ for all $t\in\mathbb R$, and
\[x(t):=\gamma(t),\quad u(t):=u_\pm(\gamma(t)),\quad p(t):=\partial_{\gamma(t)}u_\pm(\gamma(t))\]
satisfy the contact Hamiltonian equations (\ref{CH}).
\item [(2)] In addition, if $H(x,u,p)$ is of class $C^{1,1}$, the conjugate pair $u_-$ and $u_+$ are of class $C^{1,1}$ on $\mathcal I_{(u_-,u_+)}$. Equivalently, the projection $\pi:T^*M\times\mathbb R\rightarrow M$ induces a bi-Lipschitz map between $\mathcal I_{(u_-,u_+)}$ and $\tilde{\mathcal I}_{(u_-,u_+)}$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) By Lemma \ref{u-=u+}, the $C^1$ curve $\gamma$ exists and $\gamma(t)\in\mathcal I_{(u_-,u_+)}$ for all $t$. Since $u_-$ is a viscosity solution of (\ref{E}) and differentiable at $\gamma(t)$, we have
\[H(\gamma(t),u_-(\gamma(t)),\partial_{\gamma(t)}u_-(\gamma(t)))=0.\]
Denote $u(\gamma(t))=u_\pm(\gamma(t))$. By definition we have
\begin{equation*}
u(\gamma(t'))-u(\gamma(t))=\int_t^{t'}L(\gamma(s),u(\gamma(s)),\dot \gamma(s))ds.
\end{equation*}
Dividing by $t'-t$ on both sides and letting $t'\rightarrow t^+$, we have
\begin{equation*}
\partial_{\gamma(t)}u(\gamma(t))\cdot \dot \gamma(t)=L(\gamma(t),u(\gamma(t)),\dot \gamma(t)).
\end{equation*}
From the definition of the Legendre transformation, one easily gets
\begin{equation*}
\dot \gamma(t)=\partial_pH(\gamma(t),u(\gamma(t)),\partial_{\gamma(t)}u(\gamma(t))),\quad \partial_{\gamma(t)}u(\gamma(t))=\partial_{\dot x}L(\gamma(t),u(\gamma(t)),\dot \gamma(t)).
\end{equation*}
By a direct calculation, we get
\begin{equation*}
\dot{u}(\gamma(t))=\partial_{\gamma(t)}u(\gamma(t))\cdot \dot \gamma(t)=\partial_{\gamma(t)}u(\gamma(t))\cdot\partial_pH(\gamma(t),u(\gamma(t)),\partial_{\gamma(t)}u(\gamma(t))).
\end{equation*}
By Lemma \ref{u-C1}, $u$ is of class $C^1$ along $\gamma$. Denote $\bar L(x,v)=L(x,u(x),v)$; then it is of class $C^1$ along $\gamma$. By \cite[Theorem 2.1 (i)]{Clarke}, for almost all $t$, the calibrated curve $\gamma$ satisfies
\begin{equation*}
\frac{d}{dt}(\partial_{\dot x}\bar L(\gamma(t),\dot \gamma(t)))=\partial_x\bar L(\gamma(t),\dot \gamma(t)),
\end{equation*}
which implies
\begin{equation*}
\begin{aligned}
\frac{d}{dt}(\partial_{\gamma(t)}u(\gamma(t)))=&-\partial_x H(\gamma(t),u(\gamma(t)),\partial_{\gamma(t)}u(\gamma(t))) \\
&-\partial_u H(\gamma(t),u(\gamma(t)),\partial_{\gamma(t)}u(\gamma(t)))\partial_{\gamma(t)}u(\gamma(t)),\quad a.e.
\end{aligned}
\end{equation*}
\noindent (2) If $H$ is of class $C^{1,1}$, it satisfies the assumptions in \cite[Theorem 5.3.6]{cann}, so $u_-$ is a semiconcave function with a linear modulus. Similarly, $-u_+$ is also a semiconcave function with a linear modulus; equivalently, $u_+$ is a semiconvex function with a linear modulus. By \cite[Theorem 3.3.7]{cann}, the conjugate pair $u_-$ and $u_+$ are both of class $C^{1,1}$ on $\mathcal I_{(u_-,u_+)}$.
\end{proof}
\noindent {\bf Acknowledgements:} The authors would like to thank Kai Zhao for showing us the example in Remark \ref{limsup}, and Professors Wei Cheng and Kaizhi Wang for many useful discussions. Lin Wang is supported by NSFC Grant No. 11790273, 11631006. Jun Yan is supported by NSFC Grant No. 11631006, 11790273.
\appendix
\section{One dimensional variational problems}\label{preli}
The following results are useful in the proof of the existence and regularity of the minimizers in (\ref{T-}); they all come from \cite{One} and \cite{gam}. The results in the present section were originally proved in the Euclidean space $\mathbb R^n$; one can easily generalize them to the Riemannian manifold $M$.
\begin{lemma}\label{TM}
Let $J$ be a bounded interval. Assume that $F(t,x,\dot x)$ is lower semicontinuous, convex in $\dot x$, and bounded from below. Then the integral functional
\begin{equation*}
\mathcal F(\gamma)=\int_J F(s,\gamma(s),\dot \gamma(s))ds
\end{equation*}
is sequentially weakly lower semicontinuous in $W^{1,1}(J,M)$.
\end{lemma}
\begin{proposition}\label{Tonelli}
Let $M$ be a compact connected smooth manifold. Denote by $I=(a,b)\subset \mathbb R$ a bounded interval, and let $F(t,x,\dot{x})$ be a Lagrangian defined on $I\times TM$. Assume $F$ satisfies
\begin{itemize}
\item [(i)] $F(t,x,\dot x)$ is measurable in $t$ for all $(x,\dot x)$, and continuous in $(x,\dot x)$ for almost every $t$;
\item [(ii)] $F(t,x,\dot{x})$ is convex in $\dot{x}$;
\item [(iii)] $F(t,x,\dot{x})$ is superlinear in $\dot{x}$.
\end{itemize}
\noindent Then for any given boundary conditions $x_0$, $x_1\in M$, there exists a minimizer of $\int_I F(t,x,\dot{x})dt$ in $\{x(t)\in W^{1,1}([a,b],M):\ x(a)=x_0,\ x(b)=x_1\}$.
\end{proposition}
\subsection{$\Gamma$-convergence}
\begin{definition}
Let $X$ be a topological space. Given a sequence $F_n:X\rightarrow [-\infty,+\infty]$, we define
\begin{equation*}
(\Gamma-\liminf_{n\rightarrow +\infty} F_n)(x)=\sup_{U\in \mathcal N(x)}\liminf_{n\rightarrow +\infty}\inf_{y\in U} F_n(y),
\end{equation*}
\begin{equation*}
(\Gamma-\limsup_{n\rightarrow +\infty} F_n)(x)=\sup_{U\in \mathcal N(x)}\limsup_{n\rightarrow +\infty}\inf_{y\in U} F_n(y).
\end{equation*}
Here the family of neighbourhoods $\mathcal N(x)$ can be replaced by a topological basis. When the superior limit equals the inferior limit, we can define the $\Gamma$-limit.
\end{definition}
\begin{definition}
Let $X$ be a topological space. For every function $F:X\rightarrow [-\infty,+\infty]$, the lower semicontinuous envelope $sc^- F$ of $F$ is defined for every $x\in X$ by
\begin{equation*}
(sc^-F)(x)=\sup_{G\in\mathcal G(F)} G(x),
\end{equation*}
where $\mathcal G(F)$ is the set of all lower semicontinuous functions $G$ on $X$ such that $G(y)\leq F(y)$ for every $y\in X$.
\end{definition}
\begin{lemma}\label{inc}
If $F_n$ is an increasing sequence, then
\begin{equation*}
\Gamma-\lim_{n\rightarrow +\infty}F_n=\lim_{n\rightarrow +\infty} sc^- F_n=\sup_{n\in\mathbb N} sc^- F_n.
\end{equation*}
\end{lemma}
\begin{remark}\label{rega}
If $F_n$ is an increasing sequence of lower semicontinuous functions which converges pointwise to a function $F$, then $F$ is lower semicontinuous and $F_n$ $\Gamma$-converges to $F$ by Lemma \ref{inc}.
\end{remark}
\begin{lemma}\label{inf}
If the sequence $F_n$ $\Gamma$-converges in $X$ to $F$, and there is a compact set $K\subset X$ such that
\begin{equation*}
\inf_{x\in X} F_n(x)=\inf_{x\in K}F_n(x),
\end{equation*}
then $F$ attains its minimum in $X$, and
\begin{equation}\label{Fn}
\min_{x\in X}F(x)=\lim_{n\rightarrow +\infty}\inf_{x\in X} F_n(x).
\end{equation}
\end{lemma}
\subsection{Regularity of minimizers in $t$-dependent cases}\label{A.2}
The following results focus on the regularity of minimizers. Consider the following one dimensional variational problem
\begin{equation}\label{P}\tag{P}
I(\gamma):=\int_a^b F(t,\gamma(t),\dot{\gamma}(t))dt+\Psi(\gamma(a),\gamma(b)),
\end{equation}
where $\gamma$ is taken in the class of absolutely continuous curves, and $\Psi$ takes values in $\mathbb R\cup \{+\infty\}$ and encodes the constraints on the two endpoints of the curve $\gamma$.
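For concreteness, let us record how the two situations used in this paper fit into the form (\ref{P}); this remark is only an illustration and is not taken from the references above. The fixed endpoint problem of Proposition \ref{Tonelli} corresponds to the constraint
\begin{equation*}
\Psi(p,q)=
\begin{cases}
0, & (p,q)=(x_0,x_1),\\
+\infty, & \text{otherwise},
\end{cases}
\end{equation*}
while the minimizing problem in (\ref{T-}), in which only the right endpoint $\gamma(b)=x$ is prescribed and the left endpoint is penalized by the initial datum $\varphi$, corresponds to
\begin{equation*}
\Psi(p,q)=
\begin{cases}
\varphi(p), & q=x,\\
+\infty, & \text{otherwise}.
\end{cases}
\end{equation*}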
In the following, we are concerned with a minimizer of the above integral functional, which is denoted by $\gamma_*\in W^{1,1}([a,b],M)$. Due to the Lavrentiev phenomenon, the minimizer may not be Lipschitz continuous even if $F(t,x,\dot{x})$ is convex and superlinear with respect to $\dot{x}$. One can refer to \cite{ball} for various counterexamples. Thanks to \cite{Bet}, the Lipschitz regularity of the minimizers still holds in the contact setting with $F:=L(x,v(x,t),\dot{x})$, where $v(x,t)$ is a Lipschitz function (see Lemma \ref{3.1} (1)). Let us recall the related results in \cite{Bet} as follows.
\begin{itemize}
\item [\textbf{(Lt):}] $F$ takes its values in $\mathbb R$, and there exist a constant $\varepsilon>0$ and a Lebesgue-Borel-measurable map $k:[a,b]\times(0,+\infty)\rightarrow \mathbb R$ such that $k(t,1)\in L^1[a,b]$ and, for a.e. $t\in[a,b]$ and all $\sigma>0$,
\begin{equation*}
|F(t_2,\gamma_*(t),\sigma\dot{\gamma_*}(t))-F(t_1,\gamma_*(t),\sigma\dot{\gamma_*}(t))|\leq k(t,\sigma)|t_2-t_1|,
\end{equation*}
where $t_{1},t_{2}\in[t-\varepsilon,t+\varepsilon]\cap [a,b]$.
\end{itemize}
\begin{lemma}\label{W4.1}
Let $\gamma_*$ be a minimizer of (\ref{P}). If $F$ satisfies (Lt), then there exists an absolutely continuous function $p\in W^{1,1}([a,b],\mathbb R)$ such that for a.e. $t\in[a,b]$, we have
\begin{equation}\label{W}\tag{W}
F\left(t,\gamma_*(t),\frac{\dot{\gamma}_*(t)}{v}\right)v-F(t,\gamma_*(t),\dot{\gamma}_*(t))\geq p(t)(v-1),\quad \forall v>0,
\end{equation}
and $|p'(t)|\leq k(t,1)$ for a.e. $t\in[a,b]$.
\end{lemma}
\begin{lemma}\label{Lip6.3}
Let $\gamma_*$ be a minimizer of (\ref{P}). Assume $F$ is a Borel measurable function.
If $F$ satisfies (Lt) and
\begin{itemize}
\item [(1)] Superlinearity: there exists a function $\Theta:\mathbb R\rightarrow\mathbb R$ satisfying
\begin{equation*}
\lim_{r\rightarrow+\infty}\frac{\Theta(r)}{r}=+\infty,\quad \textrm{and}\quad F(t,\gamma_*(t),\xi)\geq \Theta(\|\xi\|)\quad \textrm{for\ all}\ \xi\in T_{\gamma_*(t)}M.
\end{equation*}
\item [(2)] Local boundedness: there exist $\rho>0$ and $M\geq 0$ such that for a.e. $t\in[a,b]$, we have $F(t,\gamma_*(t),\xi)\leq M$ for all $\xi\in T_{\gamma_*(t)}M$ with $\|\xi\|=\rho$.
\end{itemize}
Then the minimizer $\gamma_*$ is Lipschitz continuous. Moreover, if $\|\dot \gamma_*(t)\|>\rho$, we take $v=\|\dot \gamma_*(t)\|/\rho>1$ in (\ref{W}), then
\begin{equation*}
F\left(t,\gamma_*(t),\rho\frac{\dot{\gamma}_*(t)}{\|\dot \gamma_*(t)\|}\right)\geq \rho\frac{\Theta(\|\dot \gamma_*(t)\|)}{\|\dot \gamma_*(t)\|}-\|p\|_\infty.
\end{equation*}
Therefore $\|\dot \gamma_*(t)\|\leq \max\{\rho,R\}$, where $R:=\inf\{s:\ \rho\frac{\Theta(s)}{s}>M+\|p\|_\infty\}$.
\end{lemma}
\section{Proof of Lemma \ref{3.1}}\label{apbb}
\noindent \textbf{Proof of Item (1).} According to (LIP) and the Lipschitz continuity of $v(x,t)$ on $M\times [0,T]$, for each $\tau\in [0,t]$, the map $s\mapsto L(\gamma(\tau),v(\gamma(\tau),s),\dot{\gamma}(\tau))$ satisfies the condition (Lt), where $k\equiv\lambda\|\partial_t v(x,t)\|_\infty$. By Lemma \ref{Lip6.3}, for every $(x,t)\in M\times[0,T]$, the minimizers of $u(x,t)$ are Lipschitz continuous. However, the Lipschitz constant depends on the end point $(x,t)$. We are now going to show that for $(x',t')$ sufficiently close to $(x,t)$, the Lipschitz constant of the minimizers of $u(x',t')$ is independent of $(x',t')$.
For any $r>0$, if $d(x,x')\leq r$ and $|t-t'|\leq r/2$, where $t\geq r>0$, we denote by $\gamma(s;x,t)$ and $\gamma(s;x',t')$ the minimizers of $u(x,t)$ and $u(x',t')$ respectively; then we have \begin{equation*} \begin{aligned} u(x',t')=&\varphi(\gamma(0;x',t'))+\int_0^{t'}L(\gamma(s;x',t'),v(\gamma(s;x',t'),s),\dot{\gamma}(s;x',t'))ds \\ \leq& \varphi(\gamma(0;x,t))+\int_0^{t-r}L(\gamma(s;x,t),v(\gamma(s;x,t),s),\dot{\gamma}(s;x,t))ds\\ &+\int_{t-r}^{t'}L(\alpha(s),v(\alpha(s),s),\dot \alpha(s))ds, \end{aligned} \end{equation*} where $\alpha:[t-r,t']\rightarrow M$ is a geodesic connecting $\gamma(t-r;x,t)$ and $x'$ with constant speed. Noticing \begin{equation*} \|\dot \alpha\|\leq \frac{1}{t'-(t-r)}\bigl(d(\gamma(t-r;x,t),x)+d(x,x')\bigr)\leq 2\left(\frac{1}{r}\int_{t-r}^t\|\dot\gamma(s;x,t)\|ds+1\right), \end{equation*} we obtain that \begin{equation*} \int_0^{t'}L(\gamma(s;x',t'),v(\gamma(s;x',t'),s),\dot{\gamma}(s;x',t'))ds \end{equation*} has a bound depending only on $(x,t)$ and $r$. By (SL), there exists a constant $M(x,t,r)>0$ such that \begin{equation*} \int_0^{t'}\|\dot{\gamma}(s;x',t')\|ds\leq M(x,t,r), \end{equation*} where $t'\geq t-r/2>0$. Since the superlinear lower bound also controls $\int_0^{t'}\Theta(\|\dot{\gamma}(s;x',t')\|)ds$, the derivatives $\dot{\gamma}(s;x',t')$ are equi-integrable. Therefore, for $(x',t')$ sufficiently close to $(x,t)$, there exist a constant $R(x,t,r)>0$ and $s_0\in[0,t']$ such that $\|\dot \gamma(s_0;x',t')\|\leq R(x,t,r)$. By Lemma \ref{W4.1}, there exists an absolutely continuous function $p(t;x',t')$ satisfying $|p'(t;x',t')|\leq \lambda\|\partial_tv(x,t)\|_\infty$ such that \begin{equation*} \begin{aligned} L(&\gamma(s;x',t'),v(\gamma(s;x',t'),s),\frac{\dot \gamma(s;x',t')}{\theta})\theta \\ &-L(\gamma(s;x',t'),v(\gamma(s;x',t'),s),\dot \gamma(s;x',t'))\geq p(s;x',t')(\theta-1),\quad \forall \theta>0. \end{aligned} \end{equation*} Taking $\theta=2$ and $s=s_0$ gives an upper bound on $p(s_0)$; taking $\theta=1/2$ and $s=s_0$ gives a lower bound on $p(s_0)$.
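The two-sided bound on $p(s_0)$ extracted this way can be sanity-checked numerically in a model case. The sketch below is purely illustrative: it uses the quadratic integrand $F(v)=|v|^2/2$ (not the contact Lagrangian of this paper) and the Erdmann-type choice $p=-|v|^2/2$, verifies the Weierstrass-type inequality (\ref{W}) on a grid of $\theta$, and recovers the upper and lower bounds for $p$ from $\theta=2$ and $\theta=1/2$:

```python
import numpy as np

# Model case (illustration only, not the paper's F): quadratic integrand
# F(v) = |v|^2/2, for which p = -|g|^2/2 satisfies the Weierstrass-type
# inequality  F(g/theta)*theta - F(g) >= p*(theta - 1)  for all theta > 0.
def gap(g, theta, p):
    F = lambda w: 0.5 * w ** 2
    return F(g / theta) * theta - F(g) - p * (theta - 1.0)

g = 3.0
p = -0.5 * g ** 2
thetas = np.linspace(0.05, 20.0, 400)
assert all(gap(g, th, p) >= -1e-10 for th in thetas)  # (W) holds
assert abs(gap(g, 1.0, p)) < 1e-12                    # equality at theta = 1

# As in the proof: theta = 2 bounds p from above, theta = 1/2 from below.
upper = gap(g, 2.0, 0.0)         # p <= 2*F(g/2) - F(g)
lower = -2.0 * gap(g, 0.5, 0.0)  # p >= -2*(F(2g)/2 - F(g))
assert lower <= p <= upper
```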
Since $p'(t)$ is bounded, we finally obtain a bound on $\|p(t)\|_\infty$ which is independent of $(x',t')$. Since $L(x,u,\dot x)$ satisfies (SL), according to Lemma \ref{Lip6.3} with $\rho=1$, we have \begin{equation*} L(\gamma(s;x',t'),v(\gamma(s;x',t'),s),\frac{\dot \gamma(s;x',t')}{\|\dot \gamma(s;x',t')\|})\geq \frac{\Theta(\|\dot \gamma(s;x',t')\|)}{\|\dot \gamma(s;x',t')\|}-\|p(s;x',t')\|_\infty. \end{equation*} Therefore, for $(x',t')$ sufficiently close to $(x,t)$, the minimizers $\gamma(s;x',t')$ have a Lipschitz constant independent of $(x',t')$. \vspace{1ex} \noindent \textbf{Proof of Item (2).} We first show that $u(x,t)$ is locally Lipschitz in $x$. For any $\delta>0$, given $(x_0,t)\in M\times[\delta,T]$ and $x$, $x'\in B(x_0,\delta/2)$, denote by $d_0=d(x,x')\leq \delta$ the Riemannian distance between $x$ and $x'$; then \begin{equation*} \begin{aligned} u(x',t)-u(x,t)\leq& \int_{t-d_0}^{t}L(\alpha(s),v(\alpha(s),s),\dot \alpha(s))ds \\ &-\int_{t-d_0}^{t}L(\gamma(s;x,t),v(\gamma(s;x,t),s),\dot{\gamma}(s;x,t))ds, \end{aligned} \end{equation*} where $\gamma(s;x,t)$ is a minimizer of $u(x,t)$ and $\alpha:[t-d_0,t]\rightarrow M$ is a geodesic connecting $\gamma(t-d_0;x,t)$ and $x'$ with constant speed. By Lemma \ref{3.1} (1), if $x\in B(x_0,\delta/2)$, the bound of $\|\dot \gamma(s;x,t)\|$ depends only on $x_0$ and $\delta$. Noticing that \begin{equation*} \|\dot \alpha(s)\|\leq \frac{d(\gamma(t-d_0;x,t),x')}{d_0}\leq \frac{d(\gamma(t-d_0;x,t),x)}{d_0}+1, \end{equation*} and that $d(\gamma(t-d_0;x,t),x)\leq \int_{t-d_0}^t\|\dot \gamma(s;x,t)\|ds$, the bound of $\|\dot \alpha(s)\|$ also depends only on $x_0$ and $\delta$. Exchanging the roles of $(x,t)$ and $(x',t)$, one obtains $|u(x,t)-u(x',t)|\leq J_1d(x,x')$, where $J_1$ depends only on $x_0$ and $\delta$. We conclude that for $t\in(0,T]$, the value function $u(\cdot,t)$ is (locally) Lipschitz on $M$. We are now going to show the local Lipschitz continuity of $u(x,t)$ in $t$.
For given $t_0\geq 3\delta/2$ and $t,t'\in[t_0-\delta/2,t_0+\delta/2]$, without loss of generality we assume $t'>t$; then \begin{equation*} \begin{aligned} u(x,t')-u(x,t)\leq& u(\gamma(t;x,t'),t)-u(x,t) \\ &+\int_t^{t'}L(\gamma(s;x,t'),v(\gamma(s;x,t'),s),\dot \gamma(s;x,t'))ds, \end{aligned} \end{equation*} where the bound of $\|\dot \gamma(s;x,t')\|$ depends only on $t_0$ and $\delta$. We have shown that for $t\geq \delta$, the following holds \begin{equation*} u(\gamma(t;x,t'),t)-u(x,t)\leq J_1d(\gamma(t;x,t'),x)\leq J_1\int_t^{t'}\|\dot \gamma(s;x,t')\|ds\leq J_2(t'-t). \end{equation*} Thus, $u(x,t')-u(x,t)\leq J_3(t'-t)$, where $J_3$ depends only on $t_0$ and $\delta$. The case $t'<t$ is similar. We conclude the local Lipschitz continuity of $u(x,\cdot)$ on $(0,T]$. \vspace{1ex} \noindent \textbf{Proof of Item (3).} We first prove that $u(x,t)$ is continuous at $t=0$. We take the initial functions in (\ref{u}) as $\varphi_1$ and $\varphi_2$, and denote by $u_1(x,t)$ and $u_2(x,t)$ the corresponding value functions respectively. Since $v(x,t)$ is fixed, by the non-expansiveness of the Lax-Oleinik semigroup, we have $\|u_1(x,t)-u_2(x,t)\|_\infty\leq \|\varphi_1-\varphi_2\|_\infty$. Thus, without loss of generality, we assume the initial function to be Lipschitz continuous in the following discussion. Taking the constant curve $\alpha(t)\equiv x$ as a competitor and letting $\gamma$ be a minimizer of $u(x,t)$, we obtain \begin{equation*} u(x,t)=\varphi(\gamma(0))+\int_0^tL(\gamma(s),v(\gamma(s),s),\dot \gamma(s))ds\leq \varphi(x)+\int_0^tL(x,v(x,s),0)ds, \end{equation*} so $\limsup_{t\rightarrow 0^+}u(x,t)\leq \varphi(x)$.
By (SL), there exists a constant $C\in\mathbb R$ such that \begin{equation*} \int_0^tL(\gamma(\tau),v(\gamma(\tau),\tau),\dot \gamma(\tau))d\tau\geq \int_0^t\|\partial_x \varphi\|_\infty\|\dot \gamma(\tau)\|d\tau+Ct\geq \|\partial_x \varphi\|_\infty d(\gamma(0),\gamma(t))+Ct, \end{equation*} which implies that \begin{equation*} \int_0^tL(\gamma(\tau),v(\gamma(\tau),\tau),\dot \gamma(\tau))d\tau+\varphi(\gamma(0))\geq \varphi(x)+Ct. \end{equation*} Therefore $\liminf_{t\rightarrow 0^+}u(x,t)\geq \varphi(x)$. Combining with Lemma \ref{3.1} (2), we conclude that $u(x,t)$ is continuous on $M\times [0,T]$. We are now going to show that the value function $u(x,t)$ is a continuous viscosity solution of (\ref{cu}). We first show that $u(x,t)$ is a viscosity subsolution. Let $V$ be an open subset of $M$ and $\phi:V\times[0,T]\rightarrow\mathbb{R}$ be a $C^1$ test function such that $u(x,t)-\phi(x,t)$ attains its maximum at $(x_0,t_0)$. Equivalently, we have $\phi(x_0,t_0)-\phi(x,t)\leq u(x_0,t_0)-u(x,t)$ for all $(x,t)\in V\times [0,T]$. Given a constant $\delta>0$, we take a $C^1$ curve $\gamma:[t_0-\delta,t_0+\delta]\rightarrow M$ taking its values in $V$ and satisfying $\gamma(t_0)=x_0$ and $\dot{\gamma}(t_0)=\xi$. For $t\in [t_0-\delta,t_0]$, we have \begin{equation*} \phi(x_0,t_0)-\phi(\gamma(t),t)\leq u(x_0,t_0)-u(\gamma(t),t)\leq \int_{t}^{t_0}L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s)){d}s. \end{equation*} Dividing by $t_0-t$ on both sides of the above inequality, we have \begin{equation*} \frac{\phi(x_0,t_0)-\phi(\gamma(t),t)}{t_0-t}\leq \frac{1}{t_0-t} \int_{t}^{t_0}L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s)){d}s. \end{equation*} Letting $t\rightarrow t_0^-$, we have $\phi_t(x_0,t_0)+\phi_x(x_0,t_0)\cdot \xi\leq L(x_0,v(x_0,t_0),\xi)$. Since $\xi$ is arbitrary, by the definition of the Hamiltonian via the Legendre transformation, we have \begin{equation*} \phi_t(x_0,t_0)+H(x_0,v(x_0,t_0),\phi_x(x_0,t_0))\leq 0. \end{equation*} Then we show that $u(x,t)$ is a supersolution.
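Both viscosity inequalities rest on the Legendre duality $H(x,u,p)=\sup_{\xi}(\langle p,\xi\rangle-L(x,u,\xi))$. A minimal numerical sketch of this step, in the one-dimensional model case $L(\xi)=\xi^2/2$ (an assumption for illustration only, whose transform is $H(p)=p^2/2$, not the paper's $L$):

```python
import numpy as np

# Legendre duality H(p) = sup_xi (p*xi - L(xi)) in the illustrative
# one-dimensional model L(xi) = xi^2/2 (so that H(p) = p^2/2).
L = lambda xi: 0.5 * xi ** 2
p = 1.7
xis = np.linspace(-10.0, 10.0, 200001)
values = p * xis - L(xis)
H_num = float(np.max(values))

assert abs(H_num - 0.5 * p ** 2) < 1e-6   # numerical sup matches p^2/2
# The subsolution step uses exactly  p*xi - L(xi) <= H(p)  for every xi:
assert np.all(values <= H_num + 1e-12)
```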
Let $\psi:V\times[0,T]\rightarrow\mathbb{R}$ be a $C^1$ test function such that $u(x,t)-\psi(x,t)$ attains its minimum at $(x_0,t_0)$. Equivalently, we have $\psi(x_0,t_0)-\psi(x,t)\geq u(x_0,t_0)-u(x,t)$ for all $(x,t)\in V\times [0,T]$. Let $\gamma$ be a minimizer of $u(x_0,t_0)$; for $t\in[t_0-\delta,t_0]$ with $\gamma(t_0-\delta)\in V$, we have \begin{equation}\label{super} \psi(x_0,t_0)-\psi(\gamma(t),t)\geq u(x_0,t_0)-u(\gamma(t),t) =\int_{t}^{t_0}L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s)){d}s. \end{equation} Let $t\rightarrow t_0^-$. When $t$ is close enough to $t_0$, the restriction $\gamma|_{[t,t_0]}$ is contained in a coordinate neighbourhood of $x_0$. In the local coordinates, we may assume $M$ is an open subset of $\mathbb R^n$. Since $v(x,t)$ is Lipschitz continuous on $M\times[0,T]$, the minimizer $\gamma$ is a Lipschitz curve, so $\|x_0-\gamma(t)\|/|t_0-t|$ is bounded. One can take a sequence $t_n\rightarrow t_0^-$ such that $(x_0-\gamma(t_n))/(t_0-t_n)$ converges to some $\xi'\in\mathbb R^n$. By the continuity of $L(x,u,\dot x)$, $v(x,t)$ and $\gamma$, for any $\varepsilon>0$, there exists a large enough $n\in\mathbb N$ such that \begin{equation*} L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s))\geq L(x_0,v(x_0,t_0),\dot{\gamma}(s))-\varepsilon,\quad \forall s\in[t_n,t_0]. \end{equation*} Since $L(x,u,\cdot)$ is convex, the Jensen inequality implies that \begin{equation*} \begin{aligned} &\frac{1}{t_0-t_n}\int_{t_n}^{t_0}L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s))ds\geq L\left(x_0,v(x_0,t_0),\frac{1}{t_0-t_n}\int_{t_n}^{t_0}\dot{\gamma}(s)ds\right)-\varepsilon \\ &=L\left(x_0,v(x_0,t_0),\frac{x_0-\gamma(t_n)}{t_0-t_n}\right)-\varepsilon. \end{aligned} \end{equation*} Here $\varepsilon$ can be taken arbitrarily small as $n\rightarrow+\infty$.
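The Jensen step just used can be checked numerically for a convex model Lagrangian. The toy choice $L(v)=v^2$ and the velocity profile below are purely illustrative: the time average of $L(\dot\gamma)$ dominates $L$ evaluated at the average velocity.

```python
import numpy as np

# Jensen's inequality for convex L (toy choice L(v) = v**2, illustration only):
#   average of L(gdot)  >=  L(average of gdot)   on a uniform time grid.
L = lambda v: v ** 2
s = np.linspace(0.0, 1.0, 100001)
gdot = np.sin(7.0 * s) + 0.3 * s      # some velocity profile on [0, 1]

avg_of_L = float(np.mean(L(gdot)))    # discrete analogue of (1/T) * integral of L(gdot)
L_of_avg = float(L(np.mean(gdot)))    # L applied to the mean velocity
assert avg_of_L >= L_of_avg           # Jensen: convexity makes this hold
```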
Dividing by $t_0-t_n$ on both sides of (\ref{super}), we have \begin{equation*} \begin{aligned} &\lim_{n\rightarrow +\infty}\frac{\psi(x_0,t_0)-\psi(\gamma(t_n),t_n)}{t_0-t_n} =\psi_t(x_0,t_0)+\psi_x(x_0,t_0)\cdot \xi' \\ &\geq \limsup_{n\rightarrow+\infty} \frac{1}{t_0-t_n}\int_{t_n}^{t_0}L(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s))ds \geq L(x_0,v(x_0,t_0),\xi'). \end{aligned} \end{equation*} Therefore \begin{equation*} \begin{aligned} \psi_t(x_0,t_0)&+H(x_0,v(x_0,t_0),\psi_x(x_0,t_0)) \\ &\geq \psi_t(x_0,t_0)+\psi_x(x_0,t_0)\cdot \xi'- L(x_0,v(x_0,t_0),\xi')\geq 0. \end{aligned} \end{equation*} Finally, we have proven that $u(x,t)$ is a continuous viscosity solution of (\ref{cu}) on $M\times[0,T]$. \qed \section{Proof of Lemma \ref{Bol}}\label{apcc} Define \begin{equation*} \mathbb L^t_n(\gamma)=\varphi(\gamma(0))+\int_0^t L_n(\gamma(s),v(\gamma(s),s),\dot{\gamma}(s))ds. \end{equation*} It is well known that the above functionals admit minimizers in the class of absolutely continuous curves with $\gamma(t)=x$. To prove the existence of the minimizers of $\mathbb L^t(\gamma)$, we define \begin{equation*} \Theta(r):=\inf_{x\in M}\left(\inf_{\|\dot x\|\geq r}L_1(x,0,\dot x)\right),\quad \forall r\geq 0. \end{equation*} Because $L_1$ is obtained by modifying $L$ only on the region $\|p\|>1$, the function $\Theta(r)$ is superlinear, and \begin{equation*} \begin{aligned} \Theta(\|\dot x\|)&\leq L_n(x,0,\dot x)\\ &\leq L_n(x,u,\dot x)+\lambda |u| \\ &\leq L(x,u,\dot x)+\lambda |u|,\quad \forall n\in\mathbb N,\ \forall (x,u,\dot x)\in TM\times \mathbb R. \end{aligned} \end{equation*} For any sequence $\gamma_n$ in $X_t(x)$ with $\sup_n \mathbb L^t(\gamma_n)<+\infty$, we have $\sup_n\int_0^t\Theta(\|\dot \gamma_n\|)ds<+\infty$, so $\gamma_n$ admits a weakly sequentially converging subsequence. By Lemma \ref{TM}, the functionals $\mathbb L^t$ and $\mathbb L^t_n$ are sequentially weakly lower semicontinuous on $X_t(x)$.
Since $X_t(x)$ is a metric space, the functionals $\mathbb L^t$ and $\mathbb L^t_n$ are also lower semicontinuous. Since $\mathbb L^t_n$ is an increasing sequence converging pointwise to $\mathbb L^t$ on $X_t(x)$, and both $\mathbb L^t$ and $\mathbb L^t_n$ are lower semicontinuous, Lemma \ref{inc} yields $\Gamma-\lim_{n\rightarrow +\infty} \mathbb L_n^t=\mathbb L^t$ on $X_t(x)$. If the minimizers $\gamma_n$ of $\mathbb L_n^t$ are contained in a compact subset of $X_t(x)$, then by Lemma \ref{inf} one can obtain that $\mathbb L^t$ admits a minimum on $X_t(x)$. It remains to show that there exists a compact set in $X_t(x)$ containing all the minimizers $\gamma_n$. Consider the set \begin{equation*} K_t(x):=\left\{\gamma\in X_t(x):\ \int_0^t\Theta(\|\dot \gamma\|)ds\leq \|\varphi\|_\infty+\mathbb Kt+2\lambda Kt\right\}, \end{equation*} where $\mathbb K:=\sup_{x\in M}L(x,0,0)$ and $K:=\|v(x,t)\|_\infty$. The set $K_t(x)$ is weakly sequentially compact in $W^{1,1}([0,t],M)$. According to \cite[Theorem 2.13]{One}, $K_t(x)$ is compact in $X_t(x)$. For the constant curve $\gamma_x\equiv x$, we have \begin{equation*} \int_0^t\Theta(\|\dot \gamma_x\|)ds\leq\mathbb L^t_n(\gamma_x)+\lambda Kt\leq \mathbb L^t(\gamma_x)+\lambda Kt\leq \|\varphi\|_\infty+\mathbb Kt+2\lambda Kt, \end{equation*} therefore $\gamma_x$ is contained in $K_t(x)$. Similarly, for the minimizers $\gamma_n$, we have \begin{equation*} \begin{aligned} \int_0^t\Theta(\|\dot \gamma_n\|)ds&\leq\mathbb L^t_n(\gamma_n)+\lambda Kt\leq \mathbb L^t_n(\gamma_x)+\lambda Kt \\ &\leq \mathbb L^t(\gamma_x)+\lambda Kt\leq \|\varphi\|_\infty+\mathbb Kt+2\lambda Kt, \end{aligned} \end{equation*} thus the $\gamma_n$ are all contained in $K_t(x)$. \qed \section{Solution semigroups and weak KAM solutions}\label{swv} Following Fathi \cite{Fa08}, one can extend the definition of weak KAM solutions of equation \eqref{hj} by using Lipschitz calibrated curves instead of $C^1$ curves.
\begin{definition}\label{bws} A function $u_-\in C(M)$ is called a backward weak KAM solution of \eqref{hj} if \begin{itemize} \item [(1)] For each absolutely continuous curve $\gamma:[t',t]\rightarrow M$, we have \begin{equation*} u_-(\gamma(t))-u_-(\gamma(t'))\leq \int_{t'}^{t}L(\gamma(s),u_-(\gamma(s)),\dot \gamma(s))ds. \end{equation*} In this case we say that $u_-$ is dominated by $L$, denoted by $u_-\prec L$. \item [(2)] For each $x\in M$, there exists an absolutely continuous curve $\gamma_-:(-\infty,0]\rightarrow M$ with $\gamma_-(0)=x$ such that \begin{equation*} u_-(\gamma_-(t))-u_-(x)=\int_t^0L(\gamma_-(s),u_-(\gamma_-(s)),\dot \gamma_-(s))ds,\quad \forall t<0. \end{equation*} The curves satisfying the above equality are called $(u_-,L,0)$-calibrated curves. \end{itemize} \end{definition} \begin{definition}\label{fws} Similarly, a function $v_+\in C(M)$ is called a forward weak KAM solution of \eqref{hj} if \begin{itemize} \item [(1)] For each absolutely continuous curve $\gamma:[t',t]\rightarrow M$, we have \begin{equation*} v_+(\gamma(t))-v_+(\gamma(t'))\leq \int_{t'}^{t}L(\gamma(s),v_+(\gamma(s)),\dot \gamma(s))ds. \end{equation*} In this case we say that $v_+$ is dominated by $L$, denoted by $v_+\prec L$. \item [(2)] For each $x\in M$, there exists an absolutely continuous curve $\gamma_+:[0,+\infty)\rightarrow M$ with $\gamma_+(0)=x$ such that \begin{equation*} v_+(\gamma_+(t))-v_+(x)=\int_0^tL(\gamma_+(s),v_+(\gamma_+(s)),\dot \gamma_+(s))ds,\quad \forall t>0. \end{equation*} The curves satisfying the above equality are called $(v_+,L,0)$-calibrated curves. \end{itemize} \end{definition} \begin{proposition}\label{fix} The backward weak KAM solutions defined in Definition \ref{bws} are the fixed points of $T^-_t$. Similarly, the forward weak KAM solutions defined in Definition \ref{fws} are the fixed points of $T^+_t$.
\end{proposition} \begin{proof} According to the definition of the backward weak KAM solutions, for $u_-\in\mathcal S_-$ we have \begin{equation*} u_-(x)=\inf_{\gamma(t)=x}\left\{u_-(\gamma(0))+\int_0^t L(\gamma(\tau),u_-(\gamma(\tau)),\dot{\gamma}(\tau))d\tau\right\}, \end{equation*} where the infimum is taken in the class of absolutely continuous curves. We show $u_-(x)\leq T^-_tu_-(x)$; the opposite direction is similar. We argue by contradiction. Assume \[u_-(x)>T^-_tu_-(x).\] Let $\gamma:[0,t]\rightarrow M$ with $\gamma(t)=x$ be a minimizer of $T^-_tu_-(x)$. Define \begin{equation*} F(\tau):=u_-(\gamma(\tau))-T^-_\tau u_-(\gamma(\tau)). \end{equation*} Since $F(t)>0$ and $F(0)=0$, there is $s_0\in [0,t)$ such that $F(s_0)=0$ and $F(s)>0$ for $s\in(s_0,t]$. By definition \begin{equation*} T^-_su_-(\gamma(s))=T^-_{s_0}u_-(\gamma(s_0))+\int_{s_0}^sL(\gamma(\tau),T^-_\tau u_-(\gamma(\tau)),\dot\gamma(\tau))d\tau, \end{equation*} and \begin{equation*} u_-(\gamma(s))\leq u_-(\gamma(s_0))+\int_{s_0}^sL(\gamma(\tau),u_-(\gamma(\tau)),\dot\gamma(\tau))d\tau, \end{equation*} which implies \begin{equation*} F(s)\leq \lambda\int_{s_0}^s F(\tau)d\tau. \end{equation*} By the Gronwall inequality, we conclude $F(s)\equiv 0$ for all $s\in[s_0,t]$, which contradicts $F(t)>0$. \bigskip According to the definition of the forward weak KAM solutions, for $v_+\in\mathcal S_+$ we have \begin{equation*} v_+(x)=\sup_{\gamma(0)=x}\left\{v_+(\gamma(t))-\int_0^t L(\gamma(\tau),v_+(\gamma(\tau)),\dot{\gamma}(\tau))d\tau\right\}, \end{equation*} where the supremum is taken in the class of absolutely continuous curves. We show $v_+(x)\leq T^+_tv_+(x)$; the opposite direction is similar. We argue by contradiction. Assume \[v_+(x)>T^+_tv_+(x).\] Let $\gamma_+:[0,t]\rightarrow M$ with $\gamma_+(0)=x$ be a $(v_+,L,0)$-calibrated curve. Define \begin{equation*} F(\tau):=v_+(\gamma_+(\tau))-T^+_{t-\tau} v_+(\gamma_+(\tau)).
\end{equation*} Since $F(t)=0$ and $F(0)>0$, there is $s_0\in (0,t]$ such that $F(s_0)=0$ and $F(s)>0$ for $s\in[0,s_0)$. By definition \begin{equation*} T^+_{t-s}v_+(\gamma_+(s))\geq T^+_{t-s_0}v_+(\gamma_+(s_0))-\int_s^{s_0}L(\gamma_+(\tau),T^+_{t-\tau} v_+(\gamma_+(\tau)),\dot\gamma_+(\tau))d\tau, \end{equation*} and \begin{equation*} v_+(\gamma_+(s))=v_+(\gamma_+(s_0))+\int_s^{s_0}L(\gamma_+(\tau),v_+(\gamma_+(\tau)),\dot\gamma_+(\tau))d\tau, \end{equation*} which implies \begin{equation*} F(s)\leq \lambda\int_s^{s_0} F(\tau)d\tau. \end{equation*} By the Gronwall inequality, we get $F(s)\equiv 0$ for all $s\in[0,s_0]$, which contradicts $F(0)>0$. \end{proof} \section{Proof of Lemma \ref{til1}} By Lemma \ref{u<LLip}, $u$ is Lipschitz continuous. By the condition (LLD), for $\|v_1\|$ and $\|v_2\|$ less than $R$, there exists a constant $K(R,\|u\|_\infty)>0$ such that \begin{equation*} \begin{aligned} &|L(x_1,u(x_1),v_1)-L(x_2,u(x_2),v_2)| \\ &\leq |L(x_1,u(x_1),v_1)-L(x_1,u(x_2),v_1)|+|L(x_1,u(x_2),v_1)-L(x_2,u(x_2),v_2)| \\ &\leq \lambda\|\partial_xu\|_\infty d(x_1,x_2)+K(R,\|u\|_\infty)(d(x_1,x_2)+\|v_1-v_2\|). \end{aligned} \end{equation*} Therefore $(x,v)\mapsto L(x,u(x),v)$ is locally Lipschitz continuous. By \cite[Theorem 2.1 (ii)]{Clarke}, the minimizer $\gamma$ is a $C^1$ curve. Since we are arguing locally near the point $x:=\gamma(0)$, it suffices to prove the lemma for the case when $M$ is an open subset $U$ of $\mathbb R^n$. We are going to show that for each $y\in \mathbb R^n$, there holds \begin{equation}\label{sup<inf} \limsup_{\eta\rightarrow 0^+}\frac{u(x+\eta y)-u(x)}{\eta}\leq \frac{\partial L}{\partial \dot x}(x,u(x),\dot \gamma(0))\cdot y\leq \liminf_{\eta\rightarrow 0^+}\frac{u(x+\eta y)-u(x)}{\eta}.
\end{equation} For $\eta>0$ and $0<\varepsilon\leq a$, define $\gamma_\eta:[-\varepsilon,0]\rightarrow U$ by $\gamma_\eta(s)=\gamma(s)+\frac{s+\varepsilon}{\varepsilon}\eta y$; then $\gamma_\eta(0)=x+\eta y$ and $\gamma_\eta(-\varepsilon)=\gamma(-\varepsilon)$. Since $\gamma$ is a $(u,L,0)$-calibrated curve, \begin{equation*} u(x+\eta y)-u(\gamma(-\varepsilon))\leq \int_{-\varepsilon}^0 L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot{\gamma}_\eta(s))ds, \end{equation*} \begin{equation*} u(x)-u(\gamma(-\varepsilon))=\int_{-\varepsilon}^0L(\gamma(s),u(\gamma(s)),\dot \gamma(s))ds. \end{equation*} It follows that \begin{equation*} \frac{u(x+\eta y)-u(x)}{\eta}\leq \frac{1}{\eta}\int_{-\varepsilon}^0(L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot{\gamma}_\eta(s))-L(\gamma(s),u(\gamma(s)),\dot \gamma(s)))ds. \end{equation*} By the local Lipschitz continuity of the map $(x,v)\mapsto L(x,u(x),v)$, there exists $K'(\|\dot \gamma(s)\|)$ such that \begin{equation*} \begin{aligned} \limsup_{\eta\rightarrow 0^+}&\frac{u(x+\eta y)-u(x)}{\eta}\\ \leq & \limsup_{\eta\rightarrow 0^+}\frac{1}{\eta}\int_{-\varepsilon}^0(L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot{\gamma}_\eta(s))-L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot \gamma(s))) \\ &+(L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot \gamma(s))-L(\gamma(s),u(\gamma(s)),\dot \gamma(s)))ds \\ \leq &\int_{-\varepsilon}^0\left(\frac{1}{\varepsilon}\frac{\partial L}{\partial \dot x}(\gamma(s),u(\gamma(s)),\dot \gamma(s))\cdot y+K'(\|\dot \gamma(s)\|)\frac{s+\varepsilon}{\varepsilon}\|y\|\right)ds. \end{aligned} \end{equation*} Letting $\varepsilon\rightarrow 0^+$, we get the first inequality in (\ref{sup<inf}). Similarly, define $\gamma_\eta:[0,\varepsilon]\rightarrow U$ by $\gamma_\eta(s)=\gamma(s)+\frac{\varepsilon-s}{\varepsilon}\eta y$; then $\gamma_\eta(0)=x+\eta y$ and $\gamma_\eta(\varepsilon)=\gamma(\varepsilon)$.
Since $\gamma$ is a $(u,L,0)$-calibrated curve, \begin{equation*} u(\gamma(\varepsilon))-u(x+\eta y)\leq \int_0^\varepsilon L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot{\gamma}_\eta(s))ds, \end{equation*} \begin{equation*} u(\gamma(\varepsilon))-u(x)=\int_0^\varepsilon L(\gamma(s),u(\gamma(s)),\dot \gamma(s))ds. \end{equation*} It follows that \begin{equation*} \frac{u(x+\eta y)-u(x)}{\eta}\geq \frac{1}{\eta}\int_0^\varepsilon(L(\gamma(s),u(\gamma(s)),\dot \gamma(s))-L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot{\gamma}_\eta(s)))ds. \end{equation*} By the local Lipschitz continuity of the map $(x,v)\mapsto L(x,u(x),v)$, there exists $K'(\|\dot \gamma(s)\|)$ such that \begin{equation*} \begin{aligned} \liminf_{\eta\rightarrow 0^+}&\frac{u(x+\eta y)-u(x)}{\eta}\\ \geq & \liminf_{\eta\rightarrow 0^+}\frac{1}{\eta}\int_0^\varepsilon(L(\gamma(s),u(\gamma(s)),\dot{\gamma}(s))-L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot \gamma(s))) \\ &+(L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot \gamma(s))-L(\gamma_\eta(s),u(\gamma_\eta(s)),\dot \gamma_\eta(s)))ds \\ \geq &\int_0^\varepsilon\left(-K'(\|\dot \gamma(s)\|)\frac{\varepsilon-s}{\varepsilon}\|y\|+\frac{1}{\varepsilon}\frac{\partial L}{\partial \dot x}(\gamma(s),u(\gamma(s)),\dot \gamma(s))\cdot y\right)ds. \end{aligned} \end{equation*} Letting $\varepsilon\rightarrow 0^+$, we get the second inequality in (\ref{sup<inf}). \qed \medskip
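For completeness, the Gronwall step used twice in the proof of Proposition \ref{fix} ($F\geq 0$, $F(s)\leq\lambda\int_{s_0}^{s}F(\tau)d\tau$, hence $F\equiv 0$) can be illustrated numerically: iterating the integral inequality $n$ times gives $F\leq\|F\|_\infty(\lambda(s-s_0))^n/n!$, which collapses to zero. The constants below are illustrative only.

```python
import math

# Iterated Gronwall bound: if 0 <= F(s) <= lam * integral of F over [s0, s],
# then F(s) <= sup|F| * (lam*(s - s0))**n / n!  for every n, hence F = 0.
lam, length, sup_F = 2.0, 1.5, 10.0   # illustrative constants
bounds = [sup_F * (lam * length) ** n / math.factorial(n) for n in range(60)]

assert bounds[0] == sup_F             # n = 0 recovers the trivial bound
assert min(bounds) < 1e-12            # the iterated bound collapses to zero
```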
As there is but one poet great enough to express the Puritan spirit, so there is but one commanding prose writer, John Bunyan. Milton was the child of the Renaissance, inheritor of all its culture, and the most profoundly educated man of his age. Bunyan was a poor, uneducated tinker. From the Renaissance he inherited nothing; but from the Reformation he received an excess of that spiritual independence which had caused the Puritan struggle for liberty. These two men, representing the extremes of English life in the seventeenth century, wrote the two works that stand to-day for the mighty Puritan spirit. One gave us the only epic since Beowulf; the other gave us our only great allegory, which has been read more than any other book in our language save the Bible.

JOHN BUNYAN

Life of Bunyan. Bunyan is an extraordinary figure; we must study him, as well as his books. Fortunately we have his life story in his own words, written with the same lovable modesty and sincerity that marked all his work. Reading that story now, in Grace Abounding, we see two great influences at work in his life. One, from within, was his own vivid imagination, which saw visions, allegories, parables, revelations, in every common event. The other, from without, was the spiritual ferment of the age, the multiplication of strange sects,--Quakers, Free-Willers, Ranters, Anabaptists, Millenarians,--and the untempered zeal of all classes, like an engine without a balance wheel, when men were breaking away from authority and setting up their own religious standards. Bunyan's life is an epitome of that astonishing religious individualism which marked the close of the English Reformation. He was born in the little village of Elstow, near Bedford, in 1628, the son of a poor tinker.
For a little while the boy was sent to school, where he learned to read and write after a fashion; but he was soon busy in his father's shop, where, amid the glowing pots and the fire and smoke of his little forge, he saw vivid pictures of hell and the devils which haunted him all his life. When he was sixteen years old his father married the second time, whereupon Bunyan ran away and became a soldier in the Parliamentary army. The religious ferment of the age made a tremendous impression on Bunyan's sensitive imagination. He went to church occasionally, only to find himself wrapped in terrors and torments by some fiery itinerant preacher; and he would rush violently away from church to forget his fears by joining in Sunday sports on the village green. As night came on the sports were forgotten, but the terrors returned, multiplied like the evil spirits of the parable. Visions of hell and the demons swarmed in his brain. He would groan aloud in his remorse, and even years afterwards he bemoans the sins of his early life. When we look for them fearfully, expecting some shocking crimes and misdemeanors, we find that they consisted of playing ball on Sunday and swearing. The latter sin, sad to say, was begun by listening to his father cursing some obstinate kettle which refused to be tinkered, and it was perfected in the Parliamentary army. One day his terrible swearing scared a woman, "a very loose and ungodly wretch," as he tells us, who reprimanded him for his profanity. The reproach of the poor woman went straight home, like the voice of a prophet. All his profanity left him; he hung down his head with shame. "I wished with all my heart," he says, "that I might be a little child again, that my father might learn me to speak without this wicked way of swearing." With characteristic vehemence Bunyan hurls himself upon a promise of Scripture, and instantly the reformation begins to work in his soul. 
He casts out the habit, root and branch, and finds to his astonishment that he can speak more freely and vigorously than before. Nothing is more characteristic of the man than this sudden seizing upon a text, which he had doubtless heard many times before, and being suddenly raised up or cast down by its influence. With Bunyan's marriage to a good woman the real reformation in his life began. While still in his teens he married a girl as poor as himself. "We came together," he says, "as poor as might be, having not so much household stuff as a dish or spoon between us both." The only dowry which the girl brought to her new home was two old, threadbare books, The Plain Man's Pathway to Heaven, and The Practice of Piety.[168] Bunyan read these books, which instantly gave fire to his imagination. He saw new visions and dreamed terrible new dreams of lost souls; his attendance at church grew exemplary; he began slowly and painfully to read the Bible for himself, but because of his own ignorance and the contradictory interpretations of Scripture which he heard on every side, he was tossed about like a feather by all the winds of doctrine. The record of the next few years is like a nightmare, so terrible is Bunyan's spiritual struggle. One day he feels himself an outcast; the next the companion of angels; the third he tries experiments with the Almighty in order to put his salvation to the proof. As he goes along the road to Bedford he thinks he will work a miracle, like Gideon with his fleece. He will say to the little puddles of water in the horses' tracks, "Be ye dry"; and to all the dry tracks he will say, "Be ye puddles." As he is about to perform the miracle a thought occurs to him: "But go first under yonder hedge and pray that the Lord will make you able to perform a miracle." He goes promptly and prays. Then he is afraid of the test, and goes on his way more troubled than before.
After years of such struggle, chased about between heaven and hell, Bunyan at last emerges into a saner atmosphere, even as Pilgrim came out of the horrible Valley of the Shadow. Soon, led by his intense feelings, he becomes an open-air preacher, and crowds of laborers gather about him on the village green. They listen in silence to his words; they end in groans and tears; scores of them amend their sinful lives. For the Anglo-Saxon people are remarkable for this, that however deeply they are engaged in business or pleasure, they are still sensitive as barometers to any true spiritual influence, whether of priest or peasant; they recognize what Emerson calls the "accent of the Holy Ghost," and in this recognition of spiritual leadership lies the secret of their democracy. So this village tinker, with his strength and sincerity, is presently the acknowledged leader of an immense congregation, and his influence is felt throughout England. It is a tribute to his power that, after the return of Charles II, Bunyan was the first to be prohibited from holding public meetings. Concerning Bunyan's imprisonment in Bedford jail, which followed his refusal to obey the law prohibiting religious meetings without the authority of the Established Church, there is a difference of opinion. That the law was unjust goes without saying; but there was no religious persecution, as we understand the term. Bunyan was allowed to worship when and how he pleased; he was simply forbidden to hold public meetings, which frequently became fierce denunciations of the Established Church and government. His judges pleaded with Bunyan to conform with the law. He refused, saying that when the Spirit was upon him he must go up and down the land, calling on men everywhere to repent. In his refusal we see much heroism, a little obstinacy, and perhaps something of that desire for martyrdom which tempts every spiritual leader. 
That his final sentence to indefinite imprisonment was a hard blow to Bunyan is beyond question. He groaned aloud at the thought of his poor family, and especially at the thought of leaving his little blind daughter:

I found myself a man encompassed with infirmities; the parting was like pulling the flesh from my bones.... Oh, the thoughts of the hardship I thought my poor blind one might go under would break my heart to pieces. Poor child, thought I, what sorrow thou art like to have for thy portion in this world; thou must be beaten, must beg, suffer hunger, cold, nakedness, and a thousand calamities, though I cannot now endure that the wind should blow upon thee.[169]

And then, because he thinks always in parables and seeks out most curious texts of Scripture, he speaks of "the two milch kine that were to carry the ark of God into another country and leave their calves behind them." Poor cows, poor Bunyan! Such is the mind of this extraordinary man. With characteristic diligence Bunyan set to work in prison making shoe laces, and so earned a living for his family. His imprisonment lasted for nearly twelve years; but he saw his family frequently, and was for some time a regular preacher in the Baptist church in Bedford. Occasionally he even went about late at night, holding the proscribed meetings and increasing his hold upon the common people. The best result of this imprisonment was that it gave Bunyan long hours for the working of his peculiar mind and for study of his two only books, the King James Bible and Foxe's Book of Martyrs. The result of his study and meditation was The Pilgrim's Progress, which was probably written in prison, but which for some reason he did not publish till long after his release. The years which followed are the most interesting part of Bunyan's strange career. The publication of Pilgrim's Progress in 1678 made him the most popular writer, as he was already the most popular preacher, in England.
Books, tracts, sermons, nearly sixty works in all, came from his pen; and when one remembers his ignorance, his painfully slow writing, and his activity as an itinerant preacher, one can only marvel. His evangelistic journeys carried him often as far as London, and wherever he went crowds thronged to hear him. Scholars, bishops, statesmen went in secret to listen among the laborers, and came away wondering and silent. At Southwark the largest building could not contain the multitude of his hearers; and when he preached in London, thousands would gather in the cold dusk of the winter morning, before work began, and listen until he had made an end of speaking. "Bishop Bunyan" he was soon called on account of his missionary journeys and his enormous influence. What we most admire in the midst of all this activity is his perfect mental balance, his charity and humor in the strife of many sects. He was badgered for years by petty enemies, and he arouses our enthusiasm by his tolerance, his self-control, and especially by his sincerity. To the very end he retained that simple modesty which no success could spoil. Once when he had preached with unusual power some of his friends waited after the service to congratulate him, telling him what a "sweet sermon" he had delivered. "Aye," said Bunyan, "you need not remind me; the devil told me that before I was out of the pulpit." For sixteen years this wonderful activity continued without interruption. Then, one day when riding through a cold storm on a labor of love, to reconcile a stubborn man with his own stubborn son, he caught a severe cold and appeared, ill and suffering but rejoicing in his success, at the house of a friend in Reading. He died there a few days later, and was laid away in Bunhill Fields burial ground, London, which has been ever since a campo santo to the faithful. Works of Bunyan. 
The world's literature has three great allegories,--Spenser's Faery Queen, Dante's Divina Commedia, and Bunyan's Pilgrim's Progress. The first appeals to poets, the second to scholars, the third to people of every age and condition. Here is a brief outline of the famous work:
Argument of Pilgrim's Progress.
"As I walked through the wilderness of this world I lighted on a certain place where was a den [Bedford jail] and laid me down in that place to sleep; and, as I slept, I dreamed a dream." So the story begins. He sees a man called Christian setting out with a book in his hand and a great load on his back from the city of Destruction. Christian has two objects,--to get rid of his burden, which holds the sins and fears of his life, and to make his way to the Holy City. At the outset Evangelist finds him weeping because he knows not where to go, and points him to a wicket gate on a hill far away. As Christian goes forward his neighbors, friends, wife and children call to him to come back; but he puts his fingers in his ears, crying out, "Life, life, eternal life," and so rushes across the plain. Then begins a journey in ten stages, which is a vivid picture of the difficulties and triumphs of the Christian life. Every trial, every difficulty, every experience of joy or sorrow, of peace or temptation, is put into the form and discourse of a living character. Other allegorists write in poetry and their characters are shadowy and unreal; but Bunyan speaks in terse, idiomatic prose, and his characters are living men and women. There are Mr. Worldly Wiseman, a self-satisfied and dogmatic kind of man, youthful Ignorance, sweet Piety, courteous Demas, garrulous Talkative, honest Faithful, and a score of others, who are not at all the bloodless creatures of the Romance of the Rose, but men real enough to stop you on the road and to hold your attention. Scene after scene follows, in which are pictured many of our own spiritual experiences.
There is the Slough of Despond, into which we all have fallen, out of which Pliable scrambles on the hither side and goes back grumbling, but through which Christian struggles mightily till Helpful stretches him a hand and drags him out on solid ground and bids him go on his way. Then come Interpreter's house, the Palace Beautiful, the Lions in the way, the Valley of Humiliation, the hard fight with the demon Apollyon, the more terrible Valley of the Shadow, Vanity Fair, and the trial of Faithful. The latter is condemned to death by a jury made up of Mr. Blindman, Mr. Nogood, Mr. Heady, Mr. Liveloose, Mr. Hatelight, and others of their kind to whom questions of justice are committed by the jury system. Most famous is Doubting Castle, where Christian and Hopeful are thrown into a dungeon by Giant Despair. And then at last the Delectable Mountains of Youth, the deep river that Christian must cross, and the city of All Delight and the glorious company of angels that come singing down the streets. At the very end, when in sight of the city and while he can hear the welcome with which Christian is greeted, Ignorance is snatched away to go to his own place; and Bunyan quaintly observes, "Then I saw that there was a way to hell even from the gates of heaven as well as from the city of Destruction. So I awoke, and behold it was a dream!" Such, in brief, is the story, the great epic of a Puritan's individual experience in a rough world, just as Paradise Lost was the epic of mankind as dreamed by the great Puritan who had "fallen asleep over his Bible."
Success of Pilgrim's Progress.
The chief fact which confronts the student of literature as he pauses before this great allegory is that it has been translated into seventy-five languages and dialects, and has been read more than any other book save one in the English language. As for the secret of its popularity, Taine says, "Next to the Bible, the book most widely read in England is the Pilgrim's Progress....
Protestantism is the doctrine of salvation by grace, and no writer has equaled Bunyan in making this doctrine understood." And this opinion is echoed by the majority of our literary historians. It is perhaps sufficient answer to quote the simple fact that Pilgrim's Progress is not exclusively a Protestant study; it appeals to Christians of every name, and to Mohammedans and Buddhists in precisely the same way that it appeals to Christians. When it was translated into the languages of Catholic countries, like France and Portugal, only one or two incidents were omitted, and the story was almost as popular there as with English readers. The secret of its success is probably simple. It is, first of all, not a procession of shadows repeating the author's declamations, but a real story, the first extended story in our language. Our Puritan fathers may have read the story for religious instruction; but all classes of men have read it because they found in it a true personal experience told with strength, interest, humor,--in a word, with all the qualities that such a story should possess. Young people have read it, first, for its intrinsic worth, because the dramatic interest of the story lured them on to the very end; and second, because it was their introduction to true allegory. The child with his imaginative mind--the man also, who has preserved his simplicity--naturally personifies objects, and takes pleasure in giving them powers of thinking and speaking like himself.
Other Works of Bunyan.
The Holy War, published in 1665, is the first important work of Bunyan. It is a prose Paradise Lost, and would undoubtedly be known as a remarkable allegory were it not overshadowed by its great rival. Grace Abounding to the Chief of Sinners, published in 1666, twelve years before Pilgrim's Progress, is the work from which we obtain the clearest insight into Bunyan's remarkable life, and to a man with historical or antiquarian tastes it is still excellent reading.
In 1682 appeared The Life and Death of Mr. Badman, a realistic character study which is a precursor of the modern novel; and in 1684 the second part of Pilgrim's Progress, showing the journey of Christiana and her children to the city of All Delight. Besides these Bunyan published a multitude of treatises and sermons, all in the same style,--direct, simple, convincing, expressing every thought and emotion perfectly in words that even a child can understand. Many of these are masterpieces, admired by workingmen and scholars alike for their thought and expression. Take, for instance, "The Heavenly Footman," put it side by side with the best work of Latimer, and the resemblance in style is startling. It is difficult to realize that one work came from an ignorant tinker and the other from a great scholar, both engaged in the same general work. As Bunyan's one book was the Bible, we have here a suggestion of its influence in all our prose literature.
This video should be required watching for every American. That's because Sen. Coburn (R-OK) spent the time to educate people on how foolishly money gets spent by the federal government. This is spending stupidity on autopilot because the federal budget uses baseline budgeting as opposed to zero-based budgeting. Baseline budgeting assumes that what was spent last year will be spent this year plus an increase. Zero-based budgeting assumes that each department and agency starts with zero, then has to justify every penny of spending each year. This isn't Sen. Coburn's entire exposition of spending stupidity. It's the lowest of the low-hanging fruit in the federal budget. Think of this as the abridged version of wasteful federal spending. This part of Sen. Coburn's presentation is especially galling: SEN COBURN: Next one, housing assistance. We have 160 programs, separate programs. Nobody knows if they're working. Nobody in the administration knows all the programs. I'm probably the only person in Congress that does because nobody else has looked at it. Twenty different agencies. We're spending $170 billion. If we're really interested in housing assistance, why would we have 20 sets of overhead, 20 sets of administration? And what would it cost to accomplish the same thing? All these numbers come from the Government Accountability Office, by the way. They don't come from me. And the other part of the report is that nobody knows if these programs are working. We have no data to say that we're actually making a difference on housing assistance through this expenditure of money. So we're not even asking the most basic of questions that a prudent person would ask. That's stunning and appalling. It's one thing to spend $170 billion. It's another to spend $170 billion without having a method to determine whether spending that $170 billion is having a positive impact.
There's no justification for not periodically auditing programs to see if they're doing what they're supposed to be doing. There's especially no justification for being uninterested in whether they're doing their job efficiently, so that the taxpayers' money isn't being spent foolishly. Sen. Coburn is a patriot. He's the first person to put together a detailed blueprint of how the federal government wastes tens of billions of dollars a year. He's the first person to show how this administration didn't take the time to make sure those tens of billions of dollars were being spent efficiently. The thing that's infuriating is that President Obama is still pretending that he's interested in solving our deficit crisis. With this mountain of proof that the taxpayers' money is getting spent foolishly, it's impossible to take President Obama seriously. When Arne Duncan, his education secretary, insists that teachers have already gotten pink slips as a direct result of sequestration, people should ridicule him. When Ray LaHood insists that people will experience longer wait times as a direct result of sequestration, people should ridicule him. When President Obama insinuates that his administration has cut the budget to the bare essentials, people, especially in the White House press corps, should ridicule him. Based on Sen. Coburn's presentation, this administration hasn't even scratched the surface on getting rid of wasteful government spending.
SD Senior Care
Private Sitting & In-Home Care for Senior and Disabled People
24-Hour Care and Staffing
205-218-5061
Services - Family Services
Published in The Birmingham News 2/22. Updated 2/26.

Nanny needed
I am writing in search of a trustworthy male housekeeper/driver and a female nanny/caregiver from your country to work with me in London in my home. Due to my absence I need a caregiver to take care of my 3 kids while I am away, and a housekeeper to take care of my house and take my kids to school every morning.
Services - Family Services
Web Id: ALA13495944
Published on al.com 2/15. Updated 2/15.
On Sale: Round Gold Pendant Necklace (Engravable) - Handcrafted in Los Angeles - Ethically Sourced Materials - 14K Solid White, Yellow, or Rose Gold - 16-Inch Chain Included. Add a personal touch and make it all your own. This is the kind of jewelry you keep for decades to come.
TITLE: Why is ${\rm Div}\, E = 0$? QUESTION [0 upvotes]: I understand that, for a charged body, ${\rm Div}\, E$ takes the value of the volume charge density inside the sphere. Why, then, is it 0 outside the sphere? The field does decrease with distance (inverse square law), doesn't it? Or how else should it be thought of? REPLY [3 votes]: Intuitively, the divergence of the field tells you how much each little point in space acts like a tiny "source" or "sink" (or better, an emanator) for that field. Note here two things. First, emanation of electric field is what charges, and only charges, do. This is the point of Gauss's law involving the divergence. We have never found any region of space that emanates electric field without charge, and if we did, we could just as well say that charge is present there by definition. Second, the divergence measures the emanation at a tiny point in space, not over an extended region - if there is some charge sitting some distance away from the point in question, the emanation is from "over there"; it is not from the point "here" that we are looking at. In your example, all the charge is presumed to be inside the sphere. Outside the sphere, even a tiny, but nonzero, distance away, there is no charge. Since the divergence tells you how much emanation is occurring at a point only, and each point outside that sphere has no charge located there, there can be no emanation of electric field at that point. Hence $\nabla \cdot \mathbf{E} = 0$. Conversely, if this equation holds true at a point, there is no charge present there. So the converse is a statement that there is no charge present at any point outside the charged sphere. REPLY [0 votes]: The divergence of the E-field is a property that applies to a point in space. If there is no net charge density at that point, then the divergence is zero.
REPLY [0 votes]: When you think about the specific example of the point charge, the electric field is obtained by $E = \frac{kq}{r^{2}}$; that's the inverse square law. What you are asking is why the divergence of E at a specific point is zero, even though the electric field generated by another source is not. That follows from the definition of divergence: it tells you whether a specific point of space is "emanating" or "absorbing" the quantity in question. When we calculate $\operatorname{div}(E)$ we are actually asking whether there are more electric field lines going outward from that point than inward. If more go outward, the divergence is positive; if more go inward, it is negative. The tricky part is to understand that electric field lines are generated only by charges, and at the point you are analyzing, outside the sphere, there is no charge. Therefore, even though the electric field generated by the sphere affects the point you are trying to analyze, there is no electric field being generated at that point itself.
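The point both replies make can be checked by direct computation. For the field around a point charge (or outside a uniformly charged sphere), $\mathbf{E} = (kq/r^{2})\,\hat{\mathbf{r}}$, and the divergence in spherical coordinates vanishes for every $r \neq 0$:

```latex
\[
\nabla \cdot \mathbf{E}
= \frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2} E_{r}\right)
= \frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\cdot\frac{kq}{r^{2}}\right)
= \frac{1}{r^{2}}\frac{\partial}{\partial r}\,(kq)
= 0 \qquad (r \neq 0).
\]
```

The inverse-square falloff is exactly the rate at which the area of a surrounding sphere grows, so the two $r^{2}$ factors cancel and no exterior point acts as a source or sink.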
TITLE: How far does one have to zoom into an image that was rotated a certain number of degrees in order to see only the original pixels again? QUESTION [4 upvotes]: This question was asked by a work colleague of mine, but my days as a mathematician are unfortunately long gone. It sounds like a pretty basic geometry problem, doesn't it? I'm not expecting an extremely detailed answer here, but does anyone have any resources on this? I'm pretty sure this was already answered somewhere. Please let me know if the question is formulated too vaguely. Thanks a lot in advance! For clarification: I imagine that the original image looks something like this: And then it gets rotated like this: Only the red pixels are considered the "original pixels" while the white pixels are not. So the question would be how far I have to zoom into the second picture in order to see only red pixels again. REPLY [6 votes]: The figure above shows the original image in the standard orientation with its edges parallel to the $x$ and $y$ axes, and then the same image rotated about its center by an angle $\theta$. What you need to do is find the intersection points between the diagonals and the rotated edges of the image, and choose the one that is closest to the center of the image to construct your zoom rectangle (the blue rectangle in the figure above). If the image center is the origin, and the image extends over the rectangle $[-a, a] \times [-b, b]$, then the equation of the right edge is $ x = a $, and the equation of the top edge is $ y = b$. When rotating the image by an angle $\theta$ counterclockwise, the point $(x, y)$ is mapped to $(x',y') = R (x,y) $ where $ R = \begin{bmatrix} \cos \theta && - \sin \theta \\ \sin \theta && \cos \theta \end{bmatrix} $ It follows that $(x, y) = R^T (x', y')$.
Therefore, taking the first component of $(x, y) = R^T (x', y')$, the equation of the rotated right edge is $ \cos(\theta) x' + \sin(\theta) y' = a $ Similarly, the equation of the rotated top edge is $ -\sin(\theta) x' + \cos(\theta) y' = b $ Now the equations of the red diagonals are $y' = \pm \dfrac{b}{a} x' $ Intersect these diagonals with the two rotated edges to obtain $(x_1, y_1)$ and $(x_2, y_2)$. Choose the point that is closer to the origin, i.e. the one having the smaller $\sqrt{ x_i^2 + y_i^2 } $, and call this point $(x_0, y_0)$. Now the zoom rectangle (shown in blue) extends over $[- |x_0|, |x_0| ] \times [-|y_0| , |y_0| ] $ The zoom factor is the ratio $\text{Zoom Factor} = \dfrac{a}{|x_0| } = \dfrac{b}{| y_0 | } $
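The construction above is easy to turn into code. The sketch below (the function name and the restriction $0 < \theta < \pi/2$, with the diagonal not parallel to the rotated top edge, are my own assumptions) intersects the diagonal $y = (b/a)x$ with both rotated edges and returns $a/|x_0|$:

```python
import math

def zoom_factor(a, b, theta):
    """Zoom needed so an image spanning [-a, a] x [-b, b], rotated by
    theta radians about its center, shows only original pixels.
    Assumes 0 <= theta < pi/2 and that the diagonal is not parallel
    to the rotated top edge."""
    # Rotated right edge: cos(t)*x + sin(t)*y = a, intersected with y = (b/a)*x
    x1 = a * a / (a * math.cos(theta) + b * math.sin(theta))
    # Rotated top edge: -sin(t)*x + cos(t)*y = b, intersected with y = (b/a)*x
    x2 = a * b / (b * math.cos(theta) - a * math.sin(theta))
    # On the diagonal, distance from the origin is proportional to |x|,
    # so the closer intersection is the one with the smaller |x|.
    x0 = min(abs(x1), abs(x2))
    return a / x0
```

For a square image the factor works out to $\cos\theta + \sin\theta$, which grows toward $\sqrt{2}$ as $\theta$ approaches 45 degrees.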
CCI Research Presentations and Publications
Publications with author Montgomery, Carol Hansen (3 results):
How Electronic Journals are Changing Patterns of Use. The Serials Librarian. 46, 121-141. (2004).
Library Economic Measures: Examples of the Comparison of Print & Electronic Journal Collections. Library Trends. 51, 376-400. (2003).
Patterns of Journal Use by Faculty at Three Diverse Universities.
Author Affiliations: Department of Health Policy and Management, Harvard School of Public Health (Drs Regenbogen, Greenberg, and Gawande); Departments of Surgery (Drs Regenbogen and Hutter) and Anesthesia and Critical Care (Dr Ehrenfeld), Massachusetts General Hospital; and Center for Surgery and Public Health, Department of Surgery, Brigham and Women's Hospital (Drs Lipsitz, Greenberg, and Gawande), Boston, Massachusetts.
Patients: General and vascular surgery patients enrolled in the National Surgical Quality Improvement Program surgical outcomes database at a major academic medical center.
Main Outcome Measures: Incidence of major postoperative complications and/or death within 30 days of surgery.
Surgical teams lack a routine, objective evaluation of patient condition after surgery to inform postoperative prognostication, guide clinical communication, and evaluate the efficacy of safety interventions in the operating room.1 Instead, surgeons rely primarily on subjective assessment of available patient data.2 Complex models, such as the Acute Physiology and Chronic Health Evaluation score3 and the Physiologic and Operative Severity Score for the Enumeration of Mortality,4 provide adequate predictions of a surgical patient's risk of complications. These scores have not come into standard use for surgical patients, however, because they are not easily calculated at the bedside, require numerous data elements that are not uniformly collected, and are often not well understood among the various members of a multidisciplinary care team.5 Efforts to significantly reduce surgery's overall 3% major complication rate6 have been hampered in part because surgical departments in most hospitals have no easily applied tool for routine measurement and monitoring of surgical results.
We sought to develop a surgical outcome score that would be (1) simple for teams to collect immediately on completion of an operation for any patient in any setting, regardless of resource and technological capacity; (2) valid for predicting major postoperative complications and death; and (3) applicable throughout the fields of general and vascular surgery (at least). Our approach differs from that of risk-adjusted outcomes evaluations, such as the American College of Surgeons' National Surgical Quality Improvement Program (NSQIP).7,8 Rather than dissociating patient-related factors from those related to surgical performance, the Surgical Apgar Score takes a public health perspective on surgical results, seeking to promptly identify patients at highest risk and circumstances offering greatest opportunity for reducing complications and death, regardless of the prevailing cause. The Apgar score in obstetrics served a similar function in evaluating the condition of newborns and, as a result, became an indispensable clinical tool.9- 15 We devised an Apgar score for surgery, a 10-point score to rate surgical outcomes at Brigham and Women's Hospital.15 The score is calculated from the estimated blood loss (EBL), lowest heart rate (HR), and lowest mean arterial pressure (MAP) during an operation. In a pilot study of 767 general and vascular surgery patients,15 the score was significantly associated with the occurrence of major complications or death within 30 days of surgery (P < .001, C statistic = 0.72). Poor-scoring patients (scores ≤4) were 16 times more likely to experience a major complication than were patients with the highest scores (9 or 10). This preliminary study, however, was conducted in a single institution, with a limited sample size. 
To evaluate the broader applicability of the Surgical Apgar Score, we sought to evaluate its performance among a larger cohort of patients, from a different institution, and used electronic intraoperative data collection rather than the hand-written records from which it was derived. To evaluate its predictive ability, we compare its discrimination with that of the multivariate risk-adjustment models of the NSQIP, an established surgical risk-adjustment method, currently in use in selected centers. The Massachusetts General Hospital (MGH) Department of Surgery maintains an outcomes database on a systematic sample of patients undergoing general and vascular surgical procedures as part of the private sector NSQIP. In this program,7,8 trained nurse-reviewers retrospectively collect 49 preoperative, 17 intraoperative, and 33 outcome variables on surgical patients for the monitoring of risk-adjusted outcomes. Patients undergoing general or vascular surgery with general, epidural, or spinal anesthesia, or specified operations (carotid endarterectomy, inguinal herniorrhaphy, thyroidectomy, parathyroidectomy, breast biopsy, and endovascular repair of abdominal aortic aneurysm) regardless of anesthetic type, are eligible for inclusion. Children younger than 16 years and patients undergoing trauma surgery, transplant surgery, vascular access surgery, or endoscopic-only procedures are excluded. At the MGH, the first 40 consecutive patients undergoing operations that meet inclusion criteria in each 8-day cycle are enrolled. No more than 5 patients undergoing inguinal herniorrhaphies and 5 patients undergoing breast biopsies are enrolled per 8-day cycle to ensure diversity of operations in the case mix. We evaluated all patients in the MGH-NSQIP database who underwent surgery between July 1, 2003, and June 30, 2005, and for whom complete 30-day follow-up was obtained. 
We excluded (1) patients undergoing carotid endarterectomies performed concurrently with coronary artery bypass grafting because the score was not designed for application to patients receiving cardiopulmonary bypass and (2) patients receiving local anesthesia only because no electronic anesthesia record is generated for these procedures. The study protocol, including a waiver of informed consent for individual patients, was approved by the Human Research Committees of the MGH and the Harvard School of Public Health. We devised the Surgical Apgar Score by using multivariate logistic regression to screen a collection of intraoperative measures. We found that only 3 intraoperative variables remained independent predictors of 30-day major complications: the EBL, the lowest HR, and the lowest MAP during the operation. The score was thus developed using these 3 variables, and their β coefficients were used to weight the points allocated to each variable in a 10-point score. This procedure is described in detail elsewhere.15 Table 1 gives the values used to calculate the 10-point score. The score for a patient with 50 mL of blood loss (3 points), a lowest MAP of 80 (3 points), and a lowest HR of 60 (3 points), for example, is 9. By contrast, a patient with more than a liter of blood loss (0 points), a MAP that decreased to 50 (1 point), and a lowest HR of 80 (1 point) receives a score of 2. We used intraoperative data collected from handwritten anesthesia records to develop the score at Brigham and Women's Hospital.15 At the MGH, intraoperative records are maintained by an electronic Anesthesia Information Management System (Saturn; Dräger Medical, Telford, Pennsylvania) in a database that is accessible via the Structured Query Language. We developed a Structured Query Language query to examine the intraoperative physiologic data during the surgical period.
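Table 1 itself is not reproduced in this excerpt, so the cut-points in the sketch below are an assumption taken from the published Surgical Apgar Score; they do, however, reproduce both worked examples in the text (a score of 9 and a score of 2):

```python
def surgical_apgar_score(ebl_ml, lowest_map_mmhg, lowest_hr_bpm):
    """10-point Surgical Apgar Score from estimated blood loss (mL),
    lowest mean arterial pressure (mm Hg), and lowest heart rate
    (beats/min). Cut-points assumed from the published score, since
    Table 1 is not reproduced in the excerpt above."""
    # Estimated blood loss: less bleeding earns more points.
    if ebl_ml <= 100:
        ebl_pts = 3
    elif ebl_ml <= 600:
        ebl_pts = 2
    elif ebl_ml <= 1000:
        ebl_pts = 1
    else:
        ebl_pts = 0
    # Lowest mean arterial pressure: higher is better.
    if lowest_map_mmhg >= 70:
        map_pts = 3
    elif lowest_map_mmhg >= 55:
        map_pts = 2
    elif lowest_map_mmhg >= 40:
        map_pts = 1
    else:
        map_pts = 0
    # Lowest heart rate: lower is better (worth up to 4 points).
    if lowest_hr_bpm <= 55:
        hr_pts = 4
    elif lowest_hr_bpm <= 65:
        hr_pts = 3
    elif lowest_hr_bpm <= 75:
        hr_pts = 2
    elif lowest_hr_bpm <= 85:
        hr_pts = 1
    else:
        hr_pts = 0
    return ebl_pts + map_pts + hr_pts
```

With these cut-points, the text's first example (50 mL, MAP 80, HR 60) scores 9 and the second (over a liter of blood loss, MAP 50, HR 80) scores 2, as reported.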
Because electronic anesthesia data differ from handwritten records in a number of ways,16,17 particularly the tendency for inclusion of some artifactual or erroneous values (for example, false pressure readings when an arterial catheter is flushed), our data extraction algorithm excluded extraphysiologic values for HR (data points <20/min or >200/min) and MAP (data points <25 mm Hg or >180 mm Hg) and then selected the median of remaining values in every 5-minute period for analysis. The lowest of these medians for each variable, along with the recorded EBL, was used to calculate the score. For data quality assurance, we manually reviewed the printed electronic anesthesia record for 50 operations and compared the results with those of the electronic data acquisition algorithm for these cases. The individual factor values and the total score obtained by each method were compared by computing κ statistics for agreement, using Fleiss-Cohen weighting for ordered categorical data.18 We collected all preoperative and postoperative patient variables from the NSQIP database. Some variables were aggregated by organ system. Pulmonary comorbidity was defined as preexisting chronic obstructive pulmonary disease, ventilator dependence, or pneumonia. Cardiovascular comorbidity was defined as prior myocardial infarction, angina, congestive heart failure, or coronary revascularization. 
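The extraction rule just described — drop extraphysiologic values, take the median of what survives in each 5-minute window, then use the lowest of those medians — can be sketched as follows (the function name and the list-of-(time, value) input format are my assumptions; the paper's actual SQL query is not shown):

```python
from statistics import median

def lowest_windowed_median(samples, lo, hi, window_s=300):
    """samples: iterable of (time_in_seconds, value) pairs.
    Drops values outside [lo, hi] as artifactual, computes the median of
    the surviving values in each 5-minute window, and returns the lowest
    of those medians."""
    windows = {}
    for t, v in samples:
        if lo <= v <= hi:  # e.g. HR kept only within 20-200/min
            windows.setdefault(int(t // window_s), []).append(v)
    return min(median(vals) for vals in windows.values())
```

Per the text, heart rate is kept within 20 to 200/min and MAP within 25 to 180 mm Hg, so a false reading from a flushed arterial catheter never reaches the score.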
Patients with a history of transient ischemic attack or stroke with or without residual neurologic deficit were pooled into a single group called “history of stroke or transient ischemic attack.” On the basis of previous studies, American Society of Anesthesiologists' Physical Status Classification was dichotomized as 3 or greater and less than 3, and wound classification was dichotomized as clean and clean but contaminated operations vs contaminated and dirty operations.7,19 Laboratory data were categorized according to the fiscal year 2005 NSQIP models.20 Procedural relative value units were calculated by linkage of Current Procedural Terminology codes to listings from the 2005 Medicare Physician Fee Schedule (Centers for Medicare and Medicaid Services). The magnitude of the surgical procedures was rated as either minor or intermediate (eg, breast, endocrine, groin and umbilical herniorrhaphy, appendectomy, laparoscopic cholecystectomy, perianal procedures, and skin or soft-tissue operations) or major or extensive (all other operations) as in previous studies of perioperative risk.21,22 The primary end points were major complication and death within 30 days after surgery, as recorded in the NSQIP database. The following NSQIP-defined8 events were considered major complications: acute renal failure, bleeding that required a transfusion of 4 U or more of red blood cells within 72 hours after surgery, cardiac arrest requiring cardiopulmonary resuscitation, coma of 24 hours or longer, deep venous thrombosis, myocardial infarction, unplanned intubation, ventilator use for 48 hours or more, pneumonia, pulmonary embolism, stroke, wound disruption, deep or organ-space surgical site infection, sepsis, septic shock, systemic inflammatory response syndrome, and vascular graft failure. All deaths were assumed to include a major complication. Superficial surgical site infection and urinary tract infection were not considered major complications. 
Patients having complications categorized in the database as “other occurrence” were reviewed individually, and severity of the occurrence was evaluated according to the Clavien classification.23 “Other occurrences” that involved complications of Clavien class III and greater (those that require surgical, endoscopic, or radiologic intervention or intensive care admission or are life-threatening) were considered major complications. All analyses were performed using the SAS statistical software, version 9.1 (SAS Institute Inc, Cary, North Carolina). We analyzed continuous variables using 2-sided t tests or, if skewed, Wilcoxon rank sum tests. We analyzed categorical predictors using χ2 tests. We performed univariate logistic regression to examine the relationship between major complication or death and the Surgical Apgar Score (treating the score as an ordered categorical variable) and calculated C statistics as a measure of model discrimination. We used χ2 tests and the Cochran-Armitage χ2 trend test24 to evaluate the relationship between the score and the incidence of both outcomes. For each outcome, we also compared the univariate logistic regression models with the score alone against the multivariate logistic regression models used for risk adjustment in the private-sector NSQIP for fiscal year 2005.20 Only observations with complete data available were included in the NSQIP models. As measures of discrimination, we constructed receiver operating characteristic curves and calculated C statistics (equivalent to the area under the receiver operating characteristic curve) to compare the models.25,26 Of 4163 patients identified in the NSQIP database who met inclusion criteria, 4119 (98.9%) had complete electronic intraoperative records and constituted our final cohort. 
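The C statistic used throughout these comparisons is the concordance probability: the chance that a randomly chosen patient who had the outcome was assigned a higher predicted risk than a randomly chosen patient who did not, with ties counted as one half. A minimal sketch (function name and inputs are my own; this is the textbook definition, not the authors' SAS code):

```python
def c_statistic(risks_with_event, risks_without_event):
    """Concordance probability over all (event, non-event) pairs of
    predicted risks; ties contribute 1/2."""
    concordant = 0.0
    for e in risks_with_event:
        for ne in risks_without_event:
            if e > ne:
                concordant += 1.0
            elif e == ne:
                concordant += 0.5
    return concordant / (len(risks_with_event) * len(risks_without_event))
```

A value of 0.5 means the model ranks patients no better than chance; 1.0 means every patient with the outcome was ranked riskier than every patient without it.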
The automated data extraction algorithm achieved excellent agreement with manual record review, both for point values assigned to each variable (κ = 0.97 for HR; κ = 0.75 for MAP) and for the total score (κ = 0.94). Table 2 compares the demographic characteristics, baseline comorbidities, and laboratory data for patients with and without major complications. One or more major complications occurred within 30 days of surgery in 581 patients (14.1%), including 94 deaths (2.3%). All preoperative risk factors and laboratory values collected were significantly associated with the rate of major complications, with the exceptions of race and obesity. The 3 variables that contributed to the Surgical Apgar Score were each significant univariate predictors of major complications, including death (Table 3). The mean lowest HRs were significantly lower (58 vs 63; P < .001) and the mean lowest MAPs were significantly higher (65 vs 61; P < .001) among patients with no complications compared with those with major complications. Likewise, median EBL was significantly lower in operations with no major complications than in those resulting in major complications (25 vs 200 mL; P < .001). The types of operations and their complication rates in the cohort are given in Table 4. With increasing scores, the incidence of major complications and death decreased monotonically (P < .001). In univariate logistic regression, the score demonstrated good discrimination, with a C statistic of 0.73 for major complications and 0.81 for death.25 The rates of major complications and death at each level of the Surgical Apgar Score are shown in Figure 1. Among 1441 patients with scores of 9 or 10, 72 (5.0%) developed major complications within 30 days, including 2 deaths (0.1%). By comparison, among 128 patients with scores of 4 or less, 72 (56.3%) developed major complications, of whom 25 died (19.5%). 
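The agreement statistic cited above, κ with Fleiss-Cohen weighting, penalizes disagreements between ordered categories by the square of their distance. A sketch of quadratic-weighted κ for paired ratings (the function name and input format are my assumptions; the authors used standard statistical software):

```python
def quadratic_weighted_kappa(pairs, categories):
    """Weighted kappa with Fleiss-Cohen (quadratic) disagreement weights.
    pairs: list of (rating_a, rating_b); categories: ordered category list."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(pairs)
    # Observed joint proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in pairs:
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal proportions for each rater.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Quadratic disagreement weight: ((i - j) / (k - 1)) ** 2.
    num = sum(((i - j) / (k - 1)) ** 2 * obs[i][j]
              for i in range(k) for j in range(k))
    den = sum(((i - j) / (k - 1)) ** 2 * pa[i] * pb[j]
              for i in range(k) for j in range(k))
    return 1.0 - num / den
```

Perfect agreement gives κ = 1; agreement no better than the marginals predict gives κ = 0, which is why the reported 0.94 for the total score indicates near-perfect extraction.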
Compared with scores of 9 or 10, the relative risk of major complications for scores of 4 or less is 11.3 (95% confidence interval [CI], 8.6-14.8; P < .001), and the relative risk of death is 140.7 (95% CI, 33.7-587.4; P < .001). In every 2-point score category (as in Figure 1), the incidence of both major complications and death was significantly greater than that of the next-highest category (P < .001), except for the comparisons between the 0- to 2-point and the 3- or 4-point groups (P = .11 for major complications and P = .009 for death), in which statistical power was limited by the low prevalence of these poorest scores. Thirty-day major complications and deaths among 4119 general and vascular surgery patients in relation to Surgical Apgar Score. Major complication and death rates are shown according to the 10-point Surgical Apgar Score from the operation. Patients with scores of 9 or 10 served as the reference group. Risk of major complications and death decreased significantly with increasing scores (Cochran-Armitage trend test, both P < .001). CI indicates confidence interval. Even after stratifying the patients by the magnitude of operation, the score remained a highly significant predictor of outcomes. Among major or extensive operations, patients with scores of 4 or less were 6.5 times more likely to have a major complication (95% CI, 4.7-8.9; P < .001) and 112.0 times more likely to die (95% CI, 15.3-819.7; P < .001) within 30 days. After minor or intermediate operations, patients with scores of 4 or less were 22.8 times more likely to experience a major complication (95% CI, 12.6-41.1; P < .001) and 81.4 times more likely to die (95% CI, 5.4-1219.5; P < .001). Receiver operating characteristic curves for the Surgical Apgar Score and for multivariate models based on the separate NSQIP risk adjustment models for morbidity and mortality are shown in Figure 2. 
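The headline relative risk can be checked directly from the counts reported above (72 of 128 low scorers vs 72 of 1441 high scorers with major complications). A quick sketch using the standard log-transform confidence interval (my choice of method; the paper does not state how its CIs were computed):

```python
import math

def relative_risk_ci(events_exposed, n_exposed, events_ref, n_ref, z=1.96):
    """Relative risk with an approximate 95% CI via the usual
    log-transform method."""
    rr = (events_exposed / n_exposed) / (events_ref / n_ref)
    se_log = math.sqrt(1 / events_exposed - 1 / n_exposed
                       + 1 / events_ref - 1 / n_ref)
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(z * se_log)
    return rr, lo, hi

# Scores <= 4 (72/128) vs scores 9-10 (72/1441), major complications.
rr, lo, hi = relative_risk_ci(72, 128, 72, 1441)
```

This reproduces the reported relative risk of 11.3 (95% CI, 8.6-14.8) for major complications.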
Complete risk predictions could be generated, however, for only 2482 patients (60.3%) in the NSQIP morbidity model and 2370 (57.5%) patients in the mortality model because required information, most often laboratory data, was missing. Among the restricted set of patients for whom all required data were available, the NSQIP models provided better discrimination than did the score alone for both morbidity (C = 0.81 vs C = 0.72; P < .001) and mortality (C = 0.93 vs C = 0.78; P < .001).26 Receiver operating characteristic curves for the Surgical Apgar Score and the National Surgical Quality Improvement Program (NSQIP) morbidity and mortality models as predictors of major complications and death. The sensitivity and specificity of the Surgical Apgar Score were compared with the separate morbidity and mortality models from the NSQIP.19 The score achieved a C statistic of 0.73 for predicting major complications and 0.81 for predicting deaths. C statistics for the NSQIP model were significantly greater for both major complications (C = 0.81, P < .001) and deaths (C = 0.93, P < .001). A simple surgical score based on blood loss, lowest HR, and lowest MAP during an operation provides a meaningful estimate of patients' condition and risk after general and vascular surgery. The 10-point Surgical Apgar Score is predictive of both major complications and death in the immediate postoperative period and is valid across the diversity of general and vascular surgery. We have shown that it remains highly predictive in a different institution from where it was derived and remains robust to the known differences between handwritten and electronic intraoperative records. The score successfully identifies not only the patients at highest risk of postoperative complications but also those at markedly lower-than-average risk. The 1441 patients with scores of 9 or 10 (35.0% of the sample) had only a 5.0% incidence of major complications and a 0.1% incidence of death. 
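Computing the score at the end of an operation is deliberately simple. The sketch below uses the point cutoffs from the original derivation study; since Table 1 is not reproduced in this excerpt, the exact thresholds should be treated as assumptions:

```python
def surgical_apgar(ebl_ml, lowest_hr, lowest_map):
    """10-point Surgical Apgar Score from EBL, lowest HR, and lowest MAP.

    Cutoffs follow the original derivation study; treat them as
    assumptions here, since Table 1 is not reproduced in this text.
    """
    # Estimated blood loss (mL): 0-3 points
    if ebl_ml > 1000:
        ebl_pts = 0
    elif ebl_ml > 600:
        ebl_pts = 1
    elif ebl_ml > 100:
        ebl_pts = 2
    else:
        ebl_pts = 3

    # Lowest heart rate (beats/min): 0-4 points
    if lowest_hr > 85:
        hr_pts = 0
    elif lowest_hr > 75:
        hr_pts = 1
    elif lowest_hr > 65:
        hr_pts = 2
    elif lowest_hr > 55:
        hr_pts = 3
    else:
        hr_pts = 4

    # Lowest mean arterial pressure (mm Hg): 0-3 points
    if lowest_map < 40:
        map_pts = 0
    elif lowest_map < 55:
        map_pts = 1
    elif lowest_map < 70:
        map_pts = 2
    else:
        map_pts = 3

    return ebl_pts + hr_pts + map_pts

# A low-risk operation: minimal blood loss, stable vitals
print(surgical_apgar(ebl_ml=50, lowest_hr=50, lowest_map=72))  # 10
```

Because all three inputs are routinely recorded by anesthesia teams, a calculation of this kind can run in real time against an electronic anesthesia record, which is the property the discussion emphasizes.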
In contrast, most patients with scores of 4 or less had major complications and more than 1 in 5 died. Despite the relatively low prevalence of scores of 4 or less (3.1% overall), the consistent trend toward worse outcomes even at the extreme low end of the scale suggests that the score has good discriminative ability across the full point spectrum.25 The Surgical Apgar Score could serve several important purposes. Like the Apgar score for newborns, its primary value may be to provide teams with immediate feedback on operative condition for every patient13—an objective metric to complement their “gut feelings”2,27 about an operation. Because the feedback is immediate, the score would assist surgical teams in distinguishing patients most in need of increased intensity of postoperative monitoring and care from those likely to have an uncomplicated course. As a quantitative adjunct to surgeons' subjective impressions, the score may serve as a simple aid in communication among surgeons, postanesthesia care providers, surgical residents, and surgical ward staff regarding patients' immediate postoperative status and thereby assist decision making about, for example, unplanned admission after outpatient surgery, admission to the intensive care unit, or frequency of postoperative examinations by physicians and nurses. Surgeons might also use the score to convey to patients and families an appraisal of condition and prognosis after surgery. Looking forward, the score might be used as a metric for quality monitoring and innovation, even in resource-poor settings. 
Routine surveillance and case review for patients with low scores (eg, a score of ≤4), even when no complications result, may also enable earlier identification of safety problems.28 Like the obstetric Apgar score, however, our surgical score does not allow comparison of quality among institutions or physicians because its 3 variables are each influenced not only by the performance of medical teams but also by the patients' prior condition and the magnitude of the operations they undergo. The NSQIP has developed a risk-adjustment algorithm for detailed modeling and case mix adjustment that serves these purposes.7,8 The Surgical Apgar Score is not intended to supplant these methods of institutional quality assessment because its motivation and intended uses are distinct. Nevertheless, we provide a comparison between this intraoperative score and the preoperative risk-adjustment models from the NSQIP in Figure 2 as a point of reference by which its discriminative ability may be appraised. As a simple, objective measure, the Surgical Apgar Score offers an important addition to risk-adjustment strategies for institutional quality assessment. Because of the expense of data collection, comprehensive risk-adjusted 30-day outcome tracking is not yet achievable in most US hospitals, let alone hospitals worldwide. Complex, multivariate models are not commonly used in clinical settings because they are difficult for teams to interpret and communicate at the bedside and often require statistical imputation of key information because of missing data.28,29 The Surgical Apgar Score can be available in real time, immediately usable for clinical decision support, and easily and inexpensively collected in any hospital. It is these same characteristics that made the Apgar score such a powerful tool for broad safety improvement in obstetrics.13,14 Nonetheless, our study has several limitations. 
First, the Surgical Apgar Score has been tested only in general and vascular surgery patients 16 years or older. Whether the score is effective in grading risk in other fields of surgery remains uncertain, and it has not been adapted for use in pediatric populations. Second, the score has not been evaluated beyond major academic medical centers because of a lack of reliable and comprehensive outcomes assessment against which these measures could be validated. It is possible that, among other patient populations, some modifications to the score factors could be necessary. Third, blood loss estimation is inevitably imprecise. The broad categories used to calculate the score, however, are well within observers' range of precision in careful volumetric studies.30,31 Reliance on anesthesiologists' independent estimation further improves the reliability and insulates against surgeon bias.30 The variables in our score are at least as reliably quantified as any in the Apgar score and potentially more so than some Apgar components (such as grading of newborn muscle tone and color).12 Our results, therefore, demonstrate that a simple clinimetric surgical outcome score can be derived from intraoperative data alone. This 10-point score based on the lowest HR, lowest MAP, and EBL discriminates well between groups of patients at higher- and lower-than-average risk of major complications and death within 30 days of surgery and holds promise as both a prognostic measure and a clinical decision support tool. Our hope is that this score will prove useful for routine care, quality improvement, and clinical research in surgery. Correspondence: Scott E. Regenbogen, MD, MPH, Department of Health Policy and Management, Harvard School of Public Health, 677 Huntington Ave, Boston, MA 02115 (sregenbogen@partners.org). Accepted for Publication: November 8, 2008. 
Author Contributions: Drs Regenbogen and Gawande had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Regenbogen, Ehrenfeld, Lipsitz, Greenberg, Hutter, and Gawande. Acquisition of data: Regenbogen, Ehrenfeld, and Hutter. Analysis and interpretation of data: Regenbogen, Ehrenfeld, Lipsitz, Greenberg, Hutter, and Gawande. Drafting of the manuscript: Regenbogen, Lipsitz, and Gawande. Critical revision of the manuscript: Regenbogen, Ehrenfeld, Lipsitz, Greenberg, Hutter, and Gawande. Statistical analyses: Regenbogen, Lipsitz, and Greenberg. Obtained funding: Regenbogen and Gawande. Study supervision: Gawande. Financial Disclosure: None reported. Funding/Support: This research was supported by a grant from the Risk Management Foundation of the Harvard Medical Institutions. Dr Regenbogen was also supported by Kirschstein National Research Service Award T32-HS000020 from the Agency for Healthcare Research and Quality. Role of the Sponsors: The funding agencies were not involved in any aspect of the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript. Additional Contributions: Lynn Devaney, RN, assisted with the MGH-NSQIP database and John Walsh, MD, assisted with the intraoperative anesthesia.
\begin{document} \maketitle \begin{abstract} We investigate a model theoretic invariant $\kappa_{srd}^m(T)$, which was introduced by Shelah \cite{shelah}, and prove that $\kappa_{srd}^m(T)$ is sub-additive. When $\kappa_{srd}^m(T)$ is infinite, this gives the equality $\kappa^m_{srd}(T)=\kappa^1_{srd}(T)$, answering a question in \cite{shelah}. We apply the same proof method to analyze another invariant $\kappa^m_{ird}(T)$, and show that it is also sub-additive, improving a result in \cite{shelah}. \end{abstract} \section{Introduction} It is a basic fact that if a theory $T$ is unstable then we can find an unstable 1-formula $\varphi(x,y)$ that witnesses the instability of $T$. (Recall that a formula $\varphi(x,y)$ is called a 1-formula if the length $|x|$ of $x$ is 1.) Similar statements hold for several other properties of theories, such as $TP$, $TP_1$, $TP_2$, $IP$, $IP_n$ and $SOP$. Namely, if a theory $T$ has one of these properties, then we can find a $1$-formula witnessing the property. So, it is of interest to know whether such a $1$-formula exists as a witness for other important properties of $T$. The present paper deals with this kind of question; we are concerned with the number of independent definable orders existing in the monster model ${\cal M}$ of $T$. \medbreak Shelah \cite{shelah} defined three invariants $\kappa_{inp}^m(T)$, $\kappa_{srd}^m(T)$ and $\kappa_{ird}^m(T)$, where $m$ is a positive integer. These invariants concern the number of independent partitions, independent orders, and independent strict orders, respectively, existing in ${\cal M}^m$. In \cite{shelah}, it was shown that $\kappa_{ird}^m(T)$ does not change its value as $m$ varies (at least if it is an infinite regular cardinal). It was then asked whether a corresponding result holds for $\kappa_{inp}^m(T)$ and $\kappa_{srd}^m(T)$ (\cite[Questions 7.5 and 7.9]{shelah}). The question about $\kappa_{inp}^m(T)$ was solved in \cite{chernikov2}. 
Although the terminology is different, Chernikov essentially proved the inequality $\kappa_{inp}^{n+m}(T)\leq \kappa_{inp}^{n}(T)\times\kappa_{inp}^{m}(T)$, which yields $\kappa_{inp}^m(T)=\kappa_{inp}^1(T)$ if $\kappa_{inp}^m(T)$ is infinite. Furthermore, he conjectured that the invariant is sub-additive, i.e. $\kappa_{inp}^{n+m}(T) +1 \leq \kappa_{inp}^{n}(T)+\kappa_{inp}^{m}(T)$. This conjecture arose in connection with \cite{kaplan}, in which it was shown that the dp-rank is sub-additive. It is known that the dp-rank coincides with the rank counting the number of independent partitions under the assumption of NIP. Several other invariants (e.g. $\kappa_{cdt}, \kappa_{sct}$) introduced in \cite{shelah} were studied in \cite{chernikov1}, and similar types of results were obtained. \medbreak It seems, however, that $\kappa_{srd}^m(T)$ has not been well studied, and there seems to be no answer to Shelah's question on $\kappa_{srd}^m(T)$. It has been shown that if $T$ is NIP then $\kappa_{ird}^m(T)=\kappa_{srd}^m(T)$, so there is no difference between these two invariants under the assumption of NIP. Under the assumption of NIP, the condition $\kappa_{ird}^m(T)<n$ was characterized in \cite{lynn} using the notion of collapse of indiscernible sequences. In this paper we examine how the value $\kappa^m_{srd}(T)$ changes as $m$ varies, without any assumption on $T$ (such as NIP). We will prove that $\kappa^m_{srd}(T)$ is sub-additive, which gives a positive answer to a question by Shelah. The concept of mutually indiscernible sequences plays a central role in our proof technique. We will also see that the same technique can be applied when analyzing $\kappa^m_{ird}(T)$, and will prove that this invariant is also sub-additive. This gives an improvement of a result in \cite{shelah} on $\kappa^m_{ird}(T)$, when it is finite. \medbreak Now, we explain some details of $\kappa_{srd}^m(T)$. 
A complete theory $T$ is said to have the strict order property if there is a formula $\varphi(x_0,\dots, x_{m-1},y_0, \dots, y_{n-1})$ and parameters $b_i \in {\cal M}^n$ $(i \in \omega)$ such that $\Phi:=\{\varphi({\cal M},b_i): i \in \omega\}$ becomes a strictly increasing sequence of uniformly defined definable sets of ${\cal M}^m$, where $\varphi({\cal M},b_i)=\{a \in {\cal M}^m: {\cal M} \models \varphi(a,b_i)\}$. Let $\Phi_0=\{D_i: i \in \omega\}$ and $\Phi_1=\{E_i: i \in \omega\}$ be two such strictly increasing sequences consisting of subsets of ${\cal M}^m$. We say $\Phi_0$ and $\Phi_1$ are independent if $(D_{i+1} \setminus D_i) \cap (E_{j+1} \setminus E_j) \neq \emptyset$, for any $(i,j) \in \omega^2$. We can naturally define the independence among a larger number of $\Phi_i$'s. Then $\kappa_{\mathrm{srd}}^m(T)$ is defined as the minimum cardinal $\kappa$ for which there is no family $\{\Phi_i: i< \kappa\}$ of such independent sequences. (See Definition \ref{def1}, for a more precise definition.) We put $\kappa_{\mathrm{srd}}(T)=\sup_{n \in \omega} \kappa_{\mathrm{srd}}^n(T)$. If there is a (non-trivial) definable order $<$ on ${\cal M}$, then clearly $T$ has the strict order property, and $\kappa_{\mathrm{srd}}^n(T) \geq n+1$. Indeed, for an increasing sequence $a_0<a_1< \dots \in {\cal M}$, if we let $X_{i,j}=\{(b_0,\dots,b_{n-1}) \in {\cal M}^n: b_i <a_j\}$, then $\Psi_i=\{X_{i,j}: j \in \omega\}$ $(i<n)$ will witness $\kappa_{\mathrm{srd}}^n(T) \geq n+1$. \medbreak We investigate the invariants $\kappa_{\mathrm{srd}}^n(T)$ and $\kappa_{\mathrm{ird}}^m(T)$, and prove the following: \medbreak\noindent {\bf Theorem A.} $\kappa_{\mathrm{srd}}^{m+n}(T) +1 \leq \kappa_{\mathrm{srd}}^m(T)+\kappa_{\mathrm{srd}}^n(T)$. \medbreak\noindent {\bf Theorem B.} Suppose $\kappa_{\mathrm{srd}}^m(T) \geq \omega$. Then $\kappa_{\mathrm{srd}}(T)=\kappa_{\mathrm{srd}}^1(T)=\kappa_{\mathrm{srd}}^m(T)$. 
\medbreak\noindent {\bf Theorem C.} $\kappa_{\mathrm{ird}}^{m+n}(T) +1 \leq \kappa_{\mathrm{ird}}^m(T)+\kappa_{\mathrm{ird}}^n(T)$. \medbreak\noindent \section{Preliminaries} Let $L$ be a language and $T$ a complete $L$-theory with an infinite model. We work in a monster model ${\cal M} \models T$ that is sufficiently saturated. For a set $A \subset {\cal M}$, $L(A)$ denotes the language obtained from $L$ by adding constants for the elements of $A$. Finite tuples in ${\cal M}$ are denoted by $a,b, \dots$ . The letters $x,y,\dots$ are used to denote finite tuples of variables. The length of $x$ is denoted by $|x|$. Formulas are denoted by $\varphi,\psi,\dots$. For a formula $\varphi$ and a condition $(*)$, we write $\varphi^{\text{if $(*)$}}$ to denote the formula $\varphi$ if $(*)$ is true, and $\neg \varphi$ if $(*)$ is false. In this paper, we are mainly interested in formulas of the form $\varphi(x,b)$, where $b$ is a parameter from ${\cal M}$. If $|x|=m$, this formula $\varphi(x,b)$ (or $\varphi(x,y)$) will be called an $m$-formula. The definable set defined by $\varphi(x,b)$ in ${\cal M}$ is denoted by $\varphi({\cal M},b)$. \medbreak Standard set-theoretic notation will be used. \begin{definition}\label{pattern} Let $\kappa$ be a (finite or infinite) cardinal. Let $(\varphi_i(x;y_i))_{i\in\kappa}$ be a sequence of formulas, and $(b_{i,j})_{i\in \kappa, j\in \omega}$ a sequence of tuples, where $|b_{i,j}|=|y_i|$ for all $i,j$. \begin{enumerate} \item The pair $\langle (\varphi_i(x;y_i))_{i\in\kappa}, (b_{i,j})_{i\in \kappa, j\in \omega} \rangle$ will be called an ird-pattern of width $\kappa$, if it satisfies: \begin{enumerate} \item for any $\eta \in \omega^\kappa$, $\{\varphi_i(x,b_{i,j})^{\text{if $(j \geq \eta(i))$}}: i \in \kappa, j\in \omega\}$ is consistent. 
\end{enumerate} \item The pair $\langle (\varphi_i(x;y_i))_{i\in\kappa}, (b_{i,j})_{i\in \kappa, j\in \omega} \rangle$ will be called an srd-pattern of width $\kappa$, if it satisfies: \begin{enumerate} \item for any $\eta\in \omega^\kappa$, $\{\varphi_i(x; b_{i,j})^{\mathrm{if}(j\geq \eta(i))}: i\in \kappa, j\in\omega\}$ is consistent, \item for each $i\in\kappa$ and $j\in\omega$, $\varphi_i({\cal M}, b_{i,j}) \subsetneq \varphi_i({\cal M}, b_{i,j+1})$. \end{enumerate} \end{enumerate} \end{definition} \begin{definition}\label{def1} Let $ * \in\{\text{ird, srd}\}$. $\kappa^m_{*}(T)$ is the minimum cardinal $\kappa$ such that (in $T$) there is no $*$-pattern of width $\kappa$ witnessed by $m$-formulas $\varphi_i(x;y_i)$ $(i\in\kappa)$. We write $\kappa^m_{*}(T)=\infty$, if there is no such $\kappa$. Also $\kappa_{*}(T)$ is defined as $\sup_{m\in\omega}\kappa^m_{*}(T)$. \end{definition} \begin{remark} \begin{enumerate} \item $\kappa^m_{\mathrm{ird}}(T)>1$ if and only if there is an unstable formula $\varphi(x,y)$ with $|x|=m$. $\kappa_{\mathrm{ird}}(T)>1$ if and only if $T$ is unstable. \item $\kappa^m_{\mathrm{srd}}(T)>1$ if and only if there is a $\varphi(x,y)$ with $|x|=m$ having the strict order property. $\kappa_{\mathrm{srd}}(T)>1$ if and only if $T$ has the strict order property. \end{enumerate} \end{remark} If $\kappa < \kappa^m_{srd}(T)$, then there are $\kappa$-many $\varphi_i$'s and a set $B=(b_{i,j})_{i \in \kappa, j \in \omega}$ satisfying conditions (a) and (b) of item 2 in Definition \ref{pattern}. Condition (b) states that each $\varphi_i$ defines a strict order on ${\cal M}^m$, and condition (a) states that the orders defined by the $\varphi_i$'s are independent. If $\kappa_{\mathrm{srd}}^m(T)=\infty$, then there is a set $\{\varphi_i(x,y_i):i<|T|^+\}$ witnessing the conditions. So, by the pigeonhole principle, choosing a suitable infinite subset of $|T|^+$, we can assume $\varphi_i=\varphi$ for all $i<\omega$. 
Conversely, if $\kappa_{srd}(T) \geq \omega$ and if the witnessing formulas satisfy $\varphi_i = \varphi$ $(i<\omega)$, then by compactness, we see that there are arbitrarily many independent strict orders. Notice also that if $\kappa^m_{srd}(T)=\infty$ then $T$ has the independence property. \begin{example} Let $T$ be the theory of $\N=(\N,0,1,+,\cdot)$. Let $\varphi(x,y_0,y_1)$ be the formula asserting that the exponent of the $y_0$-th prime in the prime factorization of $x$ is smaller than $y_1$. Then, for each $i$, $\Phi_i:=\{\varphi({\cal M},i,j)\}_j$ forms an increasing sequence of definable sets. Moreover, the $\Phi_i$'s are independent, so we have $\kappa^1_{srd}(T)=\infty$. \end{example} Indiscernibility is a fundamental concept in modern model theory. In our paper \cite{kota}, a couple of results concerning the existence of an indiscernible tree are presented. Here in this paper, the notion of mutual indiscernibility is important. \begin{definition} A set $\{B_i: i<\kappa\}$ of indiscernible sequences is said to be mutually indiscernible over $A$ if for every $i<\kappa$, the sequence $B_i$ is indiscernible over $A \cup \bigcup_{j<\kappa,\, j\neq i} B_j$. \end{definition} The following proposition is simple to prove, but plays an important role in our argument. \begin{proposition}\label{prop} For each $i<\kappa$, let $B_i=(b_{i,j})_{j\in\omega}$ be an infinite sequence of tuples of the same length. Let $\Gamma((X_i)_{i<\kappa})$ be a set of formulas, where $X_i=(x_{i,j})_{j \in \omega}$ $(i < \kappa)$ and $|x_{i,j}|=|b_{i,j}|$. We assume the following property for $\Gamma$: \begin{itemize} \item[(*)] if $B'_i $ is an infinite subsequence of $B_i$ $(i<\kappa)$ then $(B'_i)_{i<\kappa}$ realizes $\Gamma((X_i)_{i<\kappa})$. \end{itemize} Then, for any set $A$, we can find $\{C_i:i<\kappa\}\models \Gamma((X_i)_{i<\kappa})$ that is mutually indiscernible over $A$. 
\end{proposition} The following observation, shown by Proposition \ref{prop}, is a key step in our proof of Theorem \ref{main}. \begin{remark}\label{indiscernible witness}\rm Let $Z$ denote $\Z$ or $\Z\cup\{\pm \infty\}$. Then, there is an srd-pattern of width $\kappa$ witnessed by a sequence $(\varphi_i(x;y_i))_{i\in\kappa}$ of formulas if and only if there are tuples $a$ and $b_{i,j}$ ($i\in\kappa, j\in Z$) with the following properties: \begin{enumerate} \item For all $i\in\kappa$ and $j \leq k \in Z$, $\varphi_i({\cal M}, b_{i,j}) \subset \varphi_i({\cal M}, b_{i,k})$; \item $\{B_i:i\in \kappa\}$ is mutually indiscernible, where $B_i=(b_{i,j})_{j\in Z}$; \item For all $i\in\kappa$ and $j \in Z$, ${\cal M} \models\varphi_i(a,b_{i,j})$ if and only if $j \geq 0$. \end{enumerate} In the equivalence above, we can also assume the following condition in addition to 1--3. \begin{enumerate} \setcounter{enumi}{3} \item $\{B_{i,+} :i\in\kappa\} \cup \{B_{i,-} :i\in\kappa\}$ is mutually indiscernible over $a$, i.e. $B_{i,+}$ is indiscernible over $\{a\} \cup B_{i,-} \cup \bigcup_{i'\neq i}B_{i'}$ and $B_{i,-}$ is indiscernible over $\{a\} \cup B_{i,+} \cup \bigcup_{i'\neq i}B_{i'}$, where $B_{i,+}=(b_{i,j})_{j\geq 0}$ and $B_{i,-}=(b_{i,j})_{j< 0}$. \end{enumerate} \end{remark} \begin{remark} \label{second} Let $(D_i)_{i \in I}$ be an increasing sequence of sets in ${\cal M}^n$, where $I$ is a linearly ordered set. Then the following sequences are also increasing: \begin{enumerate} \item $(D_i \cap D)_{i \in I}$, where $D$ is a subset of ${\cal M}^n$; \item $(\pi (D_i))_{i \in I}$, where $\pi:{\cal M}^n \to {\cal M}^m$ is the projection $(x_0, \dots, x_{n-1}) \mapsto (x_{i_0}, \dots, x_{i_{m-1}})$. \end{enumerate} \end{remark} \section{Main Results} In the following theorem, $\kappa, \kappa_0$ and $\kappa_1$ are arbitrary cardinals, but the interesting case is when they are finite. 
\begin{theorem}\label{main} Let $\kappa, \kappa_0$ and $\kappa_1$ be cardinals such that $\kappa+1=\kappa_0 + \kappa_1$. Suppose that there is an srd-pattern of width $\kappa$ with formulas $\varphi_i(x;y_i)$ $(i\in\kappa)$, where $x=x_0x_1$. Then, there is $l\in\{0,1\}$ for which we can find formulas $\psi_i(x_l; y'_i)$ $(i\in\kappa_l)$ witnessing an srd-pattern of width $\kappa_l$. \end{theorem} \begin{proof} Let $Z=\Z \cup \{\pm \infty\}$ and choose $b_{i,j}$ $(i\in\kappa, j \in Z)$ and $a$ satisfying conditions 1--4 in Remark \ref{indiscernible witness}. We write $a$ in the form $a=a_0a_1$, where $|a_0|=|x_0|$ and $|a_1|=|x_1|$. For $\eta \in \Z^{\kappa}$, let \[ \Delta_\eta(x_0, a_1):=\{\varphi_i(x_0, a_1, b_{i,j})^{\text{if }j\geq \eta(i)}: i\in \kappa, j\in Z\}. \] Then choose a maximal $F\subset \kappa$ satisfying the following property: \begin{itemize} \item[(*)] For any $\eta\in \Z^{\kappa}$ with $\mathrm{supp}(\eta) \subset F$ (i.e., $\eta(i)=0$ if $i \notin F$), $\Delta_\eta(x_0, a_1)$ is consistent. \end{itemize} There are two complementary cases: \medbreak\noindent {\bf Case 1: } Suppose $|F|\geq \kappa_0$. In this case the proof is straightforward for $l=0$, since the formulas $\varphi_i(x_0; x_1y_i)$ $(i\in F)$ and the tuples $c_{i,j}=a_1b_{i,j}$ $(i \in F, j\in\omega)$ form an srd-pattern of width $\kappa_0$. \medbreak\noindent {\bf Case 2: } Suppose $|F|< \kappa_0$. Then the set $\kappa \setminus F$ has cardinality at least $\kappa_1$. Without loss of generality, we can assume $\kappa_1\subset \kappa\setminus F$. In this case, for any $\alpha\in\kappa_1$, the extension $F\cup\{\alpha\}\supset F$ does not satisfy $(*)$. Namely, there is $\eta$ with $\mathrm{supp}(\eta) \subset F \cup \{\alpha\}$, for which the set $\Delta_\eta(x_0,a_1)$ is inconsistent. Fix $\alpha\in \kappa_1$ for the moment. 
Since $\{\varphi_i({\cal M}, a_1,b_{i,j}):j\in Z\}$ is a strictly increasing sequence for each $i$, we can choose $\eta_0\in \Z^F$ and $m \in Z\setminus\{0\}$ such that the subset \begin{equation*} \begin{split} &\{\varphi_i(x_0, a_1, b_{i,\eta_0(i)}), \neg\varphi_i(x_0, a_1, b_{i,\eta_0(i)-1}): i\in F\}\\ &\quad\cup \{\varphi_{\alpha}(x_0, a_1, b_{\alpha,m}), \neg\varphi_{\alpha}(x_0, a_1, b_{\alpha,m-1})\}\\ &\quad\cup \{\varphi_{i}(x_0, a_1, b_{i,0}), \neg\varphi_{i}(x_0, a_1, b_{i,-1}): i\in \kappa\setminus(F \cup\{\alpha\})\} \end{split} \end{equation*} of $\Delta_\eta$ is inconsistent. Since the other case is similar and in fact easier, we assume $m > 0$. Then, by compactness, and since $\{B_{i,+} :i\in\kappa\} \cup \{B_{i,-} :i\in\kappa\}$ is mutually indiscernible over $a$, we can find finite sets $F_0 \subset F$ and $F_1 \subset \kappa\setminus(F \cup\{\alpha\})$ such that the set \begin{equation*} \begin{split} \Sigma_\alpha(x_0) &:=\{\varphi_i(x_0, a_1, b_{i,\eta_0(i)}), \neg\varphi_i(x_0, a_1, b_{i,\eta_0(i)-1}): i\in F_0\}\\ &\quad\cup \{\varphi_{\alpha}(x_0, a_1, b_{\alpha,\infty}), \neg\varphi_\alpha(x_0,a_1, b_{\alpha, 0})\}\\ &\quad\cup \{\varphi_i(x_0, a_1, b_{i,\infty}), \neg\varphi_i(x_0,a_1, b_{i, -\infty}): i\in F_1\} \end{split} \end{equation*} is inconsistent. Now, let \[ B^*:= \{b_{i,j}\}_{ i\in F, j\in \Z}\, \cup \, \{b_{i, -\infty} \}_{i\in \kappa\setminus F} \, \cup \, \{b_{i, \infty} \}_{i\in \kappa\setminus F}. \] Then the parameters appearing in $\Sigma_\alpha(x_0)$, other than $B^*$, are $a_1$ and $b_{\alpha, 0}$. (The definition of $B^*$ does not depend on $\alpha$ and hereafter we work with the language $L(B^*)$.) So we write $\Sigma_\alpha$ as $\Sigma_\alpha(x_0,a_1,b_{\alpha,0})$. By preparing a variable $z_\alpha$ with $|z_\alpha|=|b_{\alpha,j}|$, let $\psi_\alpha'(x_0,x_1,z_\alpha)$ be the formula $\bigwedge \Sigma_\alpha(x_0,x_1,z_\alpha)$. Recall that the set $\Sigma_\alpha(x_0,a_1,b_{\alpha,0})$ is inconsistent. 
However, the set $\Sigma_\alpha(x_0,a_1,b_{\alpha,-1})$ is consistent, by our choice of $F$ and condition $(*)$. By condition 4 in Remark \ref{indiscernible witness}, this means that $\psi'_\alpha(x_0,a_1,b_{\alpha,j})$ is consistent if and only if $j < 0$. So, if we define \begin{eqnarray*} \psi_\alpha(x_1, z_\alpha)&:=& (\exists x_0) \, \psi'_\alpha(x_0,x_1,z_\alpha),\\ c_{\alpha,j}&:=&b_{\alpha,-j-1}, \end{eqnarray*} then we have \[ {\cal M} \models \psi_\alpha(a_1, c_{\alpha,j}) \iff j \geq 0. \] Since this is true for all $\alpha \in \kappa_1$, it follows that $\langle (\psi_\alpha)_{\alpha \in \kappa_1}, (c_{\alpha,j})_{\alpha \in \kappa_1,j \in \Z}\rangle$ satisfies condition 3 in Remark \ref{indiscernible witness}. Condition 2 is easily shown, since the sequences $(c_{\alpha,j})_{j\in \Z}$ $(\alpha\in \kappa_1)$ are mutually indiscernible over $B^*$. Finally, condition 1 follows from Remark \ref{second}. Hence, $\langle (\psi_\alpha)_{\alpha \in \kappa_1}, (c_{\alpha,j})_{\alpha \in \kappa_1,j \in \Z}\rangle$ is an srd-pattern of width $\kappa_1$. \end{proof} \begin{corollary} \begin{enumerate} \item $\kappa^{m+n}_{\mathrm{srd}}(T)+1 \leq \kappa^{m}_{\mathrm{srd}}(T)+\kappa^{n}_{\mathrm{srd}}(T)$. \item If $\kappa^m_{\mathrm{srd}}(T)$ is infinite, then $\kappa^m_{\mathrm{srd}}(T)=\kappa^1_{\mathrm{srd}}(T)=\kappa_{\mathrm{srd}}(T)$. \end{enumerate} \end{corollary} \begin{proof} We only prove the first item. We can assume $\kappa^{m+n}_{\mathrm{srd}}(T)$ is finite, since the infinite case is easier. By way of contradiction, we assume $\kappa^{m+n}_{\mathrm{srd}}(T) +1> \kappa^{m}_{\mathrm{srd}}(T)+\kappa^{n}_{\mathrm{srd}}(T)$. Then there must be an srd-pattern of width $\kappa:= \kappa^{m}_{\mathrm{srd}}(T)+\kappa^{n}_{\mathrm{srd}}(T) -1$ witnessed by $(m+n)$-formulas. 
By Theorem \ref{main}, using the equation $\kappa+1=\kappa^{m}_{\mathrm{srd}}(T)+\kappa^{n}_{\mathrm{srd}}(T)$, we would have (i) the existence of an srd-pattern of width $\kappa^{m}_{\mathrm{srd}}(T)$ by $m$-formulas, or (ii) the existence of an srd-pattern of width $\kappa^{n}_{\mathrm{srd}}(T)$ by $n$-formulas. In either case, we reach a contradiction. \end{proof} The above argument can be applied to show the corresponding result for $\kappa_{\mathrm{ird}}^m(T)$. The following theorem on $\kappa_{\mathrm{ird}}^m(T)$ gives an improvement of \cite[Theorem 7.10]{shelah}. (In that book he investigated $\kappa^m_{\mathrm{ird}}(T)$ when it is infinite.) In the following theorem, $\kappa, \kappa_0$ and $\kappa_1$ are arbitrary cardinals, as before. \begin{theorem} Assume $\kappa+1=\kappa_0 + \kappa_1$. Suppose that there is an ird-pattern of width $\kappa$ with formulas $\varphi_i(x;y_i)$ $(i\in\kappa)$, where $x=x_0x_1$. Then, there is $l\in\{0,1\}$ for which we can find formulas $\psi_i(x_l; y'_i)$ $(i\in\kappa_l)$ witnessing an ird-pattern of width $\kappa_l$. \end{theorem} \begin{proof} The outline of the proof is quite similar to that of Theorem \ref{main}. However, for completeness, the details of the proof are provided. In the present proof, our linear order $Z$ has the form $Z=\Z_- + \Z + \Z_+$, where both $\Z_-$ and $\Z_+$ are copies of $\Z$, and the order is defined so that $\Z_- < \Z < \Z_+$. Choose $b_{i,j}$ $(i\in\kappa, j \in Z)$ and $a=a_0a_1$ satisfying conditions 1--4 in Remark \ref{indiscernible witness}. Then for $\eta \in Z^{\kappa}$, consider the set $\Delta_\eta(x_0, a_1)$, which is defined in the same way as in the proof of the previous theorem. Again, choose a maximal $F\subset \kappa$ satisfying the following property: \begin{itemize} \item[(**)] For any $\eta\in Z^{\kappa}$ with $\mathrm{supp}(\eta) \subset F$, $\Delta_\eta(x_0, a_1)$ is consistent. \end{itemize} \medbreak\noindent {\bf Case 1: } Suppose $|F|\geq \kappa_0$. 
The proof is straightforward, as in the previous theorem, so we skip this case. \medbreak\noindent {\bf Case 2: } Suppose $|F|< \kappa_0$. Without loss of generality, we can assume $\kappa_1\subset \kappa\setminus F$. In this case, for any $\alpha\in\kappa_1$, there is $\eta$ with $\mathrm{supp}(\eta) \subset F \cup \{\alpha\}$, for which the set $\Delta_\eta(x_0,a_1)$ is inconsistent. By compactness, we can choose finite sets $F_0 \subset F$, $F_1 \subset \kappa \setminus (F \cup\{ \alpha\})$, and $U_i, O_i \subset \Z$ $(i \in F_0 \cup F_1 \cup \{\alpha\})$ with the following properties: \begin{enumerate} \item $U_i < O_i$, for any $i$; \item $U_i < 0 \leq O_i$, if $i \in F_1$; \item The following set $\Sigma_\alpha(x_0)$ is inconsistent: \[ \begin{split} \{ \neg \varphi_i(x_0,a_1,b_{i,j}): i \in F_0, j \in U_i\} & \cup \{\varphi_i(x_0,a_1,b_{i,j}): i \in F_0, j \in O_i\} \\ \cup \{ \neg \varphi_\alpha(x_0,a_1,b_{\alpha,j}): j \in U_\alpha\} & \cup \{\varphi_\alpha(x_0,a_1,b_{\alpha,j}): j \in O_\alpha\} \\ \cup \{ \neg \varphi_i(x_0,a_1,b_{i,j}): i \in F_1, j \in U_i \} & \cup \{\varphi_i(x_0,a_1,b_{i,j}): i \in F_1, j \in O_i\}. \end{split} \] \end{enumerate} If $U_\alpha < 0 \leq O_\alpha$ held, then $\Sigma_\alpha$ would be consistent, by our choice of $F$. Since the other case is proven similarly, we can assume $U_\alpha^+:=\{j \in U_\alpha: 0 \leq j\} \neq \emptyset$. Moreover, $U_\alpha$ is assumed to be chosen so that $|U_\alpha^+|$ is minimal. Since $\{B_{i,+} :i\in\kappa\} \cup \{B_{i,-} :i\in\kappa\}$ is mutually indiscernible over $a$, we can assume \begin{itemize} \item $U_i , O_i \subset \Z$ $(i \in F)$; \item $U_i \subset \Z_-, \ O_i \subset \Z_+$ $ (i \in \kappa \setminus (F \cup \{\alpha\}))$; \item $U_\alpha^-:=\{j \in U_\alpha:j<0\} \subset \Z_-$, $U_\alpha^+=\{0, \dots,k-2, k-1\} \subset \Z$; \item $O_\alpha \subset \Z_+$. 
\end{itemize} Now, let \[ B^*:= \{b_{i,j}: i\in F, j\in \Z\}\, \cup \, \{b_{i, j} : i\in \kappa\setminus F, j \in \Z_- \cup \Z_+ \}. \] Then the parameters appearing in $\Sigma_\alpha(x_0)$, other than $B^*$, are $a_1$ and $(b_{\alpha, j})_{j \in U_\alpha^+}$. So we write $\Sigma_\alpha$ as $\Sigma_\alpha(x_0,a_1,(b_{\alpha,j})_{j \in k})$. Let $\psi_\alpha'(x_0,x_1,z_\alpha)$ be the formula $\bigwedge \Sigma_\alpha(x_0,x_1,z_\alpha)$. Recall that the set $\Sigma_\alpha(x_0,a_1,(b_{\alpha,j})_{j \in k})$ is inconsistent. However, the set $\Sigma_\alpha(x_0,a_1,(b_{\alpha,j})_{j \in \{-k,\dots , -1\}})$ is consistent, by the choice of $F$. By condition 4, if we set $c_{\alpha,l}=(b_{\alpha,j})_{j \in \{lk, lk+1, \dots, lk+(k-1)\}}$, this means that $\psi'_\alpha(x_0,a_1,c_{\alpha,l})$ is consistent if and only if $l < 0$. The rest of the proof is almost identical to that of the srd-case. \end{proof} From this theorem we deduce the following corollary. Item 2 is essentially shown in \cite{shelah}. \begin{corollary} \begin{enumerate} \item $\kappa^{m+n}_{\mathrm{ird}}(T) +1 \leq \kappa^{m}_{\mathrm{ird}}(T)+\kappa^{n}_{\mathrm{ird}}(T)$. \item If $\kappa^m_{\mathrm{ird}}(T)$ is infinite, then $\kappa^m_{\mathrm{ird}}(T)=\kappa^1_{\mathrm{ird}}(T)=\kappa_{\mathrm{ird}}(T)$. \end{enumerate} \end{corollary}
TITLE: What is the point of algebraic logic? QUESTION [4 upvotes]: The question is worded a bit unfortunately. Honestly, I find algebraic logic to be one of the most interesting subjects I've ever encountered. But I find myself unable to articulate why it's interesting, other than for its own sake. Is there any reason, practically speaking, for taking this approach to logic? REPLY [1 votes]: The term "algebraic logic" is ambiguous and can have several interpretations. One such interpretation was proposed by Paul Halmos, who was indeed advocating a rather radical scheme of algebraisation of logic in terms of polyadic algebras. The main advantage of his approach is to gain on the philosophical battleground, as a propping-up of naive set-theoretic realism. A recent article summarizes the issue as follows. We examine Paul Halmos’ comments on category theory, Dedekind cuts, devil worship, logic, and Robinson’s infinitesimals. Halmos’ scepticism about category theory derives from his philosophical position of naive set-theoretic realism. In the words of an MAA biography, Halmos thought that mathematics is “certainty” and “architecture”, yet what 20th century logic teaches us is that mathematics is full of uncertainty, or more precisely incompleteness. If the term architecture was meant to imply that mathematics is one great solid castle, then modern logic tends to teach us the opposite lesson, namely that the castle is floating in midair. Halmos’ realism tends to color his judgment of purely scientific aspects of logic and the way it is practiced and applied. He often expressed distaste for nonstandard models, and made a sustained effort to eliminate first-order logic, the logicians’ concept of interpretation, and the syntactic vs semantic distinction. He felt that these were vague, and sought to replace them all by his polyadic algebra.
Halmos claimed that Robinson’s framework is “unnecessary”, but Henson and Keisler argue that it allows one to dig deeper into set-theoretic resources than is common in Archimedean mathematics, and can thereby prove theorems not accessible by standard methods, undermining Halmos’ criticisms.
Numerous animal lineages have expanded and diversified the opsin-based photoreceptors in their eyes underlying color vision behavior. However, the selective pressures giving rise to new photoreceptors and their spectral tuning remain mostly obscure. Previously, we identified a violet receptor (UV2) that is the result of a UV opsin gene duplication specific to Heliconius butterflies. At the same time the violet receptor evolved, Heliconius evolved UV-yellow coloration on their wings, due to the pigment 3-hydroxykynurenine (3-OHK) and the nanostructure architecture of the scale cells. In order to better understand the selective pressures giving rise to the violet receptor, we characterized opsin expression patterns using immunostaining (14 species) and RNA-Seq (18 species), and reconstructed evolutionary histories of visual traits in five major lineages within Heliconius and one species from the genus Eueides. Opsin expression patterns are hyperdiverse within Heliconius. We identified six unique retinal mosaics and three distinct forms of sexual dimorphism based on ommatidial types within the genus Heliconius. Additionally, phylogenetic analysis revealed independent losses of opsin expression, pseudogenization events, and relaxation of selection on UVRh2 in one lineage. Despite this diversity, the newly evolved violet receptor is retained across most species and sexes surveyed. Discriminability modeling of behaviorally preferred 3-OHK yellow wing coloration suggests that the violet receptor may facilitate Heliconius color vision in the context of conspecific recognition. Our observations give insights into the selective pressures underlying the origins of new visual receptors. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved.
Keywords: butterflies; color vision; gene duplication; photoreceptor cells; pseudogenes; short-wavelength opsin
Tuesday, August 09, 2011 sweet fruit pizza Always on the lookout for tasty recipes without animal products, I invented the sweet fruit pizza. The dough is basically a pizza dough, with the addition of 4 tablespoons of sugar. I also exchanged the water in the original recipe for vanilla soy milk. Finally, all you have to do is spread out the dough on a baking tray and cover it with seasonal fruits. In the last few months I have read plenty about the positive effects of a vegan diet. Ever since, I have tried to cut animal products down to a minimum in my kitchen. However, most vegan recipes leave me wishing for more excitement and more variety, not just the plain taste of health. My biggest goal is to create a kitchen so delicious and gourmet that guests won't even know it is composed entirely without animal products. The tea is a rooibos called "Capetown". The trigger to buy it was of course the name, and it hasn't failed me: every time I take a sip I sink into pleasant nostalgia. Sweet pizza dough Very good and different! I'm not vegetarian, but I eat very little meat and use dairy. I like the way you challenge yourself to achieve great flavor while doing without animal products. Very good!!!
\begin{document} \title[Finite order spreading models]{Finite order spreading models} \author{S. A. Argyros, V. Kanellopoulos and K. Tyros} \address{National Technical University of Athens, Faculty of Applied Sciences, Department of Mathematics, Zografou Campus, 157 80, Athens, Greece} \email{sargyros@math.ntua.gr} \email{bkanel@math.ntua.gr} \email{ktyros@central.ntua.gr} \begin{abstract} Extending the classical notion of the spreading model, the $k$-spreading models of a Banach space are introduced, for every $k\in\nn$. The definition, which is based on the $k$-sequences and plegma families, reveals a new class of spreading sequences associated to a Banach space. Most of the results of the classical theory are stated and proved in the higher order setting. Moreover, new phenomena like the universality of the class of the 2-spreading models of $c_0$ and the composition property are established. As a consequence, a problem concerning the structure of the $k$-iterated spreading models is solved. \end{abstract} \thanks{2010 \textit{Mathematics Subject Classification}: 46B03, 46B06, 46B25, 46B45, 05D10} \thanks{\textit{Keywords}: Spreading models, Ramsey theory} \thanks{This research is partially supported by NTUA Programme PEBE 2009 and it is part of the PhD Thesis of the third named author} \maketitle \section*{Introduction} The present work was motivated by a problem of E. Odell and Th. Schlumprecht concerning the structure of the $k$-iterated spreading models of the Banach spaces. Our attempt to answer the problem led to the $k$-spreading models which in turn are based on the $k$-sequences and plegma families. The aim of this paper is to introduce the above concepts and to develop a theory yielding, among others, a solution to the aforementioned problem. Spreading models, invented by A. Brunel and L. Sucheston (c.f. \cite{BS}), play a key role in modern Banach space theory.
Let us recall that a spreading model of a Banach space $X$ is a spreading sequence\footnote{A sequence $(e_n)_{n}$ in a seminormed space $(E,\|\cdot\|_*)$ is called spreading if for every $n\in\nn$, $k_1<\ldots<k_n$ in $\nn$ and $a_1,\ldots,a_n\in\rr$ we have that $\|\sum_{j=1}^na_j e_j\|_*=\|\sum_{j=1}^n a_j e_{k_j}\|_*$. In the literature the term ``spreading model'' usually indicates the space generated by the corresponding spreading sequence rather than the sequence itself. We have chosen to use the term for the spreading sequence and whenever we refer to an $\ell^p$ or $c_0$ spreading model we shall mean that the spreading sequence is equivalent to the usual basis of the corresponding space.} generated by a sequence of $X$. The spreading sequences have regular structure and the spreading models act as the tool for realizing that structure in the space $X$ in an asymptotic manner. This, together with Brunel-Sucheston's discovery that every bounded sequence has a subsequence generating a spreading model, determines the significance and importance of this concept. For a comprehensive presentation of the theory of the spreading models we refer the interested reader to the monograph of B. Beauzamy and J.-T. Laprest\'e (c.f. \cite{BL}). Iteration is naturally applicable to spreading models. Thus one could define the 2-iterated spreading models of a Banach space $X$ to be the spreading sequences which occur as spreading models of the spaces generated by spreading models of $X$. Further iteration yields the $k$-iterated spreading models of $X$, for every $k\in\nn$. Iterated spreading models appeared in the literature shortly after Brunel-Sucheston's invention. Indeed, B. Beauzamy and B. Maurey in \cite{BM}, answering a problem of H.P. Rosenthal, showed that the class of the 2-iterated spreading models does not coincide with the corresponding one of the spreading models.
In particular, they constructed a Banach space admitting the usual basis of $\ell^1$ as a $2$-iterated spreading model and not as a spreading model. E. Odell and Th. Schlumprecht in \cite{O-S} asked whether or not every Banach space admits a $k$-iterated spreading model equivalent to the usual basis of $\ell^p$, for some $1\leq p<\infty$, or $c_0$. Let us also point out that in the same paper they provided a reflexive space $\mathfrak{X}$ with an unconditional basis such that no $\ell^p$ or $c_0$ is embedded into the space generated by any spreading model of the space. This remarkable result answered a long standing problem of the Banach space theory. Our approach uses the $k$-spreading models which in many cases include the $k$-iterated ones. The $k$-spreading models are always spreading sequences $(e_n)_n$ in a seminormed space $E$. They are generated by $k$-sequences $(x_s)_{s\in[\nn]^k}$, where $[\nn]^k$ denotes the family of all $k$-subsets of $\nn$. A critical ingredient in the definition are the plegma families $(s_j)_{j=1}^l$ of elements of $[\nn]^k$, described as follows. A finite sequence $(s_j)_{j=1}^l$ in $[\nn]^k$ is a plegma family if its elements satisfy the following order relation: for every $1\leq i\leq k$, $s_1(i)<\ldots<s_l(i)$ and for every $1\leq i< k$, $s_l(i) < s_1(i+1)$. The plegma families, as they are used in the definition, force a weaker asymptotic relation of the $k$-spreading models to the space $X$, as $k$ increases. For $k=1$, the plegma families coincide with the finite subsets of $\nn$, yielding that the new definition of the 1-spreading models recovers the classical one. For $k>1$, the plegma families have a quite strict behavior which is described in the first section of the paper. Of independent interest is also Lemma \ref{intro_denslem} stated below. The $k$-spreading models of a Banach space $X$ are denoted by $\mathcal{SM}_k(X)$ and they define an increasing sequence.
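The order relation above is concrete enough to check by machine. As an illustration (not part of the paper itself; the helper names `is_plegma` and `is_plegma_via_union` are our own, invented for this sketch), here is a minimal Python check of the plegma condition for a family given as a list of strictly increasing integer tuples $s_1,\ldots,s_l$:

```python
# Sketch of the plegma condition (illustrative; helper names are ours).
# A family is a list of l tuples, each a strictly increasing k-tuple
# of positive integers, standing for s_1, ..., s_l.
from itertools import chain


def is_plegma(family):
    """Check conditions (i) and (ii) of the plegma definition."""
    l, k = len(family), len(family[0])
    assert all(len(s) == k for s in family)
    # (i) s_1(i) < ... < s_l(i) for every coordinate i
    for i in range(k):
        for j in range(1, l):
            if not family[j - 1][i] < family[j][i]:
                return False
    # (ii) s_l(i) < s_1(i+1) for every i < k
    return all(family[-1][i] < family[0][i + 1] for i in range(k - 1))


def is_plegma_via_union(family):
    """Equivalent view: sorting the union of the l sets lists all first
    coordinates, then all second coordinates, and so on; i.e. s_j(i)
    lands at (0-based) position i*l + j of the sorted union."""
    l, k = len(family), len(family[0])
    flat = sorted(chain.from_iterable(family))
    if len(set(flat)) != k * l:  # plegma forces all k*l values distinct
        return False
    return all(family[j][i] == flat[i * l + j]
               for i in range(k) for j in range(l))


# A plegma pair in [N]^2: condition (ii) asks s_2(1) < s_1(2), i.e. 2 < 5.
print(is_plegma([(1, 5), (2, 7)]))            # True
print(is_plegma([(1, 5), (6, 7)]))            # False: 6 < 5 fails
print(is_plegma_via_union([(1, 5), (2, 7)]))  # True
```

For $k=1$ both checks accept any strictly increasing tuple of singletons, and for $l=1$ any single $k$-set, matching the identifications of plegma families with $[M]^l$ and $[M]^k$ in those degenerate cases.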
As the definition easily yields, the same holds for the $k$-iterated ones. Similarly to the classical case, for every bounded $k$-sequence $(x_s)_{s\in[\nn]^k}$ there exists an infinite subset $L$ of $\nn$ such that the $k$-subsequence $(x_s)_{s\in[L]^k}$ generates a $k$-spreading model. The advantage of the $k$-spreading models is that, unlike the $k$-iterated ones, for $k\geq2$, the space $X$ directly determines their norm, through the $k$-sequences. Moreover, the $k$-spreading models have a transfinite extension yielding a hierarchy of $\xi$-spreading models for all $\xi<\omega_1$. The definition and the study of this hierarchy are more involved and will be presented elsewhere. We should also mention that L. Halbeisen and E. Odell (c.f. \cite{H-O}) introduced the asymptotic models which share some common features with the 2-spreading models. The asymptotic models are associated to bounded 2-sequences $(x_s)_{s\in[\nn]^2}$ and they are not necessarily spreading sequences. The paper mainly concerns the definition and the study of the $k$-spreading models. Highlighting the results of the paper we should mention the universal property satisfied by the 2-spreading models of $c_0$. More precisely, it is shown that every spreading sequence is isomorphically equivalent to some 2-spreading model of $c_0$. As the spaces generated by $k$-iterated spreading models of $c_0$ are isomorphic to $c_0$, the previous result shows that the $k$-spreading models do not coincide with the $k$-iterated ones. The composition property is also established. Roughly speaking, under some natural conditions, the $d$-spreading model of a $k$-spreading model of a Banach space $X$ is a $(k+d)$-spreading model of $X$. This result is used for showing that a special class of the $k$-iterated spreading models are actually $k$-spreading models. We also extend to the higher order setting several results of the classical spreading model theory.
Among others, we provide conditions for the $k$-sequences to generate unconditional spreading models and we study properties like non-distortion and duality of $\ell^1$ and $c_0$ $k$-spreading models. Moreover, we introduce Ces\`aro summability for $k$-sequences and we prove the following, which extends a classical theorem due to H.P. Rosenthal (c.f. \cite{M,Ro}). \begin{thm} Let $X$ be a Banach space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a weakly relatively compact $k$-sequence in $X$, i.e. $\overline{\{x_s:s\in[\nn]^k\}}^w$ is $w$-compact. Then there exists $M\in [\nn]^\infty$ such that at least one of the following holds: \begin{enumerate} \item[(1)] The subsequence $(x_s)_{s\in[M]^k}$ generates a $k$-spreading model equivalent to the usual basis of $\ell^1$. \item[(2)] There exists $x_0\in X$ such that for every $L\in [M]^\infty$, $(x_s)_{s\in [L]^k}$ is $k$-Ces\`aro summable to $x_0$. \end{enumerate} \end{thm} There are significant differences between the cases $k=1$ and $k\geq2$. First, for $k=1$ the two alternatives are exclusive, which does not remain valid for $k\geq 2$. Second, the proof for the case $k\geq 2$ uses the following density result concerning plegma families, which is a consequence of the multidimensional Szemer\'edi theorem due to H. Furstenberg and Y. Katznelson (c.f. \cite{FK}). \begin{lem}\label{intro_denslem} Let $\delta>0$ and $k, l\in\nn$. Then there exists $n_0\in \nn$ such that for every $n\geq n_0$ and every subset $\mathcal{A}$ of the set of all $k$-subsets of $\{1,\ldots, n\}$ of size at least $\delta \binom{n}{k}$, there exists a plegma $l$-tuple $(s_j)_{j=1}^l$ in $\mathcal{A}$. \end{lem} We close the paper with two examples. The first one is a Banach space similar to the aforementioned one of Odell-Schlumprecht. It is proved that no $k$-spreading model of the space is isomorphic to some $\ell^p$, $1\leq p<\infty$, or $c_0$.
The composition property, mentioned above, yields that the same holds for the $k$-iterated spreading models and thus the answer to the aforementioned Odell-Schlumprecht problem is a negative one. In the second example, for every $k\in\nn$ we present a space $\mathfrak{X}_{k+1}$ admitting the usual basis of $\ell^1$ as a $(k+1)$-spreading model, while for every $d\leq k$, $\mathfrak{X}_{k+1}$ does not admit $\ell^1$ as a $d$-spreading model. As we have mentioned, the corresponding problem for $k$-iterated spreading models has been answered in \cite{BM} for $k+1=2$. It seems that for $k>1$ this problem is still open. However, recently the $(k+1)$-iterated spreading models have been separated from the $k$-iterated ones in \cite{AM}. The proofs in both examples make use of the results exhibited in the previous sections of the paper. \subsection*{Notation} By $\nn=\{1,2,\ldots\}$ we denote the set of all positive integers. We will use capital letters such as $L,M,N,\ldots$ (resp. lower case letters such as $s,t,u,\ldots$) to denote infinite subsets (resp. finite subsets) of $\nn$. For every infinite subset $L$ of $\nn$, the notation $[L]^\infty$ (resp. $[L]^{<\infty}$) stands for the set of all infinite (resp. finite) subsets of $L$. For every $s\in[\nn]^{<\infty}$, by $|s|$ we denote the cardinality of $s$. For $L\in[\nn]^\infty$ and $k\in\nn$, $[L]^k$ (resp. $[L]^{\leq k}$) is the set of all $s\in[L]^{<\infty}$ with $|s|=k$ (resp. $|s|\leq k$). For every $s,t\in[\nn]^{<\infty}$, we write $s<t$ if either at least one of them is the empty set, or $\max s<\min t$. Throughout the paper we shall identify strictly increasing sequences in $\nn$ with their corresponding range, i.e. we view every strictly increasing sequence in $\nn$ as a subset of $\nn$ and conversely every subset of $\nn$ as the sequence resulting from the increasing ordering of its elements.
Thus, for an infinite subset $L=\{l_1<l_2<\ldots\}$ of $\nn$ and $i\in\nn$, we set $L(i)=l_i$ and similarly, for a finite subset $s=\{n_1<\ldots<n_k\}$ of $\nn$ and for $1\leq i\leq k$, we set $s(i)=n_i$. Also, for every $L, N\in [\nn]^\infty$ and $s\in [\nn]^{<\infty}$, we set $L(N)=\{L(N(i)):i\in\nn\}$ and $L(s)=\{L(s(i)): 1\leq i\leq |s|\}$. Similarly, for every $s\in [\nn]^{k}$ and $ F\subseteq \{1,...,k\}$, we set $s(F)=\{s(i):i\in F\}$. Also for $1\leq m\leq k$, we set $s|m=\{s(i): 1\leq i\leq m\}$. For every $s,t\in[\nn]^{<\infty}$, we write $s\sqsubseteq t$ (resp. $s\sqsubset t$) to denote that $s$ is an initial (resp. \emph{proper} initial) segment of $t$. Given two sequences $(s^1_j)_{j=1}^{l_1}$ and $(s^2_j)_{j=1}^{l_2}$ in $[\nn]^{<\infty}$, by $(s^1_j)_{j=1}^{l_1\;\;\smallfrown}(s^2_j)_{j=1}^{l_2}$, we denote their concatenation. Similarly for more than two sequences. For a Banach space $X$ with a Schauder basis $(e_n)_n$ and every $x\in X$, $x=\sum_n \lambda_n e_n$ we write $\text{supp}(x)$ to denote the support of $x$, i.e. $\text{supp}(x)=\{n\in\nn:\lambda_n\neq 0\}$. If the support of $x$ is finite and $E\subseteq \nn$ then by $E(x)$, we denote the restriction of $x$ to $E$, namely $E(x)=\sum_{n\in E}\lambda_n e_n$. Two sequences $(x_n)_n$ and $(y_n)_n$, not necessarily in the same Banach space, will be called isometric (resp. equivalent) if (resp. there exists $0<c\leq C$ such that) for every $n\in\nn$ and $a_1,\ldots,a_n\in\rr$ we have that $\|\sum_{i=1}^na_ix_i\|=\|\sum_{i=1}^na_iy_i\|$ (resp. $c\|\sum_{i=1}^na_ix_i\|\leq\|\sum_{i=1}^na_iy_i\|\leq C\|\sum_{i=1}^na_ix_i\|$). Generally, concerning Banach space theory, the notation and the terminology that we follow are the standard ones (see \cite{AK} and \cite{Lid-Tza}). \section{Plegma families in $[\nn]^k$} As we have already mentioned, the basic ingredients of the definition of the $k$-spreading models are the $k$-sequences and the plegma families.
In this section we introduce the plegma families as well as the related notions of the plegma paths and the plegma preserving maps. \subsection{Definition and basic properties}\label{section admissibility} We start with the definition of the plegma families. \begin{defn}\label{defn plegma} Let $k\in\nn$ and $M\in [\nn]^\infty$. A plegma family in $[M]^k$ is a finite sequence $(s_j)_{j=1}^l$ in $[M]^k$ satisfying the following properties. \begin{enumerate} \item[(i)] For every $1\leq i\leq k$, $s_1(i)<\ldots<s_l(i)$. \item[(ii)] For every $1\leq i< k$, $s_l(i) < s_1(i+1)$. \end{enumerate} For each $l\in \nn$, the set of all sequences $(s_j)_{j=1}^l$ which are plegma families in $[M]^k$ will be denoted by $\textit{Plm}_l([M]^k)$. We also set $\textit{Plm}([M]^k)=\bigcup_{l=1}^\infty\textit{Plm}_l([M]^k)$. \end{defn} Notice that for $l=1$ and every $k\in \nn$, we have $\textit{Plm}_1([M]^k)=[M]^k$. Moreover, for $k=1$ and every $l\in\nn$, $\textit{Plm}_l([M]^1)=[M]^{l}$. In the sequel the elements of $\textit{Plm}_2([M]^k)$ will be called plegma pairs in $[M]^k$. \begin{rem} Although the notion of the plegma family is natural, it does not seem to have appeared in the literature. As was pointed out to us by S. Todorcevic, a concept that is slightly reminiscent of plegma pairs in $[\nn]^3$ is given by E. Specker in \cite{Sp}. \end{rem} In the next proposition we gather some useful properties of plegma families. The proof is straightforward. \begin{prop} \label{rem34} Let $k,l\in\nn$, $M\in[\nn]^\infty$ and $(s_j)_{j=1}^l$ be a finite sequence in $[M]^k$. \begin{enumerate} \item[(i)] $(s_j)_{j=1}^l\in \textit{Plm}_l([M]^k)$ if and only if there exists $F\in[M]^{kl}$ such that $s_j(i)=F((i-1)l+j)$, for every $1\leq i\leq k$ and $1\leq j\leq l$. \item[(ii)] If $(s_j)_{j=1}^l\in \textit{Plm}_l([M]^k)$ then $(s_{j_p})_{p=1}^m\in\textit{Plm}_m([M]^k)$, for every $1\leq m\leq l$ and $1\leq j_1<\ldots<j_m\leq l$.
\item[(iii)] $(s_j)_{j=1}^l\in\textit{Plm}_l([M]^k)$ if and only if $(s_{j_1},s_{j_2})$ is a plegma pair in $[M]^k$, for every $1\leq j_1<j_2\leq l$. \item[(iv)] If $(s_j)_{j=1}^l\in\textit{Plm}_l([M]^k)$ then $(s_j(F))_{j=1}^l\in\textit{Plm}_l([M]^{|F|})$, for every non empty $ F\subseteq \{1,...,k\}$. \end{enumerate} \end{prop} \begin{thm} \label{ramseyforplegma} Let $M$ be an infinite subset of $\nn$ and $k,l\in\nn$. Then for every finite partition $\textit{Plm}_l([M]^k)=\bigcup_{j=1}^p P_j$, there exist $L\in[M]^\infty$ and $1\leq j_0\leq p$ such that $\textit{Plm}_l([L]^k)\subseteq P_{j_0}$. \end{thm} \begin{proof} By Proposition \ref{rem34} (i), we conclude that the map sending each plegma family $(s_j)_{j=1}^l$ in $[M]^k$ to its union $\bigcup_{j=1}^ls_j$ is a bijection from $\textit{Plm}_l([M]^k)$ onto $[M]^{kl}$. Therefore the partition of $\textit{Plm}_l([M]^k)$ induces a corresponding one on $[M]^{kl}$ and the conclusion easily follows by applying Ramsey's theorem \cite{R}. \end{proof} \subsection{Plegma paths in $[\nn]^k$} In this subsection we introduce the definition of the plegma paths. As we shall see in the sequel, the plegma paths play an important role in the development of the theory of $k$-spreading models. \begin{defn} Let $l,k\in\nn$ and $M\in[\nn]^\infty$. We will say that a finite sequence $(s_j)_{j=0}^l$ is a plegma path of length $l$ from $s_0$ to $s_l$ \textit{in} $[M]^k$, if $(s_{j-1},s_{j})$ is a plegma pair in $[M]^k$, for every $1\leq j\leq l$. \end{defn} \begin{lem}\label{lemma conserning the length of the plegma path} Let $k\in\nn$ and $(s_j)_{j=0}^l$ be a plegma path in $[\nn]^k$. If $s_0<s_l$ then $l\geq k$. \end{lem} \begin{proof} Suppose on the contrary that $s_0<s_l$ and $l<k$. Since $(s_{j-1}, s_j)$ is a plegma pair in $[\nn]^k$, we have $s_j(i_1)<s_{j-1}(i_2)$, for every $1\leq j\leq l$ and $1\leq i_1<i_2\leq k$. Hence, $s_l(1)<s_{l-1}(2)<s_{l-2}(3)<\ldots<s_0(l+1)\leq s_0(k)$, which contradicts that $s_0<s_l$.
\end{proof} \begin{defn} Let $k\in\nn$ and $M\in[\nn]^\infty$. An $s\in [M]^k$ will be called skipped in $M$ if for every $1\leq i<k$ there exists $m\in M$ such that $s(i)<m<s(i+1)$. The set of all skipped $s\in [M]^k$ in $M$ will be denoted by $[M]^k_\shortparallel$. \end{defn} \begin{rem}\label{rem453} Notice that for every $l\in\nn$ and $s\in [M]^k_\shortparallel$ there exists a plegma path $(s_j)_{j=0}^l$ in $[M]^k$ with $s_0=s$. \end{rem} \begin{prop}\label{accessing everything with plegma path of length |s_0|} Let $k\in\nn$ and $M\in[\nn]^\infty$. Then for every $s,t\in[M]_\shortparallel^k$ with $s<t$ there exists a plegma path of length $k$ in $ [M]^k $ from $s$ to $t$. Moreover, every plegma path in $[\nn]^k$ from $s$ to $t$ has length at least $k$. \end{prop} \begin{proof} Fix $s,t\in[M]_\shortparallel^k$ with $s<t$. It is clear that we may choose $\tilde{s}, \tilde{t}\in [M]^{2k-1}$ such that $\tilde{s}(2i-1)=s(i)$ and similarly $\tilde{t}(2i-1)=t(i)$, for every $1\leq i\leq k$. For every $0\leq j\leq k$, we set \[s_j=\big\{\tilde{s}(2i-1+j):1\leq i\leq k-j\big\}\cup\big\{\tilde{t}(2i-1+k-j):1\leq i\leq j\big\}.\] It is easy to check that $s_0=s$, $s_k=t$ and $(s_j)_{j=0}^k$ is a plegma path in $[M]^k$. Moreover, by Lemma \ref{lemma conserning the length of the plegma path}, every plegma path in $[\nn]^k$ from $s$ to $t$ is of length at least $k$. Hence $(s_j)_{j=0}^k$ is a plegma path from $s$ to $t$ in $[M]^k$ with the least possible length and the proof is complete. \end{proof} \begin{rem}\label{graphs} In terms of graph theory the above proposition states that in the directed graph with vertices the elements of $ [\nn]^k$ and edges the plegma pairs $(s,t)$ in $[\nn]^k$, the distance between two vertices $s$ and $t$ with $s<t$ is equal to $k$. \end{rem} \subsection{Plegma families and mappings} \begin{defn} Let $k_1,k_2\in\nn$, $M\in[\nn]^\infty$ and $\varphi:[M]^{k_1}\to [\nn]^{k_2}$.
We will say that the map $\varphi$ is plegma preserving from $[M]^{k_1}$ into $[\nn]^{k_2}$ if for every plegma family $(s_j)_{j=1}^l$ in $[M]^{k_1}$, $(\varphi(s_j))_{j=1}^l$ is a plegma family in $[\nn]^{k_2}$. \end{defn} \begin{rem}\label{rem3456} Let $k_1,k_2\in\nn$. If $k_1<k_2$ then for every $M\in[\nn]^\infty$ there exists a plegma preserving map from $[M]^{k_2}$ onto $[M]^{k_1}$. For instance, by Proposition \ref{rem34}, the map $s\to s|k_1$ is plegma preserving from $[M]^{k_2}$ onto $[M]^{k_1}$. \end{rem} In contrast to the above remark we have the following. \begin{thm}\label{non plegma preserving maps} Let $k_1,k_2\in\nn$. If $k_1<k_2$ then for every $M\in[\nn]^\infty$ and $\varphi:[M]^{k_1}\to [\nn]^{k_2}$ there exists $L\in[M]^\infty$ such that for every plegma pair $(s_1,s_2)$ in $[ L]^{k_1}$ neither $(\varphi(s_1),\varphi(s_2))$ nor $(\varphi(s_2),\varphi(s_1))$ is a plegma pair in $[\nn]^{k_2}$. In particular, there exists no $L\in[M]^\infty$ such that the map $\varphi$ is plegma preserving from $[L]^{k_1}$ into $[\nn]^{k_2}$. \end{thm} \begin{proof} Let $M\in[\nn]^\infty$ and $\varphi:[M]^{k_1}\to [\nn]^{k_2}$. We set $P_1$ (resp. $P_2$) to be the set of all $(s_1,s_2)\in \textit{Plm}_2([M]^{k_1})$ such that $(\varphi(s_1),\varphi(s_2))$ (resp. $(\varphi(s_2),\varphi(s_1))$) is a plegma pair in $[\nn]^{k_2}$ and $P_3=\textit{Plm}_2([M]^{k_1})\setminus (P_1\cup P_2)$. By Theorem \ref{ramseyforplegma} there exist $i\in\{1,2,3\}$ and $L\in [M]^\infty$ such that $\textit{Plm}_2([L]^{k_1})\subseteq P_i$. It remains to show that $i=3$. Indeed, assume that $i=2$. By Remark \ref{rem453} we may choose a plegma path $(s_j)_{j=0}^l$ in $[L]^{k_1}$ with $\min(\varphi(s_0))<l$. For every $0\leq j\leq l$, we set $n_j=\min(\varphi(s_j))$. Since $\textit{Plm}_2([L]^{k_1})\subseteq P_2$, we have that $(n_j)_{j=0}^l$ is a strictly decreasing sequence in $\nn$ with length $l+1$. Since $n_0<l$ this is impossible. It remains to show that $i\neq 1$. Indeed, assume on the contrary.
Then notice that $\varphi$ transforms every plegma path in $[L]^{k_1}$ to a plegma path of equal length in $[\nn]^{k_2}$. Using Remark \ref{rem453}, it is easy to see that we may choose $s<t$ in $[L]_\shortparallel^{k_1}$ such that $\varphi(s)<\varphi(t)$ and $\varphi(s), \varphi(t)\in [\nn]_\shortparallel^{k_2}$. By Proposition \ref{accessing everything with plegma path of length |s_0|} and Remark \ref{graphs}, we have that the distance of $s,t$ is equal to $k_1$ while that of $\varphi(s), \varphi(t)$ is equal to $k_2$. But since $s,t$ are joined by a plegma path of length $k_1$ and $\varphi$ preserves plegma paths we have that the distance of $\varphi(s), \varphi(t)$ is at most $k_1$. Hence $k_2\leq k_1$, a contradiction. \end{proof} \begin{prop}\label{lemma making a hereditary nonconstant function, nonconstant on plegma pairs} Let $A$ be a set, $k\in\nn$, $M\in[\nn]^\infty$ and $\varphi:[M]^k\to A$. Then there exists $L\in [M]^\infty$ such that either the restriction of $\varphi$ on $[L]^k$ is constant or for every plegma pair $(s_1,s_2)$ in $[ L]^k$, $\varphi(s_1)\neq\varphi(s_2)$. \end{prop} \begin{proof} By Theorem \ref{ramseyforplegma} there exists $N\in [M]^\infty$ such that exactly one of the following is satisfied. \begin{enumerate} \item[(i)] For every plegma pair $(s_1,s_2)$ in $[N]^k$, $\varphi(s_1)=\varphi(s_2)$. \item[(ii)] For every plegma pair $(s_1,s_2)$ in $[ N]^k$, $\varphi(s_1)\neq\varphi(s_2)$. \end{enumerate} Therefore, it suffices to show that the first alternative implies that there exists $L\in [N]^\infty$ such that $\varphi$ is constant on $[L]^k$. Indeed, let $s=\{N(2),N(4),\ldots,N(2k)\}$, $L=\{N(2n): n\geq k+1\}$ and $t\in [L]^k$. Observe that $s<t$ and $s,t\in[N]^k_\shortparallel$ and therefore, by Proposition \ref{accessing everything with plegma path of length |s_0|}, there exists a plegma path $(s_j)_{j=0}^k$ of length $k$ in $[N]^k$ with $s_0=s$ and $s_k=t$.
Assuming that (i) holds, we get that \[\varphi(s)=\varphi(s_0)=\varphi(s_1)=\ldots=\varphi(s_k)=\varphi(t).\] Hence for every $t\in[L]^k$, $\varphi(t)=\varphi(s)$, i.e. $\varphi$ is constant on $[L]^k$. \end{proof} \section{Spreading sequences} We recall that a sequence $(e_n)_{n}$ in a seminormed linear space $(E,\|\cdot\|_*)$ is called \emph{spreading} if it is isometric to any of its subsequences, i.e. for every $n\in\nn$, $a_1,\ldots,a_n\in \rr$ and $k_1<\ldots<k_n$ in $\nn$ we have that $\|\sum_{j=1}^na_je_j\|_*=\|\sum_{j=1}^na_je_{k_j}\|_*$. In this section we will briefly discuss the norm properties of the spreading sequences. The interested reader can find a detailed analysis in the monographs \cite{AK} and \cite{BL}. The proof of the following result shares similar ideas with that of Proposition I.1.B.2 in \cite{BL}. \begin{prop}\label{sing} Let $(E,\|\cdot\|_*)$ be a seminormed linear space and $(e_n)_{n}$ be a spreading sequence in $E$. Then the following are equivalent. \begin{enumerate} \item[(i)] There exist $n\in\nn$ and $a_1,\ldots,a_n\in\rr$ not all zero, with $\|\sum_{i=1}^na_ie_i\|_*=0$. \item[(ii)] For every $n,m\in\nn$, $\|e_n-e_m\|_*=0$. \item[(iii)] For every $n\in\nn$ and $a_1,\ldots,a_n\in\rr$, $ \|\sum_{i=1}^na_ie_i\|_*=|\sum_{i=1}^na_i|\cdot \|e_1\|_*$. \end{enumerate} \end{prop} Spreading sequences in seminormed linear spaces satisfying (i)-(iii) of the above proposition will be called \emph{trivial}. By (i) we have that if $(e_n)_{n}$ is non trivial, then $(e_n)_{n}$ is linearly independent and the restriction of the seminorm $\|\cdot\|_*$ to the linear subspace of $E$ generated by $(e_n)_n$ is actually a norm. Therefore, every non trivial spreading sequence generates a Banach space. We classify the non trivial spreading sequences into the following three categories: \begin{enumerate} \item the \emph{singular} spreading sequences, i.e. the non trivial spreading sequences which are not Schauder basic sequences,
\item the \emph{unconditional} spreading sequences and \item the \emph{conditional Schauder basic} spreading sequences, i.e. the non trivial spreading sequences which are Schauder basic but not unconditional. \end{enumerate} The next two results are restatements of Propositions I.1.4 and I.4.2 of \cite{BL}, respectively. \begin{prop}\label{equiv forms for 1-subsymmetric weakly null} Let $(e_n)_{n}$ be a non trivial spreading sequence. Then the following are equivalent. \begin{enumerate} \item[(i)] $(e_n)_{n}$ is unconditional and not equivalent to the usual basis of $\ell^1$. \item[(ii)] $(e_n)_{n}$ is weakly null. \item[(iii)] $(e_n)_{n}$ is Ces\`aro summable to zero. \item[(iv)] $(e_n)_{n}$ is 1-unconditional and not equivalent to the usual basis of $\ell^1$. \end{enumerate} \end{prop} \begin{prop}\label{thmsingular} Let $(e_n)_n$ be a non trivial spreading sequence and $E$ the Banach space generated by $(e_n)_n$. Then $(e_n)_n$ is singular if and only if $(e_n)_n$ is weakly convergent to a nonzero element $e\in E$. \end{prop} \begin{rem}\label{properties of the natural decomposition} Let $(e_n)_n$ be a singular spreading sequence. By the above proposition, we have that $(e_n)_n$ is of the form $e_n=e'_n+e$, where $e$ is nonzero and $(e'_n)_n$ is weakly null. This decomposition of $(e_n)_n$ as $e_n=e'_n+e$ will be called \emph{the natural decomposition} of $(e_n)_n$. It is easy to check that $(e'_n)_{n}$ is non trivial, spreading and not equivalent to the usual basis of $\ell^1$. Hence by Proposition \ref{equiv forms for 1-subsymmetric weakly null}, $(e'_n)_n$ is unconditional, weakly null and Ces\`aro summable to zero. Moreover, if $E$ and $E'$ are the Banach spaces generated by the sequences $(e_n)_n$ and $(e'_n)_n$ respectively, then $E,E'$ are isomorphic and $E=E'\oplus\langle e\rangle$. \end{rem} Finally, for the conditional Schauder basic spreading sequences we have the next characterization, which is a consequence of the above results and Rosenthal's $\ell^1$ theorem \cite{Ro}.
\begin{prop} Let $(e_n)_n$ be a non trivial spreading sequence and $E$ be the Banach space generated by $(e_n)_n$. Then $(e_n)_n$ is a conditional Schauder basic sequence if and only if $(e_n)_n$ is non trivial weak-Cauchy. \end{prop}
\section{$k$-sequences and $k$-spreading models}
In this section we present the definition of the $k$-sequences and we introduce the notion of the $k$-spreading models, for all $k\in\nn$. As we will see, for $k=1$, the definition coincides with the classical one of A. Brunel and L. Sucheston \cite{BS}.
\subsection{Definitions and basic properties}
We start with the definition of the $k$-sequences.
\begin{defn} Let $k\in\nn$ and $X$ be a non empty set. A $k$-sequence in $X$ is a map $\varphi:[\nn]^k\to X$. A $k$-subsequence in $X$ is a map of the form $\varphi:[M]^k\to X$, where $M\in [\nn]^\infty$. \end{defn}
A $k$-sequence $\varphi:[\nn]^k\to X$ will usually be denoted by $(x_s)_{s\in [\nn]^k}$, where $x_s=\varphi(s)$, $s\in [\nn]^k$. Similarly, the notation $(x_s)_{s\in [M]^k}$ stands for the $k$-subsequence $\varphi:[M]^k\to X$.
\begin{defn}\label{Definition of spreading model} Let $X$ be a Banach space, $k\in\nn$, $(x_s)_{s\in [\nn]^k}$ be a $k$-sequence in $X$ and $(E,\|\cdot\|_*)$ be an infinite dimensional seminormed linear space with Hamel basis $(e_n)_{n}$. Also let $M\in[\nn]^\infty$ and $(\delta_n)_n$ be a null sequence of positive reals. We will say that the $k$-subsequence $(x_s)_{s\in [M]^k}$ generates $(e_n)_{n}$ as a $k$-spreading model (with respect to $(\delta_n)_n$), if the following is satisfied.
For every $m,l\in \nn$, with $m\leq l$, every $(s_j)_{j=1}^m\in\textit{Plm}_m([M]^k)$ with $s_1(1)\geq M(l)$ and every choice of $a_1,\ldots,a_m\in[-1,1]$, we have \begin{equation}\label{rsm}\Bigg{|}\Big{\|}\sum_{j=1}^m a_j x_{s_j}\Big{\|}-\Big{\|}\sum_{j=1}^m a_j e_j\Big{\|}_* \Bigg{|}\leq\delta_l\end{equation} \end{defn}
Since $\text{Plm}([\nn]^1)=[\nn]^{<\infty}$, it is clear that for $k=1$, Definition \ref{Definition of spreading model} coincides with the classical definition of a spreading model of an ordinary sequence $(x_n)_n$ in a Banach space $X$. Thus the $1$-spreading models are the usual ones. Moreover, it is easy to see that for every $k\in\nn$, every $k$-spreading model $(e_n)_{n}$ is a spreading sequence. Let us point out here that there exist $k$-sequences in Banach spaces which generate $k$-spreading models that are trivial spreading sequences; in other words (see Proposition \ref{sing}), the seminorm $\|\cdot\|_*$ is not a norm. For instance, this occurs for every constant $k$-sequence $(x_s)_{s\in[\nn]^k}$. We should also point out that even if $(e_n)_n$ is non trivial, it is not necessarily a Schauder basic sequence. More information on this issue is contained in Section \ref{s5}. In the next proposition we state some stability properties of the $k$-spreading models. The proof is straightforward.
\begin{prop}\label{remark on the definition of spreading model} Let $k\in\nn$, $(x_s)_{s\in [\nn]^k}$ be a $k$-sequence in a Banach space $X$, $M\in[\nn]^\infty$ and $(\delta_n)_n$ be a null sequence of positive reals. If $(x_s)_{s\in [M]^k}$ generates a sequence $(e_n)_{n}$ as a $k$-spreading model with respect to $(\delta_n)_n$ then the following are satisfied.
\begin{enumerate}
\item[(i)] For every $L\in[M]^\infty$, $(x_s)_{s\in [L]^k}$ generates $(e_n)_{n}$ as a $k$-spreading model with respect to $(\delta_n)_n$.
\item[(ii)] For every null sequence $(\delta'_n)_{n}$ of positive reals there exists $M'\in[M]^\infty$ such that $(x_s)_{s\in [M']^k}$ generates $(e_n)_{n}$ as a $k$-spreading model with respect to $(\delta'_n)_{n}$.
\item[(iii)] The $k$-sequence $(y_s)_{s\in[\nn]^k}$, defined by $y_s=x_{M(s)}$ (where $M(s)=\{M(n):n\in s\}$), $s\in[\nn]^k$, generates $(e_n)_{n}$ as a $k$-spreading model with respect to $(\delta_n)_n$.
\end{enumerate}
\end{prop}
Let us also notice that for $k=1$ the assertion that (\ref{rsm}) holds for all $m\leq l$ is redundant. This is not the case for $k\geq 2$, since a plegma family in $[\nn]^k$ is not always a subfamily of a larger one. However, the next lemma shows that we may bypass this extra condition by passing to a sparse infinite subset of $\nn$.
\begin{lem}\label{old defn yields new} Let $k\in\nn$, $(x_s)_{s\in [\nn]^k}$ be a $k$-sequence in a Banach space $X$, $L\in[\nn]^\infty$, $(E,\|\cdot\|_*)$ be an infinite dimensional seminormed linear space with Hamel basis $(e_n)_{n}$ and $(\delta_n)_{n}$ be a null sequence of positive reals such that \begin{equation}\label{ewq}\Bigg{|}\Big{\|}\sum_{j=1}^l a_j x_{t_j}\Big{\|}-\Big{\|}\sum_{j=1}^l a_j e_j\Big{\|}_* \Bigg{|}\leq\delta_l\end{equation} for every $l\in\nn$, every $(t_j)_{j=1}^l\in\textit{Plm}_l([L]^k)$ with $t_1(1)\geq L(l)$ and every choice of $a_1,\ldots,a_l\in[-1,1]$. Then there exists $M\in[L]^\infty$ such that $(x_s)_{s\in [M]^k}$ generates $(e_n)_{n}$ as a $k$-spreading model with respect to $(\delta_n)_n$.
\end{lem}
\begin{proof} We choose $M\in [L]^\infty$ such that for every $l\in\nn$ there exist at least $l-1$ elements of $L$ between $M(l)$ and $M(l+1)$. Then notice that for every $m,l\in\nn$ with $m\leq l$ and every $(s_j)_{j=1}^m\in\text{Plm}_m([M]^k)$ with $s_1(1)\geq M(l)$, there exists $(t_j)_{j=1}^l\in\text{Plm}_l([L]^k)$ with $s_j=t_j$ for all $1\leq j\leq m$.
This observation and (\ref{ewq}) easily yield that for every $m,l\in \nn$, with $m\leq l$, every $(s_j)_{j=1}^m\in\textit{Plm}_m([M]^k)$ with $s_1(1)\geq M(l)$ and every choice of $a_1,\ldots,a_m\in[-1,1]$, we have \begin{equation}\Bigg{|}\Big{\|}\sum_{j=1}^m a_j x_{s_j}\Big{\|}-\Big{\|}\sum_{j=1}^m a_j e_j\Big{\|}_* \Bigg{|}\leq\delta_l\end{equation} and the proof is complete.
\end{proof}
\subsection{Existence of $k$-spreading models}
In this subsection we will show that every bounded $k$-sequence in a Banach space $X$ contains a $k$-subsequence which generates a $k$-spreading model. The proof follows similar lines to the corresponding one for the classical spreading models. For $k\in\nn$ and a $k$-sequence $(x_s)_{s\in [\nn]^k}$ in a Banach space $X$, we will say that $(x_s)_{s\in [\nn]^k}$ \emph{admits} $(e_n)_{n}$ as a $k$-spreading model (or $(e_n)_{n}$ \emph{is} a $k$-spreading model of $(x_s)_{s\in [\nn]^k}$) if there exists $M\in[\nn]^\infty$ such that the subsequence $(x_s)_{s\in [M]^k}$ generates $(e_n)_{n}$ as a $k$-spreading model. A $k$-sequence $(x_s)_{s\in [\nn]^k}$ in $X$ will be called \textit{bounded} (resp. \textit{seminormalized}) if there exists $C>0$ (resp. $0<c\leq C$) such that $\|x_s\|\leq C$ (resp. $c\leq\|x_s\|\leq C$), for every $s\in [\nn]^k$.
\begin{thm} For all $k\in \nn$, every bounded $k$-sequence in a Banach space $X$ admits a $k$-spreading model. \end{thm}
\begin{proof} Let $X$ be a Banach space, $k\in\nn$ and $(x_s)_{s\in [\nn]^k}$ be a bounded $k$-sequence in $X$. We fix $C>0$ such that $\|x_s\|\leq C$, for all $s\in[\nn]^k$. We divide the proof into four steps.

\textbf{Step 1.} Let $l\in\nn$, $N\in[\nn]^\infty$ and $\delta>0$. Then there exists $L\in[N]^\infty$ such that \[\Bigg{|}\Big{\|}\sum_{j=1}^la_jx_{t_j}\Big{\|}-\Big{\|}\sum_{j=1}^la_jx_{s_j}\Big{\|}\Bigg{|}\leq\delta\] for every $(t_j)_{j=1}^l, (s_j)_{j=1}^l \in\textit{Plm}_l([L]^k)$ and $a_1,\ldots,a_l\in[-1,1]$.
\begin{proof}[Proof of Step 1:] Let $(\textbf{a}_i)_{i=1}^{n_0}$ be a $\frac{\delta}{3lC}$-net of the unit ball of $\big(\rr^l, \|\cdot\|_\infty\big)$. We set $N_0=N$. By a finite induction on $1\leq i\leq n_0$, we construct a decreasing sequence $N_0\supseteq N_1\supseteq\ldots\supseteq N_{n_0}$ as follows. Suppose that $N_0,\ldots,N_{i-1}$ have been constructed. Let $\textbf{a}_i=(a_j^i)_{j=1}^l$ and let $g_i:\textit{Plm}_l([N_{i-1}]^k)\to[0,lC]$ be defined by $g_i\big((s_j)_{j=1}^l\big)=\|\sum_{j=1}^l a_j^i x_{s_j}\|$. We partition the interval $[0,lC]$ into disjoint intervals of length at most $\frac{\delta}{3}$ and, applying Theorem \ref{ramseyforplegma}, we find $N_i\in[N_{i-1}]^\infty$ such that for every $(t_j)_{j=1}^l,(s_j)_{j=1}^l\in\textit{Plm}_l( [N_i]^k)$, we have $|g_i((t_j)_{j=1}^l)-g_i((s_j)_{j=1}^l)|<\frac{\delta}{3}$. Proceeding in this way we conclude that for every $(s_j)_{j=1}^l,(t_j)_{j=1}^l\in\textit{Plm}_l([N_{n_0}]^k)$ and $1\leq i\leq n_0$, we have that $\Big{|}\|\sum_{j=1}^la_j^ix_{t_j}\|-\|\sum_{j=1}^la_j^ix_{s_j}\|\Big{|}\leq\frac{\delta}{3}$. Since $(\textbf{a}_i)_{i=1}^{n_0}$ is a $\frac{\delta}{3lC}$-net of the unit ball of $(\rr^l, \|\cdot\|_\infty)$ and $\|x_s\|\leq C$, for all $s\in[\nn]^k$, it is easy to see that $L=N_{n_0}$ is as desired. \end{proof}

\textbf{Step 2.} Let $(\delta_n)_n$ be a null sequence of positive real numbers.
Then there exists $M\in[\nn]^\infty$ such that for every $m\leq l$, every $(t_j)_{j=1}^m, (s_j)_{j=1}^m \in\textit{Plm}_m([M]^k)$ with $s_1(1),t_1(1)\geq M(l)$ and $a_1,\ldots,a_m\in[-1,1]$, we have \begin{equation}\label{equo}\Bigg{|}\Big\|\sum_{j=1}^m a_jx_{t_j}\Big\|-\Big\|\sum_{j=1}^m a_jx_{s_j}\Big\|\Bigg{|}\leq\delta_l\end{equation}
\begin{proof}[Proof of Step 2:] By Step 1 and a standard diagonalization we easily obtain an $L\in [\nn]^\infty$ satisfying $\Big{|}\|\sum_{j=1}^l a_jx_{t_j}\|-\|\sum_{j=1}^l a_jx_{s_j}\|\Big{|}\leq\delta_l$, for every $l\in\nn$, every $(t_j)_{j=1}^l, (s_j)_{j=1}^l \in\textit{Plm}_l([L]^k)$ with $s_1(1),t_1(1)\geq L(l)$ and $a_1,\ldots,a_l\in[-1,1]$. By Lemma \ref{old defn yields new}, there exists $M\in [L]^\infty$ satisfying (\ref{equo}). \end{proof}

\textbf{Step 3.} Let $M\in[\nn]^\infty$ be the infinite subset of $\nn$ resulting from Step 2. Also let $l\in\nn$ and $a_1,\ldots,a_l\in\rr$. Then for every sequence $\big((s_j^n)_{j=1}^l\big)_{n}$ with $(s_j^n)_{j=1}^l\in\text{Plm}_l([M]^k)$, for all $n\in\nn$, and $\lim_n s^n_1(1)=+\infty$, the sequence $(\|\sum_{j=1}^la_jx_{s_j^n}\|)_{n}$ is a Cauchy sequence in $[0,+\infty)$. Moreover, $\lim_n\|\sum_{j=1}^la_jx_{s_j^n}\|$ is independent of the choice of the sequence $((s_j^n)_{j=1}^l)_{n}$.
\begin{proof}[Proof of Step 3:] It is straightforward by Step 2. \end{proof}

\textbf{Step 4.} Let $(e_n)_n$ be the natural Hamel basis of $c_{00}(\nn)$. For every $l\in\nn$ and $a_1,\ldots,a_l\in\rr$, we define \[\Big\|\sum_{j=1}^l a_je_j\Big\|_*=\lim_n\|\sum_{j=1}^la_jx_{s_j^n}\|\] where for every $n\in\nn$, $(s_j^n)_{j=1}^l\in\text{Plm}_l([M]^k)$ and $\lim_n s^n_1(1)=+\infty$. Then $\|\cdot\|_*$ is a seminorm on $c_{00}(\nn)$ under which the natural Hamel basis $(e_n)_n$ is a spreading sequence.
Moreover, for all $m\leq l$, $a_1,\ldots, a_m\in[-1,1]$ and $(s_j)_{j=1}^m\in\textit{Plm}_m([M]^k)$ with $s_1(1)\geq M(l)$, we have $\Big{|}\|\sum_{j=1}^m a_jx_{s_j}\|-\|\sum_{j=1}^m a_je_j\|_*\Big{|}\leq\delta_l$.
\begin{proof}[Proof of Step 4:] It follows easily by Steps 2 and 3. \end{proof}
By Step 4, we have that $(x_s)_{s\in [M]^{k}}$ generates $(e_n)_n$ as a $k$-spreading model and the proof is complete.
\end{proof}
\subsection{The increasing hierarchy of $k$-spreading models}
In this subsection we will show that the $k$-spreading models of a Banach space $X$ form an increasing hierarchy. We start with the following lemma which is an easy consequence of Remark \ref{rem3456}.
\begin{lem}\label{propxi} Let $k_1,k_2\in\nn$ with $1\leq k_1<k_2$. Let $X$ be a Banach space and $(w_t)_{t\in [\nn]^{k_1}}$ be a $k_1$-sequence in $X$. Let $(x_s)_{s\in [\nn]^{k_2}}$ be the $k_2$-sequence in $X$ defined by $x_s=w_{s|k_1}$, for every $s\in [\nn]^{k_2}$. Then $(w_t)_{t\in [\nn]^{k_1}}$ and $(x_s)_{s\in [\nn]^{k_2}}$ admit the same spreading models. \end{lem}
For a subset $A$ of $X$ we will say that $A$ \emph{admits} $(e_n)_n$ \emph{as a} $k$-spreading model (or $(e_n)_{n}$ \emph{is} a $k$-spreading model of $A$) if there exists a $k$-sequence $(x_s)_{s\in [\nn]^k}$ in $A$ which admits $(e_n)_{n}$ as a $k$-spreading model.
\begin{notation} Let $X$ be a Banach space, $A\subseteq X$ and $k\in\nn$. The set of all $k$-spreading models of $A$ will be denoted by $\mathcal{SM}_k(A)$. \end{notation}
By Lemma \ref{propxi}, we easily obtain the following.
\begin{cor}\label{incresspr} Let $X$ be a Banach space and $A\subseteq X$. Then for all $k_1,k_2\in\nn$ with $k_1< k_2$, we have $\mathcal{SM}_{k_1}(A)\subseteq \mathcal{SM}_{k_2}(A)$. \end{cor}
In Section \ref{s12}, for each $k\in\nn$, we construct a Banach space $\mathfrak{X}_{k+1}$ such that $\mathcal{SM}_{k}(\mathfrak{X}_{k+1})\subsetneqq\mathcal{SM}_{k+1}(\mathfrak{X}_{k+1})$.
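The lifting underlying Lemma \ref{propxi} rests on an elementary combinatorial fact: restricting each member of a plegma family in $[\nn]^{k_2}$ to its $k_1$ smallest elements yields a plegma family in $[\nn]^{k_1}$. As a purely illustrative aside (not part of the formal argument), this finite statement can be checked mechanically on small instances; the helper names below are ours.

```python
from itertools import combinations

def is_plegma(family):
    # (s_1,...,s_l), with each s_j a sorted k-tuple, is a plegma family iff
    # reading the i-th entries in the order s_1(i),...,s_l(i), i = 1,...,k,
    # produces a single strictly increasing chain:
    #   s_1(1) < ... < s_l(1) < s_1(2) < ... < s_l(2) < ... < s_l(k).
    k = len(family[0])
    if any(len(s) != k for s in family):
        return False
    chain = [s[i] for i in range(k) for s in family]
    return all(a < b for a, b in zip(chain, chain[1:]))

def restrict(s, k1):
    # s|k1: the k1 smallest elements of s (tuples are kept sorted).
    return s[:k1]

# Exhaustive check on a small instance: for every plegma pair in [N]^3,
# the restrictions to the two smallest elements form a plegma pair in [N]^2.
N = range(1, 11)
for s1, s2 in combinations(combinations(N, 3), 2):
    if is_plegma((s1, s2)):
        assert is_plegma((restrict(s1, 2), restrict(s2, 2)))
```

The check succeeds because the interlacing chain of the restricted family is an initial segment of the chain of the original family, which is exactly the observation used above.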
Here, we present a much simpler example of a space $X$ and a proper subset $A$ of $X$ satisfying $\mathcal{SM}_{k}(A)\subsetneqq\mathcal{SM}_{k+1}(A)$.
\begin{examp} \label{example} Let $(e_n)_{n}$ be a normalized, spreading and $1$-unconditional sequence in a Banach space $(E,\|\cdot\|)$ which is not equivalent to the usual basis of $c_0$. Let $k\in\nn$ and $(x_s)_{s\in [\nn]^{k+1}}$ be the natural Hamel basis of $c_{00}([\nn]^{k+1})$. For $x\in c_{00}([\nn]^{k+1})$ we define \[\|x\|_{k+1}=\sup\Big{\{}\Big\|\sum_{i=1}^lx(s_i)e_i\Big\|: l\in\nn,(s_i)_{i=1}^l\in\textit{Plm}_l([\nn]^{k+1})\text{ and }s_1(1)\geq l\Big{\}}\] We set $X=\overline{(c_{00}([\nn]^{k+1}),\|\cdot\|_{k+1})}$ and $A=\{x_s:s\in [\nn]^{k+1}\}$. It is easy to see that the sequence $(e_n)_{n}$ is generated by $(x_s)_{s\in[\nn]^{k+1}}$ as a $(k+1)$-spreading model and thus it belongs to $\mathcal{SM}_{k+1}(A)$. We shall show that for every $(\tilde{e}_n)_{n}\in\mathcal{SM}_k(A)$, either $(\tilde{e}_n)_{n}$ is a trivial spreading sequence or it is isometric to the usual basis of $c_0$. Therefore, there is no sequence in $\mathcal{SM}_{k}(A)$ equivalent to $(e_n)_n$. Indeed, let $(\tilde{e}_n)_{n}\in\mathcal{SM}_k(A)$. By Proposition \ref{remark on the definition of spreading model}, we may assume that there exists a $k$-sequence $(y_t)_{t\in [\nn]^k}$ in $A$ which generates $(\tilde{e}_n)_{n}$ as a $k$-spreading model. Let $\varphi:[\nn]^k\to [\nn]^{k+1}$ be such that $y_t=x_{\varphi(t)}$, for all $t\in [\nn]^{k}$. By Proposition \ref{lemma making a hereditary nonconstant function, nonconstant on plegma pairs}, there exists $M\in [\nn]^\infty$ such that either $\varphi$ is constant on $[M]^{k}$ or for every plegma pair $(t_1,t_2)$ in $[M]^{k}$, $\varphi(t_1)\neq\varphi(t_2)$. By Proposition \ref{remark on the definition of spreading model}, we have that $(y_t)_{t\in [M]^k}$ also generates $(\tilde{e}_n)_{n}$ as a $k$-spreading model.
If $\varphi$ is constant on $[M]^k$ then $(\tilde{e}_n)_{n}$ is a trivial sequence. Otherwise, by Theorem \ref{non plegma preserving maps}, there exists $L\in[M]^\infty$ such that for every plegma pair $(t_1,t_2)$ in $[L]^k$ neither $(\varphi(t_1),\varphi(t_2))$ nor $(\varphi(t_2),\varphi(t_1))$ is a plegma pair in $[\nn]^{k+1}$. Therefore, for every $(t_j)_{j=1}^m\in\textit{Plm}([L]^k)$ and $(s_j)_{j=1}^l\in\textit{Plm}([\nn]^{k+1})$ there is at most one $j\in\{1,\ldots,m\}$ and at most one $i\in\{1,\ldots,l\}$ with $\varphi(t_j)=s_i$. This observation and the definition of the norm $\|\cdot\|_{k+1}$ easily imply that \begin{equation}\label{eqwert}\Big\|\sum_{j=1}^ma_jy_{t_j}\Big\|_{k+1} =\Big\|\sum_{j=1}^ma_jx_{\varphi(t_j)}\Big\|_{k+1}=\max_{1\leq j\leq m}|a_j|\end{equation} for all $m\in\nn$, $a_1,\ldots,a_m\in\rr$ and $(t_j)_{j=1}^m\in\textit{Plm}([L]^k)$. Since $L\in [M]^\infty$, we have that $(\tilde{e}_n)_{n}$ is generated by $(y_t)_{t\in[L]^k}$ as a $k$-spreading model and, by (\ref{eqwert}), the sequence $(\tilde{e}_n)_{n}$ is isometric to the usual basis of $c_0$.\end{examp}
\section{Topological properties of $k$-sequences}
This section is devoted to the study of the $k$-sequences in a topological space. We define the convergence of $k$-sequences and we introduce the notion of the subordinated $k$-sequences.
\subsection{Convergence of $k$-sequences in topological spaces}
We start with the following natural extension of the notion of convergence of sequences in topological spaces.
\begin{defn}\label{defn convergence of f-sequences} Let $(X,\ttt)$ be a topological space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$. Also let $M\in[\nn]^\infty$ and $x_0\in X$. We will say that $(x_s)_{s\in[M]^k}$ converges to $x_0$ if for every $U\in\ttt$ with $x_0\in U$ there exists $m\in \nn$ such that for every $s\in [M]^k$ with $s(1)\geq M(m)$ we have that $x_s\in U$.
\end{defn}
It is straightforward that if a $k$-subsequence $(x_s)_{s\in[M]^k}$ in a topological space $X$ is convergent to some $x_0\in X$, then every further $k$-subsequence of $(x_s)_{s\in [M]^k}$ is also convergent to $x_0$. Moreover, every continuous map between two topological spaces preserves the convergence of $k$-sequences, i.e. if $\phi:(X_1,\mathcal{T}_1)\to (X_2,\mathcal{T}_2)$ is continuous and $(x_s)_{s\in[M]^k}$ converges to $x_0\in X_1$, then $(\phi(x_s))_{s\in[M]^k}$ converges to $\phi(x_0)\in X_2$. However, for $k\geq 2$, there are some differences from the ordinary convergent sequences in topological spaces. For instance, it is easy to see that, for $k\geq 2$, the convergence of a $k$-sequence $(x_s)_{s\in[M]^k}$ to some $x_0\in X$ does not in general imply that the set $\{x_s:s\in[M]^k\}$ is relatively compact.
\subsection{Subordinated $k$-sequences}
In this subsection we introduce the definition of the subordinated $k$-sequences in a topological space. First, recall that the powerset of $\nn$ is naturally identified with $\{0,1\}^\nn$. In this way, for all $k\in\nn$ and $M\in [\nn]^\infty$, the set $[M]^{\leq k}$ becomes a compact metric space containing $[M]^k$ as a dense subspace. Moreover, notice that an element $s\in [M]^{\leq k}$ is isolated in $[M]^{\leq k}$ if and only if $s\in [M]^k$.
\begin{defn}\label{defn subordinating} Let $(X,\ttt)$ be a topological space, $k\in\nn$, $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ and $M\in[\nn]^\infty$. We say that $(x_s)_{s\in[M]^k}$ is subordinated (with respect to $(X,\mathcal{T})$) if there exists a continuous map $\widehat{\varphi}:[M]^{\leq k}\to (X,\mathcal{T})$ such that $\widehat{\varphi}(s)=x_s$, for all $s\in[M]^k$.
\end{defn}
\begin{rem}\label{remark on ff subordinated} If $(x_s)_{s\in[M]^k}$ is subordinated, then there exists a unique continuous map $\widehat{\varphi}:[M]^{\leq k}\to (X,\mathcal{T})$ witnessing this.
Indeed, this is a consequence of the fact that $[M]^k$ is dense in $[M]^{\leq k}$. Also, $\overline{\{x_s:s\in[M]^k\}}=\widehat{\varphi}\big([M]^{\leq k}\big),$ where $\overline{\{x_s:s\in[M]^k\}}$ is the closure of $\{x_s : s\in[M]^k\}$ in $X$ with respect to $\mathcal{T}$. Therefore, $\overline{\{x_s:s\in[M]^k\}}$ is a countable compact metrizable subspace of $(X,\ttt)$ with Cantor-Bendixson index at most $k+1$. Also notice that if $(x_s)_{s\in[M]^k}$ is subordinated then $(x_s)_{s\in[L]^k}$ is also subordinated, for every $L\in [M]^\infty$. \end{rem} \begin{prop}\label{subordinating yields convergence} Let $(X,\ttt)$ be a topological space, $k\in\nn$, $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ and $M\in[\nn]^\infty$. Suppose that $(x_s)_{s\in[M]^k}$ is subordinated and let $\widehat{\varphi}:[M]^{\leq k}\to (X,\mathcal{T})$ be the continuous map witnessing this. Then $(x_s)_{s\in[M]^k}$ is convergent to $\widehat{\varphi}(\emptyset)$. \end{prop} \begin{proof} Let $(y_s)_{s\in [M]^k}$ be the $k$-sequence in $[M]^k$, with $y_s=s$, for all $s\in [M]^k$. Notice that $(y_s)_{s\in [M]^k}$ converges to the empty set and since $\widehat{\varphi}:[M]^{\leq k}\to (X,\mathcal{T})$ is continuous, we have that $\big(\widehat{\varphi}(y_s)\big)_{s\in [M]^k}$ converges to $\widehat{\varphi}(\emptyset)$. Since $\widehat{\varphi}(y_s)=\widehat{\varphi}(s)=x_s$, for all $s\in [M]^k$, we conclude that $(x_s)_{s\in[M]^k}$ is convergent to $\widehat{\varphi}(\emptyset)$. \end{proof} \begin{prop}\label{Create subordinated} Let $(X,\mathcal{T})$ be a topological space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$. Then for every $N\in[\nn]^\infty$ such that $\overline{\{x_s:\;s\in[N]^k\}}$ is a compact metrizable subspace of $(X,\ttt)$ there exists $M\in[N]^\infty$ such that $(x_s)_{s\in[M]^k}$ is subordinated. \end{prop} \begin{proof} The proposition obviously holds for $k=1$, since in this case, subordinated and convergent sequences coincide. 
We proceed by induction on $k\in\nn$. Assume that Proposition \ref{Create subordinated} holds for some $k\in\nn$. Let $(x_s)_{s\in [\nn]^{k+1}}$ be a $(k+1)$-sequence in $X$ and let $N\in [\nn]^\infty$ be such that $\overline{\{x_s:\;s\in[N]^{k+1}\}}$ is a compact metrizable subspace of $(X,\ttt)$. We also fix a compatible metric $d$ on $\overline{\{x_s:\;s\in[N]^{k+1}\}}$. Inductively we choose a strictly increasing sequence $(l_n)_n$ in $\nn$, a decreasing sequence $(L_n)_{n}$ of infinite subsets of $N$ and a $k$-sequence $(x_t)_{t\in[L]^k}$ in $X$, where $L=\{l_n: n\in\nn\}$, such that for every $n\in\nn$ the following are satisfied.
\begin{enumerate}
\item[(i)] $l_n<\min L_n$.
\item[(ii)] For every $t\in[\{l_1,\ldots,l_n\}]^k$, the sequence $(x_{t\cup\{l\}})_{l\in L_n}$ converges to $x_t$ and in addition, if $\max t=l_n$, then $d(x_{t\cup\{l\}},x_t)<\frac{1}{n}$, for every $l\in L_n$.
\end{enumerate}
We omit the construction since it is straightforward. By the inductive assumption there exists $M\in[L]^\infty$ such that $(x_t)_{t\in[M]^k}$ is subordinated. If $\widehat{\psi}:[M]^{\leq k}\to X$ is the continuous map witnessing this, then we extend $\widehat{\psi}$ to the map $\widehat{\varphi}:[M]^{\leq k+1}\to X$ by setting $\widehat{\varphi}(s)=x_s$, for every $s\in[M]^{k+1}$. Using condition (ii), we easily show that $\widehat{\varphi}$ is continuous and therefore $(x_s)_{s\in[M]^{k+1}}$ is subordinated.
\end{proof}
\begin{rem} By Propositions \ref{subordinating yields convergence} and \ref{Create subordinated}, we have that every $k$-sequence in a compact metrizable space contains a convergent $k$-subsequence. \end{rem}
\section{Weakly relatively compact $k$-sequences in Banach spaces}
It is well known that for every sequence $(x_n)_n$ in a weakly compact subset of a Banach space $X$ there exists $M\in[\nn]^\infty$ such that the subsequence $(x_n)_{n\in M}$ is weakly convergent to some $x_0\in X$.
Moreover, if in addition $X$ has a Schauder basis then we may pass to a further subsequence $(x_n)_{n\in L}$ which is approximated by a sequence of the form $(\widetilde{x}_n)_{n\in L}$ such that $(\widetilde{x}_n)_{n\in L}$ also weakly converges to $x_0$ and $(\widetilde{x}_n-x_0)_{n\in L}$ is a block sequence of $X$. The main aim of this section is to show that, for every $k\geq2$, the $k$-sequences in Banach spaces satisfy similar properties.
\begin{defn} A $k$-sequence $(x_s)_{s\in[\nn]^k}$ in a Banach space $X$ will be called \textit{weakly relatively compact} if $\overline{\{x_s: s\in[\nn]^k\}}^{w}$ is a weakly compact subset of $X$. \end{defn}
Since the weak topology on every separable weakly compact subset of a Banach space is metrizable, by Propositions \ref{subordinating yields convergence} and \ref{Create subordinated} we have the following.
\begin{prop}\label{cor for subordinating} Let $X$ be a Banach space and $k\in\nn$. Then we have the following.
\begin{enumerate}
\item[(i)] Every subordinated $k$-sequence in $(X,w)$ is weakly convergent.
\item[(ii)] Every weakly relatively compact $k$-sequence in $X$ contains a subordinated $k$-subsequence.
\end{enumerate}
\end{prop}
To describe the regularity properties of weakly relatively compact $k$-sequences in a Banach space $X$ with Schauder basis we will need the next two definitions. The first is a natural extension of the notion of block (resp. disjointly supported) sequences of $X$.
\begin{defn} Let $X$ be a Banach space with a Schauder basis and $k\in\nn$. Let also $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ and $M\in[\nn]^\infty$. We will say that the $k$-subsequence $(x_s)_{s\in[M]^k}$ is \textit{plegma block} (resp. \textit{plegma disjointly supported}) if for all plegma pairs $(s_1,s_2)$ in $[M]^k$ we have $\text{supp}(x_{s_1})<\text{supp}(x_{s_2})$ (resp. $\text{supp}(x_{s_1})\cap\text{supp}(x_{s_2})=\emptyset$).
\end{defn}
\begin{defn}\label{Def of plegma supported} Let $X$ be a Banach space with a Schauder basis, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$. Also let $L\in [\nn]^\infty$ and $(y_t)_{t\in[L]^{\leq k}}$ be a family of vectors in $X$. We will say that $(y_t)_{t\in[L]^{\leq k}}$ is a \emph{canonical tree decomposition} of $(x_s)_{s\in[L]^k}$ (or $(x_s)_{s\in[L]^k}$ admits $(y_t)_{t\in[L]^{\leq k}}$ as a canonical tree decomposition) if the following are satisfied.
\begin{enumerate}
\item[(i)] For every $s\in[L]^k$, $\displaystyle x_s=\sum_{j=0}^{k}y_{s|j}= y_\emptyset+\sum_{j=1}^{k}y_{s|j}$.
\item[(ii)] For every $t\in[L]^{\leq k}\setminus\{\emptyset\}$, $\text{supp}(y_{t})$ is finite.
\item[(iii)] For every $s\in[L]^k$ and $1\leq j_1<j_2\leq k$, $\text{supp}(y_{s|j_1}) <\text{supp}(y_{s|j_2})$.
\item[(iv)] For every $(s_1,s_2)\in\textit{Plm}_2([L]^k)$ and $1\leq j_1\leq j_2\leq k$, we have\[\text{supp}(y_{s_1|j_1}) <\text{supp}(y_{s_2|j_2})\]
\item[(v)] For every $(s_1,s_2)\in\textit{Plm}_2([L]^k)$ and $1\leq j_1<j_2\leq k$, we have \[\text{supp}(y_{s_2|j_1}) <\text{supp}(y_{s_1|j_2})\]
\end{enumerate}
\end{defn}
The next proposition gathers some basic properties of the $k$-sequences which admit a canonical tree decomposition. Its proof is straightforward.
\begin{prop}\label{trocan} Let $X$ be a Banach space with a Schauder basis, $k\in\nn$, $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ and $L\in [\nn]^\infty$. Assume that $(x_s)_{s\in[L]^k}$ admits $(y_t)_{t\in[L]^{\leq k}}$ as a canonical tree decomposition. Then the following are satisfied.
\begin{enumerate}
\item[(i)] For every $N\in[L]^\infty$, the $k$-subsequence $(x_s)_{s\in[N]^k}$ admits $(y_t)_{t\in[N]^{\leq k}}$ as a canonical tree decomposition.
\item[(ii)] For every $s\in [L]^k$, the sequence $(y_{s|j})_{j=1}^k$ is a block sequence in $X$.
\item[(iii)] For every $1\leq j\leq k$, the sequence $(y_{s|j})_{s\in [L]^k}$ is a plegma block $k$-sequence in $X$.
\item[(iv)] Setting $x'_s=x_s-y_\emptyset$, for all $s\in[L]^k$, $y'_\emptyset=0$ and $y'_t=y_t$, for all $t\in[L]^{\leq k}$ with $t\neq\emptyset$, we have that the $k$-subsequence $(x'_s)_{s\in[L]^k}$ is plegma disjointly supported and admits $(y'_t)_{t\in[L]^{\leq k}}$ as a canonical tree decomposition.
\item[(v)] For every $j\in\{1,\ldots,k\}$ and $(s_i)_{i=1}^n\in\text{Plm}_n([L]^k)$, if $I$ is the interval of $\nn$ with $\min I=\min\text{supp}(y_{s_1|j})$ and $\max I= \max \text{supp}(y_{s_n|j})$, then for every $1\leq i\leq n$, $I(x_{s_i}-y_\emptyset)=y_{s_i|j}$.
\end{enumerate}
\end{prop}
The following is the main result of this section.
\begin{thm}\label{canonical tree} Let $X$ be a Banach space with Schauder basis, $k\in\nn$, $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ and $(\ee_n)_{n}$ be a null sequence of positive reals. Assume that for some $M\in [\nn]^\infty$, $(x_s)_{s\in[M]^k}$ is subordinated with respect to the weak topology of $X$ and let $x_0$ be the weak limit of $(x_s)_{s\in[M]^k}$. Then there exist $L\in[M]^\infty$ and a $k$-subsequence $(\widetilde{x}_s)_{s\in[L]^{k}}$ in $X$ satisfying the following.
\begin{enumerate}
\item[(i)] $(\widetilde{x}_s)_{s\in[L]^k}$ admits a canonical tree decomposition $(y_t)_{t\in[L]^{\leq k}}$ with $y_\emptyset=x_0$.
\item[(ii)] For every $s\in[L]^k$, $\|x_s-\widetilde{x}_s\|<\ee_n$, where $\min s=L(n)$.
\item[(iii)] $(\widetilde{x}_s)_{s\in[L]^k}$ is subordinated with respect to the weak topology of $X$. Moreover $x_0$ is the weak limit of $(\widetilde{x}_s)_{s\in[L]^k}$.
\end{enumerate}
\end{thm}
\begin{proof} Without loss of generality, we may assume that $(\ee_n)_n$ is decreasing. We will first define a family $(y_t)_{t\in [M]^{\leq k}}$ of vectors in $X$, finitely supported for $t\neq\emptyset$, as follows. Let $\widehat{\varphi}:[M]^{\leq k}\to (X,w)$ be the continuous map witnessing that $(x_s)_{s\in[M]^k}$ is subordinated. For $t=\emptyset$, we set $y_\emptyset= \widehat{\varphi}(\emptyset)=x_0$.
For $t\in[M]^{\leq k}\setminus\{\emptyset\}$, let $w_{t}=\widehat{\varphi}(t)-\widehat{\varphi}(t\setminus\{\max t\})$. Notice that the sequence $(w_{t\cup\{m\}})_{m\in M}$ is weakly null, for all $t\in[M]^{< k}$. Hence, by a sliding hump argument, we may choose a family $\big\{I_t: t\in [M]^{\leq k}\setminus\{\emptyset\}\big\}$ of finite intervals of $\nn$ satisfying the following properties.
\begin{enumerate}
\item[(P1)] For every $t\in [M]^{\leq k}$, with $t\neq \emptyset$, we have that $\|I_t^c(w_t)\|<\ee_n/k$, where $M(n)=\max t$.
\item[(P2)] For every $t\in [M]^{< k}$, $\min I_{t\cup\{m\}}\stackrel{m\in M}{\longrightarrow}\infty$.
\end{enumerate}
Now for every $t\in [M]^{\leq k}\setminus \{\emptyset\}$, we set ${y}_t=I_t(w_t)$ (so that, by (P1), $\|w_t-y_t\|=\|I_t^c(w_t)\|<\ee_n/k$, where $M(n)=\max t$) and the definition of the family $(y_t)_{t\in [M]^{\leq k}}$ is complete. Also, for every $s\in[M]^k$, we set $\widetilde{x}_s=\sum_{t\sqsubseteq s}y_t$. We claim that there exists $L\in [M]^\infty$ such that $(y_t)_{t\in [L]^{\leq k}}$ is a canonical tree decomposition of $(\widetilde{x}_s)_{s\in [L]^k}$. Indeed, using (P2) and Ramsey's theorem, there exists $M_1\in [M]^\infty$ such that for every $s\in[M_1]^k$ and $1\leq j_1<j_2\leq k$, $\text{supp}(y_{s|j_1}) <\text{supp}(y_{s|j_2})$. Using again (P2) and Theorem \ref{ramseyforplegma}, we find $M_2\in [M_1]^\infty$ such that for every $(s_1,s_2)\in\textit{Plm}_2([M_2]^k)$ and $1\leq j_1\leq j_2\leq k$, $\text{supp}(y_{s_1|j_1}) <\text{supp}(y_{s_2|j_2})$, while for every $1\leq j_1<j_2\leq k$, $\text{supp}(y_{s_2|j_1}) <\text{supp}(y_{s_1|j_2})$. We set $L=M_2$. By the above, we have that all conditions (i)-(v) of Definition \ref{Def of plegma supported} are fulfilled and therefore $(y_t)_{t\in [L]^{\leq k}}$ is a canonical tree decomposition of $(\widetilde{x}_s)_{s\in [L]^k}$ and the proof of the claim is complete. Notice that $x_s-\widetilde{x}_s=\sum_{j=1}^k (w_{s|j}-y_{s|j})$, for all $s\in [L]^k$.
Hence by (P1) and since $(\ee_n)_n$ is decreasing, we get that $\|x_s-\widetilde{x}_s\|<\ee_n$, where $L(n)=\min s$. It remains to show that $(\widetilde{x}_s)_{s\in[L]^k}$ is subordinated. To this end, let $\widetilde{\varphi}:[L]^{\leq k}\to X$ be defined by $\widetilde{\varphi}(t)=\sum_{u\sqsubseteq t}y_u$, for all $t\in[L]^{\leq k}$. Clearly $\widetilde{\varphi}(\emptyset)=y_{\emptyset}=\widehat{\varphi}(\emptyset)$ and $\widetilde{x}_s=\widetilde{\varphi}(s)$, for all $s\in[L]^k$. To show that $\widetilde{\varphi}$ is continuous, let $(t_n)_{n}$ be a sequence in $[L]^{\leq k}$ and $t\in [L]^{\leq k}$ such that $(t_n)_{n}$ converges to $t$. We may assume that $t\sqsubset t_n$, for all $n\in\nn$ (the case $t_n=t$ being trivial), and, setting $\min (t_n\setminus t)= M(k_n)$, that $k_n\to\infty$. Then \[\begin{split} \|(\widehat{\varphi}(t_n)-\widehat{\varphi}(t))-(\widetilde{\varphi}(t_n)-\widetilde{\varphi}(t))\|\leq\sum_{t\sqsubset u\sqsubseteq t_n}\|w_u-y_u\|\leq \ee_{k_n}\xrightarrow[n\to\infty]{}0 \end{split}\] Since $\widehat{\varphi}(t_n)\stackrel{w}{\to}\widehat{\varphi}(t)$, we get that $\widetilde{\varphi}(t_n)\stackrel{w}{\to}\widetilde{\varphi}(t)$ and the proof is complete.
\end{proof}
\begin{notation} Let $X$ be a Banach space and $k\in\nn$. By $\mathcal{SM}_k^{wrc}(X)$ we will denote the set of all spreading sequences $(e_n)_n$ such that there exists a weakly relatively compact $k$-sequence in $X$ which generates $(e_n)_n$ as a $k$-spreading model. Notice that $\mathcal{SM}_k^{wrc}(X)=\mathcal{SM}_k(X)$, for every reflexive space $X$ and $k\in\nn$. \end{notation}
\begin{cor}\label{cor canonical tree with spr mod} Let $X$ be a Banach space with Schauder basis and $k\in\nn$. Then every $(e_n)_{n}\in\mathcal{SM}_k^{wrc}(X)$ is generated by a $k$-sequence in $X$ which is subordinated with respect to the weak topology and admits a canonical tree decomposition. \end{cor}
\begin{proof} Let $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a weakly relatively compact $k$-sequence in $X$ which generates a $k$-spreading model $(e_n)_{n}$.
By Proposition \ref{cor for subordinating}, there exists $M\in[\nn]^\infty$ such that $(x_s)_{s\in[M]^k}$ is subordinated. By Theorem \ref{canonical tree}, there exist $L\in [M]^\infty$ and a subordinated sequence $(\widetilde{x}_s)_{s\in[L]^k}$ in $X$ which admits a canonical tree decomposition such that $\|x_s-\widetilde{x}_s\|<1/n$, for every $s\in [L]^k$ with $\min s=L(n)$. Hence there is $N\in [L]^\infty$ such that $(\widetilde{x}_s)_{s\in[N]^k}$ also generates $(e_n)_n$ as a $k$-spreading model. Setting $z_s=\widetilde{x}_{N(s)}$, for all $s\in[\nn]^k$, we have that $(z_s)_{s\in [\nn]^k}$ is as desired.
\end{proof}
\section{Norm properties of spreading models}\label{s5}
In this section we provide conditions for $k$-sequences to admit unconditional, singular or trivial spreading models. Our main interest concerns subordinated $k$-sequences with respect to the weak topology.
\subsection{Unconditional spreading models}
As is well known, every spreading model generated by a seminormalized weakly null sequence is a $1$-unconditional spreading sequence. In this subsection we give an extension of this result to subordinated seminormalized weakly null $k$-sequences.
\begin{lem}\label{Lemma finding convex means} Let $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in a Banach space $X$. Suppose that $(x_s)_{s\in[\nn]^k}$ is subordinated and let $\widehat{\varphi}:[\nn]^{\leq k}\to (X,w)$ be the continuous map witnessing this. Let $\ee>0$, $M\in [\nn]^\infty$ and $n\in\nn$. Then for every $p\in\{1,\ldots,n\}$ there exists a finite subset $G$ of $[M]^k$ such that the following are satisfied.
\begin{enumerate}
\item[(i)] There exists a convex combination $x=\sum_{s\in G}\mu_sx_s$ of $(x_s)_{s\in G}$ such that $\|\widehat{\varphi}(\emptyset)-x\|<\ee$.
\item[(ii)] For every $1\leq i\leq n$ with $i\neq p$, there exists $s_i\in [M]^k$ such that for every $s_p\in G$, the family $(s_i)_{i=1}^n$ is a plegma family in $[M]^k$.
\end{enumerate} \end{lem} \begin{proof} For $k=1$, the result follows by Mazur's theorem. We proceed by induction on $k\in\nn$. Assume that the lemma is true for some $k\in\nn$. We fix a subordinated $(k+1)$-sequence $(x_s)_{s\in[\nn]^{k+1}}$ in $X$, $M\in[\nn]^\infty$, $ n\in \nn$, $\ee>0$ and $p\in\{1,...,n\}$. Let $(x_t)_{t\in[M]^k}$ be defined by $x_t=\widehat{\varphi}(t)$, for all $t\in [M]^k$. By our inductive assumption, there exists a finite subset $F$ of $[M]^k$ satisfying the following. \begin{enumerate} \item[(a)] There exists a convex combination $\sum_{t\in F}\mu_tx_t$ of $(x_t)_{t\in F}$ such that \begin{equation}\label{hg}\Big\|\widehat{\varphi}(\emptyset)-\sum_{t\in F}\mu_tx_t\Big\|<\ee/2\end{equation} \item[(b)] For every $ 1\leq i\leq n$ with $i\neq p$, there exists $t_i\in [M]^k$ such that for every $t_p\in F$, $(t_i)_{i=1}^n$ is a plegma family in $[M]^k$. \end{enumerate} For notational simplicity we assume that $1<p<n$ (the proof for $p\in\{1,n\}$ is similar). Pick $m_1<\ldots<m_{p-1}$ in $M$ with $t_n(k)< m_1$ and set $s_i=t_i\cup \{m_i\}$, for all $i=1,\ldots,p-1$. Also let $M'=\{m\in M:m>m_{p-1}\}$. Since $\widehat{\varphi}$ is continuous, we have that $(x_{t\cup\{m\}})_{m\in M'}\stackrel{w}{\to}x_t$, for every $t\in F$. Hence by Mazur's theorem, for every $t\in F$, there exists a finite subset $G_t$ of $M'$ such that \begin{equation}\label{hgg}\Big\|x_t-\sum_{m\in G_t}\mu^t_mx_{t\cup\{m\}}\Big\|<\ee/2 \end{equation} for some convex combination $\sum_{m\in G_t}\mu^t_m x_{t\cup\{m\}}$ of $(x_{t\cup\{m\}})_{m\in G_t}$. We set \[G=\{t\cup\{m\}:t\in F\;\text{and}\;m\in G_t\}\] Finally, pick $m_{p+1}<...<m_n$ in $M$ with $\max\{m:m\in\bigcup_{t\in F} G_t\}<m_{p+1}$ and let $s_i=t_i\cup\{m_i\}$, for all $i=p+1,\ldots,n$. It is easy to check that every $(s_i)_{i=1}^n$ with $s_p\in G$ is a plegma family in $[M]^{k+1}$. It remains to show that condition (i) of the lemma is also satisfied.
To this end, let $\mu_s=\mu_t\mu^t_m$, for every $s=t\cup\{m\}\in G$, where $\max t<m$. Notice that \[\sum_{s\in G}\mu_s=\sum_{t\in F}\mu_t\sum_{m\in G_t}\mu^t_m=\sum_{t\in F}\mu_t=1\] and therefore $\sum_{s\in G}\mu_s x_s$ is a convex combination of $(x_s)_{s\in G}$. Moreover, we have \[\begin{split} \Big\|\widehat{\varphi}(\emptyset)-\sum_{s\in G}&\mu_sx_s\Big\|= \Big\|\widehat{\varphi}(\emptyset)-\sum_{t\in F}\mu_t\sum_{m\in G_t}\mu^t_m x_{t\cup\{m\}}\Big\|\\ &\leq\Big\|\widehat{\varphi}(\emptyset)-\sum_{t\in F}\mu_tx_t\Big\|+ \sum_{t\in F}\mu_t\cdot\Big\|x_t-\sum_{m\in G_t}\mu^t_mx_{t\cup\{m\}}\Big\|\stackrel{(\ref{hg}), (\ref{hgg})}{<}\ee \end{split}\] and the proof is complete. \end{proof} \begin{thm}\label{unconditional spreading model} Let $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in a Banach space $X$. Suppose that $(x_s)_{s\in[\nn]^k}$ is seminormalized, subordinated (with respect to the weak topology of $X$) and weakly null. Then every $k$-spreading model of $(x_s)_{s\in[\nn]^k}$ is $1$-unconditional. \end{thm} \begin{proof} Let $(e_n)_n$ be a $k$-spreading model of $(x_s)_{s\in[\nn]^k}$. Lemma \ref{Lemma finding convex means} and the averaging technique used for the proof of the corresponding result in the case of the classical spreading models (see \cite{BL}, Proposition I.5.1) yield that for every $n\in\nn$, $1\leq p\leq n$, $a_1,\ldots,a_n\in [-1,1]$ and $\ee>0$, we have \[\Big{\|}\sum_{\substack{i=1\\i\neq p}}^na_ie_i\Big{\|}_*\leq\Big{\|}\sum_{i=1}^na_ie_i\Big{\|}_*+\varepsilon\] Since the above inequality holds for every $\ee>0$, we have that \begin{equation}\label{eq13}\Big{\|}\sum_{\substack{i=1\\i\neq p}}^na_ie_i\Big{\|}_*\leq\Big{\|}\sum_{i=1}^na_ie_i\Big{\|}_*\end{equation} for all $n\in\nn$, $1\leq p\leq n$ and $a_1,\ldots,a_n\in [-1,1]$. Since $(x_s)_{s\in[\nn]^k}$ is seminormalized, we have that $\|e_1\|_*>0$. By (\ref{eq13}) we get that $\|e_1-e_2\|_*>0$. By Proposition \ref{sing}, we get that $(e_n)_n$ is non trivial.
An iterated use of (\ref{eq13}) completes the proof. \end{proof} We close this subsection by giving an example showing that for $k\geq 2$ the assumption in Theorem \ref{unconditional spreading model} that the $k$-sequence is subordinated is necessary. More precisely, for every $k\geq 2$, there exist seminormalized weakly null $k$-sequences which generate conditional Schauder basic spreading models. \begin{examp} For simplicity we state the example for $k=2$. Let $(e_n)_n$ be the usual basis of $c_0$ and $(x_s)_{s\in[\nn]^2}$ be the $2$-sequence in $c_0$, defined by $x_s=\sum_{n=\min s}^{ \max s}e_n$, for all $s\in[\nn]^2$. Clearly, $(x_s)_{s\in[\nn]^2}$ is a normalized weakly null $2$-sequence. It is easy to check that for all $l\in\nn$, $a_1,\ldots,a_l\in\rr$ and $(s_j)_{j=1}^l\in\textit{Plm}_l([\nn]^2)$, we have \[ \Big\|\sum_{j=1}^la_jx_{s_j}\Big\|=\max\Big(\max_{1\leq m\leq l}\Big|\sum_{j=1}^ma_j\Big|,\max_{1\leq m\leq l}\Big|\sum_{j=m}^la_j\Big|\Big)\] Therefore every spreading model of $(x_s)_{s\in[\nn]^2}$ is equivalent to the summing basis. \end{examp} \subsection{Singular and trivial spreading models} The results of this subsection concern the $k$-spreading models generated by subordinated $k$-sequences which are not weakly null. \begin{lem}\label{triv-ell} Let $X$ be a Banach space, $k\in\nn$, $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ and $x_0\in X$. Let $x'_s=x_s-x_0$, for all $s\in[\nn]^k$, and assume that $(x_s)_{s\in[\nn]^k}$ and $(x'_s)_{s\in[\nn]^k}$ generate $k$-spreading models $(e_n)_n$ and $(\widetilde{e}_n)_n$ respectively. Then the following hold. \begin{enumerate} \item[(a)] $\|\sum_{i=1}^na_ie_{_i}\|=\|\sum_{i=1}^na_i\widetilde{e}_{_i}\|$, for every $n\in\nn$ and $a_1,\ldots,a_n\in\rr$ with $\sum_{i=1}^na_i=0$. \item[(b)] The sequence $(e_n)_n$ is trivial if and only if $(\widetilde{e}_n)_n$ is trivial.
\item[(c)] The sequence $(e_n)_n$ is equivalent to the usual basis of $\ell^1$ if and only if $(\widetilde{e}_n)_n$ is equivalent to the usual basis of $\ell^1$. \end{enumerate} \end{lem} \begin{proof} (a) Notice that for every $n\in\nn$, $s_1,...,s_n$ in $[\nn]^k$ and $a_1,\ldots,a_n\in\rr$ with $\sum_{i=1}^na_i=0$, we have $\sum_{i=1}^na_ix_{s_i}=\sum_{i=1}^na_ix'_{s_i}$. Since $(e_n)_n$ and $(\widetilde{e}_n)_n$ are generated by $(x_s)_{s\in [\nn]^k}$ and $(x'_s)_{s\in [\nn]^k}$ respectively, the result follows. \\ (b) It follows by assertion (a) and Proposition \ref{sing}.\\ (c) We fix $\ee>0$. If $(\widetilde{e}_n)_n$ is not equivalent to the usual basis of $\ell^1$ then there exist $n\in\nn$ and $a'_1,\ldots,a'_n\in\rr$ such that $\sum_{i=1}^n|a'_i|=1$ and $\|\sum_{i=1}^na'_i\widetilde{e}_i\|<\ee$. Setting $a_i=a'_i/2$ and $a_{n+i}=-a'_i/2$, for all $1\leq i\leq n$, we have $\sum_{i=1}^{2n}a_i=0$ and therefore, $\|\sum_{i=1}^{2n}a_ie_i\|=\|\sum_{i=1}^{2n}a_i\widetilde{e}_i\|<\ee$. Since $\sum_{i=1}^{2n}|a_i|=1$, $(e_n)_n$ is also not equivalent to the usual basis of $\ell^1$. \end{proof} \begin{thm}\label{nb} Let $X$ be a Banach space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a subordinated $k$-sequence in $X$. Also let $x'_s=x_s-x_0$, for every $s\in[\nn]^k$, where $x_0$ is the weak limit of $(x_s)_{s\in[\nn]^k}$. Assume that for some $M\in [\nn]^\infty$ the $k$-subsequence $(x_s)_{s\in[M]^k}$ generates a non trivial $k$-spreading model $(e_n)_n$. If $x_0\neq 0$, then exactly one of the following holds. \begin{enumerate} \item[(i)] The sequence $(e_n)_n$ as well as every spreading model of $(x'_s)_{s\in [M]^k}$ is equivalent to the usual basis of $\ell^1$. \item[(ii)] The sequence $(e_n)_n$ is singular and if $e_n=e'_n+e$ is its natural decomposition then $(e'_n)_n$ is the unique $k$-spreading model of $(x'_s)_{s\in[M]^k}$ and $\|e\|=\|x_0\|$. \end{enumerate} \end{thm} \begin{proof} Let $(\widetilde{e}_n)_n$ be a $k$-spreading model of $(x'_s)_{s\in[M]^k}$.
If $(e_n)_n$ is equivalent to the usual basis of $\ell^1$ then by Lemma \ref{triv-ell}, we have that the same holds for $(\widetilde{e}_n)_n$ and hence (i) is satisfied. Assume for the following that $(e_n)_n$ is not equivalent to the usual basis of $\ell^1$. Since it is also non trivial, by Lemma \ref{triv-ell}, we have that $(\widetilde{e}_n)_n$ is non trivial and not equivalent to the $\ell^1$-basis. Let $L\in [M]^\infty$ be such that $(x'_s)_{s\in[L]^k}$ generates $(\widetilde{e}_n)_n$. Since $(\widetilde{e}_n)_n$ is non trivial, it is easy to see that $(x'_s)_{s\in[L]^k}$ is seminormalized. Also notice that $(x'_s)_{s\in[M]^k}$ is subordinated and weakly null. Therefore by Theorem \ref{unconditional spreading model}, $(\widetilde{e}_n)_n$ is $1$-unconditional. Moreover, since $(\widetilde{e}_n)_n$ is not equivalent to the usual basis of $\ell^1$, by Proposition \ref{equiv forms for 1-subsymmetric weakly null}, we conclude that $(\widetilde{e}_n)_n$ is Ces\`aro summable to zero. Hence we have \begin{equation}\label{tv} \lim_{n\to\infty}\Big\|\frac{1}{n}\sum_{j=1}^ne_j-\frac{1}{n}\sum_{j=n+1}^{2n}e_j\Big\| =\lim_{n\to\infty}\Big\|\frac{1}{n}\sum_{j=1}^n\widetilde{e}_j-\frac{1}{n}\sum_{j=n+1}^{2n}\widetilde{e}_j\Big\|=0\end{equation} Also it is easy to see that \begin{equation}\label{fd} \Big\|\frac{1}{n}\sum_{j=1}^ne_j\Big\|\to\|x_0\|>0\end{equation} By (\ref{tv}) and (\ref{fd}), we get that $(e_n)_n$ is not Schauder basic, i.e. it is singular. Let $e_n=e'_n+e$ be the natural decomposition of $(e_n)_n$. By (\ref{fd}) and the fact that $(e'_n)_n$ is Ces\`aro summable to zero, we have that $\|e\|=\|x_0\|$. To complete the proof it remains to show that $(\widetilde{e}_n)_n$ and $(e'_n)_n$ are isometrically equivalent. Indeed, we fix $n\in\nn$ and $a_1,\ldots,a_n\in\rr$. For every $p\in\nn$, let $(s^p_{j})_{j=1}^{n+p}\in\textit{Plm}_{n+p}([L]^k)$ be such that $s^p_1(1)\geq L(n+p)$. We also set $a=\sum_{j=1}^na_j$.
Then we have \[\begin{split} \Big\|\sum_{j=1}^na_je'_j\Big\|& =\lim_{p\to\infty}\Big\|\sum_{j=1}^na_je'_j-\frac{a}{p}\sum_{j=n+1}^{n+p}e'_j\Big\| =\lim_{p\to\infty}\Big\|\sum_{j=1}^na_je_j-\frac{a}{p}\sum_{j=n+1}^{n+p}e_j\Big\|\\ &=\lim_{p\to\infty}\Big\|\sum_{j=1}^na_jx_{s_j^p}-\frac{a}{p}\sum_{j=n+1}^{n+p}x_{s_j^p}\Big\| =\lim_{p\to\infty}\Big\|\sum_{j=1}^na_jx'_{s_j^p}-\frac{a}{p}\sum_{j=n+1}^{n+p}x'_{s_j^p}\Big\|\\ &=\lim_{p\to\infty}\Big\|\sum_{j=1}^na_j\widetilde{e}_j-\frac{a}{p}\sum_{j=n+1}^{n+p}\widetilde{e}_j\Big\| =\Big\|\sum_{j=1}^na_j\widetilde{e}_j\Big\| \end{split}\] Hence $(\widetilde{e}_n)_n$ and $(e'_n)_n$ are isometrically equivalent and the proof is complete. \end{proof} By Remark \ref{properties of the natural decomposition}, Proposition \ref{cor for subordinating} and Theorems \ref{unconditional spreading model} and \ref{nb}, we derive the following. \begin{cor}\label{cor singular wrc} Let $X$ be a Banach space, $k\in\nn$ and $(e_n)_{n}\in\mathcal{SM}^{wrc}_k(X)$ non trivial. Then one of the following holds. \begin{enumerate} \item[(i)] The sequence $(e_n)_n$ is unconditional. \item[(ii)] The sequence $(e_n)_n$ is singular and if $e_n=e'_n+e$ is the natural decomposition of $(e_n)_n$ then $(e'_n)_{n}\in\mathcal{SM}^{wrc}_k(X)$, $(e'_n)_n$ is unconditional, weakly null and Ces\`aro summable to zero. Moreover, the spaces generated by $(e_n)_n$ and $(e'_n)_n$ are isomorphic. \end{enumerate} \end{cor} The next theorem provides more information concerning the trivial $k$-spreading models. Since we shall not use this result in the sequel, we omit its proof. \begin{thm}\label{Theorem equivalent forms for having norm on the spreading model} Let $k\in\nn$, $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in a Banach space $X$ and $(E,\|\cdot\|_*)$ be an infinite dimensional seminormed linear space with Hamel basis $(e_n)_n$. Assume that for some $M\in[\nn]^\infty$, the $k$-subsequence $(x_s)_{s\in [M]^k}$ generates $(e_n)_n$ as a $k$-spreading model. Then the following are equivalent: \begin{enumerate} \item[(i)] The sequence $(e_n)_n$ is trivial.
\item[(ii)] The seminorm $\|\cdot\|_*$ is not a norm on $E$. \item[(iii)] $(x_s)_{s\in [M]^k}$ contains a further norm Cauchy $k$-subsequence, i.e. there exists $L\in[M]^\infty$ such that for every $\ee>0$ there exists $n_0\in\nn$ satisfying that $\|x_s-x_t\|<\ee$, for all $s,t\in[L]^k$ with $n_0\leq\min\{\min s,\min t\}$. \item[(iv)] There exists $x\in X$ such that every $k$-subsequence of $(x_s)_{s\in[M]^k}$ contains a further $k$-subsequence convergent to $x$. \end{enumerate} \end{thm} \section{Composition of the spreading models} In this section we study the composition property of the $k$-spreading models. Moreover, we recall the definition of the $k$-iterated spreading models and investigate their relation to the $k$-spreading models. We start with the following definition. \begin{defn} Let $X$ be a Banach space with a Schauder basis and $k\in\nn$. Then a $k$-spreading model $(e_n)_n$ of $X$ will be called plegma block generated if there exists a $k$-sequence $(x_s)_{s\in [\nn]^k}$ which is plegma block and generates $(e_n)_n$ as a $k$-spreading model. \end{defn} \begin{rem}\label{remnj} By Lemma \ref{propxi}, we easily conclude that for $1\leq k_1<k_2$, every plegma block generated $k_1$-spreading model is also a plegma block generated $k_2$-spreading model. Thus the plegma block generated $k$-spreading models of a Banach space $X$ with a Schauder basis form an increasing hierarchy. \end{rem} \begin{thm}\label{composition thm} Let $X$ be a Banach space, $k\in \nn$ and $(e_n)_n\in \mathcal{SM}_{k}(X)$ such that $(e_n)_n$ is a Schauder basic sequence. Let $E$ be the Banach space with Schauder basis $(e_n)_n$, $d\in\nn$ and $(\widetilde{e}_n)_{n}$ be a plegma block generated $d$-spreading model of $E$. Then $(\widetilde{e}_n)_{n}\in\mathcal{SM}_{k+d}(X)$.
\end{thm} \begin{proof} We fix a plegma block $d$-sequence $(y_t)_{t\in[\nn]^d}$ in $E$ which generates $(\widetilde{e}_n)_n$ as a $d$-spreading model with respect to some null sequence $(\widetilde{\delta}_n)_n$ of positive reals. By Proposition \ref{remark on the definition of spreading model}, we may also choose a $k$-sequence $(x_s)_{s\in[\nn]^k}$ in $X$ which generates $(e_n)_n$ as a $k$-spreading model with respect to the same sequence $(\widetilde{\delta}_n)_n$. Since each $y_t$ is finitely supported, setting $F_t=\text{supp}(y_t)$ for every $t\in[\nn]^d$, we may write \begin{equation}y_t=\sum_{j=1}^{|F_t|}a^t_{F_t(j)}e_{F_t(j)}^{\;}\end{equation} For every $v\in[\nn]^{k+d}$, let $t_v$ (resp. $s_v$) be the unique element in $[\nn]^d$ (resp. $[\nn]^k$) such that $v=t_v\cup s_v$ and $t_v<s_v$. For every $v\in [\nn]^{k+d}$ and $j\in\{1,...,|F_{t_v}|\}$, we set \begin{equation}\label{pl}s_j^v=(s_v(1)+j-1,...,s_v(k)+j-1)\end{equation} Notice that $(s_j^v)_{j=1}^{|F_{t_v}|}$ is a finite sequence in $[\nn]^k$ with $s_1^v=s_v$. We define a $(k+d)$-sequence $(z_v)_{v\in[\nn]^{k+d}}$ in $X$, by setting \begin{equation}z_v=\sum_{j=1}^{|F_{t_v}|}a^{t_v}_{F_{t_v}(j)}x_{s^v_j}\end{equation} The proof will be completed once we show the following. \textbf{Claim 1.} There exists $M\in[\nn]^\infty$ such that $(z_v)_{v\in[M]^{k+d}}$ generates $(\widetilde{e}_n)_n$ as a $(k+d)$-spreading model.
\emph{Proof of Claim 1}: For every $l\in\nn$, we define a family $\mathcal{A}_l\subseteq \textit{Plm}_l([\nn]^{k+d})$ as follows: \[\begin{split}\mathcal{A}_l=\Big{\{} (v_i)_{i=1}^l\in &\textit{Plm}_l([\nn]^{k+d}): \; s_1^{v_1}(1)\geq \sum_{i=1}^{l}|F_{t_{v_i}}| \\ &\text{and}\;\; (s_j^{v_1})_{j=1}^{|F_{t_{v_1}}|}\;^\frown \ldots \;^\frown (s_j^{v_l})_{j=1}^{|F_{t_{v_l}}|}\in \textit{Plm}_{\sum_{i=1}^{l}|F_{t_{v_i}}|}([\nn]^{k}) \Big\}\end{split}\] Using (\ref{pl}), the fact that for every $(v_i)_{i=1}^l\in\textit{Plm}_l([\nn]^{k+d})$, $(s_{v_i})_{i=1}^l\in \textit{Plm}_l([\nn]^{k})$ and that $s_1^{v_i}=s_{v_i}$, for all $1\leq i\leq l$, it is easy to check that $\mathcal{A}_l\bigcap \textit{Plm}_l([L]^{k+d})\neq \emptyset$, for every $l\in\nn$ and $L\in [\nn]^\infty$. Hence, an iterated use of Theorem \ref{ramseyforplegma} yields an $L\in [\nn]^\infty$ such that $(v_i)_{i=1}^l\in\mathcal{A}_l$, for every $(v_i)_{i=1}^l\in\textit{Plm}_l([L]^{k+d})$ with $v_1(1)\geq L(l)$. We fix $l\in\nn$, $(v_i)_{i=1}^l\in\textit{Plm}_l([L]^{k+d})$ with $v_1(1)\geq L(l)$ and $a_1,\ldots,a_l\in[-1,1]$. Notice that \begin{equation}\begin{split}\label{eq10} \Bigg| \Big\|\sum_{i=1}^l a_i z_{v_i} \Big\|-\Big\|\sum_{i=1}^l a_i \widetilde{e}_i \Big\| \Bigg| \leq& \Bigg|\Big\|\sum_{i=1}^l a_i z_{v_i} \Big\|-\Big\|\sum_{i=1}^l a_i y_{t_{v_i}} \Big\| \Bigg|\\ &+ \Bigg| \Big\|\sum_{i=1}^l a_i y_{t_{v_i}} \Big\|- \Big\|\sum_{i=1}^l a_i \widetilde{e}_i \Big\| \Bigg| \end{split}\end{equation} Also observe that $(t_{v_i})_{i=1}^l\in\textit{Plm}_l([L]^{d})$ and $t_{v_1}(1)=v_1(1)\geq L(l)\geq l$. Hence, \begin{equation}\label{eq11} \Bigg| \Big\|\sum_{i=1}^l a_i y_{t_{v_i}} \Big\|-\Big\| \sum_{i=1}^l a_i \widetilde{e}_i \Big\|\Bigg|<\widetilde{\delta}_l \end{equation} Also, $s_1^{v_1}(1)\geq \sum_{i=1}^l |F_{t_{v_i}}|$ and $F_{t_{v_1}}<...<F_{t_{v_l}}$.
Therefore, \begin{equation}\begin{split}\label{eq12} \Bigg|\Big\|\sum_{i=1}^l a_i z_{v_i} \Big\|-\Big\|\sum_{i=1}^l a_i y_{t_{v_i}} \Big\| \Bigg|=& \Bigg|\Big\|\sum_{i=1}^l\sum_{j=1}^{|F_{t_{v_i}}|} a_ia_{F_{t_{v_i}}(j)}^{t_{v_i}} x_{s^{v_i}_j} \Big\|\\ &-\Big\|\sum_{i=1}^l\sum_{j=1}^{|F_{t_{v_i}}|} a_ia_{F_{t_{v_i}}(j)}^{t_{v_i}} e_{F_{t_{v_i}}(j)}^{\;} \Big\| \Bigg|<2CK\widetilde{\delta}_l \end{split}\end{equation} where $C$ is the basis constant of $(e_n)_n$ and $K=\sup\{\|y_t\|:t\in[L]^d\}$. By (\ref{eq10}), (\ref{eq11}) and (\ref{eq12}), we obtain that for every $l\in\nn$, $(v_i)_{i=1}^l\in\textit{Plm}_l([L]^{k+d})$ with $v_1(1)\geq L(l)$ and $a_1,\ldots,a_l\in[-1,1]$, we have \[\Bigg| \Big\|\sum_{i=1}^l a_i z_{v_i} \Big\|-\Big\|\sum_{i=1}^l a_i \widetilde{e}_i \Big\| \Bigg|<\delta_l\] where $\delta_l=(1+2CK)\widetilde{\delta}_l$. By Lemma \ref{old defn yields new}, there exists $M\in[L]^\infty$ such that $(z_v)_{v\in[M]^{k+d}}$ generates $(\widetilde{e}_n)_{n}$ as a $(k+d)$-spreading model and the proof of the claim as well as of Theorem \ref{composition thm} is complete. \end{proof} \begin{cor}\label{l^p in wrc} Let $X$ be a Banach space and $Y$ be either $\ell^p$ for some $p\in[1,\infty)$ or $c_0$. Also let $k\in\nn$, $(e_n)_n\in\mathcal{SM}^{wrc}_k(X)$ be non trivial and $E$ be the Banach space generated by $(e_n)_n$. Suppose that $E$ contains an isomorphic copy of $Y$. Then $\mathcal{SM}_{k+1}(X)$ contains a sequence equivalent to the usual basis of $Y$. \end{cor} \begin{proof} First assume that $(e_n)_n$ is Schauder basic. Notice that $E$ contains a block sequence $(y_n)_n$ equivalent to the usual basis of $Y$. It is easy to see that $(y_n)_n$ admits a spreading model $(\widetilde{e}_n)_n$ equivalent to the usual basis of $Y$. By Theorem \ref{composition thm} we have that $(\widetilde{e}_n)_n\in\mathcal{SM}_{k+1}(X)$. Assume now that $(e_n)_n$ is not Schauder basic. Since $(e_n)_n$ is non trivial, we have that $(e_n)_n$ is singular.
Let $e_n=e'_n+e$ be its natural decomposition and $E'$ the space generated by $(e'_n)_n$. By Remark \ref{properties of the natural decomposition} we have that $E$ and $E'$ are isomorphic and therefore $E'$ contains an isomorphic copy of $Y$. By Corollary \ref{cor singular wrc} we have that $(e'_n)_n\in\mathcal{SM}^{wrc}_{k}(X)$. Since $(e'_n)_n$ is unconditional, the result follows as in the first case. \end{proof} \subsection{The $k$-iterated spreading models} In this subsection we define the $k$-iterated spreading models of a Banach space $X$ which, although they have not been named, have appeared in \cite{BM} and \cite{O-S}. We also study their relation with the $k$-spreading models. \begin{defn} The $k$-iterated spreading models of a Banach space $X$ are inductively defined as follows. The $1$-iterated spreading models are the non trivial spreading models of $X$. Assume that for some $k\in\nn$ the $k$-iterated spreading models of $X$ have been defined. Then the $(k+1)$-iterated spreading models are the non trivial spreading models of the spaces generated by the $k$-iterated spreading models. \end{defn} Notice that the class of the $k$-iterated spreading models of a Banach space $X$ is contained in the class of the $(k+1)$-iterated spreading models. In the sequel we provide a sufficient condition ensuring that the $k$-iterated spreading models of a Banach space $X$ are, up to isomorphism, contained in $\mathcal{SM}_k(X)$. To this end we need the following lemma. \begin{lem}\label{iter_inter} Let $X$ be a Banach space and $k\in\nn$. Let $(e^0_n)_n$ be a Schauder basic $k$-spreading model of $X$, $E_0$ be the space generated by $(e^0_n)_n$, $(e_n)_n$ be a non trivial spreading model of $E_0$ and $E$ be the space generated by $(e_n)_n$. If $E_0$ is reflexive then there exists an unconditional $(k+1)$-spreading model of $X$ generating a space isomorphic to $E$. \end{lem} \begin{proof} Let $(x_n)_n$ be a sequence in $E_0$ generating $(e_n)_n$ as a spreading model.
Since $E_0$ is reflexive, we may assume that $(x_n)_n$ is weakly convergent to some $x_0\in E_0$. If $x_0=0$, then $(e_n)_n$ is unconditional and it is generated by a block sequence in $E_0$, while if $(e_n)_n$ is equivalent to the usual basis of $\ell^1$ then $E_0$ contains a block sequence generating an $\ell^1$ spreading model. Therefore, in both cases the result follows by Theorem \ref{composition thm}. Assume that $x_0\neq0$ and $(e_n)_n $ is not equivalent to the usual basis of $\ell^1$. Let $x'_n=x_n-x_0$, for all $n\in\nn$. By Theorem \ref{nb}, we have that $(e_n)_n$ is singular and $(e'_n)_n$ is the unique spreading model of $(x'_n)_n$, where $e_n=e'_n+e$ is the natural decomposition of $(e_n)_n$. Since $(x'_n)_n$ is weakly null, we have that $(e'_n)_n$ is generated by a block sequence in $E_0$ as a spreading model. Hence, by Theorem \ref{composition thm}, the sequence $(e'_n)_n$ is a $(k+1)$-spreading model of $X$. Moreover, by Remark \ref{properties of the natural decomposition}, $(e'_n)_n$ is unconditional and the space $E'$ generated by $(e'_n)_n$ is isomorphic to $E$. \end{proof} \begin{prop}\label{iterated_lemma} Let $X$ be a reflexive space and $k\in\nn$ such that every space generated by a $k$-iterated spreading model of $X$ is reflexive. Then every space generated by a $(k+1)$-iterated spreading model of $X$ is isomorphic to the space generated by an unconditional $(k+1)$-spreading model of $X$. \end{prop} \begin{proof} We first treat the case $k=1$. So assume that $X$ as well as every space generated by a spreading model of $X$ is reflexive. Let $(\widetilde{e}_n)_n$ be a $2$-iterated spreading model of $X$ and $\widetilde{E}$ be the space generated by $(\widetilde{e}_n)_n$. Also let $\widetilde{E}_0$ be the space generated by a spreading model of $X$ such that $(\widetilde{e}_n)_n$ is a spreading model of $\widetilde{E}_0$. 
Since $X$ is reflexive, by Corollary \ref{cor singular wrc}, we conclude that $\widetilde{E}_0$ is isomorphic to a space $E_0$, generated by an unconditional spreading model of $X$. Moreover, by our assumption $E_0$ is also reflexive. Summarizing, the space $E_0$ is reflexive, it has a Schauder basis which is a spreading model of $X$ and it is isomorphic to $\widetilde{E}_0$. Therefore, $E_0$ admits a spreading model $(e_n)_n$ equivalent to $(\widetilde{e}_n)_n$. Let $E$ be the space generated by $(e_n)_n$. By Lemma \ref{iter_inter}, there exists an unconditional $2$-spreading model of $X$ generating a space isomorphic to $E$. Since $E$ is isomorphic to $\widetilde{E}$ the proof of the proposition for $k=1$ is complete. We proceed by induction. Assume that the proposition holds for some $k\in\nn$ and let $X$ be a reflexive space such that every space generated by a $(k+1)$-iterated spreading model of $X$ is reflexive. Let $(\widetilde{e}_n)_n$ be a $(k+2)$-iterated spreading model of $X$ and $\widetilde{E}$ be the space that it generates. Let $\widetilde{E}_0$ be the space generated by a $(k+1)$-iterated spreading model of $X$ admitting $(\widetilde{e}_n)_n$ as a spreading model. Since the $k$-iterated spreading models of $X$ are included in the $(k+1)$-iterated ones, we have that the spaces generated by the $k$-iterated spreading models of $X$ are reflexive. Hence, by our assumption that the proposition holds for the positive integer $k$, we have that $\widetilde{E}_0$ is isomorphic to some space $E_0$ generated by an unconditional $(k+1)$-spreading model of $X$. Therefore, $E_0$ is reflexive, it is generated by a Schauder basic $(k+1)$-spreading model of $X$ and admits a spreading model $(e_n)_n$ equivalent to $(\widetilde{e}_n)_n$. Let $E$ be the space generated by $(e_n)_n$. By Lemma \ref{iter_inter}, there exists an unconditional $(k+2)$-spreading model of $X$ generating a space isomorphic to $E$.
Since $E$ is isomorphic to $\widetilde{E}$ the proof is complete. \end{proof} \begin{cor}\label{qwqwe} Let $X$ be a reflexive space such that for every $k\in\nn$, every space generated by an unconditional $k$-spreading model of $X$ is reflexive. Then for every $k\in\nn$, every space generated by a $k$-iterated spreading model of $X$ is isomorphic to the space generated by an unconditional $k$-spreading model of $X$. \end{cor} \begin{proof} By Corollary \ref{cor singular wrc} we have that every space generated by a spreading model of $X$ is isomorphic to the space generated by an unconditional spreading model of $X$ and therefore it is reflexive. The proof is carried out by induction and using Proposition \ref{iterated_lemma}. \end{proof} \begin{rem} As is well known (see \cite{BL}), every non trivial spreading model of $c_0$ generates a space isomorphic to $c_0$. This easily implies that every $k$-iterated spreading model of $c_0$ generates a space isomorphic to $c_0$. On the other hand, as we will see in Section \ref{spreading models of c_0 and l^p}, the class of the $2$-spreading models of $c_0$ includes all spreading bimonotone Schauder basic sequences, yielding the existence of $2$-spreading models which are not $2$-iterated ones. \end{rem} \begin{rem} H.P. Rosenthal had asked whether every $2$-iterated spreading model of a Banach space $X$ is actually a classical one. In \cite{BM} a Banach space $X$ was constructed which does not admit $\ell^1$ as a spreading model, while one of its spreading models generates a space containing $\ell^1$. Thus $\ell^1$ occurs as a $2$-iterated spreading model but not as a classical one. A more striking result (see \cite{AOTS}) asserts the existence of a Banach space $X$ not admitting $\ell^1$ as a spreading model such that $\ell^1$ is isomorphic to a subspace of every space generated by a non trivial spreading model of $X$.
It remains open if for every $k\in\nn$ there exists a Banach space $X_{k+1}$ such that the class of $(k+1)$-iterated spreading models strictly includes the corresponding one of $k$-iterated. \end{rem} \section{$k$-spreading models equivalent to the $\ell^1$ basis}\label{chapter 9} In this section we study the properties of the $k$-spreading models equivalent to the usual basis of $\ell^1$. \subsection{Splitting spreading sequences equivalent to the $\ell^1$ basis} In this subsection we present some stability properties of spreading sequences in seminormed linear spaces which are actually related to the non distortion of $\ell^1$ (c.f. \cite{J2}). Let $(e_n)_{n}$ be a spreading sequence in a seminormed linear space $(E, \|\cdot\|_*)$ and $c>0$. We say that $(e_n)_n$ \emph{admits a lower $\ell^1$-estimate of constant} $c$, if for every $n\in\nn$ and $a_1,\ldots,a_n\in\rr$, we have $c\sum_{i=1}^n |a_i|\leq \big\|\sum_{i=1}^n a_i e_i\big \|_*$. \begin{prop}\label{propsplitl1} Let $(E,\|\cdot\|_\circ),(E_1,\|\cdot\|_*),(E_2,\|\cdot\|_{**})$ be seminormed linear spaces and $(e_n)_{n},(e_n^1)_{n}$ and $(e_n^2)_{n}$ be spreading sequences in $E, E_1$ and $E_2$ respectively. Assume that for every $n\in\nn$ and $a_1,\ldots,a_n\in\rr$, we have \begin{equation}\label{gt1}\Big\|\sum_{i=1}^na_ie_i\Big\|_\circ\leq \Big\|\sum_{i=1}^na_ie_i^1\Big\|_*+\Big\|\sum_{i=1}^na_ie_i^2\Big\|_{**}\end{equation} If $(e_n)_n$ admits a lower $\ell^1$-estimate of constant $c>0$ and $(e_n^2)_{n}$ does not admit any lower $\ell^1$-estimate then $(e_n^1)_{n}$ admits a lower $\ell^1$-estimate of the same constant $c$. \end{prop} \begin{proof} Suppose on the contrary that $(e_n^1)_{n}$ does not admit a lower $\ell^1$-estimate of constant $c$. Then there exist $\varepsilon >0$, $n\in\nn$ and $a_1,\ldots,a_n\in\rr$ with $\sum_{i=1}^n|a_i|=1$ such that $\|\sum_{i=1}^n a_i e_i^1\|_*<c-\varepsilon$. 
Also since $(e_n^2)_{n}$ does not admit any lower $\ell^1$-estimate, there exist $m\in\nn$ and $b_1,\ldots,b_m\in\rr$ such that $\sum_{j=1}^m|b_j|=1$ and $\| \sum_{j=1}^mb_j e_j^2 \|_{**}<\varepsilon/2$. Hence, we get that \begin{equation}\label{gt2}\begin{split} \Big{\|}\sum_{i=1}^n\sum_{j=1}^m a_i\cdot b_j e^1_{(i-1)m +j}\Big{\|}_*& \leq \sum_{j=1}^m |b_j|\Big{\|} \sum_{i=1}^na_i e^1_{(i-1)m +j}\Big{\|}_* < c-\varepsilon \end{split}\end{equation} and similarly \begin{equation}\label{gt3}\begin{split} \Big{\|}\sum_{i=1}^n\sum_{j=1}^m a_i\cdot b_j e^2_{(i-1)m +j}\Big{\|}_{**}& \leq \sum_{i=1}^n|a_i|\Big{\|}\sum_{j=1}^m b_j e^2_{(i-1)m +j}\Big{\|}_{**} < \frac{\varepsilon}{2} \end{split}\end{equation} But then by (\ref{gt1}), we obtain that \[\begin{split}\Big{\|}\sum_{i=1}^n\sum_{j=1}^m a_i\cdot b_j e_{(i-1)m +j}\Big{\|}_\circ &\leq \Big{\|}\sum_{i=1}^n\sum_{j=1}^m a_i\cdot b_j e^1_{(i-1)m +j}\Big{\|}_*\\ &+ \Big{\|}\sum_{i=1}^n\sum_{j=1}^m a_i\cdot b_j e^2_{(i-1)m +j}\Big{\|}_{**} \stackrel{(\ref{gt2}), (\ref{gt3})}{<}c-\frac{\varepsilon}{2}\end{split}\] which since $\sum_{i=1}^n\sum_{j=1}^m |a_i|\cdot|b_j|=1$, contradicts that $(e_n)_n$ admits a lower $\ell^1$-estimate of constant $c$. \end{proof} \begin{cor}\label{ultrafilter property for ell^1 spreading models} Let $k\in\nn$ and $(x_s)_{s\in [\nn]^k},(x_s^1)_{s\in [\nn]^k},(x_s^2)_{s\in [\nn]^k}$ be three $k$-sequences in a Banach space $X$ such that for all $s\in[\nn]^k$, $x_s=x_s^1+x_s^2$. Assume that the $k$-sequences $(x_s)_{s\in[\nn]^k}$, $(x^1_s)_{s\in[\nn]^k}$ and $(x^2_s)_{s\in[\nn]^k}$ generate the sequences $(e_n)_{n}$, $(e_n^1)_{n}$ and $(e_n^2)_{n}$ respectively, as $k$-spreading models. If $(e_n)_n$ admits a lower $\ell^1$-estimate of constant $c>0$ and $(e_n^2)_{n}$ does not admit any lower $\ell^1$-estimate then $(e_n^1)_{n}$ admits a lower $\ell^1$-estimate of constant $c$. 
\end{cor} \begin{proof} For every $n\in\nn$, $a_1,\ldots,a_n\in\rr$ and $(s_j)_{j=1}^n$ in $[\nn]^k$, we have \begin{equation}\label{sd}\Big\|\sum_{j=1}^n a_jx_{s_j}\Big\|\leq \Big\|\sum_{j=1}^n a_jx^1_{s_j}\Big\|+ \Big\|\sum_{j=1}^n a_jx^2_{s_j}\Big\|\end{equation} Let $(E,\|\cdot\|_\circ),(E_1,\|\cdot\|_*),(E_2,\|\cdot\|_{**})$ be the seminormed linear spaces with Hamel bases $(e_n)_{n},(e_n^1)_{n}$ and $(e_n^2)_{n}$ respectively. Notice that (\ref{sd}) implies that (\ref{gt1}) holds and therefore the conclusion follows by Proposition \ref{propsplitl1}. \end{proof} \subsection{$k$-spreading models almost isometric to the $\ell^1$ basis} Let $c>0$, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in a Banach space $X$. We will say that the $k$-sequence $(x_s)_{s\in[\nn]^k}$ \textit{generates} $\ell^1$ \textit{as a $k$-spreading model of constant} $c$, if $(x_s)_{s\in[\nn]^k}$ generates a $k$-spreading model $(e_n)_n$ which admits a lower $\ell^1$-estimate of constant $c$. \begin{prop}\label{Prop on almost isometric l^1 spr mod} Let $X$ be a Banach space and $k\in\nn$. Assume that $X$ admits a $k$-spreading model equivalent to the usual basis of $\ell^1$. Then for every $\varepsilon>0$ there exists a $k$-sequence $(y_s)_{s\in[\nn]^k}$ in $X$ with $1-\varepsilon\leq \|y_s\|\leq 1$, for every $s\in [\nn]^k$, which generates $\ell^1$ as a $k$-spreading model of constant $1-\varepsilon$. \end{prop} \begin{proof} Let $(e_n)_n$ be a $k$-spreading model of $X$ which is equivalent to the usual basis of $\ell^1$. Also let $c$ be the infimum of $\|\sum_{j=1}^na_je_j\|$ over all $n\in\nn$ and $a_1,\ldots,a_n\in\rr$ with $\sum_{j=1}^n|a_j|=1$.
Let $\varepsilon>0$ and choose $0<\varepsilon'<c$, $p\in\nn$ and $b_1,...,b_p$ in $[-1,1]$ with $\sum_{i=1}^p|b_i|=1$ such that \begin{equation}\label{re}\frac{c-\varepsilon'}{c+2\varepsilon'}\geq 1-\varepsilon\;\;\text{and}\;\;\; c\leq \Big\|\sum_{i=1}^p b_i e_i\Big\|\leq c+\varepsilon'\end{equation} Let $(x_s)_{s\in[\nn]^k}$ be a $k$-sequence in $X$ generating $(e_n)_n$ as a $k$-spreading model. By passing to an infinite subset $M$ of $\nn$, we may assume that for every $n\in\nn$, $a_1,\ldots,a_n\in[-1,1]$ and $(s_i)_{i=1}^n\in\textit{Plm}_n([M]^k)$ with $ s_1(1)\geq M(n)$, we have \begin{equation}\label{qk}\Bigg|\Big\|\sum_{i=1}^na_ix_{s_i}\Big\|-\Big\|\sum_{i=1}^na_i e_i\Big\|\Bigg|\leq\varepsilon'\sum_{i=1}^n|a_i|\end{equation} Hence by (\ref{re}), for every $(s_i)_{i=1}^p \in\textit{Plm}_p([M]^k)$ with $s_1(1)\geq M(p)$ we have that \begin{equation}\label{lk}c-\varepsilon'\leq\Big\|\sum_{i=1}^p b_ix_{s_i}\Big\|\leq c+2\varepsilon'\end{equation} For every $s=(n_1,...,n_k)\in[\nn]^k$, we set \begin{equation}\label{mk}y_s=\frac{\sum_{i=1}^pb_i x_{t_i^s}}{c+2\ee'} \;\text{where}\; t_i^s=\big(M(p\cdot n_j+i-1)\big)_{j=1}^k,\;\text{for all}\;1\leq i\leq p\end{equation} Notice that $(t_i^{s})_{i=1}^p\in\textit{Plm}_p([\nn]^k)$ and $t_1^s(1)=M(p\cdot s(1))\geq M(p)$. Hence, by (\ref{re}) and (\ref{lk}), it is clear that $1-\varepsilon\leq \|y_s\|\leq 1$. Moreover, the $k$-sequence $(y_s)_{s\in[\nn]^k}$ generates $\ell^1$ as a $k$-spreading model of constant $1-\varepsilon$. Indeed, let $l\in\nn$, $a_1,\ldots,a_l\in [-1,1]$ and $(s_j)_{j=1}^l\in\textit{Plm}_l([\nn]^k)$ with $s_1(1)\geq l$. Notice that $(t_i^{s_1})_{i=1}^p \;^\frown\;...\;^\frown {(t_i^{s_l})_{i=1}^p}\in\textit{Plm}_{p\cdot l}([\nn]^k)$ and $t_1^{s_1}(1)=M(p\cdot s_1(1))\geq M(p\cdot l)$.
Hence, \[\Big\|\sum_{j=1}^l a_j y_{s_j}\Big\|= \Big\|\sum_{j=1}^la_j \cdot \sum_{i=1}^p \frac{ b_i x_{t_i^{s_j}}}{c+2\ee'}\Big\| \stackrel{(\ref{qk})}{\geq} \frac{c-\varepsilon'}{c+2\ee'}\sum_{j=1}^l\sum_{i=1}^p|a_j|\cdot|b_i| \stackrel{(\ref{re})}{\geq}(1-\varepsilon)\sum_{j=1}^l|a_j|\] and the proof is complete. \end{proof} \begin{rem}\label{Rem on almost isometric l^1 spr mod} If we additionally assume that $X$ has a Schauder basis and $(x_s)_{s\in[M]^k}$ is plegma block (resp. plegma disjointly supported) then by (\ref{mk}) it is easy to see that $(y_s)_{s\in[L]^k}$ is also plegma block (resp. plegma disjointly supported). \end{rem} \subsection{Plegma block generated $k$-spreading models equivalent to the $\ell^1$ basis} It is well known that if a Banach space $X$ with a Schauder basis admits an $\ell^1$ spreading model, then $X$ contains a block sequence which generates an $\ell^1$ spreading model. In this subsection we extend this result. More precisely, we have the following. \begin{thm}\label{getting block generated ell^1 spreading model} Let $X$ be a Banach space with a Schauder basis and $k\in\nn$. Suppose that $\mathcal{SM}_k^{wrc}(X)$ contains up to equivalence the usual basis of $\ell^1$. Then there exists a plegma block generated $k$-spreading model of $X$ equivalent to the usual basis of $\ell^1$. \end{thm} \begin{proof} Let $k_{X}$ be the minimum of all $k\in\nn$ such that the set $\mathcal{SM}_k^{wrc}(X)$ contains a sequence equivalent to the usual basis of $\ell^1$. By Remark \ref{remnj}, it suffices to show that $\mathcal{SM}_{k_{X}}(X)$ contains a sequence equivalent to the usual basis of $\ell^1$ which is plegma block generated. For $k_{X}=1$ this is a standard fact. So suppose that $k_{X}=k\geq 2$ and let $(e_n)_n\in\mathcal{SM}_{k}^{wrc}(X)$ be equivalent to the usual basis of $\ell^1$. 
By Corollary \ref{cor canonical tree with spr mod}, we may assume that $(e_n)_n$ is generated as a $k$-spreading model by a $k$-sequence $(x_s)_{s\in[\nn]^{k}}$ which is subordinated with respect to the weak topology of $X$ and admits a canonical tree decomposition $(y_t)_{t\in[\nn]^{\leq k}}$. Let $(w_v)_{v\in [\nn]^{k-1}}$ be the $(k-1)$-sequence in $X$ defined by $w_v=\sum_{t\sqsubseteq v}y_t$, for every $v\in[\nn]^{k-1}$. Also let $(x'_s)_{s\in [\nn]^k}$, be the $k$-sequence defined by $x'_s=w_{s|k-1}$, for every $s\in [\nn]^k$. Notice that $(w_v)_{v\in [\nn]^{k-1}}$ is subordinated with respect to the weak topology. Hence $(w_v)_{v\in [\nn]^{k-1}}$ is a weakly relatively compact $(k-1)$-sequence. Also, by Lemma \ref{propxi} we have that $(w_v)_{v\in [\nn]^{k-1}}$ and $(x'_s)_{s\in [\nn]^k}$ admit the same $(k-1)$-spreading models. Therefore, since the usual basis of $\ell_1$ is not contained up to equivalence in $\mathcal{SM}_{k-1}^{wrc}(X)$, we conclude that $(x'_s)_{s\in [\nn]^k}$ does not admit a $k$-spreading model equivalent to the usual basis of $\ell^1$. Since $x_s=x'_s+y_s$, for all $s\in [\nn]^k$, by Corollary \ref{ultrafilter property for ell^1 spreading models}, we get that the $k$-sequence $(y_s)_{s\in[\nn]^{k}}$ admits a $k$-spreading model equivalent to the usual basis of $\ell^1$. Since $(y_s)_{s\in[\nn]^{k}}$ is a plegma block $k$-sequence in $X$ (see Proposition \ref{trocan} (iii)), the proof is complete. \end{proof} \subsection{Duality of $c_0$ and $\ell^1$ $k$-spreading models} It is well known that if a Banach space $X$ admits a $c_0$ spreading model, then $X^*$ admits an $\ell^1$ spreading model. In this subsection we extend this result. \begin{lem}\label{lem for duality} Let $X$ be a Banach space with a Schauder basis, $k\in\nn$ and $(x_s)_{s\in[\nn]^{k}}$ be a $k$-sequence in $X$ which admits a canonical tree decomposition $(y_t)_{t\in[\nn]^{\leq k}}$ and generates a $k$-spreading model equivalent to the usual basis of $c_0$. 
Then $y_\emptyset=0$ and there exist $1\leq j_0\leq k$ and $L\in[\nn]^\infty$ such that the $k$-subsequence $(y_{s|j_0})_{s\in[L]^k}$ is plegma block and generates $c_0$ as a $k$-spreading model. \end{lem} \begin{proof} Since $(x_s)_{s\in[\nn]^{k}}$ generates a $k$-spreading model, we have that $(x_s)_{s\in[\nn]^{k}}$ is seminormalized. Let $(e_n)_n$ be the $k$-spreading model of $(x_s)_{s\in [\nn]^k}$. Since $(e_n)_n$ is equivalent to the usual basis of $c_0$, we have that $(e_n)_n$ is Ces\`aro summable to zero. Using these observations we may easily conclude that $y_\emptyset=0$. We also observe that there exists $\delta>0$ such that for every $s\in[\nn]^k$ there exists $1\leq j\leq k$ such that $\|y_{s|j}\|>\delta$. Hence by Ramsey's theorem there exist $1\leq j_0\leq k$ and $L\in[\nn]^\infty$ such that for every $s\in[L]^k$, $\|y_{s|j_0}\|>\delta$. Let $n\in\nn$, $a_1,\ldots,a_n\in\rr$ and $(s_i)_{i=1}^n\in\textit{Plm}_n([L]^k)$. If $I$ is the interval of $\nn$ with $\min I=\min\text{supp}(y_{s_1|j_0})$ and $\max I= \max \text{supp}(y_{s_n|j_0})$, then Proposition \ref{trocan} (v) and the fact that $y_\emptyset=0$ yield that \[I\Big(\sum_{i=1}^na_ix_{s_i}\Big)=\sum_{i=1}^na_iy_{s_i|j_0}\] Hence if $C$ is the basis constant of the Schauder basis of $X$, we get that \[\frac{\delta}{2C}\max_{1\leq i\leq n}|a_i|\leq\Big\|\sum_{i=1}^na_iy_{s_i|j_0}\Big\|\leq 2C\Big\|\sum_{i=1}^na_ix_{s_i}\Big\|\] Therefore, since $(x_s)_{s\in[L]^{k}}$ generates $c_0$ as a $k$-spreading model, we conclude that every $k$-spreading model of $(y_{s|j_0})_{s\in [L]^k}$ is equivalent to the usual basis of $c_0$. \end{proof} The above lemma shows that the analogue of Theorem \ref{getting block generated ell^1 spreading model} for the $c_0$ basis also holds. Namely we have the following. \begin{cor}\label{getting block generated c_0 spreading model} Let $X$ be a Banach space with a Schauder basis and $k\in\nn$. 
Suppose that $\mathcal{SM}_k^{wrc}(X)$ contains up to equivalence the usual basis of $c_0$. Then there exists a plegma block generated $k$-spreading model of $X$ equivalent to the usual basis of $c_0$. \end{cor} \begin{thm} Let $X$ be a Banach space. Assume that for some $k\in\nn$ the set $\mathcal{SM}_k^{wrc}(X)$ contains a sequence equivalent to the usual basis of $c_0$. Then $X^*$ admits $\ell^1$ as a $k$-spreading model. \end{thm} \begin{proof} Let $(x_s)_{s\in[\nn]^k}$ be a subordinated $k$-sequence in $X$ generating $c_0$ as a $k$-spreading model. Let $Y$ be a separable subspace of $X$ containing the $k$-sequence $(x_s)_{s\in[\nn]^k}$ and let $T:Y\to C[0,1]$ be an isometry. Notice that $C[0,1]$ is a Banach space with a bimonotone Schauder basis and $(T(x_s))_{s\in[\nn]^k}$ is subordinated. Let $(\ee_n)_n$ be a null sequence of positive reals. By Theorem \ref{canonical tree} there exist $L\in[\nn]^\infty$ and a $k$-subsequence $(\widetilde{x}_s)_{s\in[L]^{k}}$ in $C[0,1]$ satisfying the following. \begin{enumerate} \item[(P1)] $(\widetilde{x}_s)_{s\in[L]^k}$ admits a canonical tree decomposition $(\widetilde{y}_t)_{t\in[L]^{\leq k}}$. \item[(P2)] For every $s\in[L]^k$, $\|T(x_s)-\widetilde{x}_s\|<\ee_n$, where $\min s=L(n)$. \end{enumerate} Notice that property (P2) yields that $(\widetilde{x}_s)_{s\in[L]^{k}}$ generates $c_0$ as a $k$-spreading model. By Lemma \ref{lem for duality} there exist $M\in[L]^\infty$ and $1\leq j_0\leq k$ such that the plegma block $k$-subsequence $(\widetilde{y}_{s|j_0})_{s\in[M]^k}$ generates $c_0$ as a $k$-spreading model. For every $s\in[M]^k$ we pick $\widetilde{y}_s^*\in S_{C[0,1]^*}$ with $\widetilde{y}_s^*(\widetilde{y}_{s|j_0})=\|\widetilde{y}_{s|j_0}\|$ and $\text{supp}\;\widetilde{y}_s^*\subseteq \text{range}\; \widetilde{y}_{s|j_0}$. For every $s\in[M]^k$ we set $y^*_s=T^*(\widetilde{y}_s^*)$ and we choose an extension $x^*_s\in X^*$ of $y^*_s$ with the same norm. 
It is easy to check that $(x^*_s)_{s\in[M]^k}$ admits $\ell^1$ as a $k$-spreading model. \end{proof} \section{$k$-Ces\`aro summability vs $\ell^1$ $k$-spreading models} In this section we extend the well known dichotomy of H.P. Rosenthal concerning Ces\`aro summability and $\ell^1$ spreading models (see also \cite{AT}, \cite{M}). We start by introducing the definition of the Ces\`aro summability for $k$-sequences in Banach spaces. \subsection{Definition of the $k$-Ces\`aro summability in Banach spaces} \begin{defn}\label{Cesaro} Let $X$ be a Banach space, $x_0\in X$, $k\in\nn$, $(x_s)_{s\in [\nn]^k}$ be a $k$-sequence in $X$ and $M\in [\nn]^\infty$. We will say that the $k$-subsequence $(x_s)_{s\in [M]^k}$ is $k$-Ces\`aro summable to $x_0$ if \[ \Big(\substack{n\\ \\k}\Big)^{-1} \sum_{s\in [M|n]^k} x_s \;\;\substack{\|\cdot\| \\ \longrightarrow \\ n\to\infty}\;\; x_0\] where $M|n=\{M(1),...,M(n)\}$. \end{defn} \begin{prop}\label{rem on k-Cesaro summability} Let $X$ be a Banach space, $x_0\in X$, $k\in\nn$, $(x_s)_{s\in [\nn]^k}$ be a $k$-sequence in $X$ and $M\in [\nn]^\infty$. \begin{enumerate} \item[(i)] If $(x_s)_{s\in[M]^k}$ norm converges to $x_0$, then $(x_s)_{s\in [M]^k}$ is $k$-Ces\`aro summable to $x_0$. \item[(ii)] If $(x_s)_{s\in[M]^k}$ is $k$-Ces\`aro summable to $x_0$ and in addition it is weakly convergent, then $x_0$ is the weak limit of $(x_s)_{s\in[M]^k}$. \item[(iii)] If $X^*$ is separable and for every $N\in[M]^\infty$, $(x_s)_{s\in[N]^k}$ is $k$-Ces\`aro summable to $x_0$, then there exists $L\in[M]^\infty$ such that $(x_s)_{s\in[L]^k}$ weakly converges to $x_0$. \end{enumerate} \end{prop} \begin{proof} Assertions (i) and (ii) are straightforward. For (iii), first observe that for every $x^*\in X^*$, $\ee>0$ and $N\in[M]^\infty$ there exists an $L\in[N]^\infty$ such that $|x^*(x_s)-x^*(x_0)|<\ee$, for all $s\in[L]^k$. 
Next for a norm dense subset $\{x^*_n:n\in\nn\}$ of $X^*$, we inductively choose an $L\in[M]^\infty$ such that for every $n\in\nn$ and $s\in[L]^k$ with $\min s\geq L(n)$ we have that $|x_i^*(x_s)-x_i^*(x_0)|<\frac{1}{n}$ for all $1\leq i\leq n$. This yields that $(x_s)_{s\in[L]^k}$ weakly converges to $x_0$. \end{proof} \begin{rem} It is open whether assertion (iii) of the above proposition remains valid without any restriction on $X^*$. \end{rem} \subsection{A density result for plegma families in $[\nn]^k$} In this subsection we will present a density Ramsey result concerning plegma families. For its proof, we will need the deep theorem of H. Furstenberg and Y. Katznelson \cite{FK}. Actually, we shall use the following finite version of this theorem (see also \cite{G2}). \begin{thm}\label{Furstenberg and Katznelson Theorem} Let $k\in\nn$, $F$ be a finite subset of $\mathbb{Z}^k$ and $\delta>0$. Then there exists $n_0\in\nn$ such that for all $n\geq n_0$, every subset $\mathcal{A}$ of $\{1,\ldots,n\}^k$ of size at least $\delta n^k $ contains a subset of the form $a+d F$ for some $a\in \mathbb{Z}^k$ and $d\in\nn$. \end{thm} Our density result for plegma families is the following. \begin{prop}\label{Lemma using Furstenberg and Katznelson Theorem to find long plegma} Let $k, l\in\nn$ and $\delta>0$. Then there exists $N_0\in \nn$ such that for every $n\geq N_0$ and every subset $\mathcal{A}$ of $[\{1,\ldots,n\}]^k$ of size at least $\delta (\substack{n\\ k})$, there exists a plegma family $(s_j)_{j=1}^l\in\textit{Plm}_l([\nn]^k)$ such that $s_j\in\mathcal{A}$, for every $1\leq j\leq l$. \end{prop} \begin{proof} For every $1\leq j\leq l$, let $t_j=\big(j,l+j, 2l+j,...,(k-1)l+j\big)$. Clearly $(t_j)_{j=1}^l\in\textit{Plm}_l([\nn]^k)$. We set $F=\{\textbf{0}\}\cup\{t_j: \;1\leq j\leq l\}$, where $\textbf{0}=(0,...,0)$ is the zero element of $\mathbb{Z}^k$. Fix $\delta>0$. 
Since $\lim_{n}(\substack{n\\k})/n^k=1/k!$, there exists $m_0\in\nn$ such that for every $n\geq m_0$, every subset $\mathcal{A}$ of $[\{1,\ldots,n\}]^k$ of size at least $\delta (\substack{n\\ k})$ has density at least $\frac{\delta}{2k!}$ in $\{1,\ldots,n\}^k$. Hence, by Theorem \ref{Furstenberg and Katznelson Theorem} (applied for $\frac{\delta}{2k!}$ in place of $\delta$) we have that there exists $n_0\geq m_0$ such that for every $n\geq n_0$, every subset $\mathcal{A}$ of $[\{1,\ldots,n\}]^k$ of size at least $\delta (\substack{n\\k})$ contains a subset of the form $a+d F$ for some $a\in \mathbb{Z}^k$ and $d\in\nn$. Notice that $a=a+d\textbf{0}\in\mathcal{A}$ and therefore $a\in [\{1,...,n\}]^k$. For every $j\in\{1,...,l\}$, we set $s_j=a+dt_j$. Then $ \{s_j:\;1\leq j\leq l\}\subseteq \mathcal{A}$. Moreover, since $a\in [\nn]^k$ and $d\in\nn$, we easily conclude that $(s_j)_{j=1}^l\in\textit{Plm}_l([\nn]^k)$ and the proof is complete. \end{proof} \begin{rem} It is easy to see that for $k=1$ the preceding proposition trivially holds (it suffices to set $N_0=\lceil\frac{l}{\delta}\rceil$) and therefore Theorem \ref{Furstenberg and Katznelson Theorem} is actually used for $k\geq 2$. However, it is not completely clear to us whether the full strength of such a deep theorem as the Furstenberg-Katznelson theorem is actually necessary for the proof of Proposition \ref{Lemma using Furstenberg and Katznelson Theorem to find long plegma}. \end{rem} \subsection{The main results} \begin{prop}\label{Furstemberg's application on spreading models} Let $X$ be a Banach space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a bounded $k$-sequence in $X$. Let $M\in[\nn]^\infty$ such that the subsequence $(x_s)_{s\in[M]^k}$ generates a $k$-spreading model $(e_n)_n$ which is Ces\`aro summable to zero. Then for every $L\in[M]^\infty$ the $k$-subsequence $(x_s)_{s\in[L]^k}$ is $k$-Ces\`aro summable to zero. 
\end{prop} \begin{proof} Assume on the contrary that there exists $L\in[M]^\infty$ such that $(x_s)_{s\in[L]^k}$ is not $k$-Ces\`aro summable to zero. Then there exist $\theta>0$ and a strictly increasing sequence $(p_n)_n$ of natural numbers such that for every $n\in\nn$, \begin{equation}\label{wq}\Big(\substack{p_n\\ \\ k} \Big)^{-1}\Big{\|}\sum_{s\in [L|p_n]^k} x_s\Big{\|}>\theta\end{equation} For each $n\in\nn$, we pick $x_n^*\in S_{X^*}$, where $S_{X^*}$ denotes the unit sphere of $X^*$, such that $x_n^*\big( (\substack{p_n\\ \\ k})^{-1} \sum_{s\in [L|p_n]^k} x_s \big)>\theta$ and we set \begin{equation}\label{yt}\mathcal{A}_n=\Big\{s\in \big[\{1,...,p_n\}\big]^k:\;x_n^*(x_{L(s)})>\frac{\theta}{2}\Big\}\end{equation} By (\ref{wq}) and a simple averaging argument we easily derive that $|\mathcal{A}_n|\geq \frac{\theta}{2K} \Big(\substack{p_n\\ \\ k} \Big)$, where $K=\sup\{\|x_s\|:s\in [\nn]^k\}$. We fix $m\in\nn$. By Proposition \ref{Lemma using Furstenberg and Katznelson Theorem to find long plegma}, with $\delta=\frac{\theta}{2K}$ and $l=2m-1$, there exists $n_0\in\nn$ such that for every $n\geq n_0$ there exists a plegma family $(s_j)_{j=1}^l\in\textit{Plm}_l([\nn]^k)$ such that $\{s_j:\;1\leq j\leq l\}\subseteq \mathcal{A}_{n}$. Therefore setting $t_i=L(s_{m+i-1})$ for all $1\leq i\leq m$, we conclude that for every $m\in\nn$ there exists $(t_i)_{i=1}^m\in\textit{Plm}_m([L]^k)$ such that $t_1(1)\geq L(m)$ and $\Big\|\frac{1}{m}\sum_{j=1}^m x_{t_j}\Big\|>\frac{\theta}{2}$. This easily yields that $(e_n)_n$ is not Ces\`aro summable to zero, which is a contradiction. \end{proof} \begin{cor}\label{Rosenthal prop} Let $X$ be a Banach space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a bounded $k$-sequence in $X$. Let $M\in[\nn]^\infty$ such that the subsequence $(x_s)_{s\in[M]^k}$ generates an unconditional $k$-spreading model $(e_n)_n$. 
Then at least one of the following holds: \begin{enumerate} \item[(1)] The sequence $(e_n)_n$ is equivalent to the usual basis of $\ell^1$. \item[(2)] For every $L\in [M]^\infty$, $(x_s)_{s\in [L]^k}$ is $k$-Ces\`aro summable to zero. \end{enumerate} \end{cor} \begin{proof} Assume that $(e_n)_n$ is not equivalent to the usual basis of $\ell^1$. Since $(e_n)_n$ is an unconditional spreading sequence, by Proposition \ref{equiv forms for 1-subsymmetric weakly null} we have that $(e_n)_n$ is Ces\`aro summable to zero. Hence, by Proposition \ref{Furstemberg's application on spreading models} we have that $(x_s)_{s\in [L]^k}$ is $k$-Ces\`aro summable to zero, for every $L\in [M]^\infty$. \end{proof} \begin{rem} Notice that in the case $k=1$ the two alternatives of Corollary \ref{Rosenthal prop} are mutually exclusive. This does not remain valid for $k\geq 2$. For instance, assume that in Example \ref{example}, $(e_n)_n$ is the usual basis of $\ell^1$. Then the basis $(x_s)_{s\in[\nn]^{k+1}}$ of $X$ generates a $(k+1)$-spreading model equivalent to the usual basis of $\ell^1$ and simultaneously for every $L\in[\nn]^\infty$, $(x_s)_{s\in[L]^{k+1}}$ is $(k+1)$-Ces\`aro summable to zero. Indeed, let $L\in[\nn]^\infty$ and $n\in\nn$. Then, since every plegma tuple in $[L|n]^{k+1}$ has size less than $n$, we have \[\Big\|\Big(\substack{n\\\;\\k+1}\Big)^{-1}\sum_{s\in [L|n]^{k+1}} x_s\Big\|_{k+1}\leq n\Big(\substack{n\\\;\\k+1}\Big)^{-1}\] Since $k+1\geq 2$, $\lim_n n(\substack{n\\k+1})^{-1}=0$. Thus for every $L\in[\nn]^\infty$, $(x_s)_{s\in [L]^{k+1}}$ is $(k+1)$-Ces\`aro summable to zero. \end{rem} \begin{thm}\label{Rosenthal thm} Let $X$ be a Banach space, $k\in\nn$ and $(x_s)_{s\in[\nn]^k}$ be a weakly relatively compact $k$-sequence in $X$. Then there exists $M\in [\nn]^\infty$ such that at least one of the following holds: \begin{enumerate} \item[(1)] The subsequence $(x_s)_{s\in[M]^k}$ generates a $k$-spreading model equivalent to the usual basis of $\ell^1$. 
\item[(2)] There exists $x_0\in X$ such that for every $L\in [M]^\infty$, $(x_s)_{s\in [L]^k}$ is $k$-Ces\`aro summable to $x_0$. \end{enumerate} \end{thm} \begin{proof} First we notice that if there exists $M\in [\nn]^\infty$ such that $(x_s)_{s\in[M]^k}$ norm converges to some $x_0\in X$, then by Proposition \ref{rem on k-Cesaro summability} (i), we immediately get that (2) holds. So we may suppose for the sequel that the $k$-sequence $(x_s)_{s\in[\nn]^k}$ does not contain any norm convergent $k$-subsequence. Let $M_1\in[\nn]^\infty$ such that $(x_s)_{s\in[M_1]^k}$ generates a $k$-spreading model $(e_n)_n$. By Proposition \ref{Create subordinated} there exists $M_2\in[M_1]^\infty$ such that $(x_s)_{s\in[M_2]^k}$ is subordinated (with respect to the weak topology). Let $\widehat{\varphi}:[M_2]^{\leq k}\to (X,w)$ be the continuous map witnessing this and $x_0=\widehat{\varphi}(\emptyset)$. For every $s\in [M_2]^k$ we set $x'_s=x_s-x_0$. Notice that the map $\widehat{\psi}:[M_2]^{\leq k}\to (X,w)$ defined by $\widehat{\psi}(t)=\widehat{\varphi}(t)-x_0$ is continuous. Hence $(x'_s)_{s\in[M_2]^k}$ is subordinated. Since $\widehat{\psi}(\emptyset)=0$, by Proposition \ref{subordinating yields convergence}, we have that $(x'_s)_{s\in[M_2]^k}$ is weakly null. Moreover, since $(x_s)_{s\in[\nn]^k}$ does not contain any norm convergent $k$-subsequence, it is easy to see that $(x'_s)_{s\in[M_2]^k}$ is seminormalized. Let $(e'_n)_{n}$ be a $k$-spreading model of $(x'_s)_{s\in [M_2]^k}$ and let $M\in [M_2]^\infty$ such that $(x'_s)_{s\in[M]^k}$ generates $(e'_n)_{n}$. By Theorem \ref{unconditional spreading model}, $(e'_n)_{n}$ is unconditional and therefore, by Corollary \ref{Rosenthal prop}, we have that either $(e'_n)_{n}$ is equivalent to the usual basis of $\ell^1$ or for every $L\in [M]^\infty$, $(x'_s)_{s\in [L]^k}$ is $k$-Ces\`aro summable to zero. 
Since $x_s=x'_s+x_0$, for every $s\in [M]^k$, by Lemma \ref{triv-ell} we have that the first alternative yields that $(e_n)_n$ is equivalent to the usual basis of $\ell^1$ while the second one easily gives that for every $L\in [M]^\infty$, $(x_s)_{s\in [L]^k}$ is $k$-Ces\`aro summable to $x_0$. \end{proof} \section{The $k$-spreading models of $c_0$ and $\ell^p$, $1\leq p<\infty$} \label{spreading models of c_0 and l^p} In this section we deal with a natural problem, posed to us by Th. Schlumprecht, of determining the spreading models of the classical sequence spaces. As we will see, while the spreading models of $\ell^p$, $1\leq p<\infty$, are as expected, the class of the 2-spreading models of $c_0$ is surprisingly large. \subsection{The $k$-spreading models of $c_0$} It is well known that every non trivial spreading model of $c_0$ generates a space isomorphic to $c_0$. On the other hand, the class of the 2-spreading models of $c_0$ is quite large. As we will see, $\mathcal{SM}_2(c_0)$ contains all bimonotone Schauder basic spreading sequences. Notice that this property of $c_0$ is similar to the one of $C(\omega^\omega)$ admitting every $1$-unconditional spreading sequence as a spreading model (see \cite{Odell}). We start with the following lemma. \begin{lem}\label{on spr mod of c_0 second lem} Let $(e_n)_n$ be a spreading sequence in $\ell^\infty$ and let $(x_s)_{s\in [\nn]^2}$ be the $2$-sequence in $c_0$ defined by $x_s=(e_{s(1)}(1), e_{s(1)}(2),...,e_{s(1)}(s(2)), 0,0,...)$, for every $s\in [\nn]^2$. Then for every non trivial $2$-spreading model $(\widetilde{e}_n)_{n}$ of $(x_s)_{s\in[\nn]^2}$, $l\in\nn$ and $a_1,\ldots,a_l\in\rr$, we have \begin{equation}\label{gto}\Big\|\sum_{i=1}^la_ie_i\Big\|_\infty\leq\Big\|\sum_{i=1}^la_i\widetilde{e}_i\Big\|\leq\max_{1\leq j\leq l}\Big\|\sum_{i=j}^l a_i e_i\Big\|_\infty\end{equation} \end{lem} \begin{proof} We fix $l\in\nn$ and $a_1,\ldots, a_l\in\rr$. 
It is easy to check that for every $(s_i)_{i=1}^l\in \textit{Plm}_l([\nn]^2)$, we have that \begin{equation}\label{kh}\Big\|\sum_{i=1}^la_ix_{s_i}\Big\|_\infty\leq\max_{1\leq j\leq l}\Big\|\sum_{i=j}^l a_i e_{s_i(1)} \Big\|_\infty\end{equation} Let $M\in[\nn]^\infty$ such that $(x_s)_{s\in[M]^2}$ generates a non trivial $2$-spreading model $(\widetilde{e}_n)_{n}$. Then by (\ref{kh}), we easily obtain the right-hand inequality of (\ref{gto}). To complete the proof, we fix $\ee>0$ and $m_\ee\in\nn$ such that \begin{equation}\label{pi}\Big\|\sum_{i=1}^la_ie_i\Big\|_\infty-\ee\leq \Big|\sum_{i=1}^la_ie_i(m_\ee)\Big|\end{equation} Notice that for every $(s_i)_{i=1}^l\in\textit{Plm}_l([\nn]^2)$ with $ s_1(1)\geq m$, we have that \begin{equation}\label{pii}\Big|\sum_{i=1}^la_ie_i(m)\Big|\leq\Big\|\sum_{i=1}^la_ix_{s_i}\Big\|_\infty\end{equation} Therefore, since $(x_s)_{s\in[M]^2}$ generates $(\widetilde{e}_n)_{n}$ as a 2-spreading model, by (\ref{pi}) and (\ref{pii}), we get that \[\Big\|\sum_{i=1}^la_ie_i\Big\|_\infty-\ee\leq \Big|\sum_{i=1}^la_ie_i(m_\ee)\Big|\leq \Big\|\sum_{i=1}^la_i\widetilde{e}_i\Big\|\] Since this holds for every $\ee>0$, we obtain the left-hand inequality of (\ref{gto}) and the proof is complete. \end{proof} \begin{prop}\label{c_0 has every bimonotone as order 2} For every Schauder basic spreading sequence $(e_n)_n$ there exists $(\widetilde{e}_n)_{n}\in\mathcal{SM}_2(c_0)$ equivalent to $(e_n)_n$. In particular, if $(e_n)_n$ is bimonotone then $(e_n)_{n}$ is contained in $\mathcal{SM}_2(c_0)$. \end{prop} \begin{proof} We may assume that $(e_n)_n$ is a sequence in $\ell^\infty$. Let $C>0$ be the basis constant of $(e_n)_n$. 
By Lemma \ref{on spr mod of c_0 second lem} there exists $(\widetilde{e}_n)_{n}\in\mathcal{SM}_2(c_0)$ such that for all $l\in\nn$ and $a_1,\ldots,a_l\in\rr$, we have \begin{equation}\label{lo}\Big\|\sum_{i=1}^la_ie_i\Big\|_\infty\leq\Big\|\sum_{i=1}^la_i\widetilde{e}_i\Big\|\leq\max_{1\leq j\leq l}\Big\|\sum_{i=j}^l a_i e_i\Big\|_\infty\leq (1+C)\Big\|\sum_{i=1}^la_ie_i\Big\|_\infty\end{equation} Hence, $(e_n)_n$ and $(\widetilde{e}_n)_{n}$ are equivalent. Moreover, if in addition $(e_n)_n$ is bimonotone then $\text{max}_{1\leq j\leq l}\|\sum_{i=j}^l a_i e_i\|_\infty\leq \|\sum_{i=1}^l a_i e_i\|_\infty$ and therefore $(\widetilde{e}_n)_{n}$ is isometric to $(e_n)_n$. \end{proof} \begin{cor}\label{c_0 universcal for singular} For every singular spreading sequence $(e_n)_n$, there exists $(\widetilde{e}_n)_n\in\mathcal{SM}_2(c_0)$ equivalent to $(e_n)_n$. \end{cor} \begin{proof} Let $e_n=e'_n+e$ be the natural decomposition of $(e_n)_n$. By Remark \ref{properties of the natural decomposition}, $(e'_n)_n$ is spreading and 1-unconditional. Hence, by Proposition \ref{c_0 has every bimonotone as order 2}, there exists a 2-sequence $(x_s)_{s\in[\nn]^2}$ in $c_0$ generating $(e'_n)_n$ as a 2-spreading model. For every $s\in[\nn]^2$, let $\widetilde{x}_s$ be the sequence in $c_0$ defined by $\widetilde{x}_s(1)=\|e\|$ and $\widetilde{x}_s(n+1)=x_s(n)$ for all $n\in\nn$. It is easy to see that $(\widetilde{x}_s)_{s\in[\nn]^2}$ generates a 2-spreading model $(\widetilde{e}_n)_n$, satisfying \[\Big\|\sum_{j=1}^na_j\widetilde{e}_j\Big\|=\max\Big\{\Big|\sum_{j=1}^na_j\Big|\cdot\|e\|,\Big\|\sum_{j=1}^na_je'_j\Big\|\Big\}\] for all $n\in\nn$ and $a_1,\ldots,a_n\in\rr$. Therefore, by Remark \ref{properties of the natural decomposition}, we conclude that $(e_n)_n$ and $(\widetilde{e}_n)_{n}$ are equivalent. \end{proof} By Proposition \ref{c_0 has every bimonotone as order 2} and Corollary \ref{c_0 universcal for singular} we have the following. 
\begin{cor}\label{universality_of_c_0} The set $\mathcal{SM}_2(c_0)$ is isomorphically universal for all spreading sequences. \end{cor} \subsection{The $k$-spreading models of $\ell^p$, for $1\leq p<\infty$} The $k$-spreading models of the spaces $\ell^p$, for $1\leq p<\infty$, can be treated in the same way as the classical spreading models. This is based on the observation that the usual basis of these spaces is symmetric. Therefore, the norm-behavior of the $k$-sequences admitting a canonical tree decomposition is identical to that of sequences of the form $(x_n+x)_n$, where $(x_n)_n$ is a block sequence. In particular, for the case of $\ell^1$, one has to make use of the $w^*$-relative compactness of the bounded $k$-sequences in order to pass to a subordinated $k$-subsequence with respect to the $w^*$-topology and in turn to a further one which is approximated by a $k$-subsequence admitting a canonical tree decomposition. This procedure yields the following. \begin{thm}\label{Spr of l^p, 1<p} Let $1\leq p<\infty$ and $(\widetilde{e}_n)_n$ be a $k$-spreading model of $\ell^p$, for some $k\in\nn$. Then there exist $a_1,a_2\geq 0$ such that $(\widetilde{e}_n)_n$ is isometric to the sequence $(a_1e_1+a_2e_{n+1})_n$, where $(e_n)_n$ denotes the usual basis of $\ell^p$. More precisely we have the following. \begin{enumerate} \item[(i)] The sequence $(\widetilde{e}_n)_n$ is trivial if and only if $a_2=0$. \item[(ii)] The sequence $(\widetilde{e}_n)_n$ is singular if and only if $a_1\neq0$ and $a_2\neq0$. \item[(iii)] The sequence $(\widetilde{e}_n)_n$ is Schauder basic if and only if $a_1=0$ and $a_2\neq0$. In this case $(\widetilde{e}_n)_n$ is equivalent to the usual basis of $\ell^p$. 
\end{enumerate} \end{thm} \begin{rem} It can also be shown that every $(\widetilde{e}_n)_n\in\mathcal{SM}_{k}^{wrc}(c_0)$ satisfies the analogue of Theorem \ref{Spr of l^p, 1<p} with $c_0$ in place of $\ell^p$.\end{rem} \begin{cor} Every non trivial $k$-spreading model of $\ell^p$, $1<p<\infty$, generates a space isometric to $\ell^p$. In particular, every non trivial $k$-spreading model of $\ell^1$ is Schauder basic and equivalent to the usual basis of $\ell^1$. \end{cor} \section{A reflexive space not admitting $\ell^p$ or $c_0$ as a spreading model}\label{space Odel_Schlum} A space not admitting any $\ell^p$, for $1\leq p<\infty$, or $c_0$ spreading model was constructed in \cite{O-S}. In the same paper it is asked if there exists a space which does not admit any $\ell^p$, for $1\leq p<\infty$, or $c_0$ $k$-iterated spreading model for any $k\in\nn$. In this section we give an example of a reflexive space $X$ answering this problem in the affirmative. \subsection{The definition of the space $X$} The construction of $X$ is closely related to the corresponding one in \cite{O-S}. Let $(n_j)_j$ and $(m_j)_j$ be two strictly increasing sequences of natural numbers satisfying the following: \begin{enumerate} \item[(i)] $\sum_{j=1}^\infty \frac{1}{m_j}\leq 0.1$. \item[(ii)] For every $a>0$, we have that $\frac{n_j^a}{m_j}\stackrel{j\to\infty}{\longrightarrow}\infty$. \item[(iii)] For every $j\in\nn$, we have that $\frac{n_j}{n_{j+1}}<\frac{1}{m_{j}}$. \end{enumerate} Let $\|\cdot\|$ be the norm on $c_{00}(\nn)$, implicitly defined as follows. For every $x\in c_{00}(\nn)$ we set \begin{equation}\label{eq15} \|x\|=\max \Big\{ \|x\|_\infty,\big( \sum_{j=1}^\infty \|x\|_j^2 \big)^\frac{1}{2}\Big\} \end{equation} where $\|x\|_j=\sup\{\frac{1}{m_j}\sum_{q=1}^{n_j}\|E_q(x)\|:E_1<\ldots<E_{n_j}\}$. Let $X$ be the completion of $c_{00}(\nn)$ under the above norm. It is easy to see that the Hamel basis of $c_{00}(\nn)$ is an unconditional basis of the space $X$. 
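The following simple computation illustrates how the norm acts on long sums of the basis; it lies behind the observation, made below in the proof of the reflexivity of $X$, that $X$ contains no isomorphic copy of $c_0$. Fix $j\in\nn$ and let $x=e_{i_1}+\ldots+e_{i_{n_j}}$, where $i_1<\ldots<i_{n_j}$ are arbitrary elements of $\nn$ and $(e_i)_i$ denotes the basis of $X$. By (\ref{eq15}), $\|e_i\|_{j'}=\frac{1}{m_{j'}}\|e_i\|$ for every $j'\in\nn$, and since $\big(\sum_{j'=1}^\infty m_{j'}^{-2}\big)^{\frac{1}{2}}\leq\sum_{j'=1}^\infty m_{j'}^{-1}\leq 0.1$, it follows that $\|e_i\|=1$, for every $i\in\nn$. Therefore, choosing the singletons $E_q=\{i_q\}$, $1\leq q\leq n_j$, in the definition of $\|\cdot\|_j$, we obtain \[\|x\|\geq\|x\|_j\geq\frac{1}{m_j}\sum_{q=1}^{n_j}\|E_q(x)\|=\frac{n_j}{m_j}\] while $\|x\|_\infty=1$. Since $\frac{n_j}{m_j}\stackrel{j\to\infty}{\longrightarrow}\infty$, no subsequence of the basis of $X$ is equivalent to the usual basis of $c_0$.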
Also notice that for every $x\in X$ the sequence $w=(\|x\|_j)_j$ belongs to $\ell^2$ and $\big( \sum_{j=1}^\infty \|x\|_j^2 \big)^\frac{1}{2}=\|w\|_{\ell^2}\leq \|x\|$. \subsection{The main results} The following is the main result of this section. \begin{thm}\label{Odel slumpr theorem} For every $k\in\nn$ and $(e_n)_n\in\mathcal{SM}_k(X)$, the space $E$ generated by $(e_n)_n$ does not contain any isomorphic copy of $\ell^p$, $1\leq p<\infty$, or $c_0$. \end{thm} Given the above theorem, we get the following consequence, which answers the aforementioned problem stated in \cite{O-S}. \begin{cor} For every $k\in\nn$, the spaces generated by the $k$-iterated spreading models of $X$ do not contain any isomorphic copy of $\ell^p$, $1\leq p<\infty$, or $c_0$. \end{cor} \begin{proof} By Theorem \ref{Odel slumpr theorem} and James' Theorem we have that for every $k\in\nn$, the spaces generated by the unconditional $k$-spreading models of $X$ are reflexive. By Corollary \ref{qwqwe} we have that for every $k\in\nn$, every space generated by a $k$-iterated spreading model of $X$ is isomorphic to the space generated by an unconditional $k$-spreading model of $X$. By Theorem \ref{Odel slumpr theorem}, the proof is complete. \end{proof} Also notice that this example shows that Krivine's theorem \cite{Kr} concerning $\ell^p$ or $c_0$ block finite representability cannot be captured by the notion of $k$-spreading models. \subsection{Proof of Theorem \ref{Odel slumpr theorem}} We will need the next well known lemma (see \cite{AT}). \begin{lem}\label{small_estimation_on_means} Let $j<j_0$ in $\nn$ and $(x_q)_{q=1}^{n_{j_0}}$ be a block sequence in the unit ball $B_X$ of $X$. Then \[\Big\| \frac{x_1+\ldots+x_{n_{j_0}}}{n_{j_0}} \Big\|_j<\frac{2}{m_j}\] \end{lem} \begin{lem}\label{small means} Let $d_0<j_0$ in $\nn$ and $(x_q)_{q=1}^{n_{j_0}}$ be a block sequence in $B_X$. We set $E=\{n\in\nn:n>d_0\}$ and $w_q=(\|x_q\|_j)_j$, for all $1\leq q\leq n_{j_0}$. 
Assume that for some $0<\ee<1$ there exists a disjointly supported finite sequence $(w'_q)_{q=1}^{n_{j_0}}$ in $\ell^2$ such that $\|E(w_q-w'_q)\|_{\ell^2}<\ee$, for all $1\leq q\leq n_{j_0}$. Then \[\Big\|\frac{x_1+\ldots+x_{n_{j_0}}}{n_{j_0}}\Big\|<0.2+\ee+2n_{j_0}^{-\frac12}\] \end{lem} \begin{proof} By Lemma \ref{small_estimation_on_means}, we have that \[\Bigg\|\Big(\Big\|\frac{\sum_{q=1}^{n_{j_0}}x_q}{n_{j_0}}\Big\|_j \Big)_{j=1}^{d_0}\Bigg\|_{\ell^2} \leq\sum_{j=1}^{d_0}\Big\|\frac{\sum_{q=1}^{n_{j_0}}x_q}{n_{j_0}}\Big\|_j\leq\sum_{j=1}^{d_0}\frac{2}{m_j}<0.2\] Using the above and the observation that $\|E(w'_q)\|_{\ell^2}\leq2$, for all $1\leq q\leq n_{j_0}$, we get the following. \[\begin{split} \Bigg\|\Big(\Big\|\frac{1}{n_{j_0}}\sum_{q=1}^{n_{j_0}}x_q \Big\|_j\Big)_j\Bigg\|_{\ell^2}& \leq0.2+\Bigg\|\Big(\Big\|\frac{1}{n_{j_0}}\sum_{q=1}^{n_{j_0}}x_q\Big\|_j\Big)_{j>d_0}\Bigg\|_{\ell^2}\\ &\leq0.2+\Bigg\|\frac{1}{n_{j_0}}\sum_{q=1}^{n_{j_0}}\big(w_q(j)\big)_{j>d_0}\Bigg\|_{\ell^2} \leq0.2+\Big\|\sum_{q=1}^{n_{j_0}}\frac{E(w'_q)}{n_{j_0}}\Big\|_{\ell^2}+\ee\\ &\leq0.2+\Big(\sum_{q=1}^{n_{j_0}}\Big(\frac{2}{n_{j_0}}\Big)^2\Big)^\frac{1}{2}+\ee= 0.2+\ee+2n_{j_0}^{-\frac12} \end{split}\] Moreover, $\|\frac{1}{n_{j_0}}\sum_{q=1}^{n_{j_0}}x_q\|_\infty\leq\frac{1}{n_{j_0}}<\frac{1}{m_1}<0.1$. Hence by (\ref{eq15}) the proof is complete. \end{proof} \begin{lem}\label{non containing l^1 block spreading model} For every $k\in\nn$, no plegma block generated $k$-spreading model of $X$ is equivalent to the usual basis of $\ell^1$. \end{lem} \begin{proof} Assume on the contrary that there exist $k\in\nn$ and a plegma block $k$-sequence $(x_s)_{s\in[\nn]^k}$ in $X$ which generates $\ell^1$ as a $k$-spreading model. By Proposition \ref{Prop on almost isometric l^1 spr mod}, we may also assume that $x_s\in B_X$, for all $s\in[\nn]^k$, and that $(x_s)_{s\in[\nn]^k}$ generates $\ell^1$ as a $k$-spreading model of constant $1-\ee$, where $\ee=0.1$. 
For every $s\in[\nn]^k$, let $w_s=(\|x_s\|_j)_j$. Since $(w_s)_{s\in[\nn]^k}$ is a $k$-sequence in $B_{\ell^2}$, it is weakly relatively compact. Hence, by Proposition \ref{cor for subordinating}, there exists $M\in[\nn]^\infty$ such that the $k$-subsequence $(w_s)_{s\in[M]^k}$ is subordinated with respect to the weak topology on $\ell^2$. Let $\widehat{\varphi}:[M]^{\leq k}\to(\ell^2,w)$ be the continuous map witnessing this. By Theorem \ref{canonical tree}, there exist $L\in[M]^\infty$ and a $k$-subsequence $(\widetilde{w}_s)_{s\in[L]^k}$ in $\ell^2$ satisfying the following. \begin{enumerate} \item[(i)] $(\widetilde{w}_s)_{s\in[L]^k}$ admits a canonical tree decomposition $(\widetilde{z}_t)_{t\in[L]^{\leq k}}$ with $\widetilde{z}_\emptyset=\widehat{\varphi}(\emptyset)$. \item[(ii)] For every $s\in[L]^k$, $\|w_s-\widetilde{w}_s\|_{\ell^2}<\ee/2$. \item[(iii)] The $k$-subsequence $(\widetilde{w}_s)_{s\in[L]^k}$ is subordinated with respect to the weak topology of $\ell^2$. \end{enumerate} Let $d_0\in\nn$ be such that $\|E(\widehat{\varphi}(\emptyset))\|_{\ell^2}<\frac{\ee}{2}$, where $E=\{d_0+1,\ldots\}$. For every $s\in[L]^k$ we set $w'_s=\widetilde{w}_s-\widehat{\varphi}(\emptyset)$. By Proposition \ref{trocan} (iv), we have that $(w'_s)_{s\in[L]^k}$ is plegma disjointly supported. Moreover, notice that $\|E(w_s-w'_s)\|_{\ell^2}<\ee$, for all $s\in[L]^k$. We pick $j_0>d_0$ such that $2n_{j_0}^{-\frac12}<\ee$. Since $(x_s)_{s\in[\nn]^k}$ generates $\ell^1$ as a $k$-spreading model of constant $0.9$, we may choose $(s_q)_{q=1}^{n_{j_0}}\in\textit{Plm}_{n_{j_0}}([L]^k)$ such that \begin{equation}\label{eq16} \Big\| \frac{1}{n_{j_0}}\sum_{q=1}^{n_{j_0}}x_{s_q} \Big\|\geq 0.8 \end{equation} Observe that $d_0,j_0,\ee$, $(x_{s_q})_{q=1}^{n_{j_0}}$ and $(w'_{s_q})_{q=1}^{n_{j_0}}$ satisfy the assumptions of Lemma \ref{small means}.
Hence \[\Big\| \frac{1}{n_{j_0}}\sum_{q=1}^{n_{j_0}}x_{s_q} \Big\|<0.2+\ee+2n_{j_0}^{-\frac12}<0.4\] which contradicts (\ref{eq16}), and the proof is complete. \end{proof} \begin{cor} The space $X$ is reflexive.\end{cor} \begin{proof} Lemma \ref{non containing l^1 block spreading model} implies that the space $X$ does not contain any isomorphic copy of $\ell^1$. Moreover, using that $\frac{n_j}{m_j}\stackrel{j\to\infty}{\longrightarrow}\infty$, it is easy to see that the space $X$ does not contain any isomorphic copy of $c_0$. Since the basis of $X$ is unconditional, the result follows by James' theorem.\end{proof} \begin{cor}\label{corell1} For all $k\in\nn$, every $k$-spreading model of $X$ is not equivalent to the usual basis of $\ell^1$. \end{cor} \begin{proof} Suppose on the contrary that there exist $k\in\nn$ and a bounded $k$-sequence $(x_s)_{s\in[\nn]^k}$ which generates a $k$-spreading model equivalent to the $\ell^1$ basis. By the reflexivity of $X$, we have that $(x_s)_{s\in[\nn]^k}$ is weakly relatively compact. Therefore, by Theorem \ref{getting block generated ell^1 spreading model}, there exists a plegma block generated $k$-spreading model of $X$ equivalent to the usual basis of $\ell^1$, which contradicts Lemma \ref{non containing l^1 block spreading model}. \end{proof} \begin{lem}\label{breaking upper l^p} Let $1<p\leq\infty$. Then for every $\delta,C>0$ there exists $l_0\in\nn$ such that for every $l\geq l_0$ and every block sequence $(x_q)_{q=1}^{n_l}$ in $X$ with $\|x_q\|>\delta$, for all $1\leq q\leq n_l$, we have that \[\Big\|\sum_{q=1}^{n_l}x_q\Big\|>Cn_l^\frac{1}{p}\] where by convention $\frac{1}{\infty}=0$. \end{lem} \begin{proof} Since $\frac{n_l^{1-\frac{1}{p}}}{m_l}\stackrel{l\to\infty}{\longrightarrow}\infty$, there exists $l_0\in\nn$ such that $\frac{n_l^{1-\frac{1}{p}}}{m_l}>\frac{C}{\delta}$, for every $l\geq l_0$. Let $l\geq l_0$ and let $(x_q)_{q=1}^{n_l}$ be a block sequence in $X$ with $\|x_q\|>\delta$, for all $1\leq q\leq n_l$.
Then \[\Big\|\sum_{q=1}^{n_l}x_q\Big\|\geq\Big\|\sum_{q=1}^{n_l}x_q\Big\|_l\geq \frac{1}{m_l}\sum_{q=1}^{n_l}\|x_q\|>\frac{n_l}{m_l}\delta>Cn_l^\frac{1}{p}\] \end{proof} \begin{cor}\label{adfs} For all $k\in\nn$, every $k$-spreading model of $X$ is not equivalent to the usual basis of $\ell^p$, $1<p<\infty$, or $c_0$. \end{cor} \begin{proof} Suppose on the contrary that for some $k\in\nn$, $X$ admits a $k$-spreading model $(e_n)_n$ which is equivalent to the usual basis of either $\ell^p$, for some $1<p<\infty$, or $c_0$. First we shall treat the case of $\ell^p$. Since $X$ is reflexive, we have that $(e_n)_n\in\mathcal{SM}_k^{wrc}(X)$. By Corollary \ref{cor canonical tree with spr mod}, there exists a subordinated $k$-sequence $(x_s)_{s\in[\nn]^k}$ admitting a canonical tree decomposition $(y_t)_{t\in[\nn]^{\leq k}}$, which generates $(e_n)_n$ as a $k$-spreading model. Since the basis of $X$ is unconditional and $(e_n)_n$ is Ces\`aro summable to zero, it is easy to see that $y_\emptyset=0$. Notice that $(x_s)_{s\in[\nn]^k}$ is seminormalized and let $\delta>0$ be such that $\|x_s\|>\delta$, for all $s\in[\nn]^k$. Hence, for every $s\in[\nn]^k$ there exists $1\leq d\leq k$ such that $\|y_{s|d}\|>\frac{\delta}{k}$. By Ramsey's theorem there exist $1\leq d\leq k$ and $L\in[\nn]^\infty$ such that for every $s\in[L]^k$, $\|y_{s|d}\|>\frac{\delta}{k}$. By Proposition \ref{trocan} (iii), we have that $(y_{s|d})_{s\in[L]^k}$ is plegma block. Fix $C>0$. By Lemma \ref{breaking upper l^p}, there exists $l_0$ such that for every $l>l_0$ and $(s_q)_{q=1}^{n_l}\in\textit{Plm}_{n_l}([L]^k)$ we have that $\Big\|\sum_{q=1}^{n_l}y_{s_q|d}\Big\|>Cn_l^{\frac1p}$. Hence, by the 1-unconditionality of the basis of $X$, we conclude that \[\Big\|\sum_{q=1}^{n_l}x_{s_q}\Big\|>Cn_l^{\frac1p}\] Since the above holds for every $C>0$, we have that $(e_n)_n$ is not equivalent to the usual basis of $\ell^p$, which is a contradiction.
Finally, if $(e_n)_n$ is equivalent to the usual basis of $c_0$, then the proof is carried out using identical arguments as above, applying Lemma \ref{breaking upper l^p} for $p=\infty$. \end{proof} \begin{proof}[Proof of Theorem \ref{Odel slumpr theorem}] Suppose that for some $k\in\nn$ there exists $(e_n)_n\in\mathcal{SM}_{k}(X)$ such that the space $E$ generated by $(e_n)_n$ contains an isomorphic copy of $Y$, where $Y$ is either $\ell^p$, for some $1\leq p<\infty$, or $c_0$. Obviously $(e_n)_n$ is nontrivial. Since $X$ is reflexive, $(e_n)_n\in\mathcal{SM}_{k}^{wrc}(X)$. By Corollary \ref{l^p in wrc}, we have that $\mathcal{SM}_{k+1}(X)$ contains a sequence equivalent to the usual basis of $Y$. By Corollaries \ref{corell1} and \ref{adfs}, we get a contradiction. \end{proof} \section{A space $X$ such that $\mathcal{SM}_k(X)$ is a proper subset of $\mathcal{SM}_{k+1}(X)$}\label{s12} In this section we shall present a Banach space $\mathfrak{X}_{k+1}$, having an unconditional basis $(e_s)_{s\in[\nn]^{k+1}}$ which generates a $(k+1)$-spreading model equivalent to the usual basis of $\ell^1$, while the space $\mathfrak{X}_{k+1}$ does not admit $\ell^1$ as a $k$-spreading model. Moreover, $(e_s)_{s\in[\nn]^{k+1}}$ is not $(k+1)$-Ces\`aro summable to any $x_0$ in $\mathfrak{X}_{k+1}$. \subsection{The definition of the space $\mathfrak{X}_{k+1}$} We fix for the following a positive integer $k$. We will need the next definition. \begin{defn} A family $\mathcal{P}\subseteq [\nn]^{k+1}$ will be called plegmatic in $[\nn]^{k+1}$, if there exists a finite block sequence $F_1<\ldots<F_{k+1}$ of subsets of $\nn$ with $|F_1|=\ldots=|F_{k+1}|$ such that $\mathcal{P}\subseteq F_1\times\ldots\times F_{k+1}$. A plegmatic family $\mathcal{P}\subseteq[\nn]^{k+1}$ will be called Schreier if in addition $|F_1|\leq \min F_1$.
\end{defn} For instance, for every $(s_j)_{j=1}^l\in\textit{Plm}_l([\nn]^{k+1})$, the family $\mathcal{P}=\{s_1,\ldots,s_l\}$ is plegmatic; notice, however, that not every plegmatic family in $[\nn]^{k+1}$ is plegma. Let $(e_s)_{s\in [\nn]^{k+1}}$ be the Hamel basis of $c_{00}([\nn]^{k+1})$. For every $x=\sum_{s\in[\nn]^{k+1}}x(s)e_s$ in $c_{00}([\nn]^{k+1})$, we set \begin{equation}\label{no}\|x\|=\sup \Big(\sum_{i=1}^n\|\mathcal{P}_i(x) \|_1^2\Big)^{\frac{1}{2}}\end{equation} where $\|\mathcal{P}(x)\|_1=\sum_{s\in \mathcal{P}}|x(s)|$, for all $\mathcal{P}\subseteq [\nn]^{k+1}$, and the supremum in (\ref{no}) is taken over all finite sequences $(\mathcal{P}_i)_{i=1}^n$ of disjoint Schreier plegmatic families in $[\nn]^{k+1}$. The space $\mathfrak{X}_{k+1}$ is defined to be the completion of $(c_{00}([\nn]^{k+1}),\|\cdot\|)$. The proof of the next proposition is straightforward. \begin{prop} \label{mn}The Hamel basis $(e_s)_{s\in[\nn]^{k+1}}$ of $c_{00}([\nn]^{k+1})$ is an unconditional basis for the space $\mathfrak{X}_{k+1}$ and it generates a $(k+1)$-spreading model which is isometric to the usual basis of $\ell^1$. \end{prop} We may also define a norming set $W$ for the space $\mathfrak{X}_{k+1}$ as follows. First, let \[W^0=\Big\{ \sum_{s\in\mathcal{P}} \pm e_s^*: \mathcal{P}\subseteq [\nn]^{k+1}\;\;\text{is Schreier plegmatic}\Big\}\] For each $f=\sum_{s\in\mathcal{P}} \pm e_s^*\in W^0$, the support of $f$, denoted by $\text{supp}(f)$, is defined to be the family $\mathcal{P}$. It is easy to see that a norming set for $\mathfrak{X}_{k+1}$ is the set $W$ consisting of all $f=\sum_{i=1}^n\lambda_i f_i$, where $(f_i)_{i=1}^n$ is a sequence in $W^0$ such that $\text{supp}(f_i)\cap\text{supp}(f_j)=\emptyset$, for all $1\leq i<j\leq n$, and $\sum_{i=1}^n\lambda_i^2\leq 1$. In order to study the basic properties of the space $\mathfrak{X}_{k+1}$, we need the following proposition.
\begin{prop} \label{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k} Every plegma disjointly generated $k$-spreading model of $\mathfrak{X}_{k+1}$ is not equivalent to the usual basis of $\ell^1$. \end{prop} The proof is postponed to the next subsection. Assuming Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k}, we are able to prove the following. \begin{thm} The space $\mathfrak{X}_{k+1}$ has the following properties. \begin{enumerate} \item[(i)] It is reflexive. \item[(ii)] There is no sequence $(e_n)_n\in\mathcal{SM}_{k}(\mathfrak{X}_{k+1})$ equivalent to the usual basis of $\ell^1$. \item[(iii)] Every $(k+1)$-subsequence of $(e_s)_{s\in[\nn]^{k+1}}$ is not $(k+1)$-Ces\`aro summable to any $x_0$ in $\mathfrak{X}_{k+1}$. \end{enumerate} \end{thm} \begin{proof} (i) By Proposition \ref{mn}, we have that $(e_s)_{s\in[\nn]^{k+1}}$ is unconditional. Also, it is easy to check that it is boundedly complete. Thus $c_0$ is not contained in $\mathfrak{X}_{k+1}$. Moreover, the same holds for $\ell^1$, since otherwise there would exist a disjointly supported sequence $(x_n)_{n}$ in $\mathfrak{X}_{k+1}$ equivalent to the usual basis of $\ell^1$, which is impossible by Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k}. Hence, by James' theorem \cite{J}, the space $\mathfrak{X}_{k+1}$ is reflexive.\\ (ii) Assume on the contrary that there exists $(e_n)_{n}$ in $\mathcal{SM}_k(\mathfrak{X}_{k+1})$ equivalent to the usual basis of $\ell^1$. Since $\mathfrak{X}_{k+1}$ is reflexive, we get that $(e_n)_{n}\in\mathcal{SM}^{wrc}_k(\mathfrak{X}_{k+1})$. Hence, by Corollary \ref{cor canonical tree with spr mod}, $(e_n)_{n}$ is generated by a $k$-sequence $(x_s)_{s\in[\nn]^k}$ in $\mathfrak{X}_{k+1}$ admitting a canonical tree decomposition $(y_t)_{t\in[\nn]^{\leq k}}$.
Setting $x'_s=x_s-y_\emptyset$, for all $s\in[\nn]^k$, by Lemma \ref{triv-ell}, we have that $(x'_s)_{s\in[\nn]^k}$ also admits a $k$-spreading model equivalent to the usual basis of $\ell^1$. Since $(x'_s)_{s\in[\nn]^k}$ is a plegma disjointly supported $k$-sequence, by Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k} we have reached a contradiction.\\ (iii) Since $\mathfrak{X}_{k+1}$ is reflexive, we have that $(e_s)_{s\in[\nn]^{k+1}}$ is a weakly null $(k+1)$-sequence. Let $M\in[\nn]^\infty$ and assume that $(e_s)_{s\in[M]^{k+1}}$ is $(k+1)$-Ces\`aro summable to some $x_0\in\mathfrak{X}_{k+1}$. By Proposition \ref{rem on k-Cesaro summability}(ii), we get that $x_0=0$. For every $n\in\nn$, let \begin{equation}y_n=\binom{(k+2)n}{k+1}^{-1}\sum_{s\in[M| (k+2)n]^{k+1}}e_s\end{equation} Moreover, for every $n\in\nn$, we set $\mathcal{P}_n=F_1^n\times\ldots\times F_{k+1}^n$, where $F_i^n=\{M(in+1),\ldots,M((i+1)n)\}$ for every $1\leq i\leq k+1$, and $f_n=\sum_{s\in \mathcal{P}_n}e^*_s$. It is easy to check that \begin{equation}\label{qr}f_n(y_n)= n^{k+1}\cdot \binom{(k+2)n}{k+1}^{-1}\stackrel{n\to\infty}{\longrightarrow}\frac{(k+1)!}{(k+2)^{k+1}}\end{equation} Since $\|y_n\|\geq f_n(y_n)$, by (\ref{qr}) we conclude that $(e_s)_{s\in[M]^{k+1}}$ is not $(k+1)$-Ces\`aro summable to $x_0=0$, a contradiction. \end{proof} \subsection{Proof of Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k}} \begin{lem}\label{supports} Let $x\in \mathfrak{X}_{k+1}$ be of finite support and $f\in W^0$ be such that $\text{supp} (f)\cap \text{supp}(x)\neq \emptyset$. Then $|\text{supp}(f)|\leq n_0^{k+1}$, where $n_0=\max\{s(1):\;s\in\text{supp}(x)\}$. \end{lem} \begin{proof} There exist $F_1<\ldots< F_{k+1}$ subsets of $\nn$ such that $|F_1|=\ldots=|F_{k+1}|$, $\text{supp}(f)\subseteq F_1\times\ldots\times F_{k+1}$ and $|F_1|\leq\min F_1$.
Hence $|\text{supp}(f)|\leq(\min F_1)^{k+1}$. Let $s\in\text{supp}(f)\cap\text{supp}(x)$. Then $n_0\geq s(1)\geq\min F_1$, and therefore $|\text{supp}(f)|\leq n_0^{k+1}$. \end{proof} \begin{lem} Let $N_0\in\nn$. Then for every $0<\ee<1$, every $l\in\nn$ and every disjointly supported finite sequence $(x_j)_{j=1}^l$ in the unit ball of $\mathfrak{X}_{k+1}$ such that for every $1\leq j\leq l$ and $s\in\text{supp}(x_j)$, $s(1)\leq N_0$, we have that \[\Big\|\frac{1}{l}\sum_{j=1}^{l}x_{j}\Big\|\leq \ee +\frac{N_0^{k+1}}{\ee^2 l}\] \end{lem} \begin{proof} We fix $0<\ee<1$, $l\in\nn$ and $(x_j)_{j=1}^l$ satisfying the assumptions of the lemma. Let $\varphi=\sum_{i=1}^n\lambda_if_i\in W$, where $n\in\nn$, $\lambda_1,\ldots,\lambda_n\in\rr$ with $\sum_{i=1}^n\lambda_i^2\leq1$ and $f_1,\ldots,f_n\in W^0$ pairwise disjointly supported. For every $j=1,\ldots,l$ we set \[I_j=\Big\{i\in\{1,\ldots,n\}:\;\text{supp}(f_i)\cap\text{supp}(x_j)\neq\emptyset \Big\}\] By Lemma \ref{supports}, we have that for every $1\leq j\leq l$, if $i\in I_j$ then $|\text{supp}(f_i)|\leq N_0^{k+1}$. Also let $F_1=\{j\in\{1,\ldots,l\}:\; \sum_{i\in I_j}\lambda_i^2 <\varepsilon^2\}$ and $F_2=\{1,\ldots,l\}\setminus F_1$. It is easy to see that $\sum_{i\in I_j}\frac{f_i(x_j)}{( \sum_{i\in I_j}f_i(x_j)^2 )^\frac{1}{2}}f_i$ belongs to $W$, for all $1\leq j\leq l$. Hence, since $\|x_j\|\leq1$, we have that $\sum_{i\in I_j} f_i(x_j)^2\leq1$, for all $1\leq j\leq l$.
Therefore we have \[\begin{split} \varphi\Big(\sum_{j=1}^{l}x_j\Big)&= \sum_{i=1}^n\lambda_i f_i\Big(\sum_{j=1}^{l}x_j\Big) = \sum_{j=1}^{l} \sum_{i=1}^n \lambda_i f_i(x_j)\\ &= \sum_{j=1}^{l} \sum_{i\in I_j} \lambda_i f_i(x_j)\leq \sum_{j=1}^{l} \Big{(}\sum_{i\in I_j} \lambda_i^2\Big{)}^\frac{1}{2} \Big{(}\sum_{i\in I_j}f_i(x_j)^2\Big{)}^\frac{1}{2}\\ &\leq \sum_{j\in F_1} \Big{(}\sum_{i\in I_j} \lambda_i^2\Big{)}^\frac{1}{2} + \sum_{j\in F_2} \Big{(}\sum_{i\in I_j} \lambda_i^2\Big{)}^\frac{1}{2}\\ &\leq \varepsilon |F_1|+|F_2|\leq \ee l+|F_2| \end{split}\] For every $1\leq i\leq n$, let $J_i=\{j\in\{1,\ldots,l\}:\;i\in I_j\}$. If for some $1\leq i\leq n$ we have that $J_i\neq\emptyset$ then, by Lemma \ref{supports}, we have that $|\text{supp}(f_i)|\leq N_0^{k+1}$ and, since $(x_j)_{j=1}^{l}$ are disjointly supported, we conclude that $|J_i|\leq N_0^{k+1}$. Therefore, for every $1\leq i\leq n$, $|J_i|\leq N_0^{k+1}$. Hence $$\ee^2 |F_2|\leq \sum_{j\in F_2}\sum_{i\in I_j}\lambda_i^2\leq \sum_{j=1}^{l}\sum_{i\in I_j}\lambda_i^2=\sum_{i=1}^n|J_i|\lambda_i^2\leq N_0^{k+1}\sum_{i=1}^n \lambda_i^2\leq N_0^{k+1}$$ which yields that $|F_2|\leq N_0^{k+1}/\ee^2$. Therefore, for every $\varphi\in W$ we have \[\varphi\Big(\sum_{j=1}^{l}x_j\Big)\leq \varepsilon l+\frac{N_0^{k+1}}{\ee^2}\] Since $W$ is a norming set for $\mathfrak{X}_{k+1}$, dividing by $l$ completes the proof. \end{proof} \begin{defn} (i) Let $\mathcal{G}_1,\mathcal{G}_2\subseteq [\nn]^{k+1}$. We will call the pair $(\mathcal{G}_1,\mathcal{G}_2)$ weakly plegmatic if for every $s_2\in \mathcal{G}_{2}$ there exists $s_1\in \mathcal{G}_1$ such that the pair $\{s_1,s_2\}$ is plegmatic.\\ (ii) For every $0\leq j\leq l$, let $\mathcal{G}_j\subseteq [\nn]^{k+1}$. The finite sequence $(\mathcal{G}_j)_{j=0}^l$ will be called a \textit{weakly plegmatic path of subsets of} $[\nn]^{k+1}$, if for every $0\leq i< l$ the pair $(\mathcal{G}_{i},\mathcal{G}_{i+1})$ is weakly plegmatic.
\end{defn} \begin{lem} \label{blocking the first coordinates in weakly allowable paths} Let $(\mathcal{G}_j)_{j=0}^k$ be a weakly plegmatic path of subsets of $[\nn]^{k+1}$. Then $\max\{s(1):s\in\cup_{j=0}^k \mathcal{G}_j\}\leq \max\{ s(k+1): s\in \g_0\}$. \end{lem} \begin{proof} Let $0\leq j\leq k$ and $s\in \g_j$. Then it is easy to see that there exists a sequence $(s_i)_{i=0}^j$ in $[\nn]^{k+1}$ with $s_i\in \g_i$, for every $0\leq i\leq j$, and $s_j=s$, such that $\{s_i,s_{i+1}\}$ is plegmatic, for all $0\leq i\leq j-1$. Hence $$s(1)=s_j(1)<s_{j-1}(2)<\ldots<s_0(j+1)\leq s_0(k+1)\leq \max\{ s(k+1): s\in \g_0\}$$ \end{proof} \begin{lem} \label{Lemma for 2 vectos of norm almost 2} Let $0<\eta<\frac18$ and $x_1,x_2\in\mathfrak{X}_{k+1}$ with disjoint finite supports such that $\|x_1\|,\|x_2\|\leq1$ and $\|x_1+x_2\|> 2-2\eta$. Let $\mathcal{G}_1\subseteq\text{supp}(x_1)$ be such that $\|\mathcal{G}_1^c(x_1)\|\leq \eta$. Then there exists $\mathcal{G}_2\subseteq\text{supp}(x_2)$ satisfying the following. \begin{enumerate} \item[(i)] The pair $(\mathcal{G}_1,\mathcal{G}_2)$ is weakly plegmatic and \item[(ii)] $\|\mathcal{G}_2^c(x_2)\|\leq \eta^\frac{1}{8}$. \end{enumerate} \end{lem} \begin{proof} Since $\|x_1+x_2\|>2-2\eta$, there exists $\varphi\in W$ such that $\varphi(x_1+x_2)>2-2\eta$. Since $\|x_1\|,\|x_2\|\leq1$, we get that $\varphi(x_1)>1-2\eta$ and $\varphi(x_2)>1-2\eta$. The functional $\varphi$ is of the form $\sum_{i=1}^n \lambda_i f_i$, where $f_1,\ldots,f_n$ are pairwise disjointly supported elements of $W^0$ and $\sum_{i=1}^n\lambda_i^2\leq1$. We set $I=\{1,\ldots,n\}$ and we split it into $I_1$ and $I_2$ as follows: \[I_1=\{i\in I:\;\text{supp}(f_i)\cap \g_1\neq\emptyset\}\;\;\text{and}\;\;I_2=I\setminus I_1=\{i\in I:\;\text{supp}(f_i)\subseteq \g_1^c\}\] We also set $\varphi_1=\sum_{i\in I_1}\lambda_i f_i$ and $\varphi_2=\sum_{i\in I_2}\lambda_i f_i$. Hence $\varphi_2(x_1)\leq\|\g_1^c(x_1)\|\leq \eta$ and therefore $\varphi_1(x_1)> 1-3\eta$.
Applying the Cauchy--Schwarz inequality we get that \[1-3\eta < \varphi_1(x_1)=\sum_{i\in I_1}\lambda_i f_i(x_1)\leq \Big{(} \sum_{i\in I_1}\lambda_i^2 \Big{)}^\frac{1}{2}\Big{(} \sum_{i\in I_1}f_i(x_1)^2 \Big{)}^\frac{1}{2}\leq\Big{(} \sum_{i\in I_1}\lambda_i^2 \Big{)}^\frac{1}{2}\] Since $\sum_{i\in I}\lambda_i^2\leq1$, we have that $(\sum_{i\in I_2}\lambda_i^2)^\frac{1}{2}< (1-(1-3\eta)^2)^\frac{1}{2}\leq (6\eta)^\frac{1}{2}$. Hence \[\varphi_2(x_2)=\sum_{i\in I_2}\lambda_if_i(x_2)\leq \Big{(} \sum_{i\in I_2}\lambda_i^2 \Big{)}^\frac{1}{2}\Big{(} \sum_{i\in I_2} f_i(x_2)^2 \Big{)}^\frac{1}{2}< (6\eta)^\frac{1}{2}\] Hence $\varphi_1(x_2)> 1-2\eta-(6\eta)^\frac{1}{2}>1-4\eta^\frac{1}{2}$. We set $\g_2=\text{supp}(x_2)\cap\text{supp}(\varphi_1)$. Then by the definition of $I_1$ it is immediate that the pair $(\g_1,\g_2)$ is weakly plegmatic. Finally, since $\|\g_2(x_2)\|^2+\|\g_2^c(x_2)\|^2\leq \|x_2\|^2\leq 1$ and $\|\g_2(x_2)\|\geq \varphi_1(x_2)$, we get that $\|\g_2^c(x_2)\|\leq (1-(1-4\eta^\frac12)^2)^\frac{1}{2}<\eta^\frac18$ and the proof is complete. \end{proof} An iterated use of the above yields the following. \begin{cor}\label{final lemma for mathfrak-X-_k+1} Let $m\in\nn$ and $0<\ee<\frac18$. Then for every sequence $(x_i)_{i=0}^m$ of disjointly and finitely supported vectors in $\mathfrak{X}_{k+1}$ with $\|x_i\|\leq1$, for all $0\leq i\leq m$, and $\|x_i+x_{i+1}\|> 2-2\varepsilon^{8^m}$, for all $0\leq i<m$, there exists a weakly plegmatic path $(\g_i)_{i=0}^m$ of subsets of $[\nn]^{k+1}$ such that $\mathcal{G}_i\subseteq \text{supp}\;x_i$ and $\|\g_i^c(x_i)\|<\ee$, for all $0\leq i\leq m$. \end{cor} We are now ready to give the proof of Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k}.
\begin{proof}[Proof of Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k}:] Assume on the contrary that the space $\mathfrak{X}_{k+1}$ admits a plegma disjointly generated $k$-spreading model equivalent to the usual basis of $\ell^1$. Let $0<\ee<\frac18$. By Proposition \ref{Prop on almost isometric l^1 spr mod} and Remark \ref{Rem on almost isometric l^1 spr mod} there exists a $k$-sequence $(x_t)_{t\in[\nn]^k}$ in the unit ball of $\mathfrak{X}_{k+1}$ which is plegma disjointly supported and generates $\ell^1$ as a $k$-spreading model of constant $c>1-\ee^{8^k}$. Therefore, we may suppose that \begin{equation}\label{eq14} \Big\|\frac1l\sum_{j=1}^lx_{t_j}\Big\|>1-\ee^{8^k} \end{equation} for all $l\in\nn$ and $(t_j)_{j=1}^l\in\textit{Plm}_l([\nn]^k)$ with $t_1(1)\geq l$. We set $t_0=\{2,4,\ldots,2k\}$, $N_0=\max\{s(k+1):s\in\text{supp}(x_{t_0})\}$ and $L=\{2n:n>k\}$. For every $t\in[L]^k$ we select $\g_t\subseteq[\nn]^{k+1}$ such that $\g_t\subseteq\text{supp}(x_t)$, $\|\g_t^c(x_t)\|<\ee$ and $s(1)\leq N_0$, for all $s\in\g_t$, as follows. Let $t\in[L]^k$. Observe that $t\in [\nn]_\shortparallel^{k}$ and $t_0<t$. By Proposition \ref{accessing everything with plegma path of length |s_0|} there exists a plegma path $(t_j)_{j=0}^{k}$ in $[\nn]^{k}$ with $t_k=t$. By Corollary \ref{final lemma for mathfrak-X-_k+1} (for $m=k$) there exists a weakly plegmatic path $(\g_j)_{j=0}^{k}$ such that $\g_j\subseteq \text{supp}\;x_{t_j}$ and $\|\g_j^c(x_{t_j})\|<\ee$, for all $j=0,\ldots,k$. We set $\g_t=\g_k$. Lemma \ref{blocking the first coordinates in weakly allowable paths} and Corollary \ref{final lemma for mathfrak-X-_k+1} yield that the choice of $(\g_t)_{t\in[L]^k}$ is as desired. For every $t\in[L]^k$, let $x_t^1=\g_t(x_t)$. Then $\|x_t-x_t^1\|<\ee$, for all $t\in[L]^k$.
Hence by (\ref{eq14}) we get that for every $l\in\nn$ and every $(t_j)_{j=1}^l\in\textit{Plm}_l([L]^k)$ with $t_1(1)\geq l$, we have that \begin{equation}\label{eq9} \Big{\|} \frac{1}{l}\sum_{i=1}^l x^1_{t_i} \Big{\|}>1-2\varepsilon>\frac68 \end{equation} Moreover, notice that $(x^1_t)_{t\in[L]^k}$ is a plegma disjointly supported $k$-subsequence in the unit ball of $\mathfrak{X}_{k+1}$. Therefore, by Lemma \ref{blocking the first coordinates in weakly allowable paths} and (\ref{eq9}) for $l>8N_0^{k+1}/(5\ee^2)$, we get a contradiction. The proof of Proposition \ref{The space X_k+1 does not contain l^1 disjointly supported spreading models of order k} is complete. \end{proof} \begin{rem} As we have mentioned in the introduction of this article, the $k$-spreading models of a Banach space $X$ have a transfinite extension yielding a hierarchy of $\xi$-spreading models, for $\xi<\omega_1$. It can be shown that the space in Section \ref{space Odel_Schlum} does not admit $\ell^p$, for $1\leq p<\infty$, or $c_0$ as a $\xi$-spreading model, for any $\xi<\omega_1$. Also an analogue of the last example exists. Namely, for every countable limit ordinal $\xi$ there exists a reflexive space $X_\xi$ admitting $\ell^1$ as a $\xi$-spreading model but not as a $\zeta$-spreading model for any $\zeta<\xi$. \end{rem}
\chapter{Plane Gravity Waves in Two-Layered Liquid} {\small{In chapter 3, equations of the liquid-liquid interface are sought in a parametric form, which maps the trace of the interface in a vertical plane onto a half of a circle. The mapping allows us to express the solution with the use of a countable set of functions. Using the results of chapter 1, the governing equations are rewritten in a particular curvilinear coordinate system, and the preliminary solution in the form of infinite series is obtained with the use of integral operators derived in chapter 2.}} \section{Gravity waves: governing equations in curvilinear coordinates} Mathematical formulation of the problem outlined in Chapter 1 is presented below. Let the liquid be at rest at $t<0$ and the interface be a horizontal plane. We consider the problem assuming that at $t=0$ a body of water is disturbed, so that waves start to propagate away from the initially disturbed body of water. At $t>0$ the water is acted on by no external force other than gravity. \vspace{3mm} Let the curve $\,\,\Gamma\,\,$ (in figure 1) be the trace of the interface in the plane $\,\,(x,y)\,\,$, the vertical $x$-axis be oriented upward and the $y$-axis be horizontal; let $\,x=f<0,\,$ $y=0$ be the coordinates of the pole $\,O_1\,$ of the polar coordinate system in the $\,(x,y)\,$ plane, $\,\theta\,$ be the polar angle measured from the positive $x$-axis in the counterclockwise direction, and $\,t\,$ be the time. The equilibrium position of the interface is the horizontal plane $x=0$. \vspace{3mm} Equations of the interface are sought in the parametric form \begin{equation} x=W(\theta,t),\, y=(W-f)\,\tan\,\theta,\, -\pi/2 < \theta < \pi/2, \label{c3_1.1} \end{equation} where $\,\,W(\theta,t)\,\,$ is an unknown function that must be found while solving the problem. Formally, equations \eqref{c3_1.1} for each specified function $\,\,W\,\,$ describe a family of curves depending on $\,f$, with $\,t\,$ being considered a constant.
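For instance, in the unperturbed state $W\equiv 0$, equations \eqref{c3_1.1} reduce to
\[
x=0,\qquad y=-f\,\tan\theta,\qquad -\pi/2<\theta<\pi/2,
\]
and, since $f<0$, the parameter $\theta$ sweeps the whole equilibrium interface $-\infty<y<\infty$ exactly once; geometrically, $\theta$ is the polar angle at which the point $(0,y)$ is seen from the pole $O_1$.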
The value of $\,\,f\,\,$ determines the horizontal scale of the problem. Indeed, for any specific function $W(\theta, t)$ and any fixed value of $t$, equations \eqref{c3_1.1} describe a family of curves depending on the parameter $f$. According to the equations, at each value of $\theta$ the horizontal coordinate $y$ increases when $|f|$ increases, while the vertical coordinate $x$ remains unchanged, so the curve \eqref{c3_1.1} ``stretches'' along the horizontal axis. \vspace{3mm} In the $\,\,(x,y)\,\,$ plane, curvilinear coordinates $(\sigma, \theta)$ are defined by the relations \begin{equation} x=\sigma+W(\theta,t),\, y=(\sigma+W-f)\,\tan\,\theta,\, -\pi/2 < \theta < \pi/2 \label{c3_1.2} \end{equation} so that the equation of the interface takes the form $\sigma =0$ (the liquid of density $\gamma_2$ occupies the half-space $\sigma <0$). \begin{figure} \centering \resizebox{0.7\textwidth}{!} {\includegraphics{figure3_1.jpg}} \caption{ Coordinate systems and sketch of the liquid-liquid interface. } \label{c3_qu-.1-.10} \end{figure} For any function $F(\sigma ,\theta ,t)$ the following notations are introduced for the one-sided limits ($\sigma<0$ on the negative side and $\sigma>0$ on the positive side of the interface): $$ F_{-}=\lim_{\sigma\to -0}F(\sigma ,\theta ,t),\,\,\,\,\,\, F_{+}=\lim_{\sigma\to +0}F(\sigma ,\theta ,t). $$ In chapter 1, subsection 2.1, the coordinates $x,\,y$ are introduced as implicit functions of curvilinear coordinates $u,\,v$. In this section, the coordinates $x,\,y$ are given as explicit functions of $\theta,\,\sigma$. Below, in section 3, the quantities $D_{ij}$ \eqref{c1_2.3} of chapter 1, subsection 2, are expressed as functions of $\theta,\,\sigma$.
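Note that the change of variables \eqref{c3_1.2} is easily inverted in the region $x>f$ covered by the coordinate rays. Eliminating $\sigma+W$ between the two relations gives $y=(x-f)\,\tan\theta$, so that
\[
\theta=\arctan\frac{y}{x-f},\qquad \sigma=x-W(\theta,t).
\]
In particular, the coordinate lines $\theta=\hbox{const}$ are rays emanating from the pole $O_1$, and the interface corresponds to $\sigma=0$.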
\vspace{3mm} In terms of the variables $(\sigma, \theta)$, the exact equations \eqref{c1_2.9}, \eqref{c1_2.10}, \eqref{c1_2.11} of chapter 1, section 2, governing the time evolution of the interface, the doublet density $g(\theta,t)$ and the velocity potential $\Phi(\sigma,\theta,t)$, take, respectively, the forms \begin{equation} \pd{W}{t} \left(1- 2\pd{W}{\theta}\hat D_1 \tan\theta \right)= \hat D_2\,\pd{\hat\Phi}{\theta_-}+ \hat D_3\,\pd{\hat\Phi}{\sigma_-} \label{c3_1.3} \end{equation} where $$ \hat D_1=-\frac{\cos^2\theta}{W-f},\,\,\,\,\,\, \hat D_2=-\frac{\sin\theta\cos\theta}{W-f} -\pd{W}{\theta}\,\frac{\cos^2\theta}{(W-f)^2} $$ $$ \hat D_3=1+ 2\frac{\sin\theta\cos\theta}{W-f}\pd{W}{\theta} + \frac{\cos^2\theta}{(W-f)^2} \,\left(\pd{W}{\theta} \right)^2, $$ \begin{equation} \pd{g}{t}+\pd{g}{\theta}\hpd{U}{t}+ \frac{\hat D_2}{\hat D_3}\hpd{V}{t}\pd{g}{\theta}+ \frac{1}{2}\frac{\hat D_1^2}{\hat D_3}\pd{g}{\theta} \left(\pd{\hat\Phi}{\theta}_++\pd{\hat\Phi}{\theta}_-\right)+ $$ $$ \gamma \left(\frac{1}{2}\,\hat q^2_-+W+\hpd{\Phi}{t}_-\right)=0, \,\,\,\,\,\,\gamma=\frac{\gamma_2}{\gamma_1}-1 \label{c3_1.4} \end{equation} where $\gamma_1$ and $\gamma_2$ are the densities of the liquids. In equations \eqref{c3_1.3} and \eqref{c3_1.4} all terms are calculated at the point \linebreak $ Q_1(\sigma_1=0,\,\theta=\theta_1)$ on the interface.
\begin{equation} \Phi(\sigma,\theta,t)=-\frac{D_1}{|D_1|} \frac{1}{2\pi}\int\limits_{-\pi/2}^{\pi/2} g(\theta_1,t)A(\sigma,\theta,\theta_1,t) \left.\frac{d\theta_1}{S}\right|_{\sigma_1=0}, \label{c3_1.5} \end{equation} $$ A=(\sigma +W-W_1)(f-W_1)+ $$ $$ \pd{W_1}{\theta_1}\, [(\sigma+W-f)(\tan\,\theta-\tan\theta_1)-(W_1-f)\,\tan\,\theta_1] \cos^2\theta_1 $$ $$ S=(\sigma-\sigma_1+W-W_1)^2\cos^2\theta_1+ $$ $$ [(\sigma-f+W)\tan\theta\cdot\cos\theta_1- (\sigma_1-f+W_1)\sin\theta_1]^2, $$ $$ W=W(\theta,t),\,\,\,\,\,\,W_1=W(\theta_1,t),\,\,\,\,\,\, \frac{D_1}{|D_1|}=-1, $$ $S=\cos^2\theta_1R^2$, where $R^2$ is the squared distance between the points $Q(\sigma, \theta)$ and $ Q_1(\sigma_1,\,\theta_1)$. The subscript $\sigma_1=0$ in \eqref{c3_1.5} denotes that the integrand is calculated at $\sigma_1=0$. Equations \eqref{c3_1.1} - \eqref{c3_1.5} are supplemented by the boundary conditions \begin{equation} |W(\theta ,t)|<C(t)\cos^2\theta,\,\,\,\,\, \lim \limits_{\cos \theta \to 0}\frac {\partial W}{\partial \theta}=0,\,\,\,\,\, |g(\theta,t)|<C(t) \label{c3_1.6} \end{equation} and the initial conditions $W=W(\theta ,0),\,\,g=g(\theta,0)$. The conditions at infinity ensure that the total energy initially supplied to the water by a source of disturbances of finite power remains finite at any moment of time. \vspace{3mm} The problem of gravity waves is formulated mathematically in terms of nonlinear integro-differential equations in two unknown functions $W(\theta,t)$ and $g(\theta,t)$. Note that the change of variables \eqref{c3_1.2} is a nonlinear transformation of the equations written in Cartesian coordinates. The equations remain valid in time as long as each ray $\,\theta=\hbox{const}\,$ intersects the free surface at not more than one point.
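The relation $S=\cos^2\theta_1 R^2$ quoted after \eqref{c3_1.5} can be spot-checked symbolically from the coordinate map \eqref{c3_1.2}. The following sketch (an illustration added here, not part of the derivation) uses SymPy and treats $W$ and $W_1$ as independent symbols:

```python
import sympy as sp

# sigma, sigma1: normal-like coordinates; theta, theta1: polar angles;
# f: pole offset; W, W1 stand for W(theta,t) and W(theta1,t) at a fixed t
sigma, sigma1, theta, theta1, f, W, W1 = sp.symbols(
    'sigma sigma1 theta theta1 f W W1', real=True)

# coordinate map (c3_1.2): x = sigma + W, y = (sigma + W - f) tan(theta)
x  = sigma  + W
y  = (sigma  + W  - f) * sp.tan(theta)
x1 = sigma1 + W1
y1 = (sigma1 + W1 - f) * sp.tan(theta1)

R2 = (x - x1)**2 + (y - y1)**2          # squared distance between Q and Q1

# S as defined below (c3_1.5)
S = ((sigma - sigma1 + W - W1)**2 * sp.cos(theta1)**2
     + ((sigma - f + W) * sp.tan(theta) * sp.cos(theta1)
        - (sigma1 - f + W1) * sp.sin(theta1))**2)

# the identity S = cos^2(theta1) * R^2
assert sp.simplify(S - sp.cos(theta1)**2 * R2) == 0
```

The check rests only on $\sin\theta_1=\tan\theta_1\cos\theta_1$, so it holds for all values of the symbols away from $\cos\theta_1=0$.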
\vspace{3mm} The last remark reflects a problem which inevitably arises in connection with any theoretical work devoted to free-surface waves, even when Cartesian coordinates are used: how can one check whether or not the free surface may be described by a function of one spatial coordinate (and time) as the surface evolves? The problem has been cleared up only for linear waves. In the present book, for nonlinear waves, the situation is similar: the validity of the leading-order equations (and their solutions) on the positive semiaxis of time is ensured by the theorems proved below in Chapter 3. That is why it is important that the leading-order equations are solved exactly. \vspace{3mm} The equations \eqref{c3_1.3} - \eqref{c3_1.5} governing the flow are written in non-dimensional variables. Since the problem has no characteristic linear size, the dimensional unit of length, $\,\,L_*,\,\,$ is a free parameter. But for applications in Chapters 5 and 6, the value of $L_*$, as well as the value of $|f|L_*$, will be obtained from instrumental data. The dimensional unit of time, $\,\,T_*,\,\,$ is defined by the relation $\,\,T_*^2g=L_*\,$, where $\,\,g\,\,$ is the acceleration of free fall. The non-dimensional acceleration of free fall is equal to unity. All parameters, variables and equations are made non-dimensional by the quantities $\,L_*,\,T_*,\,P_* $ and the density of water $\gamma_*=1000\,$ $\hbox{kg/m}^3$. \vspace{3mm} Physical reasons imply that the initial conditions cannot be assigned arbitrarily. The volume of the ``upper'' liquid of density $\gamma_1$ detrained from the region $x>0$ into the region $x<0$ displaces an equal volume of the ``lower'' liquid into the region $x>0$. The volume of the liquid transferred across the equilibrium plane $x=0$ is required to be finite.
This means that the improper integral \begin{equation} \int\limits_{-\pi/2}^{\pi/2}x(\theta,t)dy(\theta,t)= \int\limits_{-\pi/2}^{\pi/2}W\left(\tan\theta\pd{W}{\theta}+ (W-f)\,\frac{1}{\cos^2\theta}\right)\,d\theta \label{c3_1.7} \end{equation} must be equal to zero. \vspace{3mm} To remove the singularities, we set \begin{equation} W(\theta,t)=c\cdot\cos^2\theta\cdot V(\theta,t), \label{c3_1.8} \end{equation} where $V(\theta,t)$ is a function bounded in any rectangle $-\pi/2\le\theta\le\pi/2$, $0\le t\le T$, and $c$ is a constant determined by a normalization condition. Substituting \eqref{c3_1.8} into \eqref{c3_1.7} and integrating by parts, we obtain $$ \int\limits_{-\pi/2}^{\pi/2}\left[\frac{1}{4}\,c^2\, V^2\cdot(\cos(2\theta)+1) -f\,c V \right]d\theta=0 $$ $$ \frac{c}{f}\int\limits_{-\pi/2}^{\pi/2}\frac{1}{4}\, V^2\cdot\cos^2\theta\,d\theta -\int\limits_{-\pi/2}^{\pi/2} V\cdot\cos^2\theta\,d\theta =0 $$ In Chapter 4, we will consider small values of $\mu=c/f$. \vspace{3mm} At $\mu\to 0$, we have \begin{equation} \int\limits_{-\pi/2}^{\pi/2} V\cdot\cos^2\theta\,d\theta =0 \label{c3_1.9} \end{equation} Assume that $$ V(\theta,0)=\alpha_0(0)+\sum_{k=1}^{+\infty}\alpha_k(0)\cos (2k\theta),\,\,\,\,\,\, \sum_{k=0}^{+\infty}\alpha^2_k(0)<+\infty.
$$ It follows from \eqref{c3_1.9} that $2\alpha_0(0)+\alpha_1(0)=0$. \section{Derivation of the governing equations} From \eqref{c1_2.1} of chapter 1, section 2, we obtain $$ du=\pd{U}{x} dx+\pd{U}{y} dy+\pd{U}{t} dt $$ $$ dv=\pd{V}{x} dx+\pd{V}{y} dy+\pd{V}{t} dt $$ $$ D=\pd{U}{x}\pd{V}{y}-\pd{U}{y}\pd{V}{x}\ne 0 $$ The inverted system reads $$ d\hat x=\frac{1}{\hat D}\left[\hpd{V}{y}du-\hpd{U}{y}dv+ \left(\hpd{U}{y}\, \hpd{V}{t}-\hpd{U}{t}\, \hpd{V}{y}\right)dt\right] $$ $$ d\hat y=\frac{1}{\hat D}\left[-\hpd{V}{x}du+\hpd{U}{x}dv+ \left(\hpd{U}{x}\, \hpd{V}{t}-\hpd{U}{t}\, \hpd{V}{x}\right)dt\right] $$ which gives $$ \hpd{V}{y}=\hat D\,\pd{\hat x}{u},\,\,\,\,\,\, \hpd{V}{x}=-\hat D\,\pd{\hat y}{u},\,\,\,\,\,\, \hpd{V}{t}= -\left(\pd{\hat x}{t}\,\pd{\hat y}{u}+ \pd{\hat y}{t}\,\pd{\hat x}{u}\right) $$ $$ \hpd{U}{y}=-\hat D\,\pd{\hat x}{v},\,\,\,\,\,\, \hpd{U}{x}=\hat D\,\pd{\hat y}{v},\,\,\,\,\,\, \pd{U}{t}=\hat D\left(\hpd {x}{t}\pd{y}{v}+\hpd{y}{t}\hpd{x}{v}\right) $$ $$ \frac{1}{\hat D}=\pd{\hat x}{u}\,\pd{\hat y}{v}- \pd{\hat y}{u}\,\pd{\hat x}{v} $$ Now we use relations \eqref{c3_1.2} to calculate the derivatives and obtain $$ \frac{1}{\hat D}=\pd{W}{\theta}\tan\theta- \left(\pd{W}{\theta}\tan\theta+(\sigma+W-f)\,\frac{1}{\cos^2\theta} \right)\cdot 1=-\frac{\sigma+W-f}{\cos^2\theta} $$ $$ \hat D=-\frac{\cos^2\theta}{\sigma+W-f} $$ Proceeding along the same lines as for $\hat D$, we obtain from the inverted system $$ \hat D_{11}=\frac{\cos^2\theta}{(\sigma+W(\theta,t)-f)^2} $$ $$ \hat D_{12}=-\frac{\sin\theta\cos\theta}{\sigma+W(\theta,t)-f} -\frac{\cos^2\theta}{(\sigma+W(\theta,t)-f)^2}\pd{W}{\theta} $$ $$ \hat D_{22}=1+ 2\frac{\sin\theta\cos\theta}{\sigma+W-f}\pd{W}{\theta} + \frac{\cos^2\theta}{(\sigma-f+W)^2} \,\left(\pd{W}{\theta} \right)^2 $$ \begin{equation} \hpd{V}{x}\,(x-x_1)=-\hat D_1(\sigma+W-W_1) \left(\pd{W_1}{\theta_1}\tan\theta_1+ \frac{W_1-f}{\cos^2\theta_1}\right) \label{c3_2.1} \end{equation} $$ \hpd{V}{y}\,(y-y_1)=\hat D_1\,\pd{W_1}{\theta_1}
[(\sigma+W-f)\,\tan\,\theta-(W_1-f)\,\tan\,\theta_1] $$ $$ \hpd{V}{t}=\pd{W}{t}+2\frac{\sin\theta\cos\theta}{\sigma-f+W} \pd{W}{\theta} $$ $$ \hpd{U}{t}=\frac{\cos\theta(\sin\theta+\cos\theta)}{\sigma-f+W}\, \pd{W}{t} $$ Substituting the expressions just obtained (at $\sigma=0$) into the equations \eqref{c1_2.9}, \eqref{c1_2.10}, and \eqref{c1_2.11} of chapter 1 gives equations \eqref{c3_1.3} and \eqref{c3_1.4}, governing the evolution of the interface and the doublet distribution respectively, as well as \eqref{c3_1.5}. An alternative way to derive \eqref{c3_1.5} is shown in the next lines: $$ \Phi=-\frac{1}{2\pi}\int\limits_{\Gamma} g(\theta_1)\pd{}{n}\ln R\,dl= -\frac{1}{2\pi}\int\limits_{\Gamma} g(\theta_1) \frac{1}{R^2}\,\frac{1}{2}\,\pd{R^2}{n}\,dl, $$ $$ R^2=(x(\theta,\sigma)-x(\theta_1,\sigma_1))^2+ (y(\theta,\sigma)-y(\theta_1,\sigma_1))^2, $$ $$ dl=\sqrt{(dx)^2+(dy)^2}=\sqrt{\left(\pd{\hat x}{\theta_1}\right)^2+\left(\pd{\hat y}{\theta_1}\right)^2}\,d\theta_1=\frac{\sqrt{\hat D_{22}}}{|\hat D_1|} d\theta_1. $$ $$ \sqrt{D_{22}}\,\frac{1}{2}\,\pd{R^2}{n}=\pd{V}{x}(x(\theta,\sigma)-x(\theta_1,\sigma_1))+\pd{V}{y}(y(\theta,\sigma)-y(\theta_1,\sigma_1)) $$ From equations \eqref{c3_2.1} we obtain $$ \frac{1}{2}\,\pd{R^2}{n}=\frac{D_1}{\cos^2\theta_1\sqrt{D_{22}}}\,A, $$ $$ \frac{1}{R^2}\,\frac{1}{2}\,\pd{R^2}{n}\,dl=\frac{1}{S}\,\frac{D_1}{|D_1|}A\,d\theta_1,\,\,\,\,\,\,S=R^2\cos^2\theta_1, $$ $$ \frac{1}{R^2}\, \frac{D_1}{\cos^2\theta_1\sqrt{D_{22}}}A\, \frac{\sqrt{\hat D_{22}}}{|\hat D_1|}\, d\theta_1= \frac{1}{S}\,\frac{D_1}{|D_1|}A\,d\theta_1. $$ \section{Series expansion of velocity potential} To solve equations \eqref{c3_1.3} and \eqref{c3_1.4}, a useful procedure for finding the one-sided limits of the velocity potential $\Phi$ is proposed below, bearing in mind that the integral \eqref{c3_1.5} must be evaluated at $\sigma_1=0$, and the one-sided limits are to be found at $\sigma\to 0$ from the positive (negative) side of the interface.
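The expression for $\hat D$ obtained in the previous section is a $2\times 2$ Jacobian determinant of the map \eqref{c3_1.2}, with $(u,v)$ identified with $(\theta,\sigma)$, and can be checked symbolically. The sketch below is an illustration only; it freezes $t$ and models $W$ as an arbitrary function of $\theta$:

```python
import sympy as sp

sigma, theta, f = sp.symbols('sigma theta f', real=True)
W = sp.Function('W')(theta)   # W(theta, t) at a frozen instant t

# coordinate map (c3_1.2)
x = sigma + W
y = (sigma + W - f) * sp.tan(theta)

# 1/D_hat = x_theta * y_sigma - y_theta * x_sigma  (u = theta, v = sigma)
inv_D = (sp.diff(x, theta) * sp.diff(y, sigma)
         - sp.diff(y, theta) * sp.diff(x, sigma))

expected = -(sigma + W - f) / sp.cos(theta)**2
assert sp.simplify(inv_D - expected) == 0
```

The $W'(\theta)\tan\theta$ terms cancel in the determinant, which is why $\hat D$ does not involve $\partial W/\partial\theta$.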
\vspace{3mm} Consider the fraction $\frac{1}{S}$ involved in \eqref{c3_1.5} $$ S=(\sigma-\sigma_1+W-W_1)^2\cos^2\theta_1+ $$ $$ [(\sigma-f+W)\tan\theta\cdot\cos\theta_1- (\sigma_1-f+W_1)\sin\theta_1]^2, $$ Let $S_0$ denote the value of $S$ at $W=W_1=0$: $$ S_0=(\sigma-\sigma_1)^2\cos^2\theta_1+ $$ $$ [(\sigma-f)\tan\theta\cdot\cos\theta_1- (\sigma_1-f)\sin\theta_1]^2, $$ Define $$ \rho=\frac{S-S_0}{S_0} $$ At $\sigma\ne 0$ the distance between points $Q_1$ on the interface and $Q$ outside the interface is positive, and consequently $S>0$ and $S_0>0$ irrespective of the values of other variables, but $S=0,\,S_0=0$ at $\sigma_1=0$, $\sigma=0$, and $\theta=\theta_1$ simultaneously. Nevertheless, the ratio $\rho$ is bounded if $|W-W_1|<k|\sin\theta-\sin\theta_1|$, where $k$ is a constant. \vspace{3mm} At $\sigma=\sigma_1=0$, after some algebra and trigonometry we find $$ \cos^2\theta(S-S_0)=(W-W_1)^2\cos^2\theta_1\cos^2\theta+ $$ $$ [(W-W_1)\sin\theta\cos\theta_1+(W_1-f)\sin(\theta-\theta_1)]^2 $$ $$ \cos^2\theta S_0=f^2\sin^2(\theta-\theta_1) $$ and the boundedness of $\rho$ becomes obvious. The difference $ S-S_0$ is a polynomial of degree $2$ in the two variables $W$ and $W_1$: $$ S-S_0=aW+bW_1+V(W,W_1) $$ $$ a=\pd{S_0}{\sigma},\,\,\,\,\,\,b=\pd{S_0}{\sigma_1} $$ $$ V(W,W_1)=\frac{\cos^2\theta_1}{\cos^2\theta}W^2 -2\frac{\cos\theta_1}{\cos\theta}\,\cos(\theta_1-\theta)WW_1+ W^2_1, $$ where $$ \frac{\cos ^2\theta_1}{\cos^2\theta}= \frac{1}{2}\,\frac{\partial^2S_0}{\partial\sigma^2},\,\,\text{and so on}.
$$ $$ \frac{S_0}{S}=\frac{1}{1+\rho},\,\,\,\,\,\, \frac{1}{S}=\frac{1}{S_0}\,\frac{1}{1+\rho},\,\,\,\,\,\, \frac{1}{S}=\frac{1}{S_0}\,\sum\limits_{n=0}^{+\infty}(-1)^n\rho^n $$ Now the velocity potential $\Phi$ may be rewritten as $$ \Phi(\sigma,\theta,t)=\frac{1}{2\pi}\int\limits_{-\pi/2}^{\pi/2} g(\theta_1,t)A\frac{1}{1+\rho} \left.\frac{d\theta_1}{S_0}\right|_{\sigma_1=0} $$ \begin{equation} F=g(\theta_1,t)A\,\frac{1}{1+\rho},\,\,\,\,\,\, \Phi(\sigma,\theta,t)=\hat H(F) \label{c3_3.1} \end{equation} $$ A=(\sigma -\sigma_1)f-W_1(\sigma -\sigma_1+W-W_1)+f(W-W_1)+ $$ $$ \pd {W_1}{\theta_1}\,\, [(\sigma-f+W)\,(\tan\theta\cdot\cos^2\theta_1- \sin\theta_1\cdot\cos\theta_1)-(W_1-f)\,\sin\theta_1\cos\theta_1] $$ The linear operator $\hat H(F)$ is defined by formula \eqref{c2_2.9} of Chapter 2. \vspace{3mm} Since the ratio $$ \frac{S_0}{S}=\frac{1}{1+\rho} $$ is a bounded function of the two variables $W$ and $W_1$, it can be expanded in the series $$ \frac{S_0}{S}=1+\sum\limits_{n=1}^{+\infty}(-1)^n\rho^n,\,\,\,\,\,\, \rho=\frac{1}{S_0}\,(V_1+V_2),\,\,\,\,\,\, V_n=\sum\limits_{i+j=n}\frac{W^iW_1^j}{i!j!}\frac{\partial^n S_0}{\partial\sigma^i\,\partial\sigma_1^j}, $$ convergent within a small circle $W^2+W^2_1<\delta^2$.
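Since $S$ depends on $\sigma,W$ and $\sigma_1,W_1$ only through the sums $\sigma+W$ and $\sigma_1+W_1$, the decomposition $S-S_0=aW+bW_1+V(W,W_1)$ is an exact second-order Taylor formula for the quadratic function $S_0$. This can be confirmed symbolically; the sketch below is an illustration only:

```python
import sympy as sp

sigma, sigma1, theta, theta1, f, W, W1 = sp.symbols(
    'sigma sigma1 theta theta1 f W W1', real=True)

def S0(s, s1):
    # S evaluated at W = W1 = 0, as a function of the shifted coordinates
    return ((s - s1)**2 * sp.cos(theta1)**2
            + ((s - f) * sp.tan(theta) * sp.cos(theta1)
               - (s1 - f) * sp.sin(theta1))**2)

S = S0(sigma + W, sigma1 + W1)   # S is S0 with sigma, sigma1 shifted by W, W1

# exact Taylor expansion of the quadratic S0 around (sigma, sigma1)
a = sp.diff(S0(sigma, sigma1), sigma)
b = sp.diff(S0(sigma, sigma1), sigma1)
V = (sp.Rational(1, 2) * sp.diff(S0(sigma, sigma1), sigma, 2) * W**2
     + sp.diff(sp.diff(S0(sigma, sigma1), sigma), sigma1) * W * W1
     + sp.Rational(1, 2) * sp.diff(S0(sigma, sigma1), sigma1, 2) * W1**2)

# S - S0 = a W + b W1 + V(W, W1), exactly
assert sp.expand(S - S0(sigma, sigma1) - (a*W + b*W1 + V)) == 0

# the W^2 coefficient of the quadratic part is cos^2(theta1)/cos^2(theta)
assert sp.simplify(sp.Rational(1, 2) * sp.diff(S0(sigma, sigma1), sigma, 2)
                   - sp.cos(theta1)**2 / sp.cos(theta)**2) == 0
```

The expansion terminates at second order because $S_0$ is quadratic, so only $V_1$ and $V_2$ are nonzero in $\rho$.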
\begin{document} \ \vskip0.5cm \begin{center} \baselineskip 24 pt {\Large \bf {Higher-order superintegrable momentum-dependent Hamiltonians on curved spaces from the classical Zernike system}} \end{center} \medskip \medskip \begin{center} {\sc Alfonso Blasco$^1$, Ivan Gutierrez-Sagredo$^{2}$ and Francisco J. Herranz$^1$ } \medskip {$^1$Departamento de F\'isica, Universidad de Burgos, 09001 Burgos, Spain} {$^2$Departamento de Matem\'aticas y Computaci\'on, Universidad de Burgos, 09001 Burgos, Spain} \medskip e-mail: {\href{mailto:ablasco@ubu.es}{ablasco@ubu.es}, \href{mailto:igsagredo@ubu.es}{igsagredo@ubu.es}, \href{mailto:fjherranz@ubu.es}{fjherranz@ubu.es}} \end{center} \medskip \begin{abstract} \noindent We consider the classical momentum- or velocity-dependent two-dimensional Hamiltonian given by $$ \mathcal H_N = p_1^2 + p_2^2 +\sum_{n=1}^N \gamma_n(q_1 p_1 + q_2 p_2)^n , $$ where $q_i$ and $p_i$ are generic canonical variables, $\gamma_n$ are arbitrary coefficients, and $N\in \mathbb N$. For $N=2$, with both $\gamma_1$ and $\gamma_2$ different from zero, this reduces to the classical Zernike system. We prove that $\mathcal H_N$ always provides a superintegrable system (for any value of $\gamma_n$ and $N$) by obtaining the corresponding constants of the motion explicitly, which turn out to be of higher order in the momenta. Such generic results are not only applied to the Euclidean plane, but also to the sphere and the hyperbolic plane. In the latter curved spaces, $\mathcal H_N $ is expressed in geodesic polar coordinates showing that such a new superintegrable Hamiltonian can be regarded as a superposition of the isotropic 1\,:\,1 curved (Higgs) oscillator with even-order anharmonic curved oscillators plus another superposition of higher-order momentum-dependent potentials. Furthermore, the Racah algebra determined by the constants of the motion is also studied, giving rise to a $(2N-1)$th-order polynomial algebra.
As a byproduct, the Hamiltonian $\mathcal H_N $ is interpreted as a family of superintegrable perturbations of the classical Zernike system. Finally, it is shown that $\mathcal H_N$ (and so the Zernike system as well) is endowed with a Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry which would allow for further possible generalizations that are also discussed. \end{abstract} \medskip \medskip \noindent {MSC}: 37J35, 70H06, 22E60, 17B62 \medskip \noindent {PACS}: 02.30.Ik, 45.20.Jj, 02.20.Sv, 02.40.Ky \medskip \noindent{Keywords}: Integrable systems; Curvature; Sphere; Hyperbolic plane; Curved oscillator; Poisson coalgebras; Integrable perturbations; Racah algebras \newpage \tableofcontents \sect{Introduction} Classical and quantum momentum- or velocity-dependent Hamiltonian systems have been extensively studied in the literature over many decades mainly due to their relevant, wide and varied physical applications. Without trying to be exhaustive, let us mention that linear momentum-dependent Hamiltonians have been considered from different viewpoints in~\cite{Hietarinta84,Hietarinta85,Gunn1985,DGRW1985,IcBo1988,McSW2000,Ranada2003,Pucacco2004,Ranada2005,MB2008,SozTsi2015,Tsi2015,Yehia2016,BKS2020,FSW2020} and quadratic momentum-dependent ones have been analyzed in~\cite{RFL62,McM1965,FGS1967,SesmaVento1978,SBB2012}. In addition, exponentials of momentum-dependent potentials ($V\propto {\rm e}^{-\,p^2}$) have also been considered in~\cite{DDR1987,BG1988,CH1995momentumpotentials,LGXZL2003} and another more involved momentum-dependent potential was recently introduced in~\cite{NMS2020}; see references therein in all the aforementioned works. Furthermore, from a completely different perspective, we stress that quantum groups~\cite{ChariPressley1994} have been applied to classical and quantum (super)integrable Hamiltonians through both deformed and undeformed coalgebras in~\cite{BR98,BBHMR2009}. 
Following this coalgebra symmetry approach, several classes of momentum-dependent classical Hamiltonians have been constructed in~\cite{BBHMR2009,Ballesteros2000,Ballesteros2007,Ballesteros2009} giving rise to quasi-maximally superintegrable systems, {\em i.e.}~in arbitrary dimension $d$ they are endowed, by construction, with $(2d-3)$ functionally independent constants of the motion (besides the Hamiltonian). Hence one additional constant of the motion is left to ensure maximal superintegrability. In this paper, we shall consider a large class of two-dimensional (2D) higher-order momentum-dependent systems comprised within the Hamiltonian given by \begin{equation} \mathcal H_N = p_1^2 + p_2^2 +\sum_{n=1}^N \gamma_n(q_1 p_1 + q_2 p_2)^n , \label{00} \end{equation} where $q_i$ and $p_i$ are generic canonical variables (with Poisson bracket $\{q_i,p_j\}=\delta_{ij}$), $\gamma_n$ are arbitrary coefficients, and the index $N\in \mathbb N$. Therefore, as particular cases, we find that for $N=1$ we shall deal with linear momentum-dependent Hamiltonians and for $N=2$ with quadratic momentum-dependent ones, but when $N>2$ we shall obtain cubic, quartic\dots momentum-dependent Hamiltonians. The underlying motivation to consider $\mathcal H_N$ (\ref{00}) is that this is just the natural generalization (for arbitrary $N$) of the superintegrable classical Zernike system formerly introduced in~\cite{PWY2017zernike} (see also~\cite{Fordy2018,Wolf2020}), which is recovered for $N=2$ with $\gamma_1\ne 0$ and $\gamma_2\ne 0$. Recall that the original Zernike system is properly quantum~\cite{Zernike1934} and as a quantum superintegrable Hamiltonian has been extensively studied in~\cite{PSWY2017,PWY2017a,Atakishiyev2017,Fordy2018,Wolf2020,Atakishiyev2019}. Moreover, we observe that the Hamiltonian $\mathcal H_N$ (\ref{00}) is naturally endowed with a Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry~\cite{Ballesteros2007,BBHMR2009}. The aim of this paper is twofold.
On the one hand, we explicitly prove that the Hamiltonian $\mathcal H_N$ (\ref{00}) is superintegrable for any $N$ and for any value of the coefficients $\gamma_n$. And, on the other hand, we apply this result not only to the flat Euclidean plane $\mathbf E^2$, but also to the curved sphere $\mathbf S^2$ and the hyperbolic plane $\mathbf H^2$. The structure of the paper is as follows. In the next section we review the classical Zernike Hamiltonian on $\mathbf E^2$ along with its interpretation on $\mathbf S^2$ and $\mathbf H^2$ and, furthermore, we describe its underlying Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry. This allows us to propose $\mathcal H_N$ (\ref{00}) as its natural generalized Hamiltonian. In Section~\ref{s31} we prove that $\mathcal H_N$ always determines a superintegrable system on $\mathbf E^2$ (for any $\gamma_n$ and $N$) by obtaining explicitly the constants of the motion, which turn out to be of higher-order in the momenta. The corresponding interpretation on $\mathbf S^2$ and $\mathbf H^2$ is performed in Section~\ref{s32}. In particular, we introduce the so-called geodesic polar coordinates~\cite{RS,conf,BaHeMu13}, which are the curved generalization of the usual Euclidean polar coordinates. In this way, we show that $\mathcal H_N$ can alternatively be regarded as a superposition of the isotropic 1\,:\,1 curved (Higgs) oscillator with even-order anharmonic curved oscillators plus another superposition of higher-order momentum-dependent potentials. Such general results are illustrated in Section~\ref{s4} for $N\le 8$ and, moreover, the associated polynomial Racah algebra, defined through the constants of the motion, is also computed leading to a $(2N-1)$th-order generalization of the well-known cubic Higgs Poisson algebra~\cite{PWY2017zernike,Higgs}. As a byproduct, our results are specifically applied to the classical Zernike Hamiltonian in Section~\ref{s5}, being interpreted as superintegrable perturbations.
The real parts of the trajectories are also plotted up to $N=6$. We remark that the underlying Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry of $\mathcal H_N$ naturally suggests further possible generalizations. These open problems along with the application to (1+1)D Lorentzian spacetimes of constant curvature (Minkowskian and (anti-)de Sitter spaces) are discussed in the last section in some detail. To end with, we stress that a quantization of $\mathcal H_N$ is also addressed in the last section. The guiding idea is to replace the Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry by a Lie $\mathfrak{gl}(2)$-coalgebra symmetry. However, serious ordering problems arise in the constants of the motion, so that our proposal for a quantum $\mathcal H_N$ Hamiltonian also remains an open problem. \newpage \sect{The classical Zernike system revisited} \label{s2} The original quantum Zernike system was introduced in~\cite{Zernike1934} and, very recently, deeply analysed in~\cite{PSWY2017,PWY2017a,Atakishiyev2017,Fordy2018,Wolf2020,Atakishiyev2019} (see also references therein) as a quantum superintegrable Hamiltonian. Such a Hamiltonian system is defined on the 2D Euclidean plane $\mathbf E^2$, and has a potential depending on both linear and quadratic terms in the quantum momentum operators. Its classical counterpart was formerly presented and studied in~\cite{PWY2017zernike} (see also~\cite{Fordy2018,Wolf2020}), which possesses constants of the motion quadratic in the momenta. The superintegrable classical Zernike system is the cornerstone of our construction of new higher-order superintegrable momentum-dependent classical Hamiltonians on a 2D Riemannian space of constant (Gaussian) curvature $\kappa$, so covering the flat Euclidean space $\mathbf E^2$ ($\kappa=0$), the sphere $\mathbf S^2$ ($\kappa>0$) and the hyperbolic or Lobachevski space $\mathbf H^2$ ($\kappa<0$).
With this aim, we review in this section the main results on the known classical Zernike system along with its interpretation on curved spaces and, furthermore, we present new properties related to the Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry~\cite{BR98,Ballesteros2007,BBHMR2009,Latini2019,Latini2021} which, to the best of our knowledge, have not been considered in the literature yet. The main superintegrability properties (in the Liouville sense~\cite{Perelomov}) of the classical Zernike system are established in the following statement. \begin{theorem} \cite{PWY2017zernike} \label{teor0} Let $\{ q_1,q_2,p_1,p_2\}$ be a set of canonical variables with Poisson brackets $\{ q_i,p_j\} = \delta_{ij}$. The classical Zernike Hamiltonian on the Euclidean plane, $(q_1,q_2)\equiv (x,y)\in \mathbb R^2$, is given by \begin{equation} \mathcal H_{\rm Zk} = p_1^2 + p_2^2 + \gamma_1(q_1 p_1 + q_2 p_2)+ \gamma_2(q_1 p_1 + q_2 p_2)^2 , \label{za} \end{equation} where $\gamma_1$ and $\gamma_2$ are arbitrary parameters. \noindent (i) The Hamiltonian $\mathcal H_{\rm Zk} $ has three (quadratic in the momenta) constants of the motion: \bea && \mathcal C = q_1 p_2 - q_2 p_1 , \nonumber\\[2pt] && \mathcal I= p_2^2 + \gamma_1\, q_2p_2 + \gamma_2 \bigl( q_1^2 + q_2^2 \bigr)p_2^2 , \label{zb}\\ && \mathcal I'= p_1^2 + \gamma_1\, q_1 p_1 + \gamma_2 \bigl( q_1^2 + q_2^2 \bigr)p_1^2 . \nonumber \eea (ii) The above functions fulfil the relation \be \mathcal H_{\rm Zk} = \mathcal I + \mathcal I' - \gamma_{2}\,\mathcal C^{2} . \label{zc} \ee (iii) The sets $\{\mathcal H_{\rm Zk} , \mathcal I, \mathcal C\}$ and $\{\mathcal H_{\rm Zk} , \mathcal I', \mathcal C\}$ are formed by three functionally independent functions so that $\mathcal H_{\rm Zk}$ is a superintegrable Hamiltonian.
\noindent (iv) The three functions defined by \bea &&\mathcal L_1:= \mathcal C/2, \qquad \mathcal L_2:=\bigl(\mathcal I' - \mathcal I \bigr)/2 ,\nonumber\\ &&\mathcal L_3:= \{ \mathcal L_1,\mathcal L_2\}= \left( 1+ \gamma_2 \bigl(q_1^2+q_2^2\bigr) \right)p_1 p_2 + \tfrac 12 \gamma_1 (q_1 p_2 + q_2 p_1) , \label{zzd} \eea satisfy the Poisson brackets \be \{ \mathcal L_1,\mathcal L_2\}=\mathcal L_3, \qquad \{ \mathcal L_1,\mathcal L_3\}=-\mathcal L_2, \qquad \{ \mathcal L_2, \mathcal L_3\}=-\mathcal L_1\left( \gamma_1^2 + 2 \gamma_2 \mathcal H_{\rm Zk}+8 \gamma_2^2 \mathcal L_1^2 \right) . \label{zd} \ee \end{theorem} All the results covered by Theorem~\ref{teor0} can be expressed straightforwardly in polar coordinates $(r,\phi)$ and conjugate momenta $(p_r,p_\phi)$, as it was already performed in~\cite{Fordy2018,PWY2017zernike}, by means of the usual canonical transformation given by \be \begin{array}{ll} q_1=r\cos\phi , &\quad\displaystyle{ p_1=\cos\phi \, p_r -\frac{\sin \phi}{r}\, p_\phi } , \\[4pt] q_2=r\sin\phi , &\quad\displaystyle{ p_2=\sin\phi \, p_r + \frac{\cos \phi}{r}\, p_\phi } . \end{array} \label{ze} \ee In particular, in these variables the Hamiltonian $\mathcal H_{\rm Zk} $ (\ref{za}) and the angular momentum constant of the motion $ \mathcal C $ (\ref{zb}) turn out to be \be \mathcal H_{\rm Zk} = p_r^2 + \frac{p_\phi^2 }{r^2} + \gamma_1 r p_r+ \gamma_2(r p_r)^2 , \qquad \mathcal C = p_\phi , \label{zf} \ee showing directly the integrability of the system, while $ \mathcal I$ (or $ \mathcal I'$) (\ref{zb}) is an additional integral (or hidden symmetry) determining its superintegrability. 
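The statements (i) and (ii) of Theorem~\ref{teor0} amount to polynomial identities in the canonical variables and can be verified mechanically. A brief SymPy sketch (an illustration added here, not part of the original paper):

```python
import sympy as sp

q1, q2, p1, p2, g1, g2 = sp.symbols('q1 q2 p1 p2 gamma1 gamma2', real=True)

def pb(F, G):
    """Canonical Poisson bracket {F, G} for two degrees of freedom."""
    return sum(sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)
               for q, p in [(q1, p1), (q2, p2)])

J  = q1*p1 + q2*p2
H  = p1**2 + p2**2 + g1*J + g2*J**2          # Zernike Hamiltonian (za)
C  = q1*p2 - q2*p1                           # angular momentum
I1 = p2**2 + g1*q2*p2 + g2*(q1**2 + q2**2)*p2**2   # the integral I
I2 = p1**2 + g1*q1*p1 + g2*(q1**2 + q2**2)*p1**2   # the integral I'

# (i) C, I, I' are constants of the motion
assert sp.expand(pb(H, C))  == 0
assert sp.expand(pb(H, I1)) == 0
assert sp.expand(pb(H, I2)) == 0

# (ii) the relation H = I + I' - gamma2 * C^2
assert sp.expand(H - (I1 + I2 - g2*C**2)) == 0
```

The bracket convention $\{F,G\}=\sum_i(\partial_{q_i}F\,\partial_{p_i}G-\partial_{p_i}F\,\partial_{q_i}G)$ matches the one used in the text; relation (ii) reduces to the Lagrange identity $(q_1^2+q_2^2)(p_1^2+p_2^2)-(q_1p_2-q_2p_1)^2=(q_1p_1+q_2p_2)^2$.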
\subsection{Interpretation on the sphere and the hyperbolic space} \label{s21} We stress, as it was already pointed out in~\cite{PWY2017zernike}, that the relations (\ref{zd}) provide a cubic Higgs algebra~\cite{Higgs} (whenever $\gamma_2\ne 0$), which is just the Racah algebra of the integrals of the motion of the well-known Higgs or isotropic curved oscillator on the 2D sphere $\mathbf S^2$ that has been extensively studied over the last few decades~\cite{Higgs,Leemon,Pogoa,RS,Kalnins1,Kalnins2,Nersessian1,Ranran,Santander6,BaHeMu13,MiPWJPa13,GonKas14AnnPhys,BaBlHeMu14,Kuruannals} (see also references therein). We also recall that a cubic Higgs-type algebra arises in Kepler--Coulomb systems on $\mathbf S^2$ and on the 2D hyperbolic space $\mathbf H^2$~\cite{Kepler2009}. These facts suggest a natural relationship between the previous interpretation of $\mathcal H_{\rm Zk} $ on $\mathbf E^2$ and an alternative one as a superintegrable Hamiltonian on a 2D curved space as it was mentioned in~\cite{Fordy2018,PWY2017zernike}. Let us consider the terms in $\mathcal H_{\rm Zk}$ depending quadratically in the momenta as the free Hamiltonian or kinetic energy of the system, so that the associated metric can then be deduced. From the expression (\ref{zf}) in polar variables the underlying 2D non-Euclidean metric reads \be {\rm d}s^2= \frac{1}{1+\gamma_2 r^2}\, {\rm d}r^2+r^2 {\rm d}\phi^2 . \label{zg} \ee Its Gaussian curvature $\kappa$ turns out to be constant and equal to $-\gamma_2$~\cite{Fordy2018}. Hence, according to the sign of the curvature parameter $\kappa=-\gamma_2$, we find that the metric (\ref{zg}) simultaneously comprises the flat Euclidean space $\mathbf E^2$ ($\kappa=\gamma_2=0$), the sphere $\mathbf S^2$ ($\kappa>0, \gamma_2<0$) and the hyperbolic space $\mathbf H^2$ ($\kappa<0, \gamma_2>0$). 
Since both initial (arbitrary) $\gamma_1$- and $\gamma_2$-potentials are essential to deal with the proper Zernike system we shall assume in this section that they are different from zero, so that we shall deal with $\mathbf S^2$ and $\mathbf H^2$. It should be noted that the polar radial coordinate $r$ is no longer a geodesic distance in a curved space with $\kappa\ne 0$ (which is our case now). In order to perform an appropriate geometrical and dynamical interpretation of $\mathcal H_{\rm Zk} $ on $\mathbf S^2$ and $\mathbf H^2$, let us introduce the so-called geodesic radial coordinate~\cite{RS,BaHeMu13,conf}, here denoted by $\rho$, which is just the distance along the geodesic joining the origin in the curved space and the particle, keeping unchanged the usual angular coordinate $\phi$. The relationship between $r$ and $\rho$ is given by \be r= \Sk_\kk( \rho),\qquad \kk=-\gamma_2, \label{zh} \ee where from now on we shall make use of the curvature-dependent cosine and sine functions defined by~\cite{RS,trigo,conf} \begin{equation} \Ck_{\kk}(x):=\left\{ \begin{array}{ll} \cos{\sqrt{\kk}\, x} &\quad \kk>0 \\ \qquad 1 &\quad \kk=0 \\ \cosh{\sqrt{-\kk}\, x} &\quad \kk<0 \end{array}\right. , \qquad \Sk{_\kk}(x) := \left\{ \begin{array}{ll} \frac{1}{\sqrt{\kk}} \sin{\sqrt{\kk}\, x} &\quad \kk>0 \\ \qquad x &\quad \kk=0 \\ \frac{1}{\sqrt{-\kk}} \sinh{\sqrt{-\kk}\, x} &\quad \kk<0 \end{array}\right. . \label{zi} \end{equation} The $\kk$-tangent is defined as \be \Tk_\kk(x) := \frac{\Sk_\kk(x)} { \Ck_\kk(x)} \, . \label{zj} \ee These $\kk$-dependent trigonometric functions coincide with the circular and hyperbolic ones for $\kk=\pm 1$, while under the contraction (or flat limit) $\kk=0$ they reduce to the parabolic functions: $\Ck_{0}(x)=1$ and $\Sk_{0}(x)=\Tk_{0}(x)=x$. 
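The change of radial variable \eqref{zh} works because $1+\gamma_2 r^2=\Ck^2_\kk(\rho)$ when $r=\Sk_\kk(\rho)$ and $\kk=-\gamma_2$, so the radial part of the metric (\ref{zg}) collapses to ${\rm d}\rho^2$. A symbolic spot-check for the spherical branch $\kk>0$ (an illustration only, not part of the paper):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
k = sp.symbols('kappa', positive=True)      # curvature kappa > 0 (sphere)
g2 = -k                                     # gamma2 = -kappa

Sk = sp.sin(sp.sqrt(k)*rho)/sp.sqrt(k)      # curvature-dependent sine
Ck = sp.cos(sp.sqrt(k)*rho)                 # curvature-dependent cosine

r = Sk                                      # change of variable (zh)
dr_drho = sp.diff(r, rho)

# radial part of the metric (zg) written in rho:
# dr^2/(1 + gamma2 r^2) = (dr/drho)^2 / (1 + gamma2 r^2) * drho^2
radial_factor = dr_drho**2 / (1 + g2*r**2)
assert sp.simplify(radial_factor - 1) == 0

# equivalently, 1 + gamma2 r^2 = Ck^2
assert sp.simplify(1 + g2*r**2 - Ck**2) == 0
```

The hyperbolic branch $\kk<0$ follows in the same way with $\sin,\cos$ replaced by $\sinh,\cosh$.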
Under the change of variable (\ref{zh}), the metric (\ref{zg}) is transformed into its usual form in geodesic polar coordinates $(\rho, \phi)$~\cite{RS,conf}: \be {\rm d}s^2= {\rm d}\rho ^2+ { \Sk^2_\kk( \rho)}\, {\rm d}\phi^2 . \label{zk} \ee Note that its flat limit $\kk=\gamma_2=0$ leads to the usual metric on $\mathbf E^2$ in polar coordinates, ${\rm d}s^2= {\rm d}r ^2+ r^2\, {\rm d}\phi^2 $, since $\rho\equiv r$. By taking into account the results presented in~\cite{Fordy2018,PWY2017zernike} in canonical polar variables $\{r,\phi,p_r,p_\phi\}$ (\ref{ze}) together with the relation (\ref{zh}), we can apply the results of Theorem~\ref{teor0} for $\mathcal H_{\rm Zk} $ on $\mathbf E^2$ to $\mathbf S^2$ and $\mathbf H^2$ in geodesic polar variables. These are summarized as follows. \begin{proposition} \label{prop0} Let $\{ \rho,\phi, p_\rho ,p_\phi\}$ be a set of canonical geodesic polar variables with Poisson brackets $\{ q_\alpha ,p_\beta \} = \delta_{\alpha\beta}$ where $\alpha,\beta \in\{\rho, \phi\}$. \\ (i) The classical superintegrable Zernike Hamiltonian (\ref{za}) can be expressed in these variables on $\mathbf S^2$ and $\mathbf H^2$, with $\kappa=-\gamma_2$, by applying the canonical transformation given by \be \begin{array}{ll} q_1=\Sk_\kk( \rho)\cos\phi , &\quad\displaystyle{ p_1=\cos\phi \, \frac{p_\rho} {\Ck_\kk(\rho) } -\frac{\sin \phi}{\Sk_\kk( \rho) }\, p_\phi } , \\[10pt] q_2=\Sk_\kk( \rho) \sin\phi , &\quad\displaystyle{ p_2=\sin\phi \, \frac{p_\rho} {\Ck_\kk(\rho) } +\frac{\cos \phi}{\Sk_\kk( \rho) }\, p_\phi }, \end{array} \label{zl} \ee leading to \be \mathcal H_{\rm Zk} = p_\rho^2 + \frac{p_\phi^2 }{ \Sk^2_\kk( \rho)} + \gamma_1 \Tk_\kk( \rho)\,p_\rho . \label{zm} \ee The domain for the variables $(\rho, \phi) $ of $ \mathcal H_{\rm Zk}$ (\ref{zm}) is given by $\phi\in[ 0, 2\pi)$ and \be \mathbf S^2\ (\kk>0)\!: \ 0< \rho< \frac{ \pi }{2\sqrt{\kk} } \, ,\qquad \mathbf H^2\ (\kk<0)\!: \ 0< \rho< \infty .
\label{zn} \ee (ii) The following canonical transformation \be \begin{array}{ll} q_1=\Sk_\kk( \rho)\cos\phi , &\quad\displaystyle{ p_1= \frac{\cos\phi} {\Ck_\kk(\rho) }\left( p_\rho- \frac{\gamma_1}{2} \Tk_\kk( \rho) \right) -\frac{\sin \phi}{\Sk_\kk( \rho) }\, p_\phi } , \\[10pt] q_2=\Sk_\kk( \rho) \sin\phi , &\quad\displaystyle{ p_2= \frac{ \sin\phi} {\Ck_\kk(\rho) }\left( p_\rho- \frac{\gamma_1}{2} \Tk_\kk( \rho) \right)+\frac{\cos \phi}{\Sk_\kk( \rho) }\, p_\phi }, \end{array} \label{zl2} \ee gives rise to the Zernike system (\ref{za}) written as a natural Hamiltonian \be \mathcal H_{\rm Zk} =\mathcal T_\kk+\mathcal U_\kk(\rho) ,\qquad \mathcal T_\kk = p_\rho^2 + \frac{p_\phi^2 }{ \Sk^2_\kk( \rho)} , \qquad \mathcal U_\kk(\rho)= -\frac{\gamma_1^2}{4} \Tk^2_\kk( \rho) , \label{zo} \ee where $\mathcal T_\kk$ is the kinetic energy on the curved space and $\mathcal U_\kk(\rho)$ is a central potential. The latter is just a central or Higgs oscillator, with centre at the origin on the curved space, whenever the parameter $\gamma_1$ is a pure imaginary number. \end{proposition} \begin{proof} It is immediate from Theorem~\ref{teor0}, the canonical transformations \eqref{zl} and (\ref{zl2}) together with the definitions \eqref{zi} and \eqref{zj}. \end{proof} Observe that the relation between the two canonical transformations (\ref{zl}) and (\ref{zl2}) simply corresponds to the substitution in (\ref{zm}) given by \be p_\rho = \tilde p_\rho- \frac{\gamma_1}{2} \Tk_\kk( \rho), \label{zzo} \ee and next dropping the tilde in $\tilde p_\rho$ while keeping $\rho$ as the common conjugate coordinate~\cite{Fordy2018} so obtaining (\ref{zo}). The isometries of the metric (\ref{zk}) associated with the free Hamiltonian $ \mathcal T_\kk$ (\ref{zo}) turn out to be~\cite{BaHeMu13} \be J_{01}= \cos\phi\,p_\rho-\frac{\sin\phi}{ \Tk_\kk(\rho) }\, p_\phi,\qquad J_{02}= \sin\phi\,p_\rho+\frac{\cos\phi}{\Tk_\kk(\rho)}\, p_\phi ,\qquad J_{12}=p_\phi . 
\label{zp} \ee These functions fulfil the Poisson brackets given by \be \{J_{12},J_{01}\}=J_{02},\qquad \{J_{12},J_{02}\}=-J_{01},\qquad \{J_{01},J_{02}\}=\kk J_{12} , \label{zq} \ee thus closing a Poisson--Lie algebra isomorphic either to $\mathfrak{so}(3)$ for $\kk>0$ or to $\mathfrak{so}(2,1)$ for $\kk<0$ in agreement with~\cite{Fordy2018}. Note also that the kinetic term $\mathcal T_\kk$ (\ref{zo}) is just the Casimir of the Poisson--Lie algebra (\ref{zq}): \be \mathcal T_\kk=J_{01}^2+J_{02}^2+\kk J_{12}^2 . \label{zr} \ee As we have mentioned in Proposition~\ref{prop0}, the superintegrable potential $\mathcal U_\kk(\rho)$ (\ref{zo}) corresponds to the isotropic 1\,:\,1 or Higgs oscillator on $\mathbf S^2$ and $\mathbf H^2$ when $\gamma_1$ is purely imaginary. If we set $\gamma_1=2 {\rm i} \omega$ with real parameter $\omega$, then $\mathcal U_\kk(\rho)=\omega^2\Tk^2_\kk( \rho) $ with $\omega$ behaving as the frequency of the curved oscillator, that is, $\mathcal U_{+1}(\rho)=\omega^2\tan^2 \rho $ and $\mathcal U_{-1}(\rho)=\omega^2\tanh^2 \rho $. In this case, the Zernike system has bounded trajectories which are all periodic and given by ellipses in accordance with~\cite{PWY2017zernike}. For some trajectories of the Higgs oscillator on $\mathbf S^2$ and $\mathbf H^2$ see also~\cite{BaHeMu13,Kuruannals}. From the results of Theorem~\ref{teor0} and Proposition~\ref{prop0} it is straightforward to express the Zernike Hamiltonian together with its associated superintegrability properties in other relevant sets of canonical variables such as geodesic parallel and projective (Beltrami and Poincar\'e) ones~\cite{RS,Santander6,BaHeMu13,BaBlHeMu14,Kuruannals}. 
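The brackets (\ref{zq}) and the Casimir identity (\ref{zr}) can be checked symbolically. The following is a minimal verification sketch in SymPy (assumed to be available; the helper \texttt{pb} is ours, not from the text), carried out for the two representative curvatures $\kk=\pm 1$:

```python
import sympy as sp

rho, phi, p_rho, p_phi = sp.symbols('rho phi p_rho p_phi')

def pb(f, g):
    # Canonical Poisson bracket in the geodesic polar variables (rho, phi; p_rho, p_phi)
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in [(rho, p_rho), (phi, p_phi)])

# kappa = +1: Sk = sin, Tk = tan (sphere); kappa = -1: Sk = sinh, Tk = tanh
for kappa, Sk, Tk in [(1, sp.sin(rho), sp.tan(rho)),
                      (-1, sp.sinh(rho), sp.tanh(rho))]:
    J01 = sp.cos(phi)*p_rho - sp.sin(phi)/Tk*p_phi
    J02 = sp.sin(phi)*p_rho + sp.cos(phi)/Tk*p_phi
    J12 = p_phi
    # The brackets (zq)
    assert sp.simplify(pb(J12, J01) - J02) == 0
    assert sp.simplify(pb(J12, J02) + J01) == 0
    assert sp.simplify(pb(J01, J02) - kappa*J12) == 0
    # The Casimir (zr) reproduces the kinetic energy
    assert sp.simplify(J01**2 + J02**2 + kappa*J12**2
                       - (p_rho**2 + p_phi**2/Sk**2)) == 0
print('so(3) and so(2,1) brackets and Casimir verified')
```

The computation only relies on the identities $\Ck^2_\kk+\kk\,\Sk^2_\kk=1$ and $\Tk'_\kk=1+\kk\,\Tk^2_\kk$, which is why both curvatures are covered by the same loop.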
\subsection{Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry} \label{s22} Let us consider the algebra $\mathfrak{sl}(2,\mathbb R)=\spn\{J_3,J_+,J_- \}$ expressed as a Poisson--Lie algebra with defining Poisson brackets and Casimir $ {C}$ given by \begin{equation} \{J_3,J_+\}=2 J_+ , \qquad \{J_3,J_-\}=-2 J_- ,\qquad \{J_-,J_+\}=4 J_3 , \label{zs} \end{equation} \begin{equation} {C}= J_- J_+ -J_3^2 . \label{zt} \end{equation} Then, as any Poisson--Lie algebra, $\mathfrak{sl}(2,\mathbb R)$ can be endowed with a Poisson coalgebra structure~\cite{BR98}, $(\mathfrak{sl}(2,\mathbb R),\Delta)$, by considering the primitive or non-deformed coproduct map $\Delta$ given by \be \Delta: \mathfrak{sl}(2,\mathbb R)\to \mathfrak{sl}(2,\mathbb R) \otimes \mathfrak{sl}(2,\mathbb R) ,\qquad \Delta(J_l)= J_l \otimes 1+ 1\otimes J_l ,\quad l\in\{3,+,-\}, \label{zu} \ee which is a homomorphism of Poisson algebras from $(\mathfrak{sl}(2,\mathbb R), \{ \cdot , \cdot \})$ to $(\mathfrak{sl}(2,\mathbb R) \otimes \mathfrak{sl}(2,\mathbb R), \{ \cdot, \cdot \}^{(2)})$, where $\{ \cdot , \cdot \}$ is given by \eqref{zs} and $\{ \cdot, \cdot \}^{(2)}$ is the direct product of two such Poisson structures. Notice that the (trivial) counit and antipode can also be defined, giving rise to a non-deformed Hopf algebra structure~\cite{ChariPressley1994}. A one-particle symplectic realization of (\ref{zs}) reads \begin{equation} J_-^{(1)}=q_1^2 , \qquad J_+^{(1)}= p_1^2+\frac {\otra_1}{ q_1^2} , \qquad J_3^{(1)}= q_1 p_1 , \label{zv1} \end{equation} where $\otra_1$ is a real parameter that labels the representation through the Casimir (\ref{zt}): \be {C}^{(1)}= J_-^{(1)} J_+^{(1)} -\left(J_3^{(1)}\right)^2 = \otra_1. \ee From (\ref{zv1}), the coproduct (\ref{zu}) provides the following two-particle symplectic realization of (\ref{zs}): \be J_-^{(2)}=q_1^2+ q_2^2, \qquad J_+^{(2)}= p_1^2+\frac {\otra_1}{ q_1^2} +p_2^2+\frac{\otra_2}{ q_2^2} , \qquad J_3^{(2)}= q_1 p_1 + q_2 p_2 .
\label{zv} \ee And the two-particle realization of the Casimir (\ref{zt}) turns out to be: \be {C}^{(2)}= J_-^{(2)} J_+^{(2)} -\left(J_3^{(2)}\right)^2 = ({q_1}{p_2} - {q_2}{p_1})^2 + \left( \otra_1\frac{q_2^2}{q_1^2}+\otra_2\frac{q_1^2}{q_2^2}\right)+ \otra_1+\otra_2 . \label{zw} \ee By construction~\cite{BR98}, $ {C}^{(2)}$ Poisson-commutes with the three functions (\ref{zv}) so that any smooth function $ \mathcal H$ defined on them becomes, at least, a 2D {\em integrable} Hamiltonian, \be \mathcal H^{(2)} = \mathcal H \left(J_3^{(2)}, J_+^{(2)}, J_-^{(2)} \right), \label{zx} \ee always sharing the constant of the motion given by $ {C}^{(2)}$. Geometrically, the 3D Poisson manifold is foliated by 2D symplectic leaves defined by the level sets of $ {C}^{(2)}$. We recall that, by taking into account the coassociativity property of the coproduct, this result from 2D Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry can be generalized to arbitrary dimension $d$ providing $(2d-3)$ functionally independent `universal' constants of the motion~\cite{Ballesteros2007,BBHMR2009}; for the corresponding Racah algebra we refer to~\cite{Latini2019,Latini2021} and references therein. Hence such Hamiltonians are called quasi-maximally superintegrable since only {\em one} additional constant of the motion is left to ensure maximal superintegrability. The application of the above results to the classical Zernike system is now straightforward. Let us set the parameters $\lambda_1=\lambda_2=0$. Then the Zernike Hamiltonian (\ref{za}) is shown to be endowed with an $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry by considering the following particular expression for $\mathcal H^{(2)}$: \be \mathcal H_{\rm Zk} \equiv \mathcal H^{(2)} = J_+^{(2)} +\gamma_1 J_3^{(2)}+\gamma_2 \left(J_3^{(2)}\right)^2 . \label{zy} \ee And, obviously, $ {C}^{(2)}$ (\ref{zw}) reduces to the square of the angular momentum constant of the motion $ \mathcal C $~(\ref{zb}). 
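Since the coalgebra construction is purely algebraic, it can be verified symbolically in a few lines. A minimal SymPy sketch (the symbols \texttt{b1}, \texttt{b2} play the role of $\otra_1,\otra_2$; the helper \texttt{pb} is ours):

```python
import sympy as sp

q1, q2, p1, p2, b1, b2 = sp.symbols('q1 q2 p1 p2 b1 b2')

def pb(f, g):
    # Canonical Poisson bracket in (q1, q2; p1, p2)
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in [(q1, p1), (q2, p2)])

# Two-particle symplectic realization (zv)
Jm = q1**2 + q2**2
Jp = p1**2 + b1/q1**2 + p2**2 + b2/q2**2
J3 = q1*p1 + q2*p2

# Defining brackets (zs)
assert sp.simplify(pb(J3, Jp) - 2*Jp) == 0
assert sp.simplify(pb(J3, Jm) + 2*Jm) == 0
assert sp.simplify(pb(Jm, Jp) - 4*J3) == 0

# The Casimir (zw) Poisson-commutes with the three generators ...
C2 = sp.expand(Jm*Jp - J3**2)
assert all(sp.simplify(pb(C2, J)) == 0 for J in (J3, Jp, Jm))

# ... and hence with the Zernike Hamiltonian (zy), here with b1 = b2 = 0
g1, g2 = sp.symbols('gamma1 gamma2')
HZk = (Jp + g1*J3 + g2*J3**2).subs({b1: 0, b2: 0})
assert sp.simplify(pb(HZk, C2.subs({b1: 0, b2: 0}))) == 0
print('sl(2,R) realization and Casimir verified')
```

The last assertion illustrates the general mechanism: any smooth function of the three generators, not only (\ref{zy}), automatically Poisson-commutes with $ {C}^{(2)}$.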
The superintegrability property arises by obtaining an additional functionally independent integral $ \mathcal I$ (or $ \mathcal I'$) (\ref{zb}) with respect to $\mathcal H^{(2)}$ and $ {C}^{(2)}$. The crucial point now is that all the above results naturally suggest considering the following generalization of the Zernike Hamiltonian (\ref{zy}): \be \mathcal H_N = J_+^{(2)} + \sum_{n=1}^N \gamma_n \left(J_3^{(2)}\right)^n = \mathbf{p}^2 + \sum_{n=1}^N \gamma_n (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^n , \label{zz} \ee where $\gamma_n$ are arbitrary parameters and hereafter we denote $\mathbf{p}^2 = p_1^2 + p_2^2$ and $\mathbf{q} \boldsymbol{\cdot} \mathbf{p} = q_1 p_1 + q_2 p_2$. Clearly, $\mathcal H_N $ is an integrable Hamiltonian keeping the same constant of the motion $ \mathcal C $~(\ref{zb}), and the Zernike system is the particular case $\mathcal H_{\rm Zk} \equiv \mathcal H_2$. Therefore, the open problem is to obtain the generalization of the additional integral $ \mathcal I$ (or $ \mathcal I'$) (\ref{zb}), thus ensuring that $\mathcal H_N$ actually determines a superintegrable system. In the next section, we solve this problem by presenting the additional integrals, say $ \mathcal I_N$ and $ \mathcal I'_N$, for $\mathcal H_N$, which turn out to be of higher order in the momenta. \sect{A new class of superintegrable momentum-dependent Hamiltonians} \label{s3} Our aim now is to prove that the Hamiltonian $\mathcal H_N $ (\ref{zz}) is superintegrable for any value of the arbitrary parameters $\gamma_n$ by explicitly finding an additional constant of the motion. Hence, when both $\gamma_1$ and $\gamma_2$ are different from zero, $\mathcal H_N $ can be regarded as a generalization of the classical Zernike system through superintegrable perturbations determined by the terms $\gamma_n$ with $n>2$.
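The integrability half of this claim can already be checked directly: both building blocks $\mathbf{p}^2$ and $\mathbf{q} \boldsymbol{\cdot} \mathbf{p}$ are rotation invariants, so every member of the family commutes with the angular momentum. A minimal SymPy sketch (assuming SymPy; \texttt{D} denotes $\mathbf{q} \boldsymbol{\cdot} \mathbf{p}$):

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

def pb(f, g):
    # Canonical Poisson bracket in the Cartesian variables (q1, q2; p1, p2)
    return sp.expand(sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
                         for q, p in [(q1, p1), (q2, p2)]))

C = q1*p2 - q2*p1          # angular momentum (zb)
p_sq = p1**2 + p2**2       # the kinetic block p^2
D = q1*p1 + q2*p2          # the block q.p

# Both building blocks of H_N Poisson-commute with C ...
assert pb(C, p_sq) == 0
assert pb(C, D) == 0

# ... hence so does any H_N; explicit check for N = 3 with symbolic gammas
g1, g2, g3 = sp.symbols('gamma1 gamma2 gamma3')
H3 = p_sq + g1*D + g2*D**2 + g3*D**3
assert pb(C, H3) == 0
print('C is a constant of the motion for H_N')
```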
Firstly, we shall consider the construction of $\mathcal H_N $ on $\mathbf E^2$ and, secondly, we shall interpret our results on $\mathbf S^2$ and $\mathbf H^2$ following Section~\ref{s21}. \subsection{Superintegrable systems on the Euclidean plane} \label{s31} Let us start by introducing a set of four types of homogeneous polynomials depending on the two Cartesian variables $(q_1,q_2) \in \mathbb R^2$ on $\mathbf E^2$ given by \be Q^{(n-j,j)}_{ab} := Q^{(n-j,j)}_{ab} (q_1,q_2), \qquad n,j \in \mathbb N,\qquad 0\le j \le n,\qquad a,b \in \{e,o\}, \ee where $e$ stands for \emph{even} and $o$ for \emph{odd} according to the parity of the integers $n$ and $j$. These polynomials are of degree $n=(n-j) +j$ and read \begin{equation} \begin{split} \label{eq:Q_ab} Q^{(n-j,j)}_{ee} &:= (-1)^{j/2} \sum_{k=0}^{j/2} (-1)^k \binom{n}{2k} \bigg[ (-1)^{\frac{n}{2}+1}q_1^{n-2k} q_2^{2k} + q_1^{2k} q_2^{n-2k} \bigg] , \\ Q^{(n-j,j)}_{eo} &:= (-1)^{(j-1)/2} \sum_{k=0}^{(j-1)/2} (-1)^k \binom{n}{2k+1} \bigg[ (-1)^{\frac{n}{2}} q_1^{n-(2k+1)} q_2^{2k+1} + q_1^{2k+1} q_2^{n-(2k+1)} \bigg] ,\\ Q^{(n-j,j)}_{oe} &:= (-1)^{j/2} \left[ \sum_{k=0}^{j/2-1} (-1)^k (-1)^{\frac{n+1}{2}} \binom{n}{2k+1} q_1^{n-(2k+1)} q_2^{2k+1} + \sum_{k=0}^{j/2} (-1)^k \binom{n}{2k} q_1^{2k} q_2^{n-2k} \right], \\ Q^{(n-j,j)}_{oo} &:= (-1)^{(j-1)/2} \sum_{k=0}^{(j-1)/2} (-1)^k \bigg[ (-1)^{\frac{n+1}{2}} \binom{n}{2k} q_1^{n-2k} q_2^{2k} + \binom{n}{2k+1} q_1^{2k+1} q_2^{n-(2k+1)} \bigg] .\\ \end{split} \end{equation} Note that for some values of $j$, some of the sums above could be empty (in particular, this may happen with $Q^{(n-j,j)}_{oe}$). Next we define a set of polynomials $Q^{(n-j,j)}$ which encompasses the above four types $Q^{(n-j,j)}_{ab}$. Let us consider the function $\Theta : \mathbb N \to \{0,1\}$, \begin{equation} \Theta (m) := \begin{cases} 1 &\text{if } m \text{ is even} \\ 0 &\text{if } m \text{ is odd} \\ \end{cases} . 
\end{equation} Then the polynomials $Q^{(n-j,j)}$ are defined by \begin{equation} \begin{split} \label{eq:Q} Q^{(n-j,j)} :=Q^{(n-j,j)} (q_1,q_2)&= \Theta(n) \Theta(j) Q^{(n-j,j)}_{ee} + \Theta(n) \big(1-\Theta(j) \big) Q^{(n-j,j)}_{eo} \\ &\quad +\big(1-\Theta(n) \big) \Theta(j) Q^{(n-j,j)}_{oe} + \big(1-\Theta(n) \big) \big(1-\Theta(j) \big) Q^{(n-j,j)}_{oo} , \end{split} \end{equation} where $Q^{(n-j,j)}_{ab}$ are given by \eqref{eq:Q_ab}. Thus, the degree of the polynomial $Q^{(n-j,j)}$ is again $n$. With the previous definitions, we have all the ingredients to state and prove the main result of this paper. \smallskip \begin{theorem} \label{teor1} Let $\{ q_1,q_2,p_1,p_2\}$ be a set of canonical Cartesian variables such that $\{ q_i,p_j\} = \delta_{ij}$. The Hamiltonian (\ref{zz}) on the Euclidean plane, namely, \begin{equation} \mathcal H_N = \mathbf{p}^2 + \sum_{n=1}^N \gamma_n (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^n , \label{hamN} \end{equation} such that $\gamma_n$ are arbitrary parameters, is superintegrable for all $N \in \mathbb N$. The two integrals of the motion are the usual angular momentum $\mathcal C = q_1 p_2 - q_2 p_1 $ (\ref{zb}), together with the following $N$th-order in the momenta function \begin{equation} \label{eq:I} \mathcal I_N = p_2^2 + \sum_{n=1}^N \gamma_{n} \sum_{j=0}^{\varphi(n)} p_2^{n-j} p_1^j \, Q^{(n-j,j)}(q_1,q_2), \end{equation} where $Q^{(n-j,j)}$ is given by \eqref{eq:Q} through \eqref{eq:Q_ab} and $\varphi(n)$ denotes the greatest even integer less than $n$, that is, \begin{equation} \varphi (n) := \begin{cases} n-2 &\text{if } n \text{ is even} \\ n-1 &\text{if } n \text{ is odd} \\ \end{cases} . \label{a1} \end{equation} The set $\{\mathcal H_N, \mathcal I_N, \mathcal C\}$ is formed by three functionally independent functions. 
\end{theorem} \begin{proof} The functional independence among $\mathcal H_N$, $\mathcal I_N$ and $ \mathcal C$ can be seen from their explicit expressions; in fact, one can set all the parameters $\gamma_n\equiv 0$, thus recovering the superintegrability of the geodesic motion on the Euclidean plane. In order to prove that $\mathcal I_N$ is an integral of motion, let us denote \be \mathcal G_N = (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^N ,\qquad \mathcal J_N = \sum_{j=0}^{\varphi(N)} p_2^{N-j} p_1^j \,Q^{(N-j,j)}, \label{b1} \ee and we have that \begin{equation} \begin{split} \mathcal H_N &= \mathcal H_{N-1} + \gamma_N \mathcal G_N , \qquad \mathcal I_N = \mathcal I_{N-1} + \gamma_N \mathcal J_N . \\ \end{split} \label{xa} \end{equation} We also write the free motion as $\mathcal H_0 = p_1^2 + p_2^2$ and $\mathcal I_0 = p_2^2$, and thus \begin{equation} \begin{split} \mathcal H_N &= \mathcal H_{0} + \sum_{n=1}^N \gamma_n \mathcal G_n , \qquad \mathcal I_N = \mathcal I_{0} + \sum_{n=1}^N \gamma_n \mathcal J_n . \\ \end{split} \label{xb} \end{equation} From (\ref{xa}) and by bilinearity of the Poisson bracket we find that \begin{equation} \begin{split} \label{eq:poisHNIN} \{ \mathcal H_N, \mathcal I_N \} &= \{ \mathcal H_{N-1} + \gamma_N \mathcal G_N, \mathcal I_{N-1} + \gamma_N \mathcal J_N \} \\ &=\{ \mathcal H_{N-1}, \mathcal I_{N-1} \} + \gamma_N \left( \{ \mathcal G_{N}, \mathcal I_{N-1} \} + \{ \mathcal H_{N-1}, \mathcal J_{N} \}\right) + \gamma_N^2 \{ \mathcal G_N, \mathcal J_N \} . \end{split} \end{equation} Since the only dependence on $\gamma_N$ is the explicit one in the previous formula, both terms $\{ \mathcal G_{N}, \mathcal I_{N-1} \} + \{ \mathcal H_{N-1}, \mathcal J_{N} \} $ and $\{ \mathcal G_N, \mathcal J_N \} $ must vanish.
Using (\ref{xb}) and bilinearity $(N-1)$ times we obtain that \begin{equation} \begin{split} \{ \mathcal G_{N}, \mathcal I_{N-1} \} &= \{ \mathcal G_{N}, \mathcal I_{0} \} + \sum_{n=1}^{N-1} \gamma_{n} \{ \mathcal G_{N}, \mathcal J_{n} \} , \\ \{ \mathcal H_{N-1}, \mathcal J_{N} \} &= \{ \mathcal H_{0}, \mathcal J_{N} \} + \sum_{n=1}^{N-1} \gamma_{n} \{ \mathcal G_{n}, \mathcal J_{N} \} . \end{split} \end{equation} Therefore, it remains to prove that \begin{enumerate} \item[1. ] $\{ \mathcal G_M, \mathcal J_N \} = 0$ for all $M,N \in \mathbb N$, and \item[2. ] $\{ \mathcal G_{N}, \mathcal I_{0} \} + \{ \mathcal H_{0}, \mathcal J_{N} \} = 0$ for all $N \in \mathbb N$. \end{enumerate} Hence, if both statements hold, then from \eqref{eq:poisHNIN} and applying bilinearity $(N-1)$ times we find that \begin{equation} \{ \mathcal H_N, \mathcal I_N \} = \{ \mathcal H_{N-1}, \mathcal I_{N-1} \} = \cdots = \{ \mathcal H_0, \mathcal I_0 \} =\left \{ p_1^2 + p_2^2, p_2^2\right \} = 0 , \end{equation} for all $N \in \mathbb N$. Now we prove that $\{ \mathcal G_M, \mathcal J_N \} = 0$ for all $M,N \in \mathbb N$. A simple computation shows that \begin{equation} \begin{split} \{ \mathcal G_M, \mathcal J_N \} &= \sum_{j=0}^{\varphi(N)} \left \{ (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^M, \, p_2^{N-j} p_1^j \, Q^{(N-j,j)} \right \} \\ &= M (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^{M-1} \sum_{j=0}^{\varphi(N)} p_2^{N-j} p_1^j \left( N Q^{(N-j,j)} - \mathbf{q} \boldsymbol{\cdot} \nabla Q^{(N-j,j)} \right) = 0 , \end{split} \end{equation} where the last identity follows directly from Euler's homogeneous function theorem by recalling that $Q^{(N-j,j)}$ (\ref{eq:Q}) is a homogeneous polynomial of degree $N$, thus satisfying \begin{equation} \mathbf{q} \boldsymbol{\cdot} \nabla Q^{(N-j,j)} = N Q^{(N-j,j)} ,\qquad \forall N,j \in \mathbb N .
\end{equation} To prove that $\{ \mathcal G_{N}, \mathcal I_{0} \} + \{ \mathcal H_{0}, \mathcal J_{N} \} = 0$ for all $N \in \mathbb N$, we first compute \begin{equation} \label{eq:GNI0} \{ \mathcal G_{N}, \mathcal I_{0} \} =\left \{ (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^N, p_2^2 \right \} = 2 \sum_{j=0}^{N-1} (N-j) \binom{N}{j} p_2^{N-j+1} p_1^j \,q_2^{N-j-1} q_1^j , \end{equation} and then \begin{equation} \label{eq:H0JN} \begin{split} \{ \mathcal H_{0}, \mathcal J_{N} \} &= \sum_{j=0}^{\varphi(N)} \left \{ p_1^2 + p_2^2, \, p_2^{N-j} p_1^j Q^{(N-j,j)} \right\} = \sum_{j=0}^{\varphi(N)} p_2^{N-j} p_1^j \left \{ p_1^2 + p_2^2, Q^{(N-j,j)} \right \} \\ &= -2 \sum_{j=0}^{\varphi(N)} \left( p_2^{N-j} p_1^{j+1}\, \frac{\partial Q^{(N-j,j)}}{ \partial q_1} + p_2^{N-j+1} p_1^{j} \, \frac{\partial Q^{(N-j,j)}}{\partial q_2} \right) . \end{split} \end{equation} Equating the coefficients of $p_2^{N-a} p_1^{a+1}$ in \eqref{eq:GNI0} and \eqref{eq:H0JN} we arrive at the equation \begin{equation} \label{xd} \frac{\partial Q^{(N-a,a)}}{\partial q_1} + \frac{\partial Q^{(N-a-1,a+1)}}{\partial q_2} = (N-a-1) \binom{N}{a+1} q_2^{N-a-2} q_1^{a+1} . \end{equation} Since this computation involves the explicit expressions of $Q^{(N-j,j)}$ (\ref{eq:Q}), for the sake of brevity we only present the case when $N$ and $a$ are {\em even} numbers ({the proof for the remaining cases is similar}). Hence $Q^{(N-a,a)}=Q^{(N-a,a)}_{ee}$ and $Q^{(N-a-1,a+1)}=Q^{(N-a-1,a+1)}_{eo}$ given in (\ref{eq:Q_ab}), so that we have \begin{equation} \label{xc} \begin{split} &\!\!\!\! \!\!\!\! \frac{\partial Q^{(N-a,a)}_{ee}}{\partial q_1} + \frac{\partial Q^{(N-a-1,a+1)}_{eo}}{\partial q_2} \\ &= (-1)^{a/2} \sum_{k=0}^{a/2} (-1)^k \left[(-1)^{\frac{N}{2}}\bigg(-(N-2k) \binom{N}{2k} +(2k+1) \binom{N}{2k+1} \bigg) q_1^{N- 2k-1} q_2^{2k} \right. \\ &\qquad\qquad\qquad+2k \left. 
\binom{N}{2k} q_1^{2k-1} q_2^{N-2k} + (N- 2k-1) \binom{N}{2k+1} q_1^{2k+1} q_2^{N-2k-2} \right] \\ &= (-1)^{a/2} \sum_{k=0}^{a/2} (-1)^k \left[ 2k \binom{N}{2k} q_1^{2k-1} q_2^{N-2k} + (N- 2k-1) \binom{N}{2k+1} q_1^{2k+1} q_2^{N-2k-2} \right] \\ &=(N-a-1) \binom{N}{a+1} q_1^{a+1} q_2^{N-a-2} + (-1)^{a/2} \left[ \sum_{k=0}^{a/2} (-1)^k \, 2k \binom{N}{2k} q_1^{2k-1} q_2^{N-2k} \right. \\ &\qquad\qquad\qquad + \left. \sum_{k=0}^{a/2-1} (-1)^k (N-2k-1 ) \binom{N}{2k+1} q_1^{2k+1} q_2^{N-2k-2} \right] , \end{split} \end{equation} where we have used that \be \binom{N}{b}(N-b) = \binom{N}{b+1} (b+1). \label{xl} \ee Therefore, it only remains to prove that the last expression between square brackets in (\ref{xc}) vanishes, since then the relation (\ref{xd}) is fulfilled. We can rewrite this expression as \begin{equation} \begin{split} \sum_{k=0}^{a/2} &(-1)^k \, 2k \binom{N}{2k} q_1^{2k-1} q_2^{N-2k} + \sum_{k=1}^{a/2} (-1)^{k-1} (N- 2k+1) \binom{N}{2k-1} q_1^{2k-1} q_2^{N-2k} \\ &= \sum_{k=1}^{a/2} (-1)^k \left [ 2k \binom{N}{2k} - (N- 2k+1) \binom{N}{2k-1} \right] q_1^{2k-1} q_2^{N-2k} = 0 , \end{split} \end{equation} where we have used again the property (\ref{xl}). Consequently, we have proved that $\mathcal I_N$ (\ref{eq:I}) is an $N$th-order in the momenta integral of the motion for the Hamiltonian $\mathcal H_N$ (\ref{hamN}). \end{proof} By symmetry of the Hamiltonian $\mathcal H_N$ (\ref{hamN}), it is clear that the permutation of indices $1\leftrightarrow 2$ in \eqref{eq:I} provides another integral of the motion $\mathcal I'_N$ which, obviously, is not functionally independent of the three functions presented in Theorem~\ref{teor1}: $\mathcal C$, $\mathcal I_N$ and $\mathcal H_N$. In addition, there exists a relationship among the above four functions. These results are characterised by the following statement.
\begin{proposition} \label{prop1} (i) The Hamiltonian $\mathcal H_N$ (\ref{hamN}) is also endowed with the $N$th-order in the momenta integral of motion given by \begin{equation} \label{eq:II} \mathcal I_N' = p_1^2 + \sum_{n=1}^N \gamma_{n} \sum_{j=0}^{\varphi(n)} p_1^{n-j} p_2^j \, Q^{(n-j,j)}(q_2,q_1), \end{equation} where $\varphi(n)$ is defined by (\ref{a1}) and $Q^{(n-j,j)}(q_2,q_1)$ are the homogeneous polynomials (\ref{eq:Q_ab}) and (\ref{eq:Q}) obtained through the interchange $q_1\leftrightarrow q_2$, that is, $\mathcal I_N'(q_1,p_1,q_2,p_2)=\mathcal I_N(q_2,p_2,q_1,p_1)$ (\ref{eq:I}). The set $\{\mathcal H_N, \mathcal I'_N, \mathcal C\}$ is formed by three functionally independent functions. \noindent (ii) The four functions $\{\mathcal H_N, \mathcal I_N, \mathcal I'_N, \mathcal C\}$ satisfy the relation \begin{equation} \label{a2} \mathcal H_N=\mathcal I_N + \mathcal I_N' + \!\! \sum_{k=1}^{\varphi(N+1)/2} \!\! (-1)^{k} \, \gamma_{2k}\,\mathcal C^{2k} . \end{equation} \end{proposition} \begin{proof} The only non-trivial fact to be proved is that the relationship (\ref{a2}) holds. The procedure is similar to the one performed in the proof of Theorem~\ref{teor1}, and quite cumbersome, so we restrict ourselves to outlining the main steps of the proof. Let us consider the functions $\mathcal G_N$ and $ \mathcal J_N $ (\ref{b1}) along with a new function $ \mathcal J'_N$ related to $\mathcal I_N' $ (\ref{eq:II}) in the form \be \mathcal J'_N = \sum_{j=0}^{\varphi(N)} p_1^{N-j} p_2^j \,Q^{(N-j,j)}(q_2,q_1),\qquad \mathcal I'_N = \mathcal I'_{N-1} + \gamma_N \mathcal J'_N , \label{a4} \ee that is, $ \mathcal J'_N(q_1,p_1,q_2,p_2)=\mathcal J_N(q_2,p_2,q_1,p_1)$. Next, after some long computations, we obtain the following relations according to the parity of $N$: \begin{equation} \label{a5} \begin{array}{ll} \displaystyle{N\ \mbox{\rm even:}} &\quad\displaystyle{ \mathcal G_N=\mathcal J_N + \mathcal J_N' + (-1)^{N/2} \, \mathcal C^{N} } .
\\[4pt] \displaystyle{N\ \mbox{\rm odd:}} &\quad\displaystyle{ \mathcal G_N=\mathcal J_N + \mathcal J_N' } . \end{array} \end{equation} Now we proceed by applying mathematical induction. It is straightforward to show that the relation (\ref{a2}) holds for low values of $N$. Assuming that (\ref{a2}) is valid for $(N-1)$, we shall prove that the relation also holds for $N$, distinguishing the parity of $N$. Firstly, let $N$ be {\em even}. By taking into account (\ref{xa}), the expression (\ref{a2}) for $\mathcal H_{N-1}$ and (\ref{a5}), we obtain that \begin{equation} \label{a6} \begin{split} \mathcal H_{N}&=\mathcal H_{N-1}+\gamma_N \mathcal G_N\\ &=\mathcal I_{N-1} + \mathcal I_{N-1}' + \!\! \sum_{k=1}^{\varphi(N)/2} \!\! (-1)^{k} \, \gamma_{2k}\,\mathcal C^{2k}+ \gamma_N\left(\mathcal J_N + \mathcal J_N' + (-1)^{N/2} \, \mathcal C^{N} \right) . \end{split} \end{equation} The equations (\ref{xa}) and (\ref{a4}) lead to $\mathcal I_{N}$ and $ \mathcal I_{N}'$ in the above result. Since ${\varphi(N+1)/2}=N/2$ we also recover the complete sum in the relation (\ref{a2}) (note that ${\varphi(N)/2}=N/2-1$). And, secondly, let $N$ be {\em odd}. Now we find that \begin{equation} \label{a7} \begin{split} \mathcal H_{N} =\mathcal I_{N-1} + \mathcal I_{N-1}' + \!\! \sum_{k=1}^{\varphi(N)/2} \!\! (-1)^{k} \, \gamma_{2k}\,\mathcal C^{2k}+ \gamma_N\left(\mathcal J_N + \mathcal J_N' \right) . \end{split} \end{equation} In this case, ${\varphi(N) }={\varphi(N+1) }=N-1$, so that we have proven the relation (\ref{a2}). \end{proof} The results of Proposition~\ref{prop1} strongly indicate a quite different behaviour of the Hamiltonian $\mathcal H_N$ (\ref{hamN}) according to the superposition of either even or odd potential terms $\mathcal G_n= (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^n$ determined by the coefficients $\gamma_n$.
In fact, if we only consider {\em odd} terms in the potential, \emph{i.e.}~$\gamma_{2k} = 0$ for all $k \in \mathbb N$, then the relation (\ref{a2}) reduces to \begin{equation} \mathcal H_N= \mathcal I_N + \mathcal I_N' . \end{equation} Consequently, Theorem~\ref{teor1} and Proposition~\ref{prop1} extend the results for the classical Zernike Hamiltonian of Theorem~\ref{teor0} to any arbitrary superposition of momentum-dependent potentials $ (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^n$. In particular, setting $N=2$ we find that \be \mathcal H_{\rm Zk} \equiv\mathcal H_2,\qquad \mathcal I\equiv \mathcal I_2 ,\qquad \mathcal I'\equiv \mathcal I'_2 ,\qquad \mathcal H_2=\mathcal I_2 + \mathcal I_2' - \gamma_{2}\,\mathcal C^{2}, \ee thus recovering, as a particular case of (\ref{a2}) (note that $\varphi(3)/2=1$), the equation (75) in~\cite{PWY2017zernike}. In addition, it is straightforward to express all the above results in polar variables by means of the canonical transformation (\ref{ze}); for instance, the Hamiltonian $\mathcal H_N$ (\ref{hamN}) becomes \be \mathcal H_N=p_r^2 + \frac{p_\phi^2 }{r^2} +\sum_{n=1}^N \gamma_n (r p_r)^n. \label{a8} \ee \subsection{Superintegrable systems on the sphere and the hyperbolic space} \label{s32} Taking into account the interpretation carried out in Section~\ref{s21} for the Zernike system on curved spaces and presented in Proposition~\ref{prop0}, we can now apply the results of Theorem~\ref{teor1} and Proposition~\ref{prop1} to $\mathbf S^2$ and $\mathbf H^2$. This is summarized in the following statement. \begin{proposition} \label{prop2} Let $\{ \rho,\phi, p_\rho ,p_\phi\}$ be a set of canonical geodesic polar variables.
\\ (i) The superintegrable Hamiltonian $\mathcal H_N$ (\ref{hamN}) can be written in these variables on $\mathbf S^2$ and $\mathbf H^2$, with constant Gaussian curvature $\kappa=-\gamma_2\ne 0$, through the canonical transformation (\ref{zl}), namely \be \mathcal H_N = p_\rho^2 + \frac{p_\phi^2 }{ \Sk^2_\kk( \rho)} + \gamma_1 \Tk_\kk( \rho)\,p_\rho + \sum_{n=3}^N \gamma_n\bigl( \Tk_\kk( \rho)\,p_\rho \bigr)^n . \label{a9} \ee The domain for the variables $(\rho, \phi) $ is again given by $\phi\in[ 0, 2\pi)$ and (\ref{zn}). \\ (ii) By means of the canonical transformation (\ref{zl2}), the superintegrable Hamiltonian $\mathcal H_N$ (\ref{hamN}) can alternatively be expressed as \bea && \mathcal H_N=\mathcal T_\kk+\mathcal U_\kk(\rho)+\mathcal V_\kk(\rho,p_\rho) ,\qquad \mathcal T_\kk = p_\rho^2 + \frac{p_\phi^2 }{ \Sk^2_\kk( \rho)} , \nonumber\\ &&\mathcal U_\kk(\rho)= -\frac{\gamma_1^2}{4} \Tk^2_\kk( \rho)+\sum_{n=3}^N (-1)^n \gamma_n\left( \frac{\gamma_1}2\right)^{n}\! \Tk^{2n}_\kk( \rho) , \label{a10}\\ &&\mathcal V_\kk(\rho,p_\rho) = \sum_{n=3}^N\gamma_n \sum_{k=1}^n (-1)^{n-k} \binom{n}{k} \left( \frac{\gamma_1 }2\right)^{{n-k}} \! \Tk^{2n-k}_\kk( \rho)p_\rho ^k, \nonumber \eea that is, $\mathcal T_\kk$ is the kinetic energy (\ref{zo}) on the curved space, $\mathcal U_\kk(\rho)$ is a central potential and $\mathcal V_\kk(\rho,p_\rho) $ is a higher-order momentum-dependent potential. \end{proposition} \begin{proof} It is a direct consequence of Theorem \ref{teor1}, the canonical transformations \eqref{zl} and (\ref{zl2}) along with the definitions \eqref{zi} and \eqref{zj}. \end{proof} The expression (\ref{a9}) shows that the initial Hamiltonian $\mathcal H_N$ (\ref{hamN}) can be seen as a superposition of higher-order momentum-dependent potentials (except for the quadratic term) on $\mathbf S^2$ and $\mathbf H^2$, similarly to the Euclidean case. 
However, in its alternative form (\ref{a10}), $\mathcal H_N$ can be regarded as a superposition of the isotropic 1\,:\,1 curved oscillator with even-order anharmonic curved oscillators~\cite{Ballesteros2007,JNMP2008} within $\mathcal U_\kk(\rho)$ plus another superposition of higher-order momentum-dependent potentials through the term $\mathcal V_\kk(\rho,p_\rho) $. In this respect, it is worth stressing the prominent role played by the coefficient $\gamma_1$ in the expression (\ref{a10}) in contrast to (\ref{a9}). Since $\gamma_1$ is arbitrary, we can set it equal to zero so that both expressions for $\mathcal H_N$ (\ref{a9}) and (\ref{a10}) do coincide (and both canonical transformations (\ref{zl}) and (\ref{zl2}) as well). In this case, $\mathcal U_\kk(\rho)$ and $\mathcal V_\kk(\rho,p_\rho) $ (\ref{a10}) reduce to \be \gamma_1=0\!:\qquad \mathcal U_\kk(\rho)=0,\qquad \mathcal V_\kk(\rho,p_\rho)= \sum_{n=3}^N\gamma_n \! \Tk^{n}_\kk( \rho)p_\rho ^n , \ee and, consequently, there does not exist an alternative interpretation in terms of curved oscillators. We also remark that the flat limit $\kk\to 0$ ({\em i.e.}, $\gamma_2=0$) is well defined in all the results of Proposition~\ref{prop2}, leading to the corresponding expressions in $\mathbf E^2$ in a consistent way. Recall that under this flat limit the geodesic polar coordinates $(\rho,\phi)$ reduce to the usual polar ones $(r,\phi)$ (see (\ref{zi}) and (\ref{zj})). In particular, if we apply the limit $\kk\to 0$ to $\mathcal H_N$ (\ref{a9}) we just recover its form in polar variables (\ref{a8}) with $\gamma_2=0$. And if we now compute the limit $\kk\to 0$ on the expressions (\ref{a10}) we directly obtain that \bea && \mathcal H_N=\mathcal T_0+\mathcal U_0(r)+\mathcal V_0(r,p_r) ,\qquad \mathcal T_0 = p_r^2 + \frac{p_\phi^2 }{ r^2} , \nonumber\\ &&\mathcal U_0(r)= -\frac{\gamma_1^2}{4} \, r^2 +\sum_{n=3}^N (-1)^n \gamma_n \left( \frac{\gamma_1}{2} \right)^n\!
r^{2n} , \label{a110}\\ &&\mathcal V_0(r,p_r) = \sum_{n=3}^N\gamma_n \sum_{k=1}^n (-1)^{n-k} \binom{n}{k} \left( \frac{\gamma_1}{2} \right)^{n-k}\! r^{2n-k} p_r ^k. \nonumber \eea The central potential $\mathcal U_0(r)$ corresponds to a superposition of anharmonic Euclidean oscillators which, in arbitrary dimension, were proposed in~\cite{Ballesteros2007,JNMP2008} from a Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra approach. The same results can also be obtained by applying the flat counterpart of the curved canonical transformation (\ref{zl2}) to $\mathcal H_N$ (\ref{hamN}), so with $\kk=\gamma_2=0$, or by substituting $ p_r = \tilde p_r- {\gamma_1} r/2$ in $\mathcal H_N$ (\ref{a8}) with $\gamma_2=0$ and then removing the tilde in $\tilde p_r$ (see (\ref{zzo})). \sect{Examples and Racah algebra} \label{s4} In this section we illustrate the results of Theorem~\ref{teor1} and Proposition~\ref{prop1} by explicitly writing down the main expressions associated with the Hamiltonian $\mathcal H_N$ (\ref{hamN}) for some values of $N$ and, furthermore, we study the Racah algebra, thus generalizing the cubic `Higgs' algebra (\ref{zd}) to higher-order polynomial algebras. For this purpose we present in Table~\ref{table1} the polynomials $Q^{(N-j,j)}$ \eqref{eq:Q} coming from $Q^{(N-j,j)}_{ab}$ \eqref{eq:Q_ab} which are involved in the constant of the motion $\mathcal I_N$ (\ref{eq:I}) up to $N=8$. Thus the expressions for $\mathcal I_N$ can be obtained straightforwardly and the constant of the motion $\mathcal I'_N$ (\ref{eq:II}) can be deduced simply by interchanging the indices $1\leftrightarrow 2$ in the canonical variables. With this information the general relationship (\ref{a2}) among the four functions $\{\mathcal H_N,\mathcal I_N,\mathcal I'_N, \mathcal C\}$, with $\mathcal C$ given by (\ref{zb}), can be easily checked. These results are displayed in Table~\ref{table2} up to $N=6$.
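As an illustration, the $N=3$ row of Table~\ref{table2} can be verified symbolically. A SymPy sketch (assuming SymPy; the explicit $\mathcal I_3$ below is the one displayed in Table~\ref{table2}):

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
g1, g2, g3 = sp.symbols('gamma1 gamma2 gamma3')

def pb(f, g):
    # Canonical Poisson bracket in (q1, q2; p1, p2)
    return sp.expand(sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
                         for q, p in [(q1, p1), (q2, p2)]))

D = q1*p1 + q2*p2
C = q1*p2 - q2*p1
H3 = p1**2 + p2**2 + g1*D + g2*D**2 + g3*D**3

# I_3 as displayed in Table 2
I3 = (p2**2 + g1*q2*p2 + g2*(q1**2 + q2**2)*p2**2
      + g3*(q2**3*p2**3 + (q1**3 + 3*q1*q2**2)*p2**2*p1 - q2**3*p2*p1**2))
# I_3' follows from the interchange of indices 1 <-> 2
I3p = I3.subs({q1: q2, q2: q1, p1: p2, p2: p1}, simultaneous=True)

assert pb(H3, I3) == 0        # I_3 is a third-order integral
assert pb(H3, I3p) == 0       # and so is I_3'
assert pb(H3, C) == 0         # angular momentum
assert sp.expand(H3 - (I3 + I3p - g2*C**2)) == 0   # relation (a2) for N = 3
print('Table 2 row N = 3 verified')
```

The same script, fed with the polynomials of Table~\ref{table1} for higher $N$, checks the remaining rows in an identical fashion.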
As an additional relevant property of $\mathcal H_N$ (\ref{hamN}), let us also construct its corresponding Racah algebra, understood as the algebra closed by its constants of the motion. From $\{\mathcal I_N,\mathcal I'_N, \mathcal C\}$ we define the following constants of the motion similarly to (\ref{zzd}) (so following~\cite{PWY2017zernike}): \be \mathcal L_1:= \mathcal C/2, \qquad \mathcal L_2:=\bigl(\mathcal I'_N - \mathcal I_N \bigr)/2 , \qquad \mathcal L_3:= \{ \mathcal L_1,\mathcal L_2\}. \ee Although we have not been able to deduce a general and closed expression for the Racah algebra for arbitrary $N$, which remains an open problem, we have found that the above three constants of the motion satisfy the following generic Poisson brackets up to $N=8$: \be \{ \mathcal L_1,\mathcal L_2\}=\mathcal L_3, \qquad \{ \mathcal L_1,\mathcal L_3\}=-\mathcal L_2, \qquad \{ \mathcal L_2, \mathcal L_3\}=\sum_{k=0}^{N-1} \mathcal F_k(\gamma_n, \mathcal H_N) \mathcal L_1^{2k+1} ,\quad\ 1\le N\le 8, \label{a13} \ee where $ \mathcal F_k$ is a polynomial function depending on some coefficients belonging to the set $\{\gamma_1,\dots, \gamma_N\}$ and sometimes on $\mathcal H_N$. Therefore, our conjecture is that for arbitrary $N$ the polynomial algebra (\ref{a13}) is of $(2N-1)$th order and the well-known cubic Higgs algebra is recovered, as already shown, for the proper Zernike system with $N=2$ in (\ref{zd}). The explicit expressions for the Poisson bracket $ \{ \mathcal L_2, \mathcal L_3\}$ are also written in Table~\ref{table2} up to $N=6$. \begin{table}[t] {\small \caption{\small Polynomials $Q^{(N-j,j)}$ \eqref{eq:Q} from $Q^{(N-j,j)}_{ab}$ \eqref{eq:Q_ab} appearing in the constant of the motion $\mathcal I_N$ (\ref{eq:I}) of the Hamiltonian $\mathcal H_N$ (\ref{hamN}) up to $N=8$.
All of them are homogeneous polynomials of degree $N$.} \label{table1} \noindent \begin{tabular}{l l} \\[-0.2cm] \hline \hline \\[-0.2cm] $\bullet$ $N=1\quad \varphi(1)=0$: & $Q^{(1,0)}=q_2$ \\[0.25cm] $\bullet$ $N=2\quad \varphi(2)=0$: & $Q^{(2,0)}=q_1^2+q_2^2$ \\[0.25cm] $\bullet$ $N=3\quad \varphi(3)=2$: & $Q^{(3,0)}= q_2^3$\qquad $Q^{(2,1)}= q_1^3+ 3 q_1 q_2^2$\qquad $Q^{(1,2)}= -q_2^3$ \\[0.25cm] $\bullet$ $N=4\quad \varphi(4)=2$: & $Q^{(4,0)}= -q_1^4+ q_2^4$\qquad $Q^{(3,1)}= 4\bigl( q_1^3 q_2+ q_1 q_2^3 \bigr)$\qquad $Q^{(2,2)}=q_1^4-q_2^4$ \\[0.25cm] $\bullet$ $N=5\quad \varphi(5)=4$: & $Q^{(5,0)}= q_2^5$\qquad $Q^{(4,1)}= -q_1^5 + 5 q_1 q_2^4$\qquad $Q^{(3,2)}=5 q_1^4 q_2+ 10 q_1^2 q_2^3-q_2^5$ \\[0.20cm] & $Q^{(2,3)}= q_1^5- 5 q_1 q_2^4$\qquad $Q^{(1,4)}= q_2^5$ \\[0.25cm] $\bullet$ $N=6\quad \varphi(6)=4$: & $Q^{(6,0)}= q_1^6 + q_2^6$\quad $Q^{(5,1)}=-6\bigl( q_1^5 q_2- q_1 q_2^5\bigr)$ \ \ $Q^{(4,2)}=- q_1^6+ 15 \bigl( q_1^4 q_2^2+ q_1^2q_2^4\bigr) - q_2^6$ \\[0.20cm] & $Q^{(3,3)}= 6 \bigl( q_1^5 q_2 - q_1q_2^5\bigr) $\qquad $Q^{(2,4)}= q_1^6 + q_2^6$ \\[0.25cm] $\bullet$ $N=7\quad \varphi(7)=6$: & $Q^{(7,0)}= q_2^7$\qquad $Q^{(6,1)}= q_1^7+7q_1 q_2^6$ \qquad $Q^{(5,2)}=- 7 q_1^6 q_2 + 21 q_1^2 q_2^5- q_2^7 $ \\[0.2cm] & $Q^{(4,3)}= -q_1^7 - 7 q_1 q_2^6 + 21 q_1^5q_2^2 + 35 q_1^3 q_2^4$\qquad $Q^{(3,4)}= 7 q_1^6q_2- 21 q_1^2 q_2^5+ q_2^7$ \\[0.22cm] & $Q^{(2,5)}= q_1^7 + 7 q_1 q_2^6 $ \qquad $Q^{(1,6)}=-q_2^7 $ \\[0.25cm] $\bullet$ $N=8\quad \varphi(8)=6$: & $Q^{(8,0)}= -q_1^8 + q_2^8$\quad $Q^{(7,1)}= 8\bigl( q_1^7q_2+ q_1 q_2^7 \bigr)$ \quad $Q^{(6,2)}= q_1^8 - 28 \bigl( q_1^6q_2^2-q_1^2 q_2^6 \bigr) - q_2^8 $ \\[0.2cm] & $Q^{(5,3)}= -8\bigl( q_1^7q_2+q_1 q_2^7\bigr) +56\bigl( q_1^5 q_2^3 + q_1^3q_2^5\bigl)$\quad $Q^{(4,4)}= - q_1^8 +28\bigl( q_1^6 q_2^2 - q_1^2q_2^6\bigl) + q_2^8 $ \\[0.22cm] & $Q^{(3,5)}= 8\bigl( q_1^7q_2 + q_1 q_2^7\bigr) $ \qquad $Q^{(2,6)}=q_1^8 - q_2^8 $ \\[0.25cm] \hline \hline \end{tabular} } \end{table} \begin{table}[htp] 
{\small \caption{\footnotesize The constant of the motion $\mathcal I_N$ (\ref{eq:I}) of the superintegrable Hamiltonian $\mathcal H_N$ (\ref{hamN}) from the polynomials written in Table~\ref{table1} up to $N=6$. Recall that $\mathcal I'_N$ (\ref{eq:II}) comes from the interchange of indices $1\leftrightarrow 2$ in the canonical variables while $\mathcal C = q_1 p_2 - q_2 p_1 $ (\ref{zb}). The relation (\ref{a2}) among the four functions $\{H_N,\mathcal I_N,\mathcal I'_N, \mathcal C\}$ is also displayed together with the Poisson bracket $\{ \mathcal L_2, \mathcal L_3\}$ (\ref{a13}) that determines a $(2N-1)$th-order polynomial Racah algebra.} \label{table2} \noindent \begin{tabular}{l l} \\[-0.2cm] \hline \hline \\[-0.2cm] $\bullet$ & \!\!\!\! \!\!\!\! $\mathcal I_1=p_2^2+\gamma_1 Q^{(1,0)} p_2=p_2^2+\gamma_1 q_2 p_2 $\qquad $\mathcal H_1=\mathcal I_1 + \mathcal I_1' $\qquad $ \{ \mathcal L_2, \mathcal L_3\}= -\gamma_1^2 \mathcal L_1 $ \\[8pt] $\bullet$ & \!\!\!\! \!\!\!\! $\mathcal I_2=p_2^2+\gamma_1 Q^{(1,0)} p_2+ \gamma_2 Q^{(2,0)} p_2^2 =p_2^2+\gamma_1 q_2 p_2 + \gamma_2 (q_1^2+q_2^2)p_2^2 $ \\[4pt] & \!\!\!\! \!\!\!\! $ \mathcal H_2=\mathcal I_2 + \mathcal I_2' - \gamma_{2}\,\mathcal C^{2}$\qquad $ \{ \mathcal L_2, \mathcal L_3\}=-\left( \gamma_1^2 + 2 \gamma_2 \mathcal H_2\right)\mathcal L_1- \gamma_2^2 (2 \mathcal L_1 )^3 $ \\[8pt] $\bullet$ & \!\!\!\! \!\!\!\! $\mathcal I_3=p_2^2+\gamma_1 Q^{(1,0)} p_2+ \gamma_2 Q^{(2,0)} p_2^2 + \gamma_3\left(Q^{(3,0)} p_2^3+Q^{(2,1)} p_2^2p_1+ Q^{(1,2)} p_2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\! $\quad\,=p_2^2+\gamma_1 q_2 p_2 + \gamma_2 (q_1^2+q_2^2)p_2^2 + \gamma_3\bigl( q_2^3 p_2^3+(q_1^3+ 3 q_1 q_2^2) p_2^2p_1-q_2^3 p_2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! 
$ \mathcal H_3=\mathcal I_3+ \mathcal I_3' - \gamma_{2}\,\mathcal C^{2}$\qquad $ \{ \mathcal L_2, \mathcal L_3\}=-\left( \gamma_1^2 + 2 \gamma_2 \mathcal H_3\right)\mathcal L_1- \left( \gamma_2^2- 2 \gamma_1\gamma_3\right) (2 \mathcal L_1 )^3 - \tfrac 32 \gamma_3^2 (2 \mathcal L_1 )^5$ \\[8pt] $\bullet$ & \!\!\!\! \!\!\!\! $\mathcal I_4=p_2^2+\gamma_1 Q^{(1,0)} p_2+ \gamma_2 Q^{(2,0)} p_2^2 + \gamma_3\left(Q^{(3,0)} p_2^3+Q^{(2,1)} p_2^2p_1+ Q^{(1,2)} p_2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_4\left(Q^{(4,0)} p_2^4+Q^{(3,1)} p_2^3p_1+ Q^{(2,2)} p_2^2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\quad\,=p_2^2+\gamma_1 q_2 p_2 + \gamma_2 (q_1^2+q_2^2)p_2^2 + \gamma_3\bigl( q_2^3 p_2^3+(q_1^3+ 3 q_1 q_2^2) p_2^2p_1-q_2^3 p_2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_4 \bigl( ( q_2^4-q_1^4) p_2^4+4 ( q_1^3 q_2+ q_1 q_2^3 ) p_2^3p_1+ (q_1^4-q_2^4) p_2^2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $ \mathcal H_4=\mathcal I_4+ \mathcal I_4' - \gamma_{2}\,\mathcal C^{2}+\gamma_{4}\,\mathcal C^{4}$ \\[4pt] & \!\!\!\! \!\!\!\! $ \{ \mathcal L_2, \mathcal L_3\}=-\big( \gamma_1^2 + 2 \gamma_2 \mathcal H_4\big)\mathcal L_1- \big( \gamma_2^2- 2 \gamma_1\gamma_3- 2\gamma_4 \mathcal H_4\big) (2\mathcal L_1)^3 - \tfrac 32 \big( \gamma_3^2 -2\gamma_2\gamma_4 \big)(2 \mathcal L_1 )^5- 2\gamma_4^2(2 \mathcal L_1 )^7 $ \\[8pt] $\bullet$ & \!\!\!\! \!\!\!\! $\mathcal I_5=p_2^2+\gamma_1 Q^{(1,0)} p_2+ \gamma_2 Q^{(2,0)} p_2^2 + \gamma_3\left(Q^{(3,0)} p_2^3+Q^{(2,1)} p_2^2p_1+ Q^{(1,2)} p_2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_4\left(Q^{(4,0)} p_2^4+Q^{(3,1)} p_2^3p_1+ Q^{(2,2)} p_2^2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_5\left(Q^{(5,0)} p_2^5+Q^{(4,1)} p_2^4p_1+ Q^{(3,2)} p_2^3p_1^2+ Q^{(2,3)} p_2^2p_1^3+ Q^{(1,4)} p_2 p_1^4 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! 
$\quad\,=p_2^2+\gamma_1 q_2 p_2 + \gamma_2 (q_1^2+q_2^2)p_2^2 + \gamma_3\bigl( q_2^3 p_2^3+(q_1^3+ 3 q_1 q_2^2) p_2^2p_1-q_2^3 p_2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_4 \bigl( ( q_2^4-q_1^4) p_2^4+4 ( q_1^3 q_2+ q_1 q_2^3 ) p_2^3p_1+ (q_1^4-q_2^4) p_2^2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_5 \bigl( q_2^5 p_2^5+ ( 5 q_1 q_2^4 -q_1^5 ) p_2^4p_1+ (5 q_1^4 q_2+ 10 q_1^2 q_2^3-q_2^5)p_2^3p_1^2+ ( q_1^5- 5 q_1 q_2^4)p_2^2p_1^3+ q_2^5p_2 p_1^4\bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $ \mathcal H_5=\mathcal I_5+ \mathcal I_5' - \gamma_{2}\,\mathcal C^{2}+\gamma_{4}\,\mathcal C^{4}$ \\[4pt] & \!\!\!\! \!\!\!\! $ \{ \mathcal L_2, \mathcal L_3\}=-\big( \gamma_1^2 + 2 \gamma_2 \mathcal H_5\big) \mathcal L_1- \big( \gamma_2^2- 2 \gamma_1\gamma_3- 2\gamma_4 \mathcal H_5\big) (2\mathcal L_1)^3 - \tfrac 32 \big( \gamma_3^2 -2\gamma_2\gamma_4 + 2\gamma_1\gamma_5\big)(2 \mathcal L_1 )^5 $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\qquad\qquad - 2\big(\gamma_4^2 - 2\gamma_3\gamma_5\big)(2 \mathcal L_1 )^7 -\tfrac 52\, \gamma_5^2 (2 \mathcal L_1 )^9 $\ \\[8pt] $\bullet$ & \!\!\!\! \!\!\!\! $\mathcal I_6=p_2^2+\gamma_1 Q^{(1,0)} p_2+ \gamma_2 Q^{(2,0)} p_2^2 + \gamma_3\left(Q^{(3,0)} p_2^3+Q^{(2,1)} p_2^2p_1+ Q^{(1,2)} p_2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_4\left(Q^{(4,0)} p_2^4+Q^{(3,1)} p_2^3p_1+ Q^{(2,2)} p_2^2p_1^2 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_5\left(Q^{(5,0)} p_2^5+Q^{(4,1)} p_2^4p_1+ Q^{(3,2)} p_2^3p_1^2+ Q^{(2,3)} p_2^2p_1^3+ Q^{(1,4)} p_2 p_1^4 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_6\left(Q^{(6,0)} p_2^6+Q^{(5,1)} p_2^5p_1+ Q^{(4,2)} p_2^4p_1^2+ Q^{(3,3)} p_2^3p_1^3+ Q^{(2,4)} p_2^2 p_1^4 \right) $ \\[4pt] & \!\!\!\! \!\!\!\! $\quad\,=p_2^2+\gamma_1 q_2 p_2 + \gamma_2 (q_1^2+q_2^2)p_2^2 + \gamma_3\bigl( q_2^3 p_2^3+(q_1^3+ 3 q_1 q_2^2) p_2^2p_1-q_2^3 p_2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! 
$\qquad\quad + \gamma_4 \bigl( ( q_2^4-q_1^4) p_2^4+4 ( q_1^3 q_2+ q_1 q_2^3 ) p_2^3p_1+ (q_1^4-q_2^4) p_2^2p_1^2 \bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_5 \bigl( q_2^5 p_2^5+ ( 5 q_1 q_2^4 -q_1^5 ) p_2^4p_1+ (5 q_1^4 q_2+ 10 q_1^2 q_2^3-q_2^5)p_2^3p_1^2+ ( q_1^5- 5 q_1 q_2^4)p_2^2p_1^3+ q_2^5p_2 p_1^4\bigr) $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\quad + \gamma_6 \big( (q_1^6 + q_2^6)p_2^6-6 ( q_1^5 q_2- q_1 q_2^5 ) p_2^5p_1+ \big (15 ( q_1^4 q_2^2+ q_1^2q_2^4) - q_1^6- q_2^6\big)p_2^4p_1^2 $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\qquad\qquad\quad + 6 ( q_1^5 q_2 - q_1q_2^5 ) p_2^3p_1^3+ (q_1^6 + q_2^6)p_2^2 p_1^4 \big) $ \\[4pt] & \!\!\!\! \!\!\!\! $ \mathcal H_6=\mathcal I_6+ \mathcal I_6' - \gamma_{2}\,\mathcal C^{2}+\gamma_{4}\,\mathcal C^{4}-\gamma_{6}\,\mathcal C^{6}$ \\[4pt] & \!\!\!\! \!\!\!\! $ \{ \mathcal L_2, \mathcal L_3\}=-\big( \gamma_1^2 + 2 \gamma_2 \mathcal H_6\big) \mathcal L_1- \big( \gamma_2^2- 2 \gamma_1\gamma_3- 2\gamma_4 \mathcal H_6\big) (2\mathcal L_1)^3 - \tfrac 32 \big( \gamma_3^2 -2\gamma_2\gamma_4 + 2\gamma_1\gamma_5+2\gamma_6 \mathcal H_6 \big)(2 \mathcal L_1 )^5 $ \\[4pt] & \!\!\!\! \!\!\!\! $\qquad\qquad\qquad - 2\big(\gamma_4^2 - 2\gamma_3\gamma_5+ 2\gamma_2\gamma_6\big)(2 \mathcal L_1 )^7 -\tfrac 52\big( \gamma_5^2-2\gamma_4\gamma_6 \big)(2 \mathcal L_1 )^9- 3 \gamma_6^2(2 \mathcal L_1 )^{11} $ \\[8pt] \hline \hline \end{tabular} } \end{table} \newpage \clearpage \pagebreak[4] \sect{Superintegrable perturbations of the classical Zernike system} \label{s5} So far, we have proven the superintegrability property of the Hamiltonian $\mathcal H_N$ (\ref{hamN}) on $\mathbf E^2$ in Theorem~\ref{teor1} and, then, established a natural interpretation of these results on $\mathbf S^2$ and $\mathbf H^2$ in Proposition~\ref{prop2}. Let us now focus on the original classical Zernike system. 
The proper Zernike system $\mathcal H_{\rm Zk} $ (\ref{za}) arises by setting $N=2$ in $\mathcal H_N$ (\ref{hamN}) such that the coefficient $\gamma_1$ is a pure imaginary number while $\gamma_2$ is real~\cite{PWY2017zernike}. In this section let us set \be \gamma_1=2{\rm i}\omega, \quad \omega\in \mathbb R,\qquad \gamma_2=-\kk, \quad \kk\in \mathbb R. \label{a14} \ee Then $\mathcal H_{\rm Zk} $ (\ref{za}) becomes \be \mathcal H_{\rm Zk}= \mathbf{p}^2 +2{\rm i}\omega (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})-\kk (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^2 . \label{a15} \ee Thus $ \mathcal H_{\rm Zk}$ can be seen as the superposition of a linear momentum-dependent imaginary potential and a real quadratic one on $\mathbf E^2$, or as a single linear momentum-dependent imaginary potential on $\mathbf S^2$ $(\kk>0)$ and $\mathbf H^2$ $(\kk<0)$ with kinetic energy given by $\mathbf{p}^2-\kk (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^2 $. Hence on these curved spaces $(q_1,q_2)$ can be thought of as projective coordinates. Recall that the problem of dealing with such an imaginary potential was already analyzed and solved in~\cite{PWY2017zernike}. In fact, if we apply the canonical transformation (\ref{zl2}) to (\ref{a15}) with the identification (\ref{a14}), we obtain a real Hamiltonian (\ref{zo}) reading as \be \mathcal H_{\rm Zk} =\mathcal T_\kk+\mathcal U_\kk(\rho) ,\qquad \mathcal T_\kk = p_\rho^2 + \frac{p_\phi^2 }{ \Sk^2_\kk( \rho)} , \qquad \mathcal U_\kk(\rho)= \omega^2 \Tk^2_\kk( \rho) , \label{a16} \ee reproducing the isotropic 1\,:\,1 curved (Higgs) oscillator on $\mathbf S^2$ and $\mathbf H^2$ with frequency $\omega$ as discussed after Proposition~\ref{prop0}. Since $\mathcal H_{\rm Zk} $ determines a superintegrable system, all bounded trajectories are periodic and, in this case, correspond to ellipses, that is, to Lissajous 1\,:\,1 curves~\cite{PWY2017zernike,BaHeMu13,Kuruannals}.
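The $N=2$ structure just used, together with the corresponding entry of Table~\ref{table2}, can be verified symbolically. The following fragment is an illustrative sketch (not part of the derivation, and assuming {\tt sympy} is available); it keeps $\gamma_1$ and $\gamma_2$ as generic symbols, so it covers in particular the Zernike identification (\ref{a14}):

```python
import sympy as sp

q1, q2, p1, p2, g1, g2 = sp.symbols('q1 q2 p1 p2 gamma1 gamma2')

def pb(f, g):
    # canonical Poisson bracket {f, g} in the variables (q1, q2; p1, p2)
    return sp.expand(sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
                         for q, p in [(q1, p1), (q2, p2)]))

J = q1*p1 + q2*p2                        # q . p
H2 = p1**2 + p2**2 + g1*J + g2*J**2      # Hamiltonian H_N for N = 2
I2 = p2**2 + g1*q2*p2 + g2*(q1**2 + q2**2)*p2**2           # from Table 2
I2p = I2.subs({q1: q2, q2: q1, p1: p2, p2: p1}, simultaneous=True)
C = q1*p2 - q2*p1

# I_2, I_2' and C are constants of the motion of H_2
for const in (I2, I2p, C):
    assert sp.simplify(pb(H2, const)) == 0

# the relation H_2 = I_2 + I_2' - gamma_2 C^2
assert sp.simplify(H2 - (I2 + I2p - g2*C**2)) == 0

# cubic (Higgs-type) Racah algebra for N = 2
L1, L2 = C/2, (I2p - I2)/2
L3 = pb(L1, L2)
assert sp.simplify(pb(L1, L3) + L2) == 0
assert sp.simplify(pb(L2, L3) + (g1**2 + 2*g2*H2)*L1 + g2**2*(2*L1)**3) == 0
```

Substituting $\gamma_1\to 2{\rm i}\omega$ and $\gamma_2\to-\kk$ then reproduces the Zernike case (\ref{a15}).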
Such trajectories can be drawn directly from the expression (\ref{a16}) or by considering their real part from (\ref{a15}). From this viewpoint, if we add some $\gamma_N$-potentials with $N\ge 3$ to $ \mathcal H_{\rm Zk}$ either in the form $\mathcal H_N$ (\ref{hamN}) or in (\ref{a10}), we obtain imaginary and real superintegrable perturbations of $\mathcal H_{\rm Zk} $. For instance, if we consider a single $\gamma_3$-potential, we find from (\ref{a15}) a cubic superintegrable perturbation given by \be \mathcal H_3= \mathcal H_{\rm Zk}+ \gamma_3 (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^3, \label{a17} \ee while from (\ref{a16}) it adopts the following more cumbersome expression \be \mathcal H_3= \mathcal H_{\rm Zk}+{\rm i}\gamma_3\, \omega^3 \Tk^6_\kk( \rho)-\gamma_3\big(3 \omega^2 \Tk^5_\kk( \rho)p_\rho+ 3{\rm i}\omega \Tk^4_\kk( \rho)p_\rho^2- \Tk^3_\kk( \rho)p_\rho^3 \big). \label{a18} \ee The central potential determined by $\!\Tk^6_\kk( \rho)$ is real whenever $\gamma_3$ is a pure imaginary number. In this case, if one computes the real part of the trajectories either from (\ref{a17}) or from (\ref{a18}), one finds bounded trajectories which `deform' the ellipses associated with the initial Zernike system. Some of them are drawn in Fig.~1 with $\kk=+1$ for some imaginary values of $\gamma_3$ in the projective plane $(q_1,q_2)$ (so on the sphere). Similar trajectories arise for $\kk=-1$ (thus on $\mathbf H^2$). Likewise, we can consider a quartic perturbation with $\gamma_3=0$ and $\gamma_4\ne 0$, that is, \be \mathcal H_4= \mathcal H_{\rm Zk}+ \gamma_4 (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^4, \label{a19} \ee which in geodesic polar variables turns out to be \be \mathcal H_4= \mathcal H_{\rm Zk}+ \gamma_4\, \omega^4 \Tk^8_\kk( \rho)+\gamma_4\big(4 {\rm i}\omega^3 \Tk^7_\kk( \rho)p_\rho-6 \omega^2 \Tk^6_\kk( \rho)p_\rho^2-4{\rm i}\omega \Tk^5_\kk( \rho)p_\rho^3+ \Tk^4_\kk( \rho)p_\rho^4 \big).
\label{a20} \ee Then the central potential associated with $\Tk^8_\kk( \rho)$ is real if $\gamma_4\in\mathbb R$. The real parts of the corresponding trajectories with $\kk=+1$ are shown in Fig.~2 for some real values of $\gamma_4$ in the projective plane $(q_1,q_2)$. \bigskip \begin{figure}[H] \begin{center} \includegraphics[scale=0.72]{gamma3positivo.pdf} \hspace{1cm} \includegraphics[scale=0.72]{gamma3negativo.pdf} \caption{\small Plots of the real part of trajectories from the cubic perturbation of the Zernike system $ \mathcal H_3$ (\ref{a17}) with $\kk=+1$.} \end{center} \end{figure} \vskip-0.25cm \begin{figure}[H] \begin{center} \includegraphics[scale=0.72]{gamma4positivo.pdf} \hspace{1cm} \includegraphics[scale=0.72]{gamma4negativo.pdf} \\ \caption{\small Plots of the real part of trajectories from the quartic perturbation of the Zernike system $\mathcal H_4$ (\ref{a19}) with $\kk=+1$.} \end{center} \end{figure} From the expression (\ref{a10}) one can easily check that the central potential $\mathcal U_\kk(\rho)$ with $\gamma_1$ given by (\ref{a14}) and with a single parameter $\gamma_N\ne 0$ ($N\ge 3$) is a real potential according to the parity of $N$: $\gamma_N$ must be a pure imaginary number when $N$ is odd, while $\gamma_N\in \mathbb R$ when $N$ is even. In these cases, the real part of the trajectory turns out to be bounded. We illustrate this fact by drawing the fifth-order perturbation of the Zernike system in Fig.~3 and the sixth-order perturbation in Fig.~4 with $\kk=+1$ and again in the projective plane $(q_1,q_2)$.
\bigskip \begin{figure}[H] \begin{center} \includegraphics[scale=0.71]{gamma5positivo.pdf} \hspace{1cm} \includegraphics[scale=0.71]{gamma5negativo.pdf} \\ \caption{\small Plots of the real part of trajectories from the fifth-order perturbation of the Zernike system obtained from $\mathcal H_5$ (\ref{hamN}) under the identification (\ref{a14}) with $\kk=+1$ and with a single term $\gamma_5\ne0$.} \label{fig:N5} \end{center} \end{figure} \vskip-0.25cm \begin{figure}[H] \begin{center} \includegraphics[scale=0.72]{gamma6positivo.pdf} \hspace{1cm} \includegraphics[scale=0.72]{gamma6negativo.pdf} \\ \caption{\small Plots of the real part of trajectories from the sixth-order perturbation of the Zernike system obtained from $\mathcal H_6$ (\ref{hamN}) under the identification (\ref{a14}) with $\kk=+1$ and with a single term $\gamma_6\ne0$.} \label{fig:N6} \end{center} \end{figure} The superintegrable properties of the four particular perturbations of the Zernike system considered here can be extracted straightforwardly from the general results presented in Table~\ref{table2}, since it covers the cases with $N\le 6$. Obviously, one can always construct superpositions of different higher-order perturbations of the Zernike system. \sect{Conclusions and outlook} Throughout this work we have constructed a new class of higher-order superintegrable momentum-dependent Hamiltonians, summarized in Theorem~\ref{teor1}, which allows for an arbitrary superposition of potentials beyond the linear and quadratic momentum-dependent ones. Moreover, these systems have not only been interpreted on the 2D Euclidean plane $\mathbf E^2$ but also on the sphere $\mathbf S^2$ and the hyperbolic plane $\mathbf H^2$ in Proposition~\ref{prop2}.
The corresponding higher-order momentum-dependent constants of the motion have been explicitly written and some algebraic properties have also been studied, such as the relationship among the constants of the motion in Proposition~\ref{prop1} and the Racah algebra in Section~\ref{s4}. It is worth recalling that the cornerstone of our construction is based on the superintegrable classical Zernike system~\cite{PWY2017zernike} described in Theorem~\ref{teor0} together with its underlying Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry, presented in Section~\ref{s22}, which holds for $\mathcal H_N$ (\ref{zz}) for any $N$. From the latter property, four open problems naturally arise which could be addressed in order to generalize $\mathcal H_N$ and the results of Theorem~\ref{teor1}: \begin{itemize} \item If we consider arbitrary real parameters $\lambda_i$ $(i=1,2)$ in the symplectic realization (\ref{zv}), we obtain a new integrable Hamiltonian $\mathcal H_{\lambda,N}$ generalizing the superintegrable $\mathcal H_N$ (\ref{zz}) via a superposition with a potential $\mathcal W_{\lambda}(q_1,q_2)$ as \be \mathcal H_{\lambda,N}=\mathcal H_N+\mathcal W_{\lambda}(q_1,q_2)=\mathbf{p}^2 + \sum_{n=1}^N \gamma_n (\mathbf{q} \boldsymbol{\cdot} \mathbf{p})^n+\frac {\otra_1}{ q_1^2} +\frac {\otra_2}{ q_2^2}\, , \label{y1} \ee which is always endowed with the constant of the motion given by $ {C}^{(2)}$ (\ref{zw}). In $\mathbf E^2$, with $(q_1,q_2)$ identified with Cartesian coordinates, the $\lambda_i$-terms are `centrifugal' (or Rosochatius--Winternitz) potentials which provide centrifugal barriers when both constants are positive, thus restricting the trajectories to some quadrants of the Euclidean plane.
In geodesic polar variables (\ref{zl2}) the additional potential $\mathcal W_{\lambda}$ becomes \be \mathcal W_{\lambda}(\rho,\phi)=\frac {\otra_1}{ \Sk^2_\kk( \rho)\cos^2\phi} +\frac {\otra_2}{ \Sk^2_\kk( \rho) \sin^2\phi}\, , \ee which can be interpreted as two {\em noncentral} 1\,:\,1 isotropic curved oscillators on $\mathbf S^2$ or as centrifugal barriers on $\mathbf H^2$ when both $\lambda_i>0$~\cite{BaHeMu13}. \item The $\mathfrak{sl}(2,\mathbb R)$-coalgebra symmetry~\cite{Ballesteros2007,BBHMR2009} directly leads to the following quasi-maximally superintegrable generalization of the Hamiltonian (\ref{zz}) in arbitrary dimension $d$: \be \mathcal H^{(d)}_{\lambda,N}=J_+^{(d)} + \sum_{n=1}^N \gamma_n \left(J_3^{(d)}\right)^n =\sum_{i=1}^d p_i^2 + \sum_{n=1}^N \gamma_n\left( \sum_{i=1}^d q_i p_i\right)^n+\sum_{i=1}^d \frac {\otra_i}{ q_i^2} \, ,\qquad d\ge 2, \label{y2} \ee which, by construction, is endowed with $(2d-3)$ functionally independent `universal' constants of the motion~\cite{Ballesteros2007,BBHMR2009,Latini2019,Latini2021} and can be further interpreted on either $\mathbf E^d$, $\mathbf S^d$ or $\mathbf H^d$. \item The Hamiltonian $\mathcal H^{(d)}_{\lambda,N}$ (\ref{y2}) can also be generalized to spaces of nonconstant curvature through (non-deformed) Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra spaces following~\cite{AEHR2007} which would allow several possibilities for a generalized momentum-dependent potential. \item And, finally, the last possible generalization is to consider Poisson--Hopf algebra deformations of $\mathfrak{sl}(2,\mathbb R)$~\cite{BBHMR2009,AHR2005,RBHM2007} which convey an additional quantum deformation parameter $q={\rm e}^z$ giving rise to a deformed classical Hamiltonian $\mathcal H^{(d)}_{z,\lambda,N}$ such that $\lim_{z\to 0} \mathcal H^{(d)}_{z,\lambda,N}= \mathcal H^{(d)}_{\lambda,N}$. 
In this case, the deformation parameter $z$ would determine superintegrable perturbations of the initial (undeformed) Hamiltonian (\ref{y2}). \end{itemize} The crucial point to solve any of the above four problems is to obtain the corresponding generalized counterpart of the constant of the motion $ \mathcal I_N$ (\ref{eq:I}), since both the coalgebra and deformed coalgebra symmetries ensure the existence of $(2d-3)$ functionally independent constants of the motion. Clearly, these tasks are by no means trivial. In contrast to the previous (open) discussion, it might be straightforward to apply the results of Theorem~\ref{teor1} and Proposition~\ref{prop2} to the three (1+1)D Lorentzian spacetimes of constant curvature, {\em i.e.}, the Minkowskian and (anti-)de Sitter spacetimes. The procedure requires incorporating a second `contraction' parameter, say $\kk_2$, beyond the curvature of the space $\kk\equiv \kk_1$, depending on the speed of light $c$ as $\kk_2=-1/c^2$~\cite{conf,trigo}, which could be performed by analytic continuation. Therefore, the `additional' constant of the motion $ \mathcal I_N$ (\ref{eq:I}) would formally hold but now in a Riemannian--Lorentzian form, so that no further cumbersome computations would be needed. For instance, under this approach the Zernike system written as the natural Hamiltonian given in Proposition~\ref{prop0} in geodesic polar variables (\ref{zo}), with $\gamma_1=2 {\rm i} \omega$, turns out to be \be \mathcal H_{{\rm Zk},\kk_1,\kk_2} =\mathcal T_{\kk_1,\kk_2}+\mathcal U_{\kk_1}(\rho) ,\qquad \mathcal T_{\kk_1,\kk_2} = p_\rho^2 + \frac{p_\phi^2 }{\kk_2\Sk^2_{\kk_1}( \rho)} , \qquad \mathcal U_{\kk_1}(\rho)=\omega^2 \Tk^2_{\kk_1}( \rho) , \label{y4} \ee where $\mathcal T_{\kk_1,\kk_2}$ is the kinetic energy on the curved space and $\mathcal U_{\kk_1}(\rho)$ is the 1\,:\,1 isotropic curved oscillator.
Hence, for $\kk_2=+1$ $(c={\rm i} )$ the results here presented for the three Riemannian spaces of constant curvature would be recovered, while for $\kk_2<0$ ($c$ finite), new results concerning Lorentzian spacetimes would be obtained. We recall that the Hamiltonian (\ref{y4}) has been studied in depth in~\cite{HB2006} in (2+1) dimensions (see also~\cite{Petrosian1} for the specific anti-de Sitter case). To conclude, we would like to comment on what, in our opinion, is the main open problem of this work, which is precisely to obtain the quantum analogue of the superintegrable classical Hamiltonian $\mathcal H_N$ (\ref{zz}). Let us consider the usual quantum position $\hat{\mathbf{q}}$ and momenta $\hat{\mathbf{p}}$ operators, with canonical Lie brackets and differential representation given by \be [\hat q_i,\hat p_j]={\rm i}\hbar \delta_{ij},\qquad \hat q_i \psi( \mathbf{q} ) =q_i\psi( \mathbf{q} ),\qquad \hat p_i \psi( \mathbf{q} )=-{\rm i}\hbar \,\frac{\partial \psi( \mathbf{q} )}{\partial q_i}\, . \label{y5} \ee From them, we quantize the two-particle symplectic realization (\ref{zv}) (with $\lambda_i=0)$ in the form \be \hat J_-^{(2)}=\hat q_1^2+\hat q_2^2\equiv \hat{\mathbf{q}}^2, \qquad \hat J_+^{(2)}= \hat p_1^2 +\hat p_2^2\equiv \hat{\mathbf{p}}^2 , \qquad \hat J_3^{(2)}=\hat q_1 \hat p_1 + \hat q_2 \hat p_2\equiv \hat{\mathbf{q}} \boldsymbol{\cdot} \hat{\mathbf{p}} \, . \label{y6} \ee These operators close on a Lie algebra isomorphic to $\mathfrak{gl}(2)$: \be \bigl[ \hat J_3^{(2)}, \hat J_\pm^{(2)} \bigr]=\pm 2{\rm i}\hbar \hat J_\pm^{(2)},\qquad \bigl[ \hat J_-^{(2)}, \hat J_+^{(2)} \bigr]= 4 {\rm i}\hbar \hat J_3^{(2)}+ 4 \hbar^2 {\rm Id}, \label{y7} \ee where ${\rm Id}$ is the identity operator.
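The $\mathfrak{gl}(2)$ commutation rules (\ref{y7}) can be checked directly in the differential representation (\ref{y5})--(\ref{y6}) by acting on a generic wavefunction. The following {\tt sympy} fragment is an illustrative sketch of such a check:

```python
import sympy as sp

q1, q2, hbar = sp.symbols('q1 q2 hbar')
psi = sp.Function('psi')(q1, q2)

# differential representation (y5)-(y6) acting on a generic wavefunction
def Jm(f):   # J_-^{(2)} = q1^2 + q2^2  (multiplication operator)
    return (q1**2 + q2**2)*f

def Jp(f):   # J_+^{(2)} = p1^2 + p2^2 = -hbar^2 (d^2/dq1^2 + d^2/dq2^2)
    return -hbar**2*(sp.diff(f, q1, 2) + sp.diff(f, q2, 2))

def J3(f):   # J_3^{(2)} = q . p = -i hbar (q1 d/dq1 + q2 d/dq2)
    return -sp.I*hbar*(q1*sp.diff(f, q1) + q2*sp.diff(f, q2))

def comm(A, B, f):
    # commutator [A, B] applied to the test function f
    return sp.expand(A(B(f)) - B(A(f)))

assert sp.simplify(comm(J3, Jp, psi) - 2*sp.I*hbar*Jp(psi)) == 0   # [J3, J+] =  2 i hbar J+
assert sp.simplify(comm(J3, Jm, psi) + 2*sp.I*hbar*Jm(psi)) == 0   # [J3, J-] = -2 i hbar J-
assert sp.simplify(comm(Jm, Jp, psi)
                   - 4*sp.I*hbar*J3(psi) - 4*hbar**2*psi) == 0     # [J-, J+] = 4 i hbar J3 + 4 hbar^2
```

The central term $4\hbar^2\,{\rm Id}$ in the last bracket is precisely what distinguishes the Lie algebra $\mathfrak{gl}(2)$ from its Poisson $\mathfrak{sl}(2,\mathbb R)$ counterpart.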
Then we propose that the quantization of $\mathcal H_N$ (\ref{zz}) is defined by the following quantum Hamiltonian \be \hat{\mathcal H}_N = \hat J_+^{(2)} + \sum_{n=1}^N \gamma_n \left(\hat J_3^{(2)}\right)^n = \hat{\mathbf{p}}^2 + \sum_{n=1}^N \gamma_n ( \hat{\mathbf{q}} \boldsymbol{\cdot} \hat{\mathbf{p}})^n , \label{y8} \ee that is, \be \hat{\mathcal H}_N \psi( \mathbf{q} ) = -\hbar^2\left(\frac{\partial^2\psi( \mathbf{q} ) }{\partial q_1^2}+\frac{\partial^2\psi( \mathbf{q} ) }{\partial q_2^2} \right)+ \sum_{n=1}^N \gamma_n(-{\rm i}\hbar )^n \left( q_1 \frac{\partial }{\partial q_1}+ q_2 \frac{\partial }{\partial q_2} \right)^n\psi( \mathbf{q} )\, . \label{y9} \ee Thus $\hat{\mathcal H}_N$ is now endowed with a Lie $\mathfrak{gl}(2)$-coalgebra symmetry (instead of a Poisson $\mathfrak{sl}(2,\mathbb R)$-coalgebra one). We stress that such a `direct' quantization does not work on the constant of the motion $ \mathcal I_N$ (\ref{eq:I}) since serious ordering problems arise, so that additional terms must be added in order to obtain the quantum analogue of $ \mathcal I_N$ and thus prove the quantum superintegrability of (\ref{y8}). Work on the above research lines is currently in progress. \section*{Acknowledgements} \phantomsection \addcontentsline{toc}{section}{Acknowledgements} {This work has been partially supported by Agencia Estatal de Investigaci\'on (Spain) under grant PID2019-106802GB-I00/AEI/10.13039/501100011033.}
\chapter{Relativistic theories of dissipation} This chapter presents a review of various theories of relativistic dissipation. The aim is to make evident how the distinct models of relativistic dissipation share the same philosophy in their construction, that is, by providing an energy-momentum tensor written in a particular frame together with a suitable definition for the entropy flow in order to impose the second law of thermodynamics. We begin this review with a discussion of the so-called `first-order' theories of dissipation. In particular, we work through the construction of Eckart's model, the first extension of non-equilibrium thermodynamics to a relativistic framework. In this case, the simple definition of the entropy flow leads to Eckart's hypothesis, which is nothing more than the relativistic analogue of Fourier's law. Therefore, a thermal disturbance will propagate with an unbounded speed, leading to internal inconsistencies in a relativistic setting. For the sake of completeness, we mention the Landau \& Lifshitz model, which is not fundamentally different from Eckart's original proposal. Owing to the inconsistencies arising from the first-order theories, a class of `second-order' theories emerged. Here we will introduce two distinct approaches of this kind which differ essentially in their thermodynamic assumptions. Firstly, we will discuss the model proposed by Israel \& Stewart \cite{israel}. They generalise the definition of the entropy flow to include `second-order' corrections from thermal equilibrium. This is normally interpreted as a truncated series in deviations from equilibrium. The second-order terms of such an expansion are given by all the possible covariant combinations of the scalars, vectors and tensors available in the theory. These additional couplings need to be obtained by external means, {\it i.e.} by indirect measurement or through a direct kinetic calculation.
Interestingly, it has been argued by Geroch and Lindblom \cite{diver} that a direct measurement of these terms may not be possible. However, experiments showing the existence of second sound in solids or, alternatively, in superfluid helium seem to contradict their result. The imposition of the second law in the second-order theories is made in an analogous manner to the Eckart `first-order' model. However, in this case it leads to a generalised version of the Cattaneo equation [{\it c.f.} equation \eqref{dis.cattaneo}] and, hence, to a relaxed propagation of thermal signals. There is a subtle point to note here. In the Israel and Stewart programme, one assumes the validity of the local equilibrium hypothesis and yet one correctly obtains a finite speed for heat propagation. We shall discuss these matters towards the end of section 4.2.1. Secondly, we include a discussion of a different point of view sustained by Carter. Although Carter's original proposal aimed at a simple way of doing `off the peg' calculations, the discussion presented here will serve two purposes. On the one hand it will shed light onto a highly obscure reference on relativistic thermodynamics. On the other, such an approach constitutes the original motivation for the multifluid approach to relativistic dissipation presented in this thesis. For completeness and accessibility, we include a transcript of Carter's original views in Appendix A. Let us emphasise that the various theories presented in this chapter do not exhaust all the possibilities for the first- and second-order theories available. They are simply a collection of the most representative and widely used. The inclusion of a wider selection of models, we believe, would not contribute in a significant manner to our discussion. However, it would be remiss not to mention some other roads that have been taken.
Such is the case of the work by Geroch and Lindblom on relativistic theories of dissipation of divergence type \cite{diver}, or the recent re-derivation of a second-order theory from kinetic theory by Koide \cite{koide}. Finally, the interested reader may wish to consult the review article by Herrera and Pavon \cite{herrerap} for a broader discussion about hyperbolic theories of dissipation. \section{First-order theories} Let us begin this review with the simplest generalization to describe dissipative processes in a relativistic context. In 1940, Eckart published a collection of articles entitled `Thermodynamics of Irreversible Processes', the third of which was devoted to irreversible processes in relativity. This publication set the basic strategy followed by later developments in relativistic theories of dissipation. A generic theory of relativistic dissipation consists of at least one fluid, whose particle number density is conserved, together with a local energy balance written in a manner that allows us to impose the second law of thermodynamics. This section is based on Eckart's original work \cite{eckart03}. \subsection{Eckart's model} Let $n^a$ represent the particle number density flux of a single-species fluid system. In the previous chapter we saw that we can express the conservation law for $n^a$ by the expression \beq \label{cons0} n^a_{\ ;a}=0. \eeq In Eckart's model, one uses this fact to choose a frame in which to carry out all physical measurements. Thus, the Eckart frame is defined by the family of observers moving with the normalised four-velocity parallel to the matter fluid \beq \label{fourv} u^a=\frac{n^a}{n} \eeq where $n^2= - g_{ab}n^a n^b$.
Given an arbitrary vector field $F^a$, we can form the pair \begin{subequations} \begin{align} f & = -g_{ab}u^a F^b,\\ f^a & =h^a_{\ b} F^b, \end{align} \end{subequations} where we have introduced the \emph{orthogonal projector} to the observer's four-velocity\glo{$h^a_{\ b}$}{Orthogonal projector to the observer's four velocity $u^a$} \beq \label{dis.proj} h^a_{\ b}=\delta^a_{\ b} + u^a u_b. \eeq One can easily verify that, indeed, $h^a_{\ b} u^b = 0$. Therefore, we can decompose any vector field $F^a$ at each point of spacetime into its parallel and orthogonal components to $u^a$ \beq F^a=fu^a+f^a. \eeq This is a very useful decomposition to describe vector quantities from the point of view of an observer moving together with a fluid. In a similar manner, an arbitrary tensor $F^{ab}$ can be decomposed in terms of its components \begin{subequations} \label{dis.dec} \begin{align} \label{eck01} \phi &=u_a u_b F^{ab},\\ \label{eck02} \phi^a &=h^a_{\ b}u_c F^{bc},\\ \label{eck03} \phi^{ab}&=h^a_{\ c}h^b_{\ d}F^{cd}. \end{align} \end{subequations} Instead of providing a variational principle from which the conservation of energy follows as an identity, most relativistic theories of dissipation postulate that there exists a symmetric energy momentum tensor which satisfies the conservation law \beq \label{cons01} T^{ab}_{\ \ ;b}=0. \eeq Then, using the decomposition \eqref{dis.dec}, they write it in its general form \beq \label{dec01} T^{ab}= \rho u^a u^b +2 q^{(a}u^{b)} + \pi^{ab}, \eeq where \beq q^au_a = 0 \quad \text{and} \quad \pi^{ab}u_a =0. \eeq In this sense, any divergence-free symmetric second-rank tensor serves to define a matter model if we interpret (with the right units) $\rho$ as the energy density, $q^a$ as the transverse momentum and $\pi^{ab}$ as the anisotropic stress tensor as measured by an observer moving with the particle flux.
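These projections can be made concrete with a small symbolic example. The following {\tt sympy} sketch (an illustration only, with an arbitrarily chosen boosted observer, and not part of Eckart's construction) builds $T^{ab}$ from frame data $(\rho, q^a, \pi^{ab})$ and recovers each piece through the contractions \eqref{dis.dec}:

```python
import sympy as sp

# Minkowski metric, signature (-,+,+,+); index placement is handled explicitly.
eta = sp.diag(-1, 1, 1, 1)

# A boosted observer; the boost speed v = 3/5 is an arbitrary illustrative choice.
v = sp.Rational(3, 5)
W = 1/sp.sqrt(1 - v**2)                  # Lorentz factor
u = sp.Matrix([W, W*v, 0, 0])            # u^a, normalised: g_ab u^a u^b = -1
ul = eta*u                               # u_a
assert (ul.T*u)[0] == -1

# Orthogonal projector h^a_b = delta^a_b + u^a u_b, eq. (dis.proj)
h = sp.eye(4) + u*ul.T
assert h*u == sp.zeros(4, 1)             # h^a_b u^b = 0

# Build T^{ab} = rho u^a u^b + 2 q^(a u^b) + pi^{ab} from data in the observer's frame
rho = sp.Symbol('rho')
q = sp.Matrix([0, 0, 1, 0])              # heat flux, chosen orthogonal to u
pi = sp.zeros(4, 4); pi[2, 2] = sp.Symbol('Pi')   # anisotropic stress, orthogonal to u
assert (ul.T*q)[0] == 0 and pi*ul == sp.zeros(4, 1)

T = rho*(u*u.T) + q*u.T + u*q.T + pi

# Recover the pieces via the projections (eck01)-(eck03)
phi = (ul.T*T*ul)[0]                     # u_a u_b T^{ab} = rho
assert sp.simplify(phi - rho) == 0
q_back = -h*(T*ul)                       # note: h^a_b u_c T^{bc} = -q^a, since u_a u^a = -1
assert sp.simplify(q_back - q) == sp.zeros(4, 1)
pi_back = h*T*h.T                        # h^a_c h^b_d T^{cd} = pi^{ab}
assert sp.simplify(pi_back - pi) == sp.zeros(4, 4)
```

Note the sign in the middle projection: applied to \eqref{dec01}, the contraction \eqref{eck02} returns $-q^a$ rather than $q^a$, a direct consequence of the normalisation $u_a u^a = -1$.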
With this physical interpretation, one can define the internal energy $\epsilon$ through the relation \cite{hawkingellis} \beq \label{internal} n(\epsilon + a)= \rho \eeq where $a$ is an arbitrary constant. The dynamical relations that the constraint \eqref{cons01} imposes on our matter model are easily obtained by contracting the energy-momentum tensor $T^{ab}$ with the observer's four-velocity $u^a$ to obtain \beq \label{first01} \left(u_a T^{ab}\right)_{;b}- u_{a;b} T^{ab}=0. \eeq Let us begin with the first term in the above expression. It follows from the decomposition \eqref{dec01} that \beq u_a T^{ab} = -\rho u^b - q^b, \eeq and, from the definition of the internal energy \eqref{internal}, it becomes \beq \label{dis.first02} \left(-u_a T^{ab}\right)_{;b}=\left[n(\epsilon + a)u^b\right]_{;b} + q^b_{\ ;b}. \eeq Finally, using the conservation of the particle number density flux, equation \eqref{cons0}, it follows that \beq \left(a n^b\right)_{\ ;b} = 0, \eeq and equation \eqref{dis.first02} reduces to \beq \left(-u_a T^{ab}\right)_{;b}=n{\dot\epsilon} + q^b_{\ ;b}. \eeq The second term in the LHS of \eqref{first01} can be written simply as \beq u_{a;b}T^{ab}=q^a {\dot u_a} + u_{a;b}\pi^{ab}. \eeq Thus, the local form of the energy balance \eqref{first01} from the fluid observer's point of view is given by \beq \label{FL} n{\dot \epsilon} + q^b_{\ ;b} + q^a {\dot u_a} + u_{a;b}\pi^{ab} = 0. \eeq Before giving the physical significance of \eqref{FL}, a further simplification can be made. Note that when the energy-momentum tensor takes the form \beq T^{ab}= (\rho + p)u^a u^b + p g^{ab} \eeq we can define the \emph{hydrostatic pressure} of the fluid through the trace of the stress tensor \beq \label{preas} p \equiv\frac{1}{3}\pi^a_{\ a}.
\eeq Therefore, in the general case, the hydrostatic pressure allows us to define the viscous stress tensor \beq \label{visc0} P^{ab}= -\pi^{ab} + p h^{ab}, \eeq which is assumed to be a linear function of $u_{a;b}$. It follows that the contractions of this viscous stress with the observer's four-velocity vanish \beq \label{ort0} u_a P^{ab} = 0, \quad P^a_{\ a}=0, \eeq and we can obtain a general expression for $P^{ab}$ \begin{align} \label{visc} P^{ab} &= \lambda \left[2 h^{ac}h^{bd} u_{(c;d)} - \frac{2}{3}h^{ab} h^{cd}u_{c;d} \right]\nn\\ &= \lambda \left[2u^{(a;b)} + 2\dot u^{(a}u^{b)} - \frac{2}{3} u^c_{\ ;c}h^{ab} \right]\nn\\ &= 2\lambda \left[\sigma^{ab} + \dot u^{(a}u^{b)}\right], \end{align} where $\lambda$ is the viscosity coefficient and we have introduced $\sigma^{ab}$ as the traceless shear tensor \beq \sigma_{ab} = u_{(a;b)} - \frac{1}{3} u^c_{\ ;c} h_{ab}. \eeq Let us define the \emph{invariant} specific volume $v$ as the inverse of the particle number density $n$ and use it to express the purely spatial part of $u_{a;b}$ as \beq \label{aux} h^{ab}u_{a;b}= u^b_{\ ;b} = n {\dot v}. \eeq Thus, substituting the above expressions for the viscous stress, equation \eqref{visc0}, and the spatial projection \eqref{aux} back into the local energy balance \eqref{FL} we obtain \beq \label{sc0} n({\dot \epsilon} + p{\dot v}) + q^a_{\ ;a} + q^a{\dot u_a} - u_{a;b}P^{ab} = 0. \eeq In this form, equation \eqref{sc0} is completely analogous to the non-relativistic energy balance\footnote{We can write the Newtonian energy balance as \[ n\frac{\d \varepsilon}{\d t} + \nabla \cdot {\bf q} - (\mathscr{P} \cdot \nabla) \cdot {\bf v} = 0 \] where $\varepsilon$ is the internal energy, ${\bf q}$ is the heat flow, $\mathscr{P}$ is the total stress and ${\bf v}$ is the fluid's three-velocity.}. It is only the term containing the four-acceleration $\dot u^a$ which has no Newtonian counterpart. 
Formally, it results from the fact that infinitesimal 3-spaces orthogonal to the observer's worldline are not parallel to each other, but relatively tipped because of the curvature of such a line \cite{ehlers}. One can interpret this as a contribution due to the \emph{inertia} of heat. For a simple fluid, where the internal energy is only a function of the pressure $p$ and the specific volume $v$, there are always two functions, $\theta$ and $s$ say, such that \cite{eckart01,eckart03} \beq \frac{\partial \epsilon}{\partial v} + p = \theta \frac{\partial s}{\partial v}. \eeq Hence, we can write the quantity inside the bracket in the first term of the energy balance \eqref{sc0} in terms of these two functions as \beq \label{th1} \dot\epsilon + p \dot v = \theta \dot s. \eeq This is simply the Gibbs relation we found in section 2.2.1 [{\it c.f.} equation \eqref{thermo.gibbs}]. Thus, the two functions $\theta$ and $s$ correspond to the equilibrium temperature and entropy density. In terms of these variables, we can re-express \eqref{sc0} to get \beq \label{th2} n\dot s + (\theta^{-1}q^a)_{;a} = -\frac{1}{\theta^2}q^a [\theta_{;a} + \theta \dot u_a] + u_{a;b} P^{ab}\theta^{-1}. \eeq Written in this form, we can impose the second law of thermodynamics on the dynamics of the matter model described by the general energy momentum tensor $T^{ab}$ in terms of the thermodynamic quantities $n$, $\rho$ and $s$ relative to the observer's frame in the following manner. Consider a vector field $s^a$ representing the entropy flux within the fluid [{\it c.f.} section 3.4]. From \eqref{th2} one can infer that \beq \label{sflow} s^a = sn^a + \frac{1}{\theta}q^a. \eeq Note that this is far from being the only manner in which we could have written the entropy density current. It is, however, the simplest, for it is `linear' in departures from thermal equilibrium, {\it i.e.} it is of `first-order' in the heat flow. 
In this case, the second law of thermodynamics takes the local form [{\it c.f.} equation \eqref{kin.entdiv}] \beq \label{SL} s^a_{\ ;a} = n\dot s + (\theta^{-1}q^a)_{;a} \geq 0. \eeq As we have argued in the previous chapter, the inequality has to be imposed from the outset; it cannot be proven within the dynamical content of the matter model. \subsubsection{Eckart's hypothesis} In order to make the divergence of the entropy flux positive definite, the simplest assumption is that the two terms in the RHS of \eqref{th2} are independent and non-negative. On the one hand, the first term in the RHS of \eqref{th2} led Eckart to propose the relativistic analogue of Fourier's law, namely \beq \label{heat01} q^a=-\kappa h^{ab}[\theta_{;b} + \theta \dot u_b]. \eeq As before, the proportionality scalar $\kappa$ represents the thermal conductivity of the fluid. It is also clear that equation \eqref{heat01} is orthogonal to the fluid's four-velocity. The spatial projection of the covariant derivative of the function $\theta$ corresponds to the relativistic temperature gradient. As we have already discussed, there is no Newtonian analogue of the acceleration term $-h^{ab}\theta \dot u_b$. Such a term implies an isothermal flow of heat in accelerated matter in the direction opposite to the acceleration \cite{eckart03}. We will provide further details on this point in the next chapter. Soon it will become clear that every reasonable relativistic generalization of Fourier's law contains this acceleration term as a consequence of the effect of the \emph{thermal inertia}. 
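Eckart's ansatz \eqref{heat01} can be checked numerically: the heat flux it produces is orthogonal to $u^a$, and the first source term of \eqref{th2} becomes the manifestly non-negative quantity $q^aq_a/\kappa\theta^2$. The sketch below is not part of the original text; the flat metric and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
g = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric (illustrative)

# A generic normalized four-velocity u^a
v3 = np.array([0.3, -0.1, 0.2])
gamma = 1.0 / np.sqrt(1.0 - v3 @ v3)
u = gamma * np.array([1.0, *v3])
u_low = g @ u
h_up = g + np.outer(u, u)               # h^{ab} (g^{ab} = g_{ab} numerically here)

# Arbitrary temperature gradient theta_{;a}, and an acceleration with u^a udot_a = 0
theta = 2.0
dtheta = rng.normal(size=4)
A = rng.normal(size=4)
udot = A + u_low * (u @ A)              # project onto the space orthogonal to u
assert np.isclose(u @ udot, 0.0)

# Eckart's law: q^a = -kappa h^{ab} [theta_{;b} + theta udot_b]
kappa = 3.0
X = dtheta + theta * udot
q = -kappa * h_up @ X

assert np.isclose(u_low @ q, 0.0)       # q^a is orthogonal to u^a

# First entropy source of the balance equals q^a q_a / (kappa theta^2) >= 0
lhs = -(1.0 / theta**2) * (q @ X)
rhs = (q @ g @ q) / (kappa * theta**2)
assert np.isclose(lhs, rhs) and rhs >= 0.0
print("Eckart heat-flux checks passed")
```

Positivity here is automatic because $h^{ab}$ restricted to the subspace orthogonal to $u^a$ is positive definite.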
On the other hand, from the definition of the viscous-stress tensor \eqref{visc} and the decomposition of $u_{a;b}$ into a trace, shear, vorticity and acceleration parts \beq \label{cdecom} u_{a;b} = \frac{1}{3}u^c_{\ ;c}h_{ab} + \sigma_{ab} + \omega_{ab} - \dot u_a u_b, \eeq using a symmetry argument and the orthogonality condition \eqref{ort0}, it follows that \beq u_{a;b}P^{ab} = \sigma_{ab} P^{ab} = 2\lambda\sigma_{ab}\sigma^{ab}. \eeq Hence, we can write the divergence of the entropy flux explicitly as a quadratic function of its sources \beq \label{ec2l} s^a_{\ ;a}= \frac{q^a q_a}{\kappa\theta^2} + \frac{2\lambda\sigma_{ab}\sigma^{ab}}{\theta}\geq 0. \eeq This constitutes the first attempt to generalise Fourier's law to the relativistic regime. However, it can be readily seen that this result contains a covariant analogue of the heat equation, without altering its parabolic character. Indeed, it was shown later by Hiscock and Lindblom that the Eckart model not only suffers from non-causal behaviour, but also possesses unstable modes for thermal perturbations of ordinary matter models \cite{hiscock01}. \subsection{Landau \& Lifshitz theory} The theory of relativistic dissipation proposed by Landau and Lifshitz as an alternative to Eckart's model shares essentially the same features as the original proposal. Therefore, there is no need to extend the discussion in this particular direction beyond the few main results needed to keep the discussion complete and self-contained. The assumptions and calculations are completely analogous to the ones in the previous section and can be found in \cite{landaufm}. The main difference this approach has with respect to the one followed by Eckart is the choice of the observer's frame. In the case of Eckart's model we used co-moving observers to describe the dynamics of the fluid whilst Landau \& Lifshitz used a timelike eigenvector of the energy momentum tensor of the matter. 
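The distinction between the two frames can be made concrete with a small numerical sketch (the construction and all values are illustrative assumptions of this example, not taken from the original sources): given an energy-momentum tensor carrying a small heat flux in the matter (Eckart) frame, the Landau \& Lifshitz four-velocity is the timelike eigenvector of $T^a_{\ b}$, slightly boosted with respect to $u^a$.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])           # Minkowski metric (illustrative)

# Particle (Eckart) frame quantities, chosen for illustration:
rho, p = 1.0, 0.3
u = np.array([1.0, 0.0, 0.0, 0.0])           # matter rest frame
q = np.array([0.0, 0.1, 0.0, 0.0])           # small heat flux, q^a u_a = 0

h_up = np.linalg.inv(g) + np.outer(u, u)
T = rho * np.outer(u, u) + np.outer(q, u) + np.outer(u, q) + p * h_up

# Landau-Lifshitz frame: the timelike eigenvector of T^a_b
T_mixed = T @ g                              # T^a_b = T^{ac} g_{cb}
vals, vecs = np.linalg.eig(T_mixed)
# pick the eigenvector with g_ab v^a v^b < 0 and normalize it to a four-velocity
for lam, vec in zip(vals.real, vecs.real.T):
    if vec @ g @ vec < 0:
        u_landau = vec / np.sqrt(-(vec @ g @ vec))
        if u_landau[0] < 0:
            u_landau = -u_landau
        break

# The two frames coincide only when q^a = 0; here u_landau is tilted w.r.t. u
print(u_landau)    # ~ [1.003, 0.078, 0, 0]
```

The eigenvalue paired with `u_landau` is $-\rho_{\rm L}$, the energy density measured in the Landau frame, in which, by construction, the momentum flux vanishes.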
In both cases, the equations of motion are determined by the conservation of the particle number density flow and the vanishing of the divergence of the energy-momentum tensor. Thus, the dynamics are completely equivalent and, although the local energy balance will take a different form, the physical content of both approaches is necessarily the same. Crucially, they lead to the same ansatz to ensure the positivity of the entropy flux. Therefore, they also share the same pathologies as found by Hiscock and Lindblom. \section{Second-order theories} This two-part section contains an account of the well-known Israel \& Stewart second-order theory together with the original version of the pioneering work of Carter known as the `regular model'. One of the aims of this section is to highlight the main ontological difference between the two `second-order' approaches, namely, the local equilibrium assumption for small deviations from thermal equilibrium in the former, and the EIT approach to non-equilibrium thermodynamics of relativistic systems in the latter. There is no claim of originality for the material presented in this section. However, it has been written in a manner that, we hope, will facilitate an understanding of the details in the original sources. \subsection{The Israel \& Stewart second-order theory} In view of the unsatisfactory results of Eckart's theory, Israel \cite{israel} and Stewart \cite{Stewart77} developed a new strategy to solve the inconsistencies of the relativistic first-order theories of dissipation. Their approach, firmly grounded in relativistic kinetic theory, was named transient relativistic thermodynamics, but it is better known as the second-order theory of relativistic dissipation. This model follows the same dynamical construction as Eckart's model up to the definition of the entropy density flux, where they included a full set of second-order corrections in the entropy sources. 
This truncated expansion leads to a relativistic generalization of Cattaneo's equation \eqref{dis.cattaneo}. The following is a detailed derivation of such an extension. Let us begin by writing the decomposition of the stress-energy tensor [see \eqref{dec01}] in a slightly different manner \beq \label{stress} T^{ab} = \rho u^a u^b + (p + \tau)h^{ab} + 2 u^{(a}q^{b)} + \tau^{ab}. \eeq The difference from \eqref{dec01} lies in the explicit inclusion of a viscous pressure term $\tau$. Again $q^a$ is the heat flow orthogonal to the matter's four-velocity, while $\tau^{ab}$ and $\tau$ are the stresses caused by viscosity in the fluid. Here the tensor $\tau^{ab}$ satisfies the relations \beq \label{ort1} 0 = u^a \tau_{ab}= \tau^a_{\ a}=\tau_{[ab]}. \eeq Thus, it relates to Eckart's definition through \beq \tau^{ab}= -\pi^{ab} + (p+\tau)h^{ab}. \eeq As before, the equations of motion are given by the conservation laws \eqref{cons0} and \eqref{first01} and the imposition of the second law is completely analogous to the previous section. From the modified form of the energy-momentum tensor \eqref{stress} and properties \eqref{ort1} of the viscous-stress tensor, the divergence of the entropy density current, equation \eqref{th2}, can be written as \beq \label{sflowi} s^a_{\ ;a}= -\frac{1}{\theta^2}q^a [\theta_{;a} + \theta \dot u_a] + \langle u_{a;b}\rangle \tau^{ab}\theta^{-1} - \tau u^a_{\ ;a}\theta^{-1}. \eeq Here $s^a$ is the entropy current defined in Eckart's model, equation \eqref{sflow}, the brackets $\langle \rangle$ represent the symmetric traceless part of a given second rank tensor, {\it i.e.} for a given tensor $A^{ab}$ \beq \langle A_{ab} \rangle = \frac{1}{2}h^c_{\ a}h^d_{\ b}\left[2A_{(cd)} - \frac{2}{3}h_{cd}h^{ef}A_{ef} \right], \eeq and, crucially, $\theta$ is the \emph{equilibrium} temperature of the fluid. 
It is obtained through the assumption that the local equilibrium hypothesis holds for small deviations from thermal equilibrium, {\it i.e.} that the energy density remains a function of the particle number and entropy densities when the system is driven `slightly' away from thermal equilibrium. This means that, implicitly, the heat flux is still considered to be a \emph{fast} variable in the EIT sense ({\it c.f.} section 2.2.3) and, therefore, it \emph{cannot} be treated as a state variable. In order to satisfy the requirement of the second law, in addition to Eckart's hypothesis \eqref{heat01}, we require that \begin{subequations} \begin{align} \tau &= -u^a_{\ ;a}\xi,\\ \tau^{ab} &= -2\eta \langle u^{a;b}\rangle, \end{align} \end{subequations} where $\xi$ and $\eta$ are the bulk and shear viscosity coefficients, respectively. Note that in the case of the Eckart model we only had the coefficient $\lambda$. This is because here we have made an explicit distinction between the trace and shear parts of the congruence. We see from \eqref{visc} that $\tau^{ab}$ is proportional to $P^{ab}$. Hence, the quadratic form for the divergence of the entropy density current, equation \eqref{ec2l}, becomes \beq \label{seclaw} s^a_{\ ;a}= \frac{\tau^2}{\xi \theta} + \frac{q^a q_a}{\kappa\theta^2} + \frac{\tau_{ab}\tau^{ab}}{2\eta \theta}\geq 0. \eeq Motivated by the non-causal and unstable behaviour of Eckart's proposal, Israel and Stewart proposed to generalise the definition of the entropy density flux \eqref{sflow} by including a complete set of second order terms, namely \beq \label{sisrael} s^a=sn^a +\frac{q^a}{\theta} - \frac{1}{2}\left(\beta_0 \tau^2 + \beta_1 q^b q_b + \beta_2 \tau_{bc}\tau^{bc} \right)\frac{u^a}{\theta} + \alpha_0 \frac{\tau q^a}{\theta} + \alpha_1\frac{\tau^a_{\ b} q^b}{\theta}. \eeq Here the coefficients $\beta_0$, $\beta_1$, $\beta_2$, $\alpha_0$ and $\alpha_1$ correspond to the different couplings for the second-order terms. 
In particular, these quantities need to be provided by some other means, {\it i.e.} by direct measurement or through kinetic theory, but cannot be obtained from an equilibrium equation of state. To compute the entropy production $s^a_{\ ;a}\theta$, we can split \eqref{sisrael} into first and second order pieces. The divergence of the linear part is given by \eqref{sflowi} \beq s^a_{I\ ;a}\theta = -\frac{1}{\theta}q^a [\theta_{;a} + \theta \dot u_a] + \langle u_{a;b}\rangle \tau^{ab} - \tau u^a_{\ ;a}. \eeq Therefore we only have to compute the divergence of the second-order part \begin{align} \label{sflowii} s^a_{II\ ;a}\theta =&-\frac{\theta}{2}\left[ \left(\frac{\beta_0 \tau^2 u^a}{\theta}\right)_{;a} + \left(\frac{\beta_1 q_b q^b u^a}{\theta}\right)_{;a} + \left(\frac{\beta_2 \tau_{bc}\tau^{bc} u^a}{\theta}\right)_{;a}\right]\nn \\ &+\theta\left[\left(\frac{\alpha_0 \tau q^a}{\theta}\right)_{;a} + \left(\frac{\alpha_1 q_b \tau^{ab}}{\theta}\right)_{;a} \right]. \end{align} Expanding term by term, we have the following factorisations for the first square bracket \begin{subequations} \begin{align} \left(\frac{\beta_0 \tau^2 u^a}{\theta}\right)_{;a} & = \tau \left[ \left(\frac{\beta_0 u^a}{\theta}\right)_{;a}\tau + 2\frac{\tau_{;a}u^a \beta_0}{\theta} \right],\\ \left(\frac{\beta_1 q_b q^b u^a}{\theta}\right)_{;a}& = q^a \left[\left(\frac{\beta_1 u^b}{\theta}\right)_{;b}q_a + 2 \frac{q_{a;b}u^b \beta_1}{\theta} \right],\\ \left(\frac{\beta_2 \tau_{bc}\tau^{bc}u^a}{\theta} \right)_{;a}& = \tau^{ab}\left[\left(\frac{\beta_2 u^c}{\theta}\right)_{;c}\tau_{ab} + 2\frac{\tau_{ab;c}u^c \beta_2}{\theta} \right]. 
\end{align} \end{subequations} Whilst the second bracket in \eqref{sflowii} is given by \begin{subequations} \begin{align} \label{apd} \left(\frac{\alpha_0 \tau q^a}{\theta} \right)_{;a} & = q^a_{\ ;a}\frac{\tau \alpha_0}{\theta} + \tau_{;a}\frac{q^a \alpha_0}{\theta}+ \left(\frac{\alpha_0}{\theta}\right)_{;a}q^a \tau,\\ \label{ape} \left(\frac{\alpha_1 q_b \tau^{ab}}{\theta} \right)_{;a} & = \tau^{ab}_{\ \ ;a}\frac{q_b \alpha_1}{\theta} + \left(\frac{\alpha_1}{\theta}\right)_{;a}q_b\tau^{ab} + q_{b;a}\frac{\tau^{ab}\alpha_1}{\theta}. \end{align} \end{subequations} In order to factorise the last two equations, Israel (see \cite{israel}) introduced two extra thermodynamic coefficients $\gamma_0$ and $\gamma_1$ such that \beq \gamma_0 + \gamma_1 = 1. \eeq Thus, it follows that \eqref{apd} and \eqref{ape} become \begin{subequations} \begin{align} \left(\frac{\alpha_0 \tau q^a}{\theta} \right)_{;a} & = q^a\left[ \tau_{;a} \frac{\alpha_0}{\theta} + \left(\frac{\alpha_0}{\theta}\right)_{;a} \tau \gamma_1\right] + \tau \left[q^a_{\ ;a}\frac{\alpha_0}{\theta} + \left(\frac{\alpha_0}{\theta} \right)_{;a}q^a \gamma_0 \right],\\ \left(\frac{\alpha_1 q_b \tau^{ab}}{\theta} \right)_{;a} & = \tau^{ab} \left[ q_{b;a}\frac{\alpha_1}{\theta}+ \left(\frac{\alpha_1}{\theta} \right)_{;a} q_b \gamma_1\right] + q^a \left[\tau^b_{\ a;b}\frac{\alpha_1}{\theta} + \left(\frac{\alpha_1}{\theta} \right)_{;b}\tau^b_{\ a}\gamma_0\right]. 
\end{align} \end{subequations} Finally, collecting all the terms, we obtain the expression for the entropy production: \begin{align} s^a_{\ ;a}\theta =& -\tau \left[u^a_{\ ;a} + \tau_{;a}u^a \beta_0 + \frac{1}{2}\left(\frac{\beta_0 u^a}{\theta}\right)_{;a}\tau \theta -q^a_{\ ;a}\alpha_0 - \left(\frac{\alpha_0}{\theta}\right)_{;a} q^a\theta \gamma_0\right]\nn\\ & - q^a {\Bigg [}\theta_{;a}\frac{1}{\theta} + \dot u_a +\frac{1}{2}\left(\frac{\beta_1 u^b}{\theta}\right)_{;b} q_a \theta - \tau_{;a}\alpha_0 - \left(\frac{\alpha_0}{\theta}\right)_{;a}\tau\theta\gamma_1\nn\\ &\quad +q_{a;b}u^b\beta_1 - \tau^b_{\ a;b}\alpha_1 - \left(\frac{\alpha_1}{\theta}\right)_{;b}\tau^b_{\ a}\theta\gamma_0 {\Bigg ]}\nn\\ & -\tau^{ab}{\Bigg [}{\Bigg\langle}u_{a;b} +\frac{1}{2}\left( \frac{\beta_2 u^c}{\theta}\right)_{;c}\tau_{ab}\theta +\tau_{ab;c}u^c \beta_2 - q_{b;a}\alpha_1 -\left( \frac{\alpha_1}{\theta}\right)_{;a}q_b \theta \gamma_1 {\Bigg \rangle}{\Bigg ]}. \end{align} In order to satisfy the second law constraint \eqref{seclaw}, the simplest, though not the most general, choice is to make each term positive. 
Thus, the Israel and Stewart theory makes the following identifications \begin{subequations} \begin{align} \tau =& -\xi \left[u^a_{\ ;a} + \tau_{;a}u^a \beta_0 + \frac{1}{2}\left(\frac{\beta_0 u^a}{\theta}\right)_{;a}\tau \theta -q^a_{\ ;a}\alpha_0 - \left(\frac{\alpha_0}{\theta}\right)_{;a} q^a\theta \gamma_0\right],\\ \label{iscat} q^a=& -\kappa\theta h^{ab} {\Bigg [}\theta_{;b}\frac{1}{\theta} + \dot u_b +\frac{1}{2}\left(\frac{\beta_1 u^c}{\theta}\right)_{;c} q_b \theta - \tau_{;b}\alpha_0 - \left(\frac{\alpha_0}{\theta}\right)_{;b}\tau\theta\gamma_1\nn\\ &\quad +q_{b;c}u^c\beta_1 - \tau^c_{\ b;c}\alpha_1 - \left(\frac{\alpha_1}{\theta}\right)_{;c}\tau^c_{\ b}\theta\gamma_0 {\Bigg ]},\\ \tau^{ab} =& -2\eta h^{ac} h^{bd}{\Bigg\langle}u_{c;d} +\frac{1}{2}\left( \frac{\beta_2 u^e}{\theta}\right)_{;e}\tau_{cd}\theta +\tau_{cd;e}u^e \beta_2 - q_{d;c}\alpha_1 -\left( \frac{\alpha_1}{\theta}\right)_{;c}q_d \theta \gamma_1 {\Bigg \rangle}. \end{align} \end{subequations} \subsubsection{Heat conduction} For the purely heat conducting case we artificially turn off the viscous terms, {\it i.e.} we set $\tau=\tau^{ab}=0$. In this case, the energy-momentum tensor reduces to \beq T^{ab}=(\rho + p)u^a u^b + pg^{ab} + 2u^{(a}q^{b)}, \eeq and the entropy current is simply given by \beq s^a= sn^a + \frac{1}{\theta}q^a - \frac{u^a}{2\theta}\beta_1 q_b q^b. \eeq The simplified version of the divergence of the entropy density flux is \beq \label{isheat} s^a_{\ ;a}\theta = -q^a \left[\theta_{;a}\frac{1}{\theta}+ \dot u_a +\frac{1}{2}\left(\frac{\beta_1 u^b}{\theta}\right)_{;b} q_a \theta +q_{a;b}u^b \beta_1 \right], \eeq and we see that the ansatz \eqref{iscat} becomes \beq \label{dis.cat} q^a= -\kappa\theta h^{ab}\left[\theta_{;b} \frac{1}{\theta} + \dot u_b +\frac{1}{2}\left(\frac{\beta_1 u^c}{\theta} \right)_{;c} q_b\theta + q_{b;c}u^c \beta_1 \right]. \eeq This is the relativistic generalization of the Cattaneo equation given in section 2.2.2. 
It represents the central result of the purely heat conducting Israel \& Stewart model and it will serve as a point of comparison for the results in the next chapter. Note that the last term in \eqref{dis.cat} is essentially a time derivative of the heat transport four-vector. Therefore, its coefficient $\beta_1$ can be interpreted as a `relaxation time' for the propagation of thermal disturbances. The subtle point we mentioned earlier lies in the fact that, although we assumed that the local equilibrium hypothesis holds\footnote{The energy density is a function of the particle number and entropy densities alone, $\rho = \rho(n,s)$.}, we have obtained a critical time-scale on which dissipation operates, a result which follows from an EIT point-of-view. Thus, the reader may wonder where such a time scale came from. The answer is simple: we have traded the thermodynamic information of heat as a state variable for a complete set of coupling coefficients for the second order terms in the entropy flux. The remarkable feature of this approach is that it correctly reproduces an EIT result from the thermodynamic principles of LIT. \subsection{Carter's theory of dissipation} The final section of this chapter is devoted to the original work by Carter on relativistic heat conduction. Although it was not made explicit at the time, we will soon notice that Carter's ideas are rooted in those which gave rise to the EIT programme. Furthermore, he presented an argument in which the Cattaneo equation \eqref{dis.cattaneo} follows directly from a Gibbs relation where the heat flux is included as a state variable. As before, we have tried to make every calculation explicit and consistent with the method discussed in the previous sections. Let us now show that Carter's thermodynamic programme belongs to the class of EIT theories\footnote{We can readily see this from his original arguments on these matters. 
For completeness, we have included a transcript of a little-known reference where this fact was first made explicit. The reader will find a recollection of Carter's original ideas in appendix A.}. Following the same methodology as in the previous sections, let us consider a single species fluid. Recalling the decomposition of a general energy momentum tensor with respect to the four-velocity of an observer moving with the particles of the fluid, we write [see equation \eqref{dec01}] \beq \label{cstress} T^{ab} = \rho u^a u^b + 2 q^{(a}u^{b)} + \pi^{ab}. \eeq In addition to the particle number density flow $n^a$, let us introduce an entropy density current $s^a$, in general not parallel to $n^a$. We can decompose such a current as ({\it c.f.} section 3.4) \beq \label{centropy} s^a = s^*u^a + {j}^a, \eeq where $s^*$\glo{$s^*$}{Entropy density as measured on the matter frame} is the entropy density measured in the Eckart (matter) frame, and $j^a$ is the component of the entropy current transverse to the matter flow. As before, the equations of motion are given by the conservation laws \eqref{cons0} and \eqref{cons01} and the second law of thermodynamics is expressed by the positivity of the entropy production. Thus, we can express the equations of motion together with the second law in terms of the projections \begin{subequations} \begin{align} \label{cncons} n^a_{\ ;a} & = \dot n + u^a_{\ ;a} n = 0,\\ \label{cscons} s^a_{\ ;a} & = \dot s + u^a_{\ ;a} s + j^a_{\ ;a} \geq 0. \end{align} \end{subequations} Using the decomposition of $u_{a;b}$, equation \eqref{cdecom}, we can define the symmetric part of $u_{a;b}$ as \beq \tilde \sigma_{ab} = \frac{1}{3} u^c_{\ ;c}h_{ab} + \sigma_{ab}. \eeq Thus, the equations of motion imply a local energy balance equivalent to Eckart's expression \eqref{FL} \beq \label{ctu} T^b_{\ a;b}u^a = \dot\rho + u^b_{\ ;b}\rho + q^b_{\ ;b} + q^b \dot u_b + \pi^{ab}\tilde\sigma_{ab} = 0. 
\eeq Up to this point, the construction is identical to Eckart's model. It is here, where a thermodynamic assumption must be introduced to extract the second law from the energy balance, that Carter's model differs from those of Eckart and Israel \& Stewart. Let us consider the possible functional dependence of the energy on the state variables. In the special situation, when the medium is in strict thermal equilibrium, the energy density is a function of the particle number and entropy densities alone. However, in a general situation, we could also expect $\rho$ to be a function of the non-equilibrium magnitude of the entropy transfer vector, {\it i.e.} \beq \rho = \rho(n,s,j). \eeq In an ordinary situation, we would consider states which are close to equilibrium. In such a case, $j$ is a very small quantity. Moreover, local isotropy requires that the dependence of the energy density on $j$ should be of second order in the neighbourhood of equilibrium, that is \beq \rho= \rho(n,s,0) + \mathscr{O}({j}^2). \eeq Therefore, a general variation of the energy density takes the form \beq \label{dis.extendedgibbs} \d \rho = \mu \d n + \theta \d s + \mu_\q \d j, \eeq which we recognize as the extended Gibbs relation of section 2.2.3. Thus, Carter's theory of dissipation adopts an EIT point of view before imposing the second law of thermodynamics. The specific details of how the inclusion of the heat flux as a state variable leads to a relativistic version of the Cattaneo equation are presented in the next chapter. Carter's regular model is a particular case of the general multifluid formalism for relativistic dissipation. For completeness, we finish our discussion as in the original version of Carter's manuscript. However, this \emph{incomplete} view does not present us with the final outcome of the theory. The reader has been warned. In order to take this last step, let us consider the Legendre-transformed energy density \beq \label{clt01} \hat\rho = \rho - \mu_\q j. 
\eeq Differentiating with respect to the affine parameter of the observer's worldline gives \beq \label{cenp} \dot{\hat\rho} = \mu \dot n + \theta \dot s + j^b\mathcal{L}_u[\mu_\q]_b - \tilde\sigma_{ab}\mu_\q^a {j}^b, \eeq where we have made the substitution \beq \dot {\mu^\q}_b {j}^b = \mathcal{L}_u \left[\mu^\q_{\ b} {j}^b\right] - u_{a;b}\mu_\q^a j^b. \eeq Let us further assume that the heat flow is simply proportional to the transverse entropy current $j^a$, so we can write \beq \label{cheatf} q^a = \tilde \theta j^a, \eeq where we call $\tilde \theta$ the \emph{thermodynamic} temperature. Notice that we cannot yet tell whether this is the temperature that appears in the extended Gibbs relation \eqref{dis.extendedgibbs}. In analogy with Eckart's model, collecting the previous results and substituting them into the local energy balance, one can show that the divergence of the entropy density current can be written as the sum of three terms \begin{align} s^a_{\ ;a}\theta =&-j^b\left[\tilde\theta_{;b} + \mathcal{L}_u\left(\tilde\theta u_b + \mu^\q_b\right)\right]\nn\\ &+\left[h^{ab}\left(\mu n + \theta s - \hat\rho\right) + \mu_\q^a j^b - \pi^{ab}\right]\tilde\sigma_{ab}\nn\\ &+j_{\ ;b}^b\left(\theta - \tilde \theta\right). \end{align} Once again, the simplest way to impose the second law of thermodynamics is to require each term to be positive. Thus, we obtain a heat conduction equation of the form \beq \tilde\theta_{;b} + \mathcal{L}_u\pi^\q_b = - Y_{ab} q^a. \eeq Here, the quantity $\pi^\q_b$ can be interpreted as the momentum conjugate to the entropy density flux, given by \beq \pi^\q_a = \tilde\theta u_a + \mu^\q_a, \eeq and the transverse tensor $Y_{ab}$ is the thermal resistivity. Since the other two terms cannot be made strictly positive, it is required that they both vanish. 
This imposes the further restriction for the energy momentum tensor \beq \label{cstress00} \pi^{ab} = h^{ab}(\mu n +\theta s - \hat\rho) + \mu^a_\q j^b, \eeq and allows us to conclude that the proportionality coefficient $\tilde\theta$ in our assumption for the heat flow is indeed the \emph{thermostatic} temperature of the fluid \beq \tilde\theta = \theta. \eeq This last equality is also a consequence of the regularity ansatz. It is too severe a constraint for the relaxation time of thermal phenomena described by this early model. Indeed, this was a crucial part of the argument in Hiscock and Olson's criticism of the regular model \cite{hiscolson}. In the next chapter, we construct from scratch a multifluid approach to heat conduction based on the ideas presented in this subsection. We will show that the regularity ansatz removes the information contained in the equation of state from the relaxation time. Therefore, finding a model which would violate local causality was not very difficult. Thus, as expected by Hiscock and Olson, if the regularity ansatz is relaxed, we stand a better chance of obtaining a consistent relativistic theory of heat.
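To close, a purely qualitative illustration of why the relaxation term matters. This toy example is a non-relativistic finite-difference sketch of a Cattaneo-type system, not part of any of the models above; all parameters and the discretisation are assumptions of this sketch. With a finite relaxation time $\tau$ the flux obeys $\tau\dot q + q = -\kappa\,\partial_x\theta$, and an initial hot spot spreads behind a front moving at the finite speed $v=\sqrt{\kappa/(C\tau)}$, in contrast with the instantaneous response of the parabolic Fourier limit $\tau\to 0$.

```python
import numpy as np

# 1D Cattaneo (telegraph) system:  C dT/dt = -dq/dx,  tau dq/dt = -(q + kappa dT/dx)
# Signal speed is v = sqrt(kappa / (C * tau)); all values below are illustrative.
nx, L = 400, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
C, kappa, tau = 1.0, 1.0, 1.0           # => v = 1
dt = 0.4 * dx                            # CFL-safe time step for v = 1
steps = 250
t = steps * dt                           # final time ~ 0.25

T = np.exp(-((x - 0.5) / 0.02) ** 2)     # hot spot in the middle
q = np.zeros(nx)

# Semi-implicit (Euler-Cromer-style) update: T first, then q with the new T;
# the relaxation term is treated implicitly for stability.
for _ in range(steps):
    T = T - dt * np.gradient(q, dx) / C
    q = (q - dt * kappa * np.gradient(T, dx) / tau) / (1.0 + dt / tau)

# No disturbance has reached points outside the causal cone |x - 0.5| > v*t:
outside = np.abs(x - 0.5) > t + 0.15
print(np.max(np.abs(T[outside])))        # tiny: nothing crosses the front
```

A parabolic (Fourier) solver run on the same initial data would give a strictly positive temperature everywhere after any $t>0$, which is the acausal behaviour the hyperbolic theories discussed in this chapter are designed to remove.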
142,139
Anton and Augusta Birkel Foundation Profile At A Glance Anton and Augusta Birkel Foundation P.O. Box 168 Reedville, VA United States 22539-0168 Type of Grantmaker Independent foundation Financial Data (yr. ended 2013-12-31) Assets: $2,373,966 Total giving: $0 EIN 526924820 Bridge Number 1653638590 Background Classified as a private operating foundation in 1989 in VA Fields of Interest Subjects - Animal training - Arts and culture - Children's hospital care - Domesticated animals - Education - Elementary and secondary education - Environment - Health - Hospital care - Human services - In-patient medical care - Independent living for people with disabilities - Music - Performing arts - Special population support Population Groups - Academics - Children and youth - Economically disadvantaged people - Ethnic and racial groups - Low-income and poor people - People with disabilities - People with physical disabilities - People with vision impairments - Students Financial Data Year ended 2013-12-31 Assets: $2,373,966 (market value) Expenditures: $36,175 Total giving: $0 Qualifying distributions: N/A
3,819
TITLE: Cofinal chains in directed sets QUESTION [9 upvotes]: Let $(D,\prec) $ be a directed set. Is there a "cofinal chain" in $D $? By chain I mean a $D'\subset D $ such that for all $\lambda,\gamma \in D'$, then $\lambda\prec \gamma $ or $\gamma\prec \lambda $. By cofinal chain I mean a chain $D'$ for which, for every $\gamma\in D $, there is some $\lambda \in D'$ such that $\gamma\prec \lambda $. REPLY [5 votes]: Take $X$ to be your favorite uncountable set. Now take $D$ to be the finite subsets of $X$, ordered by inclusion. Easily, this set is directed. But there are no chains with order type longer than $\omega$, and so no chain can have an uncountable union, and in particular, no chain is cofinal. (If you choose $X$ wisely, then no choice is needed either. And by wisely I just mean not a countable union of finite sets.)
149,055
TITLE: Rolling a Dice Up to S QUESTION [1 upvotes]: Which is the algorithm (the program) to count the number of ways a dice can be rolled until the sum of all outcomes be a given $n$? $n$ is in the order of 1000. I tried with the formula of polynomial (generating function). But I was thinking to do it with a program unaware of any formula. REPLY [1 votes]: These "number of ways" constitute the so-called Hexanacci numbers (https://oeis.org/A001592) $$ 1, 2, 4, 8, 16, 32, 63, 125, 248, 492, 976, 1936, 3840, 7617, 15109, 29970, 59448, 117920,...$$ defined by recurrence relationship: $$\tag{1}a_{n} = \sum_{k=1}^6 a_{n-k} \ \ \text{with} \ \ a_{-5}=a_{-4}=a_{-3}=a_{-2}=a_{-1}=0, \ \text{and} \ a_{0}=1.$$ NB: six artificial values have had to be introduced before the beginning of the "true" sequence with $a_1$. let us consider now an example: let us take $n=4$; there are $a_4=8$ ways to obtain a total of 4: $$\underbrace{(1111, 211, 121, 31)}_{a_3=4}, \underbrace{(112, 22)}_{a_2=2}, \underbrace{(13)}_{a_1=1}, \underbrace{(4)}_{a_0=1}.$$ (we have grouped them according to their last number, either 1,2,3,4, here). The proof of (1) is easy, and can be understood as a generalization of example above. The set $\frak{S}_n$ of sequences of numbers $\{1,2,\cdots 6\}$ whose sum is $n$ can be partitioned into at most 6 (non intersecting) classes $C_1, C_2, ... C_6$, this index being in correspondence with the last digit of the sequence. How such a sequence can be built ? By adding 1 to any of the $a_{n-1}$ sequences summing to $n-1$, 2 to any of the $a_{n-2}$ sequences summing to $n-2$, ... 6 to any of the $a_{n-6}$ sequences summing to $n-6$. (some of these sets being hopefully void) [end of proof]. Recurrence (1) is a very efficient way to obtain these numbers. It is rather easy to prove that sequence $a_n$ (shown in red on the figure) is equivalent to (and bounded from above by) sequence $2^{k-1}$, $k=1\cdots (n-1)$ (in blue), with coincidence for the first 6 terms. 
Here is a Matlab program implementing these ideas, generating a shifted sequence $b$ with $b_{k}=a_{k-6}$ (so that $b_{n+6}=a_n$):

    n = 1000; n1 = n + 6;
    b = zeros(1, n1);
    b(6) = 1;                   % b(6) = a_0 = 1
    for k = 7:n1
        b(k) = sum(b(k-6:k-1)); % recurrence (1)
    end
    b(n1)                       % = a_n

For $n=1000$, one finds $b_{1006}=a_{1000}\approx 1.47 \times 10^{297}$!
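As a cross-check of the Matlab version, here is the same recurrence (1) in Python (the function name `dice_sum_ways` is mine, not from the original post); Python's arbitrary-precision integers give the exact value of $a_{1000}$ rather than a floating-point approximation:

```python
def dice_sum_ways(n):
    """Count ordered sequences of die faces (1..6) summing to n,
    via recurrence (1): a_n = a_{n-1} + ... + a_{n-6}, with a_0 = 1."""
    a = [0] * (n + 1)
    a[0] = 1  # the empty sequence is the unique way to reach sum 0
    for k in range(1, n + 1):
        a[k] = sum(a[k - j] for j in range(1, 7) if k - j >= 0)
    return a[n]

print([dice_sum_ways(k) for k in range(1, 8)])  # [1, 2, 4, 8, 16, 32, 63]
print(dice_sum_ways(1000))  # exact integer, about 1.47e297
```

The list comprehension reproduces the first terms of the Hexanacci sequence quoted above.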
How To Plan The Perfect Weekend Getaway With Your Friends Are you looking for a badly needed pals-cation? A gathering of your champs to overthrow the mundaneness of routine life? Escaping your daily crowd and immersing your mind and body in a calm backdrop? If so, here we are with some effective tips that will help you plan a weekend getaway to shed your mental weariness and revive your spirit. So who’s the Planner? First things first: there is a long list to follow for a perfect weekend getaway, and it’s essential to have an impeccable planner. The good thing is that there is always one person in a group who is super-organized, responsible and gets things done. So your straw boss is responsible for the traveling arrangements, accommodations, feasts, activities, etc. while staying within a defined budget. Give your ideas, let the planner assemble them properly and get going. Gather your people Even an ordinary highway drive or a casual meal becomes memorable when you are with the right people. Choose people for the weekend getaway who will shed your month-long exhaustion and lift your spirit with positive energy. Simply put, the people you love to spend time with and the ones who make you feel good. Prepare your happy list and phone them to be ready for the upcoming adventure. Where to go Choosing a destination the whole crew agrees on is a tough job, especially while staying on budget. Whether it’s a long road trip, mountain hiking or a beach day, a Sydney Harbour boat tour, nature or wildlife, the choice is yours. Still, keep a check on the weather and the holiday season to avoid any unexpected situations. Once the destination is decided, take account of your traveling budget; also, check for special discounts with travel agents and tour operators to get the best deals. Group activities If you are planning with your high school friends, there is a good chance you already know their favorite sports and habits.
Ice skating, mountaineering, hiking or swimming? If they are newbies, find out their interests, follow your group dynamics and plan several activities that ensure everybody has the time of his/her life and nobody feels left out. Give yourself a chance to try and enjoy everything. What to eat? Arrange healthy snacks and prepare a menu that won’t be heavy on the pocket. Ask someone to volunteer to cook pasta, sandwiches or pancakes for the group. Make sure you spend more on enjoying the weekend getaway rather than on feasting. Open Yourself up for Adventure Travelling, like life, can be intricate. Many unforeseen circumstances can come up while wandering. Make sure your crew is not fussy if anything goes against the plan. Our busy lives do not allow us to escape our daily responsibilities now and then. Even a weekend getaway can only be planned once or twice a year. While leaving home for this getaway, keep in mind to have the time of your life and bring back the best of memories with you.
\begin{document} \title[Second Variations for Allen-Cahn Energies and Neumann Eigenvalues] {Asymptotic Behavior of Allen-Cahn Type Energies and Neumann Eigenvalues via Inner Variations} \author{Nam Q. Le} \address{Department of Mathematics, Indiana University, Bloomington, 831 E 3rd St, Bloomington, IN 47405, USA. } \email {nqle@indiana.edu $\qquad$ sternber@indiana.edu} \author{Peter J. Sternberg} \thanks{The research of the first author was supported in part by NSF grants DMS-1500400 and DMS-1764248. The second author was supported by NSF grant DMS-1362879} \subjclass[2000]{49R05, 49J45, 58E30, 49K20, 58E12} \keywords{Allen-Cahn functional, Ohta-Kawasaki functional, inner variations, sharp interface limit, stable hypersurface, Neumann eigenvalue problem} \maketitle \begin{abstract} We use the notion of first and second inner variations as a bridge allowing one to pass to the limit of first and second Gateaux variations for the Allen-Cahn, Cahn-Hilliard and Ohta-Kawasaki energies. Under suitable assumptions, this allows us to show that stability passes to the sharp interface limit, including boundary terms, by considering non-compactly supported velocity and acceleration fields in our variations. This complements the results of Tonegawa, and Tonegawa and Wickramasekera, where interior stability is shown to pass to the limit. As a further application, we prove an asymptotic upper bound on the $k^{th}$ Neumann eigenvalue of the linearization of the Allen-Cahn operator, relating it to the $k^{th}$ Robin eigenvalue of the Jacobi operator, taken with respect to the minimal surface arising as the asymptotic location of the zero set of the Allen-Cahn critical points. We also prove analogous results for eigenvalues of the linearized operators arising in the Cahn-Hilliard and Ohta-Kawasaki settings. These complement the earlier result of the first author where such an asymptotic upper bound is achieved for Dirichlet eigenvalues for the linearized Allen-Cahn operator. 
Our asymptotic upper bound on Allen-Cahn Neumann eigenvalues extends, in one direction, the asymptotic {\it equivalence} of these eigenvalues established in the work of Kowalczyk in the two-dimensional case where the minimal surface is a line segment and specific Allen-Cahn critical points are suitably constructed. \end{abstract} \pagenumbering{arabic} \section{Introduction and Statements of the Main Results} \setcounter{equation}{0} Within the calculus of variations, the second variation is of course a powerful tool in analyzing the nature of critical points. This is in particular the case in the context of energetic models involving double-well potentials perturbed by a gradient penalty term such as the Allen-Cahn or Modica-Mortola, Cahn-Hilliard and Ohta-Kawasaki functionals \cite{AC,OK}. As the scale of interfacial energy approaches zero, these energy functionals all converge, in the sense of $\Gamma$-convergence, to a variety of sharp interface models and there are many studies of critical points associated with these energies or with their $\Gamma$-limits for which the second variation plays a crucial role. Taking a limit of the second variations themselves to obtain the second variation of the $\Gamma$-limit, however, can be problematic and the results in this direction are far fewer. Here, building on the techniques and results found in \cite{Le, Le2}, we carry out this limiting process using the notion of inner variation, to be defined precisely in Section \ref{Var_sec}. The inner variation provides a bridge between the second variations of the so-called diffuse models listed above and those of the sharp interface variational problems arising as their $\Gamma$-limits which tend to involve minimal or constant mean curvature hypersurfaces. For more on $\Gamma$-convergence, we refer to \cite{Braides} or \cite{DM}. Its definition for the Allen-Cahn functional will be briefly recalled in Section \ref{AC_sec}. 
In \cite{Le, Le2}, the first author passes to the limit in second variations of various energies including the Allen-Cahn functional \begin{equation} E_{\varepsilon}(u):=\int_{\Omega}\left(\frac{\varepsilon \abs{\nabla u}^2}{2} +\frac{(1-u^2)^2}{2\varepsilon}\right) dx,\quad u:\Omega\to\R,\;\Omega\subset\R^N \;(N\geq 2),\label{ACintro} \end{equation} in the context of critical points $u_\e$, that is $u_\e$ satisfying $-\e\Delta u_\e + 2\e^{-1} (u_\e^3-u_\e)=0$ in $\Omega$, subject to Dirichlet boundary conditions. This leads, in particular, to an asymptotic upper bound on the Dirichlet eigenvalues, namely \begin{equation} \limsup_{\e\to 0}\frac{\lambda_{\e,k}}{\e}\leq \lambda_k\quad\mbox{for}\;k=1,2,\ldots\label{Devals} \end{equation} where $\lambda_{\e,k}$ denotes the $k^{th}$ Dirichlet eigenvalue of the linearized Allen-Cahn operator \[-\e\Delta+\frac{2}{\e}(3u_\e^2-1), \] subject to zero boundary conditions on $\partial\Omega$ and $\lambda_k$ denotes the $k^{th}$ eigenvalue of the Jacobi operator $-\Delta_{\Gamma}-\abs{A_\Gamma}^2$ associated with a minimal surface $\Gamma$ subject to zero boundary conditions on $\partial\Gamma$. Here $\Gamma$ denotes the asymptotic location of the interfacial layer bridging $\{u_\e\approx 1\}$ and $\{u_\e\approx -1\}$ and $A_\Gamma$ denotes the associated second fundamental form. This particular result in \cite{Le2} (see Corollary 1.1 there) has been recently extended to the closed Riemannian setting in \cite{Ga} by Gaspar who also relaxed the multiplicity 1 assumption in \cite{Le2}; see also Hiesmayr \cite{Hi} for related results. 
Related to such results on the Dirichlet problem is the elegant work in \cite{Tonegawa,TW}, where the authors show within the context of varifolds that when stable critical points of the Allen-Cahn functional converge to a limit, the limiting interface is stable with respect to interior perturbations; moreover, the limiting interface is smooth in dimensions $N\leq 7$ while its singular set (if any) has Hausdorff dimension at most $N-8$ in dimensions $N>7$. We would like to emphasize that the convergence and regularity results in \cite{Tonegawa, TW} rely on an important interior convergence result for the Allen-Cahn equation from the work of Hutchinson-Tonegawa \cite{HT} and a deep interior regularity theory for stable codimension 1 integral varifolds from the work of Wickramasekera \cite{W}. At present, to the best of our knowledge, there are no boundary analogues for the above results. In this article we extend the techniques of \cite{Le, Le2} in three directions: we allow for a mass constraint so as to cover not just the Allen-Cahn context but also Cahn-Hilliard, we allow for perturbation by a nonlocal term as arises in the Ohta-Kawasaki functional, \eqref{OKintro}, and most crucially, we consider non-compactly supported variations of domain in taking inner variations, allowing us to capture boundary effects in passing to the limit in the case of Neumann boundary conditions in all of these problems. Regarding this last extension, we point out that the ``natural'' Neumann boundary conditions satisfied by critical points in all of these models are {\it not} the boundary conditions associated with the limit. Rather, for example, in the case of Allen-Cahn energy, the analogue of the result \eqref{Devals} from \cite{Le2} is that \eqref{Devals} holds for $\lambda_{\e,k}$ associated with homogeneous Neumann boundary conditions but for $\lambda_k$ associated with Robin boundary conditions, cf. \eqref{Robinintro}. 
For two-dimensional Allen-Cahn, this shift from Neumann for $\e>0$ to Robin in the limit is examined in detail by Kowalczyk in \cite{K} where it is shown that $$\lim_{\e\to 0}\frac{\lambda_{\e,k}}{\e}= \lambda_k$$ for a carefully constructed sequence of Neumann critical points $\{u_\e\}$ and so for that problem our results represent a one-sided generalization to a more general class of critical points and to arbitrary dimensions. In the next section we will give a precise definition of first and second inner variations while reviewing the more standard notion of first and second Gateaux variations. Roughly speaking, though, the difficulty in transitioning from the second Gateaux variation $d^2 E_{\e}(u_\e,\varphi)$ of a functional like $E_\e$ in \eqref{ACintro} to that of its limit, say $E(\Gamma)$, which is essentially area or $(N-1)$-dimensional Hausdorff measure $\mathcal{H}^{N-1}(\Gamma)$, is that the former is computed by taking the second $t$-derivative of $E_{\e} (u_\e+t\varphi)$ evaluated at $t=0$ where $\varphi$ is a scalar function, while the latter comes from taking the second $t$-derivative of $\mathcal{H}^{N-1}\big(\Phi_t (\Gamma)\big)$ evaluated at $t=0$ where $\Phi_t$ is a deformation of the identity map of the form \[ \Phi_t(x)\sim x+t\eta(x)+\frac{t^2}{2} \zeta(x) \] for some velocity and acceleration vector fields $\eta$ and $\zeta$ mapping $\R^N\to\R^N$. A successful passage from one of these variations to the other, however, should be computed by similar methods. Bridging these two disparate notions is the inner variation. Indeed, if we view $\Gamma$ as the asymptotic location of the zero level set of $u_\e$, and if we view $\Phi_t$ as a deformation not just of $\Gamma$ but of all points in $\R^N$, then $\Phi_t(\Gamma)$ corresponds to the limit of the zero level of $u_\e(\Phi^{-1}_t(x))$. Thus, we might be led to compute the first and second $t$-derivatives of $E_\e(u_\e(\Phi^{-1}_t(x)))$, and these are precisely the inner variations.
Then relating these quantities to the more standard first and second Gateaux variations becomes one of our first tasks. Differently put, inner variation allows us to more directly compare the energy landscapes of diffuse models and their sharp interface limits. In the present paper we carry out this explicit bridging for the Allen-Cahn functional as well as its nonlocal counterpart, the Ohta-Kawasaki functional, where the limiting object is a hypersurface, but we would like to point out that examples of this bridging via inner variations already exist in the literature, especially in the Ginzburg-Landau setting, where limiting objects are instead finite sets of points in planar domains, namely Ginzburg-Landau vortices. This includes Serfaty's stability analysis in \cite{Serfaty}, as well as \cite{SS1} (see also \cite{Serfaty2}), where Sandier and Serfaty introduce a powerful $\Gamma$-convergence of gradient flows scheme in which they identify certain energetic conditions between the $\Gamma$-converging functionals and their $\Gamma$-limits that guarantee convergence of their corresponding gradient flows. When applied to Ginzburg-Landau vortices which lie in the interior of the planar domain sample, the verification of one of the two key sufficient conditions is done by a constructive argument using inner variations with compactly supported vector fields; see \cite[equation (3.27)]{SS1}. For boundary vortices in thin magnetic films, this verification is carried out by Kurzke \cite{Kurzke} using inner variations with non-compactly supported vector fields; see \cite[Theorem 6.1]{Kurzke}. Along with giving the definitions of first and second inner variations, and reviewing the definitions of Gateaux variations, establishing this relationship between the two notions of variation is the content of Section \ref{Var_sec}. In Section 3 we pass to the limit in the inner variations of the Allen-Cahn functional; see Theorem \ref{thm-AC2}.
The proof relies crucially on a convergence result of Reshetnyak \cite{Res} stated in a convenient form from Spector \cite{Sp} in Theorem \ref{Sp_thm}. In Section 4 we present two applications of Theorem \ref{thm-AC2}. The first, Theorem \ref{ACstab}, shows that under suitable regularity hypotheses on the limiting interface, stability of Allen-Cahn critical points passes to the limit. Thus, in the limit we recover the second variation formula including boundary terms derived in \cite{SZ2}. The second is the previously alluded to generalization of \eqref{Devals} to the Neumann setting which we state here as our first main result: \begin{thm}[Upper semicontinuity of the Allen-Cahn Neumann eigenvalues] \label{eigen_thm} Let $\Omega$ be an open smooth bounded domain in $\RR^{N}$ ($N\geq 2$). Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of critical points of the Allen-Cahn functional \eqref{ACintro} that converges in $L^1(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma:=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Assume that $\lim_{\varepsilon \rightarrow 0} E_{\varepsilon}(u_{\varepsilon}) = \frac{4}{3}\mathcal{H}^{N-1}(\Gamma)$, and assume that $\Gamma$ is connected. Let $\lambda_{\e, k}$ be the $k^{th}$ eigenvalue of the operator $-\e \Delta + 2\e^{-1}(3u_{\e}^2-1)$ in $\Omega$ with zero Neumann condition on $\partial\Omega$. Let $\lambda_k$ and $\varphi^{(k)}:\overline{\Gamma}\to\R$ be the $k^{th}$ eigenvalue and eigenfunction of the operator $-\Delta_{\Gamma} - |A_{\Gamma}|^2$ in $\Gamma$ subject to Robin boundary conditions on $\partial\Gamma \cap\partial\Omega$, namely \begin{equation} \label{Robinintro} \left\{ \begin{alignedat}{2} (-\Delta_{\Gamma} - |A_{\Gamma}|^2)\varphi^{(k)} &=\lambda_k \varphi^{(k)}~&&\text{in} ~\Gamma, \\\ \frac{\partial\varphi^{(k)}}{\p {\bf n}}+A_{\partial\Omega}({\bf n}, {\bf n}) \varphi^{(k)} &=0\h~&&\text{on}~\partial\Gamma\cap\partial\Omega. 
\end{alignedat} \right. \end{equation} Here ${\bf n} = (n_{1},\cdots,n_{N})$ denotes the unit normal to $\Gamma$ pointing out of the region $\{x\in\Omega:\,u_0(x)=1\}$ and $A_{\Gamma}$ and $A_{\p\Omega}$ denote the second fundamental forms of $\Gamma$ and $\p\Omega$, respectively. Then \[ \limsup_{\e\rightarrow 0}\frac{\lambda_{\e, k}}{\e}\leq \lambda_k. \] \end{thm} The proof of Theorem \ref{eigen_thm} will be given in Section 4. \vglue 0.1cm \noindent We mention that when $\Gamma$ is a minimal hypersurface satisfying certain nondegeneracy conditions, Pacard and Ritor\'{e} \cite{PR} construct critical points $u_{\e}$ of $E_{\e}$ whose zero level sets converge to $\Gamma$ and the limit $\lim_{\varepsilon \rightarrow 0} E_{\varepsilon}(u_{\varepsilon}) = \frac{4}{3}\mathcal{H}^{N-1}(\Gamma)$ holds. Thus, Theorem \ref{eigen_thm} applies in particular to this case. Also we should say that we do not know whether there are contexts beyond the previously mentioned planar result in \cite{K} where asymptotic {\it equality} holds rather than just inequality. In Sections \ref{B_sec} and \ref{OK_sec} we extend our study to the Ohta-Kawasaki functional which involves a nonlocal term: \begin{equation} \label{OKintro} \mathcal{E}_{\e,\gamma} (u) =\int_{\Omega}\left(\frac{\varepsilon \abs{\nabla u}^2}{2} +\frac{(1-u^2)^2}{2\varepsilon}\right) dx +\frac{4}{3}\gamma\int_{\Omega}\int_{\Omega} G(x,y)u(x) u(y) dx dy \end{equation} where $\gamma\geq 0$ is a fixed constant and $G(x,y)$ is the Green's function for $\Omega$ satisfying $$-\Delta G =\delta -\frac{1}{|\Omega|} ~\text{on } \Omega$$ with Neumann boundary condition. We associate to each $u\in L^2(\Omega)$ a function $v\in W^{2,2}(\Omega)$, denoted by $(-\Delta)^{-1} u$, as the solution to the following Poisson equation with Neumann boundary condition: $$ -\Delta v = u-\frac{1}{|\Omega|}\int_{\Omega} u dx~\text{in}~\Omega, \frac{\partial v}{\partial \nu}=0~\text{on}~\partial\Omega, ~\int_{\Omega} v(x) dx=0. 
$$ Note that $$(-\Delta)^{-1} u = \int_{\Omega} G(x, y) u(y) dy.$$ Let us denote the second inner variation of $\mathcal{E}_{\e, \gamma}$ at $u_\e$ with respect to $C^3(\overline{\Omega})$ vector fields $\eta,\zeta$ by $$\delta^{2} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta, \zeta):= \left.\frac{d^2}{dt^2}\right\rvert_{t=0} \mathcal{E}_{\e,\gamma}\left(u_\e\circ (I +t\eta + \frac{t^2}{2}\zeta)^{-1}\right).$$ A more comprehensive analysis concerning inner variations will be presented in Section \ref{Var_sec}. Our second main result is summarized in the following theorem. \begin{thm}[Stability of Ohta-Kawasaki passes to the limit; upper semicontinuity of Ohta-Kawasaki eigenvalues] \label{OK2} Let $\Omega$ be an open smooth bounded domain in $\RR^{N}$ ($N\geq 2$). Let $\gamma\geq 0$. Fix $m\in (-1,1)$. Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of critical points of the Ohta-Kawasaki functional (\ref{OKintro}) subject to the mass constraint $ \frac{1}{|\Omega|} \int_{\Omega} u\,dx = m $ that converges in $L^2(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Assume that $$\frac{3}{4}\lim_{\varepsilon \rightarrow 0} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}) = \mathcal{E}_{\gamma}(\Gamma):= \mathcal{H}^{N-1}(\Gamma) + \gamma \int_{\Omega}\int_{\Omega} G(x,y)u_0(x) u_0(y) dx dy.$$ Let $v_0(x)= \int_{\Omega} G(x, y) u_0(y) dy$. 
For any smooth function $\xi:\overline{\Omega}\rightarrow\RR$, we denote \begin{multline*} \delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi):=\int_{\Gamma} \left(|\nabla_{\Gamma}\xi|^2 -|A_{\Gamma}|^2\xi^2\right)\, d\mathcal{H}^{N-1} - \int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})\xi^2 \,d\mathcal{H}^{N-2}\nonumber \\ + 8\gamma\int_{\Gamma}\int_{\Gamma} G(x, y) \xi(x)\xi(y) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y) + 4 \gamma\int_{\Gamma} (\nabla v_0\cdot {\bf n}) \xi^2 d \mathcal{H}^{N-1} (x). \end{multline*} Here ${\bf n} = (n_{1},\cdots,n_{N})$ denotes the unit normal to $\Gamma$ pointing out of the region $\{x\in\Omega:\,u_0(x)=1\}$. Then, the following conclusions hold: \begin{myindentpar}{1cm} (i) There is a constant $\lambda$ such that $(N-1) H + 4 \gamma v_0 =\lambda$ on $\Gamma$ where $H$ is the mean curvature of $\Gamma$. Moreover, $\p\Gamma$ must meet $\p\Omega$ orthogonally (if at all).\\ (ii) Let $\xi:\overline{\Omega}\rightarrow\RR$ be any smooth function satisfying $ \int_{\Gamma} \xi(x) d\mathcal{H}^{N-1}(x)=0. $ Then, for all smooth vector fields $\eta\in (C^{3}(\overline{\Omega}))^{N}$ with $\eta=\xi {\bf n}$ on $\Gamma$, $\eta\cdot \nu=0$ on $\partial\Omega$, $({\bf n},{\bf n}\cdot\nabla\eta) =0$ on $\Gamma$ and for $W:= (\eta \cdot\nabla) \eta-(\div \eta)\eta$, we have \begin{align}\frac{3}{4}\lim_{\varepsilon\rightarrow 0}\delta^{2} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta, W) = \delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi). \label{secst} \end{align} (iii) If $\{u_\e\}$ are stable critical points of $\mathcal{E}_{\e,\gamma}$ with respect to the mass constraint $\frac{1}{|\Omega|} \int_{\Omega} u\, dx = m$, then for every smooth function $\xi:\overline{\Omega}\rightarrow\RR$ satisfying $ \int_{\Gamma} \xi(x) d\mathcal{H}^{N-1}(x)=0, $ we have $$\delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi)\geq 0.$$ (iv) Assume that $\Gamma$ is connected.
Let $\lambda_{\e, \gamma, k}$ be the $k^{th}$ eigenvalue of the operator $-\e \Delta + 2\e^{-1}(3u_{\e}^2-1) +\frac{8}{3} \gamma(-\Delta)^{-1}$ in $\Omega$ with zero Neumann condition on $\partial\Omega$. Let $\lambda_{\gamma, k}$ and $\varphi^{(\gamma, k)}:\overline{\Gamma}\to\R$ be the $k^{th}$ eigenvalue and eigenfunction of the operator $-\Delta_{\Gamma} - |A_{\Gamma}|^2 + 8\gamma (-\Delta)^{-1}(\chi_{\Gamma})+ 4\gamma (\nabla v_0\cdot {\bf n})$ in $\Gamma$ subject to Robin boundary conditions on $\partial\Gamma \cap\partial\Omega$, namely \begin{equation*} \small \left\{ \begin{alignedat}{2} \left(-\Delta_{\Gamma} - |A_{\Gamma}|^2 + 4\gamma (\nabla v_0\cdot {\bf n})\right)\varphi^{(\gamma, k)} (x) + 8\gamma \int_{\Gamma} G(x, y) \varphi^{(\gamma, k)} (y)d \mathcal{H}^{N-1} (y) &=\lambda_{\gamma, k}\varphi^{(\gamma, k)}(x)~&&\text{in} ~\Gamma, \\\ \frac{\partial\varphi^{(\gamma, k)}}{\p {\bf n}}+A_{\partial\Omega}({\bf n}, {\bf n}) \varphi^{(\gamma, k)} &=0\h~&&\text{on}~\p \Gamma\cap\partial\Omega. \end{alignedat} \right. \end{equation*} Then \[ \limsup_{\e\rightarrow 0}\frac{\lambda_{\e, \gamma, k}}{\e}\leq \lambda_{\gamma, k}. \] (v) The conclusion in (iv) also holds if in the above eigenvalue problems we replace the homogeneous Neumann conditions and Robin boundary conditions by homogeneous Dirichlet boundary conditions. \end{myindentpar} \end{thm} The proof of Theorem \ref{OK2} will be given in Section \ref{OK_sec}. \vglue 0.1cm \noindent Item (i) in Theorem \ref{OK2} above is just the condition of criticality for the limiting functional $\mathcal{E}_\gamma$ while the right-hand side of \eqref{secst}, that is $\delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi)$, is its second variation (see \cite[Theorems 2.3 and 2.6]{CS}), so item (iii) of the theorem asserts that stability is passed to the limiting interface. 
A special case of Theorem \ref{OK2} (iv) where $\gamma=0$ is an extension of our Theorem \ref{eigen_thm} on the Allen-Cahn functional to the mass-constrained Cahn-Hilliard setting. We should say that throughout this article we have not sought to present results under weakest possible regularity assumptions on the limiting interface. Adapting results to the situation where the limiting interface possesses a low-dimensional singular set should be feasible. \subsection*{Notation} Throughout, $\Omega$ is an open, smooth, bounded domain in $\RR^{N}$ ($N\geq 2$). We let $\nu$ be the outer unit normal to $\p\Omega$. For any Lebesgue measurable subset $S\subset \R^N$, we use $|S|$ to denote its $N$-dimensional Lebesgue measure. If $F: \RR\times \RR^N\rightarrow \RR$ is a smooth function then we will write $F= F(z, {\bf p})$ for $z\in \RR$ and ${\bf p} =(p_1, \cdots, p_N)\in \RR^N$ and we will set $\nabla_{\bf p} F = (F_{p_1}, \cdots, F_{p_N}).$ If $\eta:\Omega\rightarrow \RR^N$ is a vector field, then we write $\eta = (\eta^1,\cdots,\eta^N)$. If $\eta\in (C^1 (\Omega))^N$, we define a new vector field $Z:=(\eta\cdot \nabla ) \eta $ whose $i$-th component is $ Z^i = \frac{\partial \eta^i}{\partial x_j}\eta^j, $ invoking the summation convention on repeated indices. We use $(\nabla \eta)^2$ to denote the matrix whose $(i, k)$ entry is $ \frac{\p \eta^i}{\p x_j} \frac{\p \eta^j}{\p x_k}, $ and we use $(\cdot, \cdot)$ to denote the standard inner product in $\RR^N$. When a differentiable function, say $\phi$, is scalar-valued so that there is no room for confusion, we write $\phi_i=\frac{\p \phi}{\p x_i}$. \section{The Relationship Between Gateaux and Inner Variations} \label{Var_sec} In this section, we first review the definitions of Gateaux variations, then give the definitions of first and second inner variations, and finally establish the relationship between the two notions of variation. 
The typical functionals we consider are of the form \begin{equation} A(u):= \int_{\Omega} F(u(x), \nabla u(x)) dx\label{basic} \end{equation} where $u\in C^3(\overline{\Omega})$ and $F: \RR\times \RR^N\rightarrow \RR$ is a smooth function. We mention that in this paper, for ease of presentation, we state results under very generous regularity conditions on the functions and functionals involved. No doubt many of these smoothness assumptions could be relaxed. \subsection{Gateaux variations and inner variations} We recall that the first and second Gateaux variations of $A$ at $u\in C^{3}(\Omega)$ with respect to $\varphi \in C^{3}(\Omega)$, denoted here by $dA(u,\varphi)$ and $d^2 A(u,\varphi)$ respectively, are defined by $$ dA(u,\varphi):= \left.\frac{d}{dt}\right\rvert_{t=0} A(u + t\varphi),\quad d^{2} A(u,\varphi) := \left.\frac{d^2}{dt^2}\right\rvert_{t=0} A(u + t\varphi);$$ see, for example, \cite[Chapter 1]{Wi}. On the other hand, a distinct notion of variation is that of inner variation, usually taken with respect to compactly supported vector fields, see e.g. \cite[pp. 283-293 of Section 3.1.1]{GMS}. It has been used in several contexts, for example, in the study of weakly Noether harmonic maps \cite[Section 1.4.2]{H}, in the investigation of the asymptotics for solutions of the Ginzburg-Landau system \cite[Chapter 13]{SS2}, and also in second order asymptotic limits in phase transitions \cite{Le, Le2}, to name a few. Most closely related to the subject of this paper are the works \cite{Le, Le2} where the first author studies the Morse index and upper semicontinuity of eigenvalue problems in phase transitions when Dirichlet boundary conditions are enforced. Inspired by the case of compactly supported vector fields, we define below the concept of inner variations with respect to general, that is, not necessarily compactly supported, vector fields, in order to examine the corresponding asymptotics of Neumann eigenvalues. 
To this end, consider any smooth vector field $\eta\in (C^{3}(\overline{\Omega}))^{N}$ and associated with it, suppose that we have a $t$-dependent map $\Phi_t$ with the property that \begin{equation} \label{map-deform1} \Phi_{t} (x) = x + t\eta(x) + O(t^2). \end{equation} In this paper, by $O(t^k)$ $(k\leq 3)$, we mean any quantity $Q(x,t)$ such that it is $C^3$ in the variables $x$ and $t$ and furthermore $\abs{Q(x,t)}/\abs{t}^k$ is uniformly bounded in $\overline{\Omega}$ when $|t|$ is small. For $\abs{t}$ sufficiently small, the map $\Phi_t$ is a diffeomorphism of $\RR^N$ onto itself and thus we can define its inverse map $\Phi_t^{-1}$. We then define the first inner variation of $A$ at $u$ with respect to the velocity vector field $\eta$ by \begin{equation} \label{inner1defn} \delta A(u,\eta) := \left.\frac{d}{dt}\right\rvert_{t=0} A(u\circ\Phi_t^{-1}). \end{equation} Now if in addition to $\eta$ we consider a second smooth vector field $\zeta\in (C^{3}(\overline{\Omega}))^{N}$ and if the diffeomorphism $\Phi_t(x)$ satisfies \begin{equation} \label{map-deform2} \Phi_{t} (x) = x + t\eta(x) + \frac{t^2}{2}\zeta(x)+O(t^3), \end{equation} then we define the second inner variation of $A$ at $u$ with respect to the velocity vector field $\eta$ and acceleration vector field $\zeta$ by \begin{equation} \delta^{2} A(u,\eta,\zeta) := \left.\frac{d^2}{dt^2}\right\rvert_{t=0} A(u\circ\Phi_t^{-1}).\label{innerdefn2} \end{equation} We note that $\Phi_t^{-1}$ does not map $\Omega$ to $\Omega$ in general. Thus, in calculating inner variations, we implicitly extend $u$ to be a smooth function on a neighborhood of $\overline{\Omega}$. The calculations show that the inner variations do not depend on these extensions. \begin{rem} In the above definitions of variations, we do not use any particular form of $A$. Thus, they apply equally to local functionals of the form (\ref{basic}) and nonlocal functionals of the form (\ref{B_def}) in Section \ref{B_sec}. 
\end{rem} The goal of the next subsection is to calculate the above variations and to explore their relationship. \subsection{Calculation and relationship between variations} Let $A$ be as in (\ref{basic}). Carrying out the standard computation of $\left.\frac{d}{dt}\right\rvert_{t=0} A(u + t\varphi)$ and $\left.\frac{d^2}{dt^2}\right\rvert_{t=0} A(u + t\varphi)$ for $u$ and $\varphi$ in $C^1(\Omega)$, we obtain the well-known formulas for the first and second Gateaux variations: \begin{eqnarray} \label{fveq} dA(u,\varphi)=\left.\frac{d}{dt}\right\rvert_{t=0} A(u + t\varphi)=\int_{\Omega}\left(F_{z}\varphi + F_{p_i}\varphi_i\right) dx \end{eqnarray} and \begin{equation}\label{sveq} d^2 A(u,\varphi)=\left.\frac{d^2}{dt^2}\right\rvert_{t=0} A(u + t\varphi)=\int_{\Omega}\left( F_{zz}\varphi^2 + 2 F_{z p_i} \varphi\varphi_i + F_{p_i p_j}\varphi_i \varphi_j\right) dx, \end{equation} where in these formulae all derivatives of $F$ are evaluated at $z=u$ and ${\bf p}=\nabla u.$ We turn now to the calculation of inner variations. In the following lemmas, we establish two different formulas for the inner variations of the functional $A$. The first is more general and is obtained via direct calculation. The second we prove via a change of variables. These formulas will be used in our proof of the asymptotic upper bound for Allen-Cahn Neumann eigenvalues. \begin{lemma} [Inner variations via direct calculation] \label{SIV_direct} Let $A$ be as in (\ref{basic}). Assume that $u\in C^3(\overline{\Omega})$. Let $\eta, \zeta\in (C^{3}(\overline{\Omega}))^{N}$. The first inner variation of $A$ at $u$ with respect to $\eta$ is given by \[ \delta A(u,\eta)= \int_{\Omega} \left[F_z (-\nabla u\cdot\eta ) + F_{p_i} \Big(\frac{\partial}{\partial x_i}(-\nabla u\cdot \eta)\Big)\right] dx.
\] The second inner variation of $A$ at $u$ with respect to $\eta$ and $\zeta$ is \begin{multline*} \delta^2 A(u,\eta,\zeta) =\int_{\Omega} \left[F_{zz} (\nabla u\cdot \eta)^2 + 2 F_{zp_i} (\nabla u\cdot\eta)\frac{\partial}{\partial x_i}(\nabla u\cdot \eta) + F_{p_i p_j} \frac{\partial}{\partial x_i} (\nabla u \cdot \eta)\frac{\partial}{\partial x_j}(\nabla u \cdot \eta) \right] dx\\ + \int_{\Omega} \left[F_{z} X_0 + F_{p_i} \frac{\partial}{\partial x_i}X_0\right] dx, \end{multline*} where $X_0$ is given by \begin{equation} \label{Xzero} X_0:= (D^2 u \cdot \eta, \eta) + (\nabla u, 2(\eta\cdot \nabla ) \eta -\zeta). \end{equation} \end{lemma} In view of \eqref{fveq}, it then immediately follows that: \begin{cor} \label{inner_rem} Let $A$ be as in (\ref{basic}). If $u\in C^3(\overline{\Omega})$ and $\eta, \zeta\in (C^{3}(\overline{\Omega}))^{N}$, then one has \begin{eqnarray} &&\delta A(u,\eta)= dA (u, -\nabla u\cdot \eta),\label{first}\\ &&\delta^2 A(u,\eta,\zeta)= d^2A (u, -\nabla u\cdot \eta) + dA (u, X_0), \label{secondgen} \end{eqnarray} and if $u$ is a critical point of $A$, that is, if $dA (u,\varphi)=0$ for all $\varphi \in C^3 (\overline{\Omega})$, then $\delta^2 A(u,\eta,\zeta)$ is independent of $\zeta$. Moreover, in this case, \begin{equation} \delta^2 A(u,\eta,\zeta)= d^2A (u, -\nabla u\cdot \eta)\quad\mbox{for all }\eta, \zeta\in (C^{3}(\overline{\Omega}))^{N}.\label{seconde} \end{equation} \end{cor} \begin{lemma}[Inner variations for velocity vector fields tangent to the domain boundary] \label{SIV_ch} Let $A$ be as in (\ref{basic}). Assume that $u\in C^3(\overline{\Omega})$. Suppose that $\eta\in (C^{3}(\overline{\Omega}))^{N}$ where $\eta\cdot \nu=0$ on $\p\Omega$. The first inner variation of $A$ at $u$ with respect to $\eta$ is \begin{equation*} \delta A(u,\eta)= \int_{\Omega}\left\{ F \div \eta - (\nabla_{\bf p} F, \nabla u\cdot \nabla\eta) \right\} dx.
\end{equation*} The second inner variation of $A$ at $u$ with respect to $\eta$ and $Z:= (\eta\cdot\nabla)\eta$ is \begin{multline*} \delta^2 A(u,\eta, Z)= \int_{\Omega}\left\{ FX - 2 (\nabla_{\bf p} F, \nabla u\cdot \nabla\eta) \,{\rm div} \,\eta - 2 (\nabla_{\bf p} F, Y) + F_{p_i p_j}(\nabla u\cdot\nabla \eta)^{i}(\nabla u\cdot\nabla \eta)^{j}\right\} dx, \end{multline*} where \begin{equation} \label{XY_eq} X:= {\rm div}\, Z + ({\rm div}\, \eta)^2 -{\rm trace}\,\left((\nabla\eta)^2\right),\qquad Y:= \frac{1}{2} \nabla u\cdot\nabla Z-(\nabla \eta)^2\cdot \nabla u. \end{equation} \end{lemma} \begin{rem} \label{weird} The formula in Lemma \ref{SIV_ch} treats the special case of the general second inner variation $\delta^2 A(u,\eta,\zeta)$ in which $\eta\cdot \nu=0$ on $\p\Omega$ and $\zeta= (\eta\cdot\nabla)\eta$. Consequently, imposing this boundary condition on $\eta$ and making this choice of $\zeta$ in the formula for $\delta^2 A(u,\eta,\zeta)$ given in Lemma \ref{SIV_direct} must yield an expression equivalent to the one in Lemma \ref{SIV_ch}. We note, however, that it does not seem easy to verify this equivalence directly. \end{rem} \begin{rem} \label{rederive} We would like to point out that the formulae for inner variations in Lemmas \ref{SIV_direct} and \ref{SIV_ch} already appeared in the proof of \cite[Proposition 2.1]{Le2} for compactly supported vector fields $\eta$ and $\zeta$. The proof of Lemma \ref{SIV_direct} here follows the same line of argument as in \cite{Le2}. Since it is short, and to avoid confusion when adapting to our general vector fields, we include it for the reader's convenience. The proof of Lemma \ref{SIV_ch} is somewhat different, utilizing the ODE \eqref{ode} to build the diffeomorphism of $\Omega$. \end{rem} The rest of this section will be devoted to proving Lemmas \ref{SIV_direct} and \ref{SIV_ch}.
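\begin{rem} As a simple illustration of the preceding formulas, take $N=1$, $\Omega=(0,1)$ and $F(z,p)=\frac{1}{2}p^2$, so that $A(u)=\int_0^1 \frac{1}{2}|u'|^2\, dx$, and let $\eta\in C^3([0,1])$ with $\eta(0)=\eta(1)=0$. Lemma \ref{SIV_ch} gives \[ \delta A(u,\eta)=\int_0^1 \left(\frac{1}{2}|u'|^2\eta' - |u'|^2 \eta'\right) dx = -\frac{1}{2}\int_0^1 |u'|^2\eta'\, dx, \] while Lemma \ref{SIV_direct} gives $\delta A(u,\eta)= dA(u, -u'\eta)=-\int_0^1 u'(u'\eta)'\, dx = -\int_0^1 \left( \left(\frac{1}{2}|u'|^2\right)'\eta + |u'|^2\eta'\right) dx$; an integration by parts, using $\eta(0)=\eta(1)=0$, shows that the two expressions agree. In particular, when $u$ is affine, and hence a critical point of $A$, both expressions reduce to $-\frac{1}{2}|u'|^2\left(\eta(1)-\eta(0)\right)=0$. \end{rem}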
\begin{proof}[Proof of Lemma \ref{SIV_direct}] Let $u_t(y) = u(\Phi_t^{-1}(y))$ where $$\Phi_{t} (x) = x + t\eta(x) + \frac{t^2}{2}\zeta(x).$$ The computation is based on the following expansion (see \cite[equation (2.16)]{Le2}) \begin{equation}u_t(y) = u (y) -t\nabla u \cdot \eta + \frac{t^2}{2} X_0 + O(t^3). \label{ut_expand} \end{equation} We observe, using (\ref{inner1defn}), (\ref{innerdefn2}) and (\ref{ut_expand}), that the first and second inner variations are equal to the first and second derivatives, respectively, of the following function at $0$: $$\mathcal{A}_{1}(t) = \int_{\Omega} F(u-t \nabla u\cdot\eta + \frac{t^2}{2}X_0, \nabla u - t\nabla (\nabla u\cdot\eta) +\frac{t^2}{2}\nabla X_0) dx.$$ We compute $$\mathcal{A}_{1}^{'}(t) =\int_{\Omega}\left[F_z (-\nabla u\cdot\eta + t X_0)-F_{p_i} (\frac{\partial}{\partial x_i}(\nabla u\cdot \eta)- t \frac{\partial}{\partial x_i} X_0)\right]dx$$ and \begin{multline*}\mathcal{A}_{1}^{''}(t) = \int_{\Omega} \left[F_{zz} (-\nabla u\cdot\eta + t X_0)^2-2F_{zp_i} (\frac{\partial}{\partial x_i}(\nabla u\cdot \eta)- t \frac{\partial}{\partial x_i} X_0)(-\nabla u\cdot\eta + t X_0) \right]dx\\+ \int_{\Omega}\left[F_{p_i p_j} (\frac{\partial}{\partial x_i}(\nabla u\cdot \eta)- t \frac{\partial}{\partial x_i} X_0)(\frac{\partial}{\partial x_j}(\nabla u\cdot \eta)- t \frac{\partial}{\partial x_j} X_0)\right]dx+ \int_{\Omega} \left[F_{z} X_0 + F_{p_i} \frac{\partial}{\partial x_i} X_0\right]dx. \end{multline*} It follows that \begin{equation*} \delta A(u,\eta)= \mathcal{A}_{1}^{'}(0)= \int_{\Omega} \left[F_z (-\nabla u\cdot\eta ) + F_{p_i} (\frac{\partial}{\partial x_i}(-\nabla u\cdot \eta))\right] dx \end{equation*} and \begin{multline*} \delta^2 A(u,\eta,\zeta) =\mathcal{A}_{1}^{''}(0) =\int_{\Omega} \left[F_{zz} (\nabla u\cdot \eta)^2 + 2 F_{zp_i} (\nabla u\cdot\eta)\frac{\partial}{\partial x_i}(\nabla u\cdot \eta)\right] dx\\+ \int_{\Omega}\left[F_{p_i p_j} \frac{\partial}{\partial x_i} (\nabla u \cdot \eta)\frac{\partial}{\partial x_j}(\nabla u \cdot \eta)\right] dx+ \int_{\Omega} \left[F_{z} X_0 + F_{p_i} \frac{\partial}{\partial x_i}X_0\right]dx.
\end{multline*} \end{proof} \begin{proof}[Proof of Lemma \ref{SIV_ch}] Suppose that $\eta\in (C^{3}(\overline{\Omega}))^{N}$ where $\eta\cdot \nu=0$ on $\p\Omega$. Then for $\tau>0$ small, we let $\Psi: \Omega\times (-\tau,\tau)\rightarrow\Omega$ denote the unique solution to the following system of ordinary differential equations \begin{equation} \frac{\p \Psi}{\p t}(x, t)= \eta (\Psi(x, t)), \quad\Psi(x, 0)= x. \label{ode} \end{equation} Then we have the expansion \begin{equation}\Psi(x, t)= x + t\eta (x)+ \frac{t^2}{2} Z(x) + O(t^3)\quad\mbox{where}\quad Z:=(\eta\cdot\nabla )\eta. \label{Psi_exp} \end{equation} Letting $\Phi_t(x) := \Psi(x, t)$, we observe that for all $t$ such that $\abs{t}<\tau$, the mapping $x\mapsto\Psi(x,t)$ is a diffeomorphism of $\Omega$ onto itself, thanks to the tangency of $\eta$ along the boundary. From \eqref{innerdefn2} and \eqref{Psi_exp} we have \[\delta^{2} A(u,\eta,Z) = \left.\frac{d^2}{dt^2}\right\rvert_{t=0} A(u\circ\Phi_t^{-1})= \left.\frac{d^2}{dt^2}\right\rvert_{t=0} A(u_t),\quad\text{and } \quad\delta A(u,\eta)= \left.\frac{d}{dt}\right\rvert_{t=0} A(u_t)\] where $u_t(y) := u(\Phi_t^{-1}(y)).$ By the change of variables $y=\Phi_{t}(x)$ and using $\Phi_t^{-1}(\Omega)=\Omega$, we have \begin{eqnarray} A(u_{t}) &=& \int_{\Phi_t^{-1}(\Omega)} F(u(x),\nabla u\cdot\nabla \Phi_{t}^{-1}(\Phi_{t}(x))) \abs{\text{det}\nabla \Phi_{t}(x)}dx\nonumber \\ &=& \int_{\Omega} F(u(x),\nabla u\cdot\nabla \Phi_{t}^{-1}(\Phi_{t}(x))) \abs{\text{det}\nabla \Phi_{t}(x)}dx. \label{rewrite_E} \end{eqnarray} We need to expand the right-hand side of the above formula up to the second power in $t$.
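Before doing so, we note for the reader's convenience why the expansion \eqref{Psi_exp} holds: by \eqref{ode} we have $\Psi(x,0)=x$ and $\frac{\p \Psi}{\p t}(x,0)=\eta(x)$, while differentiating \eqref{ode} in $t$ gives \[ \frac{\p^2 \Psi}{\p t^2}(x,t)= \nabla\eta(\Psi(x,t))\cdot \frac{\p \Psi}{\p t}(x,t)= \nabla\eta(\Psi(x,t))\cdot \eta(\Psi(x,t)), \] so that $\frac{\p^2 \Psi}{\p t^2}(x,0)= (\eta\cdot\nabla)\eta(x)=Z(x)$; a Taylor expansion of $\Psi(x,\cdot)$ at $t=0$ then yields \eqref{Psi_exp}.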
Note that \begin{equation*} \nabla\Phi_{t}^{-1}(\Phi_{t}(x)) = [I + t\nabla\eta (x) +\frac{t^2}{2}\nabla Z(x) + O(t^3)]^{-1} = I - t\nabla\eta -\frac{t^2}{2}\nabla Z(x)+ t^{2}(\nabla\eta)^2 + O(t^3), \end{equation*} hence \begin{equation} \nabla u\cdot\nabla \Phi_{t}^{-1}(\Phi_{t}(x)) = \nabla u - t\nabla u\cdot\nabla\eta -\frac{t^2}{2}\nabla u\cdot\nabla Z(x)+ t^{2}(\nabla\eta)^2\cdot\nabla u + O(t^3). \label{u1_expand} \end{equation} We then use the following identity for matrices $A$ and $B$ \begin{equation*} \text{det}(I + tA + \frac{t^{2}}{2}B) = 1 + t\,\text{trace}(A) + \frac{t^2}{2}[\text{trace}(B) + (\text{trace}(A))^2 - \text{trace}(A^2)] + O(t^3). \end{equation*} Since $\text{det}\nabla\Phi_{t}(x)>0$ for $\abs{t}$ sufficiently small, we find \begin{multline} \abs{\text{det}\nabla\Phi_{t}(x)}=\text{det}\nabla\Phi_{t}(x) =\text{det} (I + t\nabla \eta (x) +\frac{t^2}{2}\nabla Z)\\= 1 + t\,\text{div} \,\eta + \frac{t^2}{2}[ \text{div}\,Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla\eta)^2)] + O(t^3). \label{det_expand} \end{multline} Plugging (\ref{u1_expand}) and (\ref{det_expand}) into (\ref{rewrite_E}), we find that \begin{equation} \label{inner_formu} \delta A(u,\eta)= \left.\frac{d}{dt}\right\rvert_{t=0} \int_{\Omega} \hat F(x, t) dx~\text{and } \delta^2A(u,\eta,Z)= \left.\frac{d^2}{dt^2}\right\rvert_{t=0} \int_{\Omega} \hat F(x, t) dx \end{equation} where $$\hat F(x,t)=F(u, \nabla u - t\nabla u\cdot\nabla\eta - t^2 Y) (1 + t\,{\rm div}\, \eta + \frac{t^2}{2}X).$$ Here $X$ and $Y$ are defined as in (\ref{XY_eq}). We compute \begin{equation} \begin{split} \label{Fhat_d} \frac{\partial}{\partial t} \hat F(x,t) =-F_{p_i}(u, \nabla u - t\nabla u\cdot\nabla\eta - t^2 Y) (\frac{\partial \eta^j}{\partial x_i} u_j + 2t Y^i) \\ + F\,{\rm div}\, \eta + \left( \frac{d}{dt}F(u, \nabla u - t\nabla u\cdot\nabla\eta - t^2 Y)\right) t \,{\rm div}\, \eta + t FX + \frac{t^2}{2} \frac{d}{dt} (FX).
\end{split} \end{equation} The formula for the first inner variation $\delta A(u,\eta)$ easily follows from (\ref{inner_formu}) and (\ref{Fhat_d}). For the second inner variation, we note that \begin{multline} \label{Fhat_dd} \left.\frac{\p^2}{\p t^2}\right\rvert_{t=0} \hat F(x,t)= F_{p_i p_k} (\frac{\partial \eta^j}{\partial x_i} u_j)(\frac{\partial \eta^l}{\partial x_k} u_l) - 2F_{p_i} Y^i \\+ 2 \left.\frac{d}{dt}F(u, \nabla u - t\nabla u\cdot\nabla\eta - t^2 Y)\right\rvert_{t=0}\,{\rm div}\,\eta + FX\\ = FX - 2 (\nabla_{\bf p} F, \nabla u\cdot \nabla\eta)\,{\rm div}\, \eta - 2 (\nabla_{\bf p} F, Y) + F_{p_i p_j}(\nabla u\cdot\nabla \eta)^{i}(\nabla u\cdot\nabla \eta)^{j}. \end{multline} Therefore, from (\ref{inner_formu}) and (\ref{Fhat_dd}), we find that the second inner variation $\delta^{2} A(u,\eta, Z)$ is given by \begin{multline*}\delta^{2} A(u,\eta, Z)= \left.\frac{d^2}{dt^2}\right\rvert_{t=0} \int_{\Omega} \hat F(x, t) dx= \int_{\Omega} \left.\frac{\p^2}{\p t^2}\right\rvert_{t=0} \hat F(x,t) dx \\= \int_{\Omega}\left\{ FX - 2 (\nabla_{\bf p} F, \nabla u\cdot \nabla\eta) \,{\rm div}\,\eta - 2 (\nabla_{\bf p} F, Y) + F_{p_i p_j}(\nabla u\cdot\nabla \eta)^{i}(\nabla u\cdot\nabla \eta)^{j}\right\} dx. \end{multline*} \end{proof} \section{Passage to the limit in the inner variations of the Allen-Cahn functional} \label{AC_sec} In this section we will apply the formulae established in the previous section to the case of the Allen-Cahn or Modica-Mortola sequence of functionals \begin{equation} E_{\varepsilon}(u)=\int_{\Omega}\left(\frac{\varepsilon \abs{\nabla u}^2}{2} +\frac{(1-u^2)^2}{2\varepsilon}\right) dx,\label{MM}\end{equation} for $\e>0$, where $u: \Omega\subset\RR^N\rightarrow \RR$, $N\geq 2$. Thus, we specialize to the case where $F(z,{\bf p})=\frac{\e}{2}\abs{{\bf p}}^2+\frac{(1-z^2)^2}{2\varepsilon}$ in \eqref{basic}.
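To orient the reader, we recall the standard one-dimensional heuristic behind the constant $\frac{4}{3}$ appearing in the $\Gamma$-limit discussed below. The optimal transition profile $q(s)=\tanh(s)$ connecting the two wells satisfies the equipartition relation $q'=1-q^2$, and thus carries total energy \[ \int_{-\infty}^{\infty}\left(\frac{|q'|^2}{2}+\frac{(1-q^2)^2}{2}\right) ds = \int_{-\infty}^{\infty} |q'|^2\, ds= \int_{-\infty}^{\infty} (1-q^2)\, q'\, ds = \int_{-1}^{1}(1-z^2)\, dz= \frac{4}{3}. \] Thus, to leading order, each unit of interfacial $\mathcal{H}^{N-1}$-measure contributes the energy $\frac{4}{3}$ in the limit $\e\to 0$, the near-interface behavior of a low-energy configuration being $u_\e(x)\approx q(d_\Gamma(x)/\e)$, where $d_\Gamma$ denotes the signed distance to the interface.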
These functionals, which in particular arise in the theory of phase transitions \cite{AC}, are known to $\Gamma$-converge in $L^1(\Omega)$ to a multiple of the perimeter functional $E$ defined by \begin{equation*} E(u_0)=\left\{ \begin{alignedat}{1} \frac{1}{2}\int_{\Omega}|\nabla u_0| ~&~ \text{if} ~u_0\in BV (\Omega, \{1, -1\}), \\ \infty~&~ \text{otherwise}, \end{alignedat} \right. \end{equation*} (\cite{MM}). More precisely, $E_\e$ $\Gamma$-converges in $L^1(\Omega)$ to $\frac{4}{3}E.$ For a function $u_0$ of bounded variation taking values $\pm 1$, i.e. $u_0\in BV (\Omega, \{1, -1\})$, $|\nabla u_0|$ denotes the total variation of the vector-valued measure $\nabla u_0$ (see \cite{Giusti}), and $\Gamma:= \partial\{x\in \Omega: u_0(x)=1\}\cap \Omega$ denotes the interface separating the $\pm1$ phases of $u_0$. If $\Gamma$ is sufficiently regular, say $C^1$, then $E(u_0)=\mathcal{H}^{N-1}(\Gamma)$ and hence we identify \begin{equation} \label{E_defn} E(u_0)\equiv E(\Gamma)=\mathcal{H}^{N-1}(\Gamma) \end{equation} where $\mathcal{H}^{N-1}$ denotes $(N-1)$-dimensional Hausdorff measure. Throughout, we will denote by ${\bf n} = (n_{1},\cdots,n_{N})$ the unit normal to $\Gamma$ pointing out of the region $\{x\in\Omega:\,u_0(x)=1\}.$ Though we will not use the specific properties of $\Gamma$-convergence in this article, we recall that this convergence of $E_\e$ to $\frac{4}{3}E$ consists of two conditions: a liminf inequality and the existence of a recovery sequence. For the reader's convenience and for later reference, we give the definition below.
\begin{definition} [$\Gamma$-convergence] \label{G_defn} We say that a sequence of functionals $E_\e$ $\Gamma$-converges in the $L^1(\Omega)$ topology to the functional $\frac{4}{3}E$ if for any $u\in L^1(\Omega)$ one has the following two conditions: \begin{myindentpar}{1cm} (i) (Liminf inequality) If a sequence $\{v_\e\}$ converges to $u$ in $L^1(\Omega)$, then $$\liminf_{\e\to 0}E_\e(v_\e)\geq \frac{4}{3}E(u);$$ (ii) (Existence of a recovery sequence) There exists a sequence $\{w_\e\}\subset L^1(\Omega)$ converging to $u$ such that $$\lim_{\e\to 0}E_\e(w_\e)=\frac{4}{3}E(u).$$ \end{myindentpar} \end{definition} This convergence, when accompanied by a compactness condition on energy-bounded sequences, guarantees that global minimality passes to the limit. In this article, however, we will be more concerned with the passage of stability in the limit $\e\to 0.$ The first variation of $E$, defined by (\ref{E_defn}), at $\Gamma$ with respect to a smooth velocity vector field $\eta$ is given by \begin{eqnarray} \label{FVE} \delta E(\Gamma,\eta)&:=& \left.\frac{d}{dt}\right\rvert_{t=0} \mathcal{H}^{N-1}(\Phi_t(\Gamma))= \int_{\Gamma}\text{div}^{\Gamma}\eta \,d\mathcal{H}^{N-1}, \end{eqnarray} and the second variation of $E$ at $\Gamma$ with respect to smooth velocity and acceleration vector fields $\eta$ and $\zeta$ is given by \begin{eqnarray} \label{SVE} \delta^{2}E(\Gamma,\eta,\zeta)&:=& \left.\frac{d^2}{dt^2}\right\rvert_{t=0} \mathcal{H}^{N-1}(\Phi_t(\Gamma))\nonumber\\ &=& \int_{\Gamma}\left\{ \text{div}^{\Gamma}\zeta + (\text{div}^{\Gamma}\eta)^2 + \sum_{i=1}^{N-1}\abs{(D_{\tau_{i}}\eta)^{\perp}}^2 - \sum_{i,j=1}^{N-1}(\tau_{i}\cdot D_{\tau_{j}}\eta)(\tau_{j}\cdot D_{\tau_{i}}\eta)\right\}d\mathcal{H}^{N-1}; \end{eqnarray} see \cite[Chapter 2]{Simon}.
Here $\Phi_t$ is given by (\ref{map-deform2}), $\text{div}^{\Gamma}\varphi$ denotes the tangential divergence of $\varphi$ on $\Gamma$, and for each point $x\in\Gamma$, $\{\tau_{1}(x),\cdots,\tau_{N-1}(x)\}$ is any orthonormal basis for the tangent space $T_{x}(\Gamma)$. Further, for each $\tau\in T_{x}(\Gamma)$, $D_{\tau} \eta$ is the directional derivative and the normal part of $D_{\tau_{i}} \eta$ is denoted by $(D_{\tau_{i}} \eta)^{\perp} := D_{\tau_{i}} \eta -\sum_{j=1}^{N-1}(\tau_{j}\cdot D_{\tau_{i}} \eta)\tau_{j}. $ We point out that for a hypersurface, there are no distinct notions of first or second {\it inner} variation, so while we chose the notation $ \delta E(\Gamma,\eta)$ and $\delta^{2}E(\Gamma,\eta,\zeta)$ we could just as well have used $ dE(\Gamma,\eta)$ and $d^{2}E(\Gamma,\eta,\zeta)$. For later use, we also record the following (see \cite[formula (12.39)]{SZ2}): \begin{thm} [Second variation of the area functional \cite{SZ2}] \label{PI_ineq} Suppose that $\Gamma\subset\Omega$ is a smooth hypersurface with mean curvature $H$. Suppose further that $\overline{\Gamma}$ is $C^2$ and $\partial\Gamma$ meets $\partial\Omega$ orthogonally. Then for any smooth vector field $\eta:\overline{\Omega}\to \R^N$ that is tangent to $\partial\Omega$ with $\eta=\xi\,{\bf n}$ and $({\bf n},{\bf n}\cdot\nabla\eta) =0$ on $\Gamma$ for some smooth $\xi:\Gamma\to\R$, and for $Z:= (\eta \cdot \nabla) \eta$, we have \begin{multline} \delta^{2}E(\Gamma,\eta, Z) =\delta^{2}E(\Gamma,\xi):= \int_{\Gamma} \left(|\nabla_{\Gamma}\xi|^2 + (N-1)^2H^2\xi^2 -|A_{\Gamma}|^2\xi^2\right) d\mathcal{H}^{N-1} \\- \int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})\xi^2 d\mathcal{H}^{N-2}.\label{sziden} \end{multline} Here $A_{\Gamma}$ and $A_{\partial\Omega}$ denote the second fundamental forms of $\Gamma$ and $\p\Omega$ respectively.
\end{thm} \begin{rem} The derivation of \cite[formula (12.39)]{SZ2} uses the stability of $\Gamma$ only in order to assert the necessary regularity to carry out the calculation. Here, as we do throughout the article, we assume smoothness of $\overline{\Gamma}$, so a stability assumption is not needed. \end{rem} In a previous paper \cite{Le}, the first author studied the relationship between the second inner variations of $\{E_\e\}$ and the second variation of the $\Gamma$-limit, $\frac{4}{3} E(u_0)$. While the first inner variations of $E_\e$ converge to the first variation of $E$, it was shown in \cite{Le} that an extra positive discrepancy term emerges in the limit of the second inner variation. More precisely, if $u_\e\in C^2(\Omega), u_{\e}\rightarrow u_0\in BV (\Omega, \{1, -1\})$ with a $C^2$ interface $\Gamma$ and $\lim_{\e\rightarrow 0} E_{\e}(u_{\e})= \frac{4}{3}E (u_0)$, then for all smooth vector fields $\eta,\zeta\in (C_{c}^{1}(\Omega))^{N}$, it was found in \cite[Theorem 1.1]{Le} that \begin{equation*} \lim_{\varepsilon\rightarrow 0}\delta^{2} E_{\varepsilon}(u_{\varepsilon},\eta,\zeta) = \frac{4}{3}\left\{ \delta^{2}E(\Gamma,\eta,\zeta) + \int_{\Gamma} ({\bf n},{\bf n}\cdot\nabla\eta)^2d\mathcal{H}^{N-1}\right\}.\end{equation*} With the aim of studying the asymptotic behavior of Allen-Cahn critical points and linearizations subject to the natural Neumann boundary conditions, we now establish the same type of result {\it without} the assumption of compact support on the vector fields $\eta$ and $\zeta$: \begin{thm}[Limits of the inner variations of the Allen-Cahn functional] \label{thm-AC2} Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of functions that converges in $L^1(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$.
Assume that $\lim_{\varepsilon \rightarrow 0} E_{\varepsilon}(u_{\varepsilon}) = \frac{4}{3}E(\Gamma).$ Then, for all smooth vector fields $\eta\in (C^{3}(\overline{\Omega}))^{N}$ with $\eta\cdot \nu=0$ on $\partial\Omega$ and for $Z:= (\eta \cdot\nabla) \eta$, we have \begin{equation*}\lim_{\varepsilon\rightarrow 0}\delta E_{\varepsilon}(u_{\varepsilon}, \eta) = \frac{4}{3}\delta E(\Gamma,\eta) \end{equation*} and \begin{equation*}\lim_{\varepsilon\rightarrow 0}\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, Z) = \frac{4}{3}\left\{\delta^{2}E(\Gamma,\eta, Z) + \int_{\Gamma} ({\bf n},{\bf n}\cdot\nabla\eta)^2 d\mathcal{H}^{N-1}\right\}. \end{equation*} \end{thm} \begin{rem} (i) One important point in Theorem \ref{thm-AC2} is that $u_\e$ is not assumed to be a critical point of $E_{\e}$. We will find ourselves in need of the formula in this situation in Section 6.\\ (ii) In the convergence result for the second inner variations $\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, Z)$ in Theorem \ref{thm-AC2}, it would be very interesting to relax the hypothesis $\lim_{\varepsilon \rightarrow 0} E_{\varepsilon}(u_{\varepsilon}) = \frac{4}{3}E(\Gamma)$ (which amounts to assuming multiplicity 1 convergence of the nodal sets of $u_\e$) to just a uniform bound $E_\e(u_\e)\leq C$ on the energies, as done by Gaspar \cite[Proposition 3.3]{Ga} for compactly supported vector fields $\eta$ (and hence $Z$). Gaspar's elegant observation (see \cite[Proposition 2.2]{Ga}) is that, under the energy bound $E_\e(u_\e)\leq C$ and the vanishing of the discrepancy measures $\xi_{\e}:=\left(\e |\nabla u_\e|^2 - \frac{(1-u_\e^2)^2}{\e}\right)dx$ in the interior of $\Omega$, the second inner variations $\delta^2 E_\e(u_\e, \cdot,\cdot)$ are continuous under varifold convergence in the interior of the Euclidean domain $\Omega$.
The application of this continuity result to $\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, Z)$ requires the vector fields $\eta$ and $Z:=(\eta\cdot\nabla)\eta$ to be compactly supported in $\Omega$, however, which is not the case in our present setting of Theorem \ref{thm-AC2}. On the other hand, adapting the analysis of \cite{Ga} for general vector fields $\eta$ requires the vanishing of the discrepancy measures $\xi_{\e}$ up to the boundary of $\Omega$ in the limit $\e\rightarrow 0$. To the best of the authors' knowledge, the most general setting for the validity of this result is the work of Mizuno-Tonegawa \cite{MT}, where the authors require that $u_\e$ is a critical point of $E_\e$, uniformly bounded in $\e$, and that the domain $\Omega$ is strictly convex. Recently, Kagaya \cite{Ka} has relaxed the strict convexity of the Euclidean domain $\Omega$ for certain classes of critical points of $E_{\e}$. Note that our present setting of Theorem \ref{thm-AC2} (see also item (i) above) does not fulfill these requirements in general. We briefly sketch a generalization of Theorem \ref{thm-AC2} to the setting of \cite{MT} in Theorem \ref{thm-ACc}. \end{rem} The rest of this section is devoted to proving Theorem \ref{thm-AC2} and its slight generalization, Theorem \ref{thm-ACc}. Let \begin{equation} \label{Phi_a} \Phi (a):= \int_{0}^{a} |s^2-1| ds. \end{equation} We next recall the following results from \cite[Lemmas 3.1 and 3.2]{Le2}, applied to the double-well potential $(1-u^2)^2$, that are crucial to proving Theorem \ref{thm-AC2}.
\begin{lemma} \label{equi_lem} Under the assumptions of Theorem \ref{thm-AC2}, we have the following convergences: \begin{myindentpar}{1cm} \begin{equation} \label{ener1bis} \lim_{\e\rightarrow 0}\int_{\Omega}|\nabla \Phi (u_\e)| dx= \int_{\Omega}|\nabla \Phi (u_0)|dx, \end{equation} \begin{equation} \label{equi-ab} \lim_{\e\rightarrow 0} \int_{\Omega} \left|\e |\nabla u_\e|^2 - \frac{(1-u_\e^2)^2}{\e}\right| dx =0, \end{equation} \begin{equation}\label{equi-abphi}\lim_{\e\rightarrow 0}\int_{\Omega}\left|\e |\nabla u_\e|^2- |\nabla \Phi (u_\e)|\right| dx=0. \end{equation} \end{myindentpar} We also have the following convergence: \begin{equation}\label{L1_con}\Phi (u_\e)\rightarrow \Phi (u_0)~\text{in } L^1 (\Omega) \end{equation} and thus, in the sense of Radon measures, we have the convergence: \[ \nabla \Phi(u_\e)\rightharpoonup \nabla \Phi (u_0)= \frac{4}{3} {\bf n} \,d\mathcal{H}^{N-1}\llcorner\Gamma\;\mbox{as}\;\e\to 0. \] \end{lemma} \begin{rem} In the special case where $u_\e$ is a minimizer of $E_\e$, the above lemma was proved by Luckhaus and Modica; see \cite[Proposition 1, Lemmas 1 and 2]{LM}. Equation (\ref{L1_con}) in Lemma \ref{equi_lem} was used in \cite{Le2} without proof. Its proof is based on a truncation argument as in the proof of (1.11) in \cite{Stern1}. For completeness, we include it below. \end{rem} \begin{proof} [Proof of equation (\ref{L1_con}) in Lemma \ref{equi_lem}] Let us define $$u^{\ast}_\e = u_\e~\text{on } \{-1\leq u_\e\leq 1\} ~\text{ and }u_\e^{\ast} = \text{sign} (u_\e) ~\text{on }\{|u_\e|>1\}.$$ First, note that $u_\e\rightarrow u_0$ in $L^1 (\Omega)$ implies that $u_\e^{\ast}\rightarrow u_0$ in $L^1(\Omega)$. Moreover, $\Phi (u^{\ast}_\e)\rightarrow \Phi (u_0) $ in $L^1 (\Omega)$.
It suffices to show that $\Phi (u^{\ast}_\e)\rightarrow \Phi (u_\e)~\text{in } L^1 (\Omega).$ Since $$\int_{\Omega} |\Phi(u_\e)-\Phi(u^{\ast}_\e)| dx= \int_{\{|u_\e|>1\}} |\Phi(u_\e)-\Phi(\text{sign} (u_\e))| dx,$$ by symmetry, it suffices to show that \begin{equation}\lim_{\e\rightarrow 0} \int_{\{u_\e>1\}} |\Phi(u_\e)-\Phi(1)|dx=0. \label{above1_eq} \end{equation} From the construction of $u_\e^{\ast}$, we have \begin{eqnarray*} E_{\e} (u_\e) = \int_{\Omega} \left( \frac{\e |\nabla u_\e|^2}{2} + \frac{(u_\e^2-1)^2}{2\e} \right)dx = E_\e (u^\ast_\e) + \int_{\{|u_\e|>1\}} \left( \frac{\e |\nabla u_\e|^2}{2} + \frac{(u_\e^2-1)^2}{2\e} \right)dx. \end{eqnarray*} By the liminf inequality in the $\Gamma$-convergence of $E_\e$ to $\frac{4}{3} E$ (see Definition \ref{G_defn}), we have from $u_\e^{\ast}\rightarrow u_0$ in $L^1(\Omega)$ that $$\liminf_{\e\rightarrow 0} E_{\e}(u^\ast_\e)\geq \frac{4}{3} E(u_0)=\frac{4}{3} E(\Gamma).$$ Because $\lim_{\e\rightarrow 0} E_{\e}(u_\e)= \frac{4}{3} E(\Gamma),$ we find that \begin{equation}\lim_{\e\rightarrow 0} \int_{\{|u_\e|>1\}} \left( \frac{\e |\nabla u_\e|^2}{2} + \frac{(u_\e^2-1)^2}{2\e} \right) dx=0. \label{lim01} \end{equation} When $u_\e>1$, we have from the definition of $\Phi$ in (\ref{Phi_a}) that $\Phi(u_\e)-\Phi (1) = (u_\e-1)^2 (u_\e+2)/3.$ Thus, using (\ref{lim01}), we obtain \begin{equation*} \int_{\{u_\e>1\}} |\Phi(u_\e)-\Phi(1)| dx\leq \int_{\{u_\e>1\}} (u_\e-1)^2 (u_\e+2)dx\leq \int_{\{u_\e>1\}} (u_\e^2-1)^2 dx\rightarrow 0~\text{when } \e\rightarrow 0. \end{equation*} The proof of (\ref{above1_eq}) is complete. \end{proof} Before recalling a theorem of Reshetnyak, we introduce some notation. Let $ [C_0(\Omega)]^m$ be the space of $\RR^m$-valued continuous functions with compact support in $\Omega$. Let $[M_b(\Omega)]^m$ be the space of $\RR^m$-valued measures on $\Omega$ with finite total mass. 
Given $\mu\in [M_b(\Omega)]^m$, we write $|\mu|$ for the total variation of $\mu$ and $\frac{d\mu}{d|\mu|}$ for the Radon-Nikodym derivative of $\mu$ with respect to $|\mu|$. Given $\mu_n,\mu\in [M_b(\Omega)]^m$, we say that $\mu_n$ converges to $\mu$ in the sense of Radon measures if for all $\varphi \in [C_0(\Omega)]^m$, we have $$\lim_{n\rightarrow\infty} \int_{\Omega}\varphi\cdot d\mu_n=\int_{\Omega}\varphi\cdot d\mu.$$ We now recall a theorem of Reshetnyak \cite{Res} concerning continuity of functionals with respect to Radon convergence of measures. Its equivalent form that we write down below is taken from Spector \cite[Theorem 1.3]{Sp}. \begin{thm} [Reshetnyak's continuity theorem] \label{Sp_thm} Let $\Omega\subset\RR^N$ be open, $\mu_n,\mu\in [M_b(\Omega)]^m$ be such that $\mu_n$ converges to $\mu$ in the sense of Radon measures and $|\mu_n|(\Omega)\rightarrow |\mu|(\Omega)$. Then \begin{equation*} \lim_{n\rightarrow\infty} \int_{\Omega} f\left(x, \frac{d\mu_n}{d|\mu_n|}(x)\right) d|\mu_n|= \int_{\Omega} f\left(x, \frac{d\mu}{d|\mu|}(x)\right) d|\mu| \end{equation*} for every continuous and bounded function $f:\Omega\times \mathcal{S}^{m-1}\rightarrow \RR$ where $\mathcal{S}^{m-1}:=\{x\in\R^m: |x|=1\}$. \end{thm} We emphasize that in Theorem \ref{Sp_thm}, $f$ is not required to be compactly supported in $\Omega$. This is crucial to applications in our paper. The following lemma provides a key ingredient in the proof of Theorem \ref{thm-AC2}. It allows us to pass to the limit in certain quadratic expressions involving $\nabla u_\e$. \begin{lemma} \label{Res_Sp} Under the assumptions of Theorem \ref{thm-AC2}, for all $\varphi\in C(\overline{\Omega})$, we have \begin{equation} \int_{\Omega}\e \nabla u_{\e}\otimes \nabla u_{\e} \varphi dx\rightarrow \frac{4}{3} \int_{\Gamma}{\bf n}\otimes {\bf n}\,\varphi\, d\mathcal{H}^{N-1}. 
\end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{Res_Sp}] The proof is a simple application of Theorem \ref{Sp_thm} using Lemma \ref{equi_lem}. Let $\Phi$ be as in (\ref{Phi_a}). We have $$\e \nabla u_{\e}\otimes \nabla u_{\e} =\frac{\nabla u_{\e}}{|\nabla u_{\e}|}\otimes \frac{\nabla u_{\e}}{|\nabla u_{\e}|} \e |\nabla u_\e|^2= \frac{\nabla \Phi(u_{\e})}{|\nabla \Phi (u_{\e})|}\otimes \frac{\nabla \Phi(u_{\e})}{|\nabla \Phi(u_{\e})|} \e |\nabla u_\e|^2.$$ From equation (\ref{equi-abphi}) in Lemma \ref{equi_lem}, we find that for any $\varphi\in C(\overline{\Omega}),$ \begin{equation*} \lim_{\e\rightarrow 0}\int_{\Omega}\frac{\nabla \Phi(u_{\e})}{|\nabla \Phi (u_{\e})|}\otimes \frac{\nabla \Phi(u_{\e})}{|\nabla \Phi(u_{\e})|} \e |\nabla u_\e|^2 \varphi dx= \lim_{\e\rightarrow 0} \int_{\Omega}\frac{\nabla \Phi(u_{\e})}{|\nabla \Phi (u_{\e})|}\otimes \frac{\nabla \Phi(u_{\e})}{|\nabla \Phi(u_{\e})|} |\nabla \Phi (u_{\e})| \varphi dx. \end{equation*} Applying Theorem \ref{Sp_thm} to $\nabla \Phi(u_\e)$ and $\nabla \Phi (u_0)$ with $f(x, {\bf p}) =\left( {\bf p}\otimes {\bf p}\right)\,\varphi(x)$, we find \begin{equation*} \lim_{\e\rightarrow 0} \int_{\Omega}\frac{\nabla \Phi(u_{\e})}{|\nabla \Phi (u_{\e})|}\otimes \frac{\nabla \Phi(u_{\e})}{|\nabla \Phi(u_{\e})|} |\nabla \Phi (u_{\e})| \varphi dx = \int_{\Gamma} \frac{4}{3} {\bf n}\otimes {\bf n}\,\varphi\, d\mathcal{H}^{N-1}. 
\end{equation*} \end{proof} We can now present: \begin{proof}[Proof of Theorem \ref{thm-AC2}] When $\eta\cdot \nu=0$ on $\partial\Omega$ and $Z:= (\eta\cdot\nabla) \eta$, Lemma \ref{SIV_ch} applied to $E_{\e}$ gives \begin{eqnarray} \delta E_{\varepsilon}(u_{\varepsilon},\eta) =\int_{\Omega} \left[\left( \frac{\e |\nabla u_\e|^2}{2} + \frac{(u_\e^2-1)^2}{2\e}\right) \div\eta - \e(\nabla u_{\varepsilon},\nabla u_{\varepsilon}\cdot\nabla\eta)\right]dx\label{star} \end{eqnarray} and \begin{multline} \delta^{2} E_{\varepsilon}(u_{\varepsilon},\eta, Z) =\int_{\Omega} \left\{ \left( \frac{\e |\nabla u_\e|^2}{2} + \frac{(u_\e^2-1)^2}{2\e}\right) \left(\text{div}\, Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla \eta)^2)\right) \right\}dx\\ -2\int_{\Omega}\e(\nabla u_{\varepsilon},\nabla u_{\varepsilon}\cdot\nabla\eta)\,\text{div}\,\eta\, dx -2\int_{\Omega}\left(\e\nabla u_{\varepsilon}, \frac{1}{2}\nabla u_{\varepsilon}\cdot\nabla Z - (\nabla\eta)^2\cdot \nabla u_\e\right) dx + \int_{\Omega}\e\abs{\nabla u_{\varepsilon}\cdot\nabla\eta}^2 dx. \label{svep-p}\end{multline} By letting $\e\rightarrow 0$ and using Lemmas \ref{equi_lem} and \ref{Res_Sp} together with (\ref{FVE}), we find that \begin{equation*} \lim_{\e\rightarrow 0} \delta E_{\varepsilon}(u_{\varepsilon},\eta)= \frac{4}{3}\int_{\Gamma}( \div \eta- ({\bf n}, {\bf n}\cdot\nabla\eta)) d\mathcal{H}^{N-1} = \frac{4}{3} \delta E(\Gamma, \eta). \end{equation*} Let us now analyze $\delta^{2} E_{\varepsilon}(u_{\varepsilon},\eta, Z) $.
Using equation (\ref{equi-ab}) in Lemma \ref{equi_lem} together with Lemma \ref{Res_Sp}, we find that \begin{multline*} \lim_{\e\rightarrow 0 }\int_{\Omega} \left( \frac{\e |\nabla u_\e|^2}{2} + \frac{(u_\e^2-1)^2}{2\e} \right)\left(\text{div}\,Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla \eta)^2)\right) dx \\ = \lim_{\e\rightarrow 0 }\int_{\Omega} \e |\nabla u_\e|^2 \left(\text{div}\, Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla \eta)^2)\right) dx = \frac{4}{3} \int_{\Gamma} \left(\text{div}\, Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla \eta)^2)\right)d\mathcal{H}^{N-1}. \end{multline*} By letting $\e\rightarrow 0$ and using Lemma \ref{Res_Sp}, we obtain \begin{multline}\lim_{\varepsilon\rightarrow 0}\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, Z) = \frac{4}{3}\int_{\Gamma}\left\{\text{div}\, Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla \eta)^2)- 2({\bf n},{\bf n}\cdot\nabla\eta) \,\text{div}\, \eta \right\}d\mathcal{H}^{N-1}\\ - \frac{8}{3} \int_{\Gamma}({\bf n}, \frac{1}{2}{\bf n}\cdot\nabla Z -{\bf n} \cdot (\nabla\eta)^2 )d\mathcal{H}^{N-1} + \frac{4}{3}\int_{\Gamma}|{\bf n}\cdot \nabla \eta|^2 d\mathcal{H}^{N-1}. \label{SVpp} \end{multline} As in the proof of Theorem 1.1 in \cite{Le} (see (2.8) there), we find that \begin{multline*} \text{div}\, Z + (\text{div}\,\eta)^2 - \text{trace}((\nabla \eta)^2)- 2({\bf n},{\bf n}\cdot\nabla\eta) \,\text{div}\, \eta - 2 ({\bf n}, \frac{1}{2}{\bf n}\cdot\nabla Z -{\bf n} \cdot (\nabla\eta)^2 ) + |{\bf n}\cdot \nabla \eta|^2 \\= \text{div}^{\Gamma} Z + (\text{div}^{\Gamma}\eta)^2 + \sum_{i=1}^{N-1}\abs{(D_{\tau_{i}}\eta)^{\perp}}^2 - \sum_{i,j=1}^{N-1}(\tau_{i}\cdot D_{\tau_{j}}\eta)(\tau_{j}\cdot D_{\tau_{i}}\eta) + ({\bf n},{\bf n}\cdot\nabla\eta)^2. \end{multline*} In light of (\ref{SVE}), we find that the right-hand side of (\ref{SVpp}) is equal to \[ \frac{4}{3}\left\{\delta^{2}E(\Gamma,\eta, Z) + \int_{\Gamma} ({\bf n},{\bf n}\cdot\nabla\eta)^2d\mathcal{H}^{N-1}\right\} .
\] Therefore, we obtain the desired formula for $\lim_{\varepsilon\rightarrow 0}\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, Z) $ as stated in the theorem. \end{proof} For the remainder of this section, we briefly sketch a generalization of Theorem \ref{thm-AC2} to the special setting of Allen-Cahn critical points with a Neumann boundary condition on strictly convex domains. \begin{thm}[Limits of the inner variations of the Allen-Cahn functional with a uniform energy bound on strictly convex domains] \label{thm-ACc} Assume that $\Omega$ is an open, smooth, bounded and strictly convex domain in $\R^N~(N\geq 2)$. Let $\{u_{\e_j}\}\subset C^3 (\overline{\Omega})$ be a sequence of critical points of the Allen-Cahn functionals $E_{\e_j}$ that converges in $L^1(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Assume that there is a positive constant $C$ such that $\|u_{\e_j}\|_{L^{\infty}(\Omega)} + E_{\e_j}(u_{\e_j}) \leq C $ for all $j$. Let $\Gamma_1, \cdots, \Gamma_K$ be the connected components of $\Gamma$. Then, \begin{myindentpar}{1cm} (i) there are positive integers $m_1, \cdots, m_K$ such that $$\lim_{j\rightarrow \infty} E_{\e_j}(u_{\e_j}) = \frac{4}{3}\sum_{i=1}^K m_i E(\Gamma_i); $$ (ii) for all smooth vector fields $\eta\in (C^{3}(\overline{\Omega}))^{N}$ with $\eta\cdot \nu=0$ on $\partial\Omega$ and for $Z:= (\eta \cdot\nabla) \eta$, we have \begin{equation*}\lim_{j\rightarrow \infty}\delta E_{\e_j}(u_{\e_j}, \eta) = \frac{4}{3}\sum_{i=1}^K m_i \delta E(\Gamma_i,\eta) \end{equation*} and \begin{equation*}\lim_{j\rightarrow \infty}\delta^{2} E_{\e_j}(u_{\e_j}, \eta, Z) = \frac{4}{3}\left\{\sum_{i=1}^K m_i \left(\delta^{2}E(\Gamma_i,\eta, Z) +\int_{\Gamma_i} ({\bf n},{\bf n}\cdot\nabla\eta)^2 d\mathcal{H}^{N-1}\right)\right\}.
\end{equation*} \end{myindentpar} \end{thm} \begin{proof}[Sketch of Proof of Theorem \ref{thm-ACc}] (i) By the criticality of $u_{\e_j}$, $\Gamma$ is a minimal surface. By the connectedness of each $\Gamma_i$, the conclusion in (i) follows from the Constancy Theorem for stationary varifolds \cite[Theorem 41.1]{Simon}; see, for example, \cite[p. 1854]{Le} or the paragraph following Theorem 2.1 in \cite{Ga}.\\ (ii) From the uniform bound $\|u_{\e_j}\|_{L^{\infty}(\Omega)} + E_{\e_j}(u_{\e_j}) \leq C $, the criticality of $u_{\e_j}$, which implies that $u_{\e_j}$ satisfies the Neumann boundary condition $\frac{\p u_{\e_j}}{\p \nu}=0$ on $\p\Omega$, and the strict convexity of $\Omega$, we can use \cite[Proposition 6.4]{MT} to conclude the following vanishing property of the discrepancy measure (or equi-partition of energy): \begin{equation} \label{equi-abc} \lim_{j\rightarrow \infty} \int_{\Omega} \Big|\e_j |\nabla u_{\e_j}|^2 - \frac{(1-u_{\e_j}^2)^2}{\e_j}\Big| dx =0. \end{equation} With (\ref{equi-abc}) and (i), we can follow the arguments in the proof of Proposition 2.2 in \cite{Ga} to obtain the following modified version of Lemma \ref{Res_Sp}: for all $\varphi\in C(\overline{\Omega})$, we have \begin{equation} \label{mKc} \int_{\Omega}\e_j \nabla u_{\e_j}\otimes \nabla u_{\e_j} \varphi dx\rightarrow \frac{4}{3}\sum_{i=1}^K m_i \int_{\Gamma_i}{\bf n}\otimes {\bf n}\,\varphi\, d\mathcal{H}^{N-1}. \end{equation} Now, using (\ref{equi-abc}) and (\ref{mKc}), instead of Lemmas \ref{equi_lem} and \ref{Res_Sp}, in (\ref{star}) and (\ref{svep-p}) in the proof of Theorem \ref{thm-AC2}, we obtain (ii). \end{proof} \section{Applications of Second Variation Convergence for Allen-Cahn} We now present two applications of our convergence formula for the second inner variation of the Allen-Cahn functional in Theorem \ref{thm-AC2}. The first, Theorem \ref{ACstab}, concerns the passage of stability from critical points of the Allen-Cahn functional to the limiting interface.
The second concerns an asymptotic upper bound for the Neumann eigenvalues associated with the linearized Allen-Cahn operator. This is the content of Theorem \ref{eigen_thm}. \subsection{Stable Critical Points Leading to Stable Interfaces} An interesting and at times subtle question is whether stability of a sequence of critical points passes to the limit within the context of $\Gamma$-convergence. This topic has been looked at from a variety of angles, including \cite{Serfaty} where some conditions related to, but not equivalent to, $\Gamma$-convergence are shown to be sufficient to guarantee stability of the limiting object. Interestingly, the verification of one of the two key sufficient conditions in \cite{Serfaty} for 2D Ginzburg-Landau vortices uses inner variations; see \cite[equation (3.12)]{Serfaty}. Within the Allen-Cahn context, the question of whether stability of critical points passes to the limiting interface is addressed in \cite{Tonegawa}. Assuming that a sequence of Allen-Cahn critical points $\{u_\e\}$ has non-negative second Gateaux variations with respect to compactly supported variations, and assuming that their energies $E_\e(u_\e)$ are uniformly bounded, Tonegawa identifies a limiting varifold and shows that in an appropriately defined weak sense, it has non-negative generalized second variation; see \cite[Theorem 3]{Tonegawa}. Roughly speaking, stability in this weak sense looks like non-negativity of $\delta^{2}E(\Gamma,\xi)$ given by \eqref{sziden} with the boundary integral absent due to the assumption of compact support on $\xi.$ In a subsequent work, Tonegawa and Wickramasekera \cite{TW} show that the support of the limiting varifold identified in \cite{Tonegawa} is smooth in dimensions $N\leq 7$ while its singular set (if any) has Hausdorff dimension at most $N-8$ in dimensions $N>7$.
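In the notation of the present paper, and when the limiting interface is regular, this weak form of stability roughly amounts to the interior inequality \[ \int_{\Gamma} \left(|\nabla_{\Gamma}\xi|^2 -|A_{\Gamma}|^2\xi^2\right) d\mathcal{H}^{N-1}\geq 0 \quad\text{for all } \xi\in C^{\infty}_c(\Omega), \] that is, the stability inequality of Theorem \ref{ACstab} below with the boundary integral over $\partial\Gamma\cap\partial\Omega$ absent.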
As mentioned in the introduction, the convergence and regularity results in \cite{Tonegawa, TW} rely on an important interior convergence result for the Allen-Cahn equation from the work of \cite{HT} and on interior regularity results from \cite{W}, and we are not aware of boundary analogues of these results. Here, with stronger assumptions on the regularity of the limiting interface up to the boundary and on the convergence of energies, we establish a result in this vein which incorporates the boundary term. \begin{thm} [Stability of the limiting interface] \label{ACstab} Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of stable critical points of $E_\e$ given in \eqref{MM} that converges in $L^1(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma:=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Assume that $\lim_{\varepsilon \rightarrow 0} E_{\varepsilon}(u_{\varepsilon}) = \frac{4}{3}E(\Gamma)$ where $E$ is given by (\ref{E_defn}). Then for all smooth $\xi:\overline{\Omega}\to\R$ we have the stability inequality \[ \int_{\Gamma} \left(|\nabla_{\Gamma}\xi|^2 -|A_{\Gamma}|^2\xi^2\right) d\mathcal{H}^{N-1} - \int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})\xi^2 d\mathcal{H}^{N-2}\geq 0. \] \end{thm} \begin{rem} The stability criterion given above for a hypersurface subject to Neumann boundary conditions is derived in \cite[Theorem 2.2]{SZ2}. \end{rem} \begin{rem} Under the assumption that $\Gamma$ is an isolated local minimizer of the $\Gamma$-limit $E$ defined as in (\ref{E_defn}), one can of course construct stable, in fact locally minimizing, critical points $u_\e$ of $E_\e$ using the approach of \cite{KS}. In this case, the above stability inequality for $\Gamma$ holds trivially, since local minimality is a stronger assumption than stability.
\end{rem} \begin{proof}[Proof of Theorem \ref{ACstab}] We have assumed that the critical points $u_\e$ of the Allen-Cahn functional $E_\e$ have non-negative second Gateaux variation and so by \eqref{seconde} they also have non-negative second inner variations, that is, for all $\eta,\zeta\in (C^3(\overline{\Omega}))^N$, we have $\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, \zeta)\geq 0.$ By Lemma \ref{ortho_lem} below, $\Gamma$ is a minimal surface and $\partial\Gamma$ meets $\partial\Omega$ orthogonally (if at all). Thus, for any smooth function $\xi:\overline{\Omega}\to\R$, we can choose a smooth vector field $\eta$ on $\overline{\Omega}$ such that $\eta=\xi{\bf n}$ on $\Gamma$, $\eta\cdot \nu=0$ on $\partial\Omega$ and such that $({\bf n}, {\bf n}\cdot\nabla \eta)=0$ on $\Gamma$. Then applying Theorems \ref{PI_ineq} and \ref{thm-AC2} with $Z:= (\eta \cdot \nabla) \eta$, we find \[ 0\leq \lim_{\varepsilon\rightarrow 0}\frac{3}{4}\delta^{2} E_{\varepsilon}(u_{\varepsilon}, \eta, Z) = \delta^{2}E(\Gamma,\eta, Z)=\delta^2E(\Gamma,\xi) \] for every smooth function $\xi:\overline{\Omega}\to\R$, using \eqref{sziden}. The stability inequality is thus established. \end{proof} \begin{lemma} [Minimality of the limiting interface] \label{ortho_lem} Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of critical points of $E_\e$ that converges in $L^1(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Assume that $\lim_{\varepsilon \rightarrow 0} E_{\varepsilon}(u_{\varepsilon}) = \frac{4}{3}E(\Gamma)$. Then $\Gamma$ is a minimal surface and $\partial\Gamma$ meets $\partial\Omega$ orthogonally (if at all). \end{lemma} \begin{proof} The criticality of $u_\e$ implies that $\delta E_\e(u_\e,\eta)=0$ for all $C^3(\overline{\Omega})$ vector fields $\eta$.
Now, for any smooth vector field $\eta\in (C^3(\overline{\Omega}))^N$ such that $\eta\cdot \nu=0$ on $\partial\Omega$, we have from Theorem \ref{thm-AC2} that $$\int_{\Gamma} \div^{\Gamma}\eta\, d\mathcal{H}^{N-1}=\delta E(\Gamma,\eta)=\frac{3}{4}\lim_{\e\rightarrow 0} \delta E_{\e} (u_\e,\eta)=0.$$ We decompose $\eta=\eta^{\perp}+ \eta^{T}$ where $\eta^{\perp} =(\eta\cdot{\bf n}){\bf n}$. Then $\div^{\Gamma} \eta^{\perp}= (N-1)H(\eta\cdot {\bf n})$ where $H$ denotes the mean curvature of $\Gamma$. Now, we have from the Divergence Theorem that \begin{equation} \label{Hzero} 0=\int_{\Gamma} \div^{\Gamma}\eta \,d\mathcal{H}^{N-1}=(N-1)\int_{\Gamma} H (\eta\cdot{\bf n})\,d\mathcal{H}^{N-1} + \int_{\p \Gamma \cap \p\Omega} \eta^{T}\cdot {\bf n}^\ast \,d\mathcal{H}^{N-2} \end{equation} where ${\bf n}^\ast$ is the outward unit co-normal of $\p\Gamma\cap\p\Omega$, that is, ${\bf n}^\ast$ is normal to $\p\Gamma\cap \p\Omega$ and tangent to $\Gamma$. First, we consider vector fields $\eta$ compactly supported in $\Omega$. From (\ref{Hzero}), we then obtain $$\int_{\Gamma} H (\eta\cdot{\bf n})\,d\mathcal{H}^{N-1}=0 $$ for all $\eta \in (C^3_{0}(\Omega))^N$. This allows us to conclude that $H=0$ on $\Gamma$, that is, $\Gamma$ is a minimal surface. Now, using this new information and returning to (\ref{Hzero}), we find that $$\int_{\p \Gamma \cap \p\Omega} \eta^{T}\cdot {\bf n}^\ast d\mathcal{H}^{N-2}=0$$ for all smooth vector fields $\eta$ such that $\eta\cdot \nu=0$ on $\partial\Omega$. This implies that $\p\Gamma$ is orthogonal to $\p\Omega$ (see, for example, \cite[p. 70]{SZ2}). \end{proof} \subsection{Upper semicontinuity of the Neumann eigenvalues} Now we prove Theorem \ref{eigen_thm} concerning an asymptotic upper bound for the Neumann eigenvalues of the operators $-\e \Delta + 2\e^{-1}(3u_{\e}^2-1)$ in the limit $\e\to 0$ under appropriate conditions. \begin{proof}[Proof of Theorem \ref{eigen_thm}] The proof follows the argument of \cite[Corollary 1.1]{Le2}.
We include the details for completeness. Denote by $Q_{\e}$ the quadratic form associated with the operator $-\e \Delta + 2\e^{-1}(3u_{\e}^2-1)$ with zero Neumann boundary conditions, that is, for $\varphi\in C^{1}(\overline{\Omega})$, we have $$Q_{\e}(u_\e)(\varphi)=\int_{\Omega} \left(\e |\nabla \varphi|^2 + 2\e^{-1} (3u_\e^2-1)\varphi^2\right) dx\equiv d^2 E_{\e}(u_{\e}, -\varphi).$$ Similarly, for the Robin eigenvalue problem (\ref{Robinintro}), we can define a quadratic form $Q$ for the operator $-\Delta_{\Gamma} - |A_{\Gamma}|^2$ on $\Gamma$ with a Robin condition on $\partial\Gamma \cap\partial\Omega$. That is, for $\varphi\in C^{1}(\overline{\Gamma})$, we define \[ Q(\varphi)= \int_{\Gamma} \left(\abs{\nabla^{\Gamma} \varphi}^2 - \abs{A_{\Gamma}}^2 \varphi^2\right) d\mathcal{H}^{N-1}-\int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})|\varphi|^2 d\mathcal{H}^{N-2}; \] see \cite[p. 398]{CH}. We can naturally extend $Q$ to be defined for vector fields in $\overline{\Omega}$ that are generated by functions defined on $\overline{\Gamma}$ as follows. Given $f\in C^{1}(\overline{\Gamma})$, let $\eta = f {\bf n}$ be a normal vector field defined on $\Gamma$. Assuming the smoothness of $\overline{\Gamma}$, we deduce from Lemma \ref{ortho_lem} that $\Gamma$ is a minimal surface and $\partial\Gamma$ meets $\partial\Omega$ orthogonally (if at all). Thus, we can find an extension $\tilde{\eta}$ of $\eta$ to $\overline{\Omega}$ such that it is tangent to $\partial\Omega$, that is, $\tilde \eta\cdot\nu=0$ on $\p\Omega$, and $({\bf n}, {\bf n}\cdot\nabla\tilde{\eta}) =0$ on $\Gamma$. Then, define $Q(\tilde{\eta}):= Q(f).$ For any vector field $V$ defined on $\overline{\Gamma}$ that is normal to $\Gamma$, we also denote by $V$ its extension to $\overline{\Omega}$ chosen in such a way that it is tangent to $\partial\Omega$ and $({\bf n}, {\bf n}\cdot \nabla V) =0$ on $\Gamma$.
Let $\xi=\xi_V= V\cdot {\bf n}. $ Note that, using the stationarity of $u_\e$ and Corollary \ref{inner_rem}, we have for all vector fields $\zeta$ $$Q_{\e}(\nabla u_{\e}\cdot V) = d^2 E_{\e}(u_{\e}, -\nabla u_{\e}\cdot V)= \delta^2 E_{\e}(u_{\e}, V, \zeta).$$ We choose $$\zeta= (V\cdot \nabla) V.$$ Then, by Theorems \ref{thm-AC2} and \ref{PI_ineq}, we have \begin{eqnarray}\lim_{\e\rightarrow 0} Q_{\e}(\nabla u_{\e}\cdot V) &=& \lim_{\e\rightarrow 0} \delta^2 E_{\e}(u_{\e}, V, \zeta)\nonumber\\ &=& \frac{4}{3}\int_{\Gamma} \left(|\nabla_{\Gamma}\xi|^2 -|A_{\Gamma}|^2|\xi|^2\right) d\mathcal{H}^{N-1} - \frac{4}{3}\int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})|\xi|^2 d\mathcal{H}^{N-2} \nonumber\\ &=& \frac{4}{3} Q(V). \label{polarident} \end{eqnarray} By the definition of $\lambda_k$, we can find $k$ linearly independent, orthonormal vector fields $V^{1} = v^{1}{\bf n},\cdots, V^{k}= v^{k}{\bf n}$ which are defined on $\Gamma$ and normal to $\Gamma$ such that \begin{equation}\label{V_ortho} \int_{\Gamma} v^i v^j d\mathcal{H}^{N-1}=\delta_{ij}~\text{and}~ \max_{\sum_{i=1}^{k} a^2_{i}=1} Q(\sum_{i=1}^{k} a_{i} V^{i})\leq \lambda_k. \end{equation} Denote $$V^{i}_{\e} := \left.\frac{d}{dt}\right\rvert_{t=0} u_{\e} \left(\left(\mathrm{id}+ tV^{i}\right)^{-1}(x)\right)=-\nabla u_{\e}(x)\cdot V^i(x).$$ As in \cite{Le}, we can use Lemma \ref{Res_Sp} to show that the map $V\longmapsto -\nabla u_{\e}\cdot V$ is linear and one-to-one for $\e$ small. Thus, the linear independence of $V^{i}$ implies that of $V^{i}_{\e}$ for $\e$ small. Therefore, the $V^{i}_{\e}$ span a space of dimension $k$. It follows from the variational characterization of $\lambda_{\e, k}$ that \begin{equation}\displaystyle \sup_{\sum_{i=1}^{k} a^2_{i}=1} \frac{Q_{\e}(\sum_{i=1}^k a_i V^{i}_{\e})}{\e\int_{\Omega} |\sum_{i=1}^k a_i V^{i}_{\e}|^2}\geq \frac{\lambda_{\e, k}}{\e}.
\end{equation} Take any sequence $\e\rightarrow 0$ such that $$\frac{\lambda_{\e, k}}{\e}\rightarrow \limsup_{\e\rightarrow 0}\frac{\lambda_{\e, k}}{\e}:= \gamma_k.$$ Then, for any $\delta>0$, we can find $a_1, \cdots, a_k$ with $\sum_{i=1}^k a_i^2=1$ such that for $\e$ small enough \begin{equation} \frac{Q_{\e}(\sum_{i=1}^k a_i V^{i}_{\e})}{\e\int_{\Omega} |\sum_{i=1}^k a_i V^{i}_{\e}|^2} \geq \gamma_k -\delta. \label{Qeq1} \end{equation} By polarizing (\ref{polarident}) as in \cite{Le}, we have for all $a_{i}$ \begin{equation} \lim_{\e\rightarrow 0} Q_{\e} (\sum_{i=1}^{k} a_{i} V_{\e}^{i}) = \frac{4}{3} Q (\sum_{i=1}^{k} a_{i} V^{i}),\label{Qeq2} \end{equation} and the convergence is uniform with respect to $\{a_{i}\}$ such that $\sum_{i=1}^{k} a^2_{i} =1$. Next, we study the convergence of the denominator of the left hand side of (\ref{Qeq1}) when $\e\rightarrow 0$. Invoking Lemma \ref{Res_Sp}, we have \begin{eqnarray}\lim_{\e\rightarrow 0} \e\int_{\Omega} |\sum_{i=1}^k a_i V^{i}_{\e}|^2dx &= &\lim_{\e\rightarrow 0} \e\int_{\Omega} \sum_{i, j=1}^k a_i a_j (\nabla u_{\e}\cdot V^i)(\nabla u_{\e}\cdot V^j)dx\nonumber \\ &=& \frac{4}{3}\sum_{i,j=1}^k a_i a_j \int_{\Gamma} v^i v^j d\mathcal{H}^{N-1}=\frac{4}{3}, \label{Qeq3} \end{eqnarray} where we used the first equation in (\ref{V_ortho}) in the last equality. Combining (\ref{Qeq1})-(\ref{Qeq3}) with (\ref{V_ortho}), we find that $$ \gamma_k -\delta\leq Q(\sum_{i=1}^{k} a_{i} V^{i})\leq \lambda_k.$$ Therefore, by the arbitrariness of $\delta$, we have $\gamma_k\leq \lambda_k,$ proving the theorem. \end{proof} \begin{rem} If the hypotheses of Theorem \ref{eigen_thm} are replaced by those of Theorem \ref{thm-ACc} then the upper semicontinuity of the Allen-Cahn Neumann eigenvalues still holds as stated in Theorem \ref{eigen_thm}.
For this, we just replace the following in the above proof of Theorem \ref{eigen_thm}: \begin{myindentpar}{1cm} (i) the use of Theorem \ref{thm-AC2} by the use of Theorem \ref{thm-ACc};\\ (ii) the min-max characterization of eigenvalues by the {\bf weighted} min-max characterization of eigenvalues as in \cite[Section 4]{Ga} and \cite[Section 3.2]{Hi};\\ (iii) Lemma \ref{Res_Sp} by (\ref{mKc}). \end{myindentpar} \end{rem} \section{The inner variations of a nonlocal energy and their asymptotic limits } \label{B_sec} With the ultimate aim of studying the asymptotic limits of the Gateaux variations and inner variations of the nonlocal Ohta-Kawasaki energy in the following section (see \eqref{OKa}), we turn now to the calculation and asymptotic behavior of these variations for the nonlocal part of this energy. To this end, for each $u\in L^1(\Omega)$, we denote its average on $\Omega$ by \begin{equation}\bar{u}_{\Omega}:=\frac{1}{\abs{\Omega}}\int_{\Omega} u(x) dx. \label{volc} \end{equation} We associate to each $u\in L^2(\Omega)$ a function $v\in W^{2,2}(\Omega)$ as the solution to the following Poisson equation with Neumann boundary condition: \begin{equation} -\Delta v = u-\bar{u}_{\Omega}~\text{in}~\Omega, \frac{\partial v}{\partial \nu}=0~\text{on}~\partial\Omega, ~\int_{\Omega} v(x) dx=0. \label{Poisson} \end{equation} Let $G(x, y)$ be the Green's function for $\Omega$ with the Neumann boundary condition: \begin{equation} \label{G_def} -\Delta_y G(x, y)=\delta_x -\frac{1}{|\Omega|} ~\text{in }\Omega, ~\frac{\partial G(x,y)}{\partial \nu_y} =0~~\text{on }\partial\Omega,~ \int_{\Omega} G(x, y) dx =0 ~(\text{for all } y\in\Omega), \end{equation} where $\delta_x$ is a delta-mass measure supported at $x\in\Omega$. If ${\bf \Phi}(x)$ is the fundamental solution of Laplace's equation, that is, \begin{equation*} {\bf \Phi}(x):=\left\{ \begin{array}{ll} -\frac{1}{2\pi}\log |x|& \mbox{if $N =2$},\\ \frac{1}{|B_1(0)|N (N-2) |x|^{N-2}} & \mbox{if $N >2$},\end{array} \right. 
\end{equation*} then, for any fixed $x\in \Omega$, \begin{equation} \label{G_Phi} G(x, y)-{\bf \Phi} (x-y)~\text{is a } C^{\infty}~\text{function of } y ~\text{in a neighborhood of } x. \end{equation} Note that $$v(x) = \int_{\Omega} G(x, y) u(y) dy.$$ Consider the following nonlocal functional on $L^{2}(\Omega)$ \begin{equation} \label{B_def} B(u):= \int_{\Omega}|\nabla v(x)|^2dx =\int_{\Omega}\int_{\Omega}G(x, y) u(x) u(y) dxdy. \end{equation} The following lemma provides formulae for the Gateaux variations and inner variations of $B$ up to the second order. \begin{lemma} [Gateaux variations and inner variations of $B$] \label{B_lem} Assume that $u\in C^3(\overline{\Omega})$, $\varphi\in C^3(\overline{\Omega})$ and $\eta,\zeta\in (C^3(\overline{\Omega}))^N$. Let $B(u)$ be defined as in (\ref{B_def}). Then, one has \begin{equation} d B(u, \varphi) = 2\int_{\Omega}\int_{\Omega} G(x, y) u(y) \varphi(x) dxdy =2\int_{\Omega} v\varphi dx, \end{equation} \begin{equation} d^2 B(u, \varphi) = 2\int_{\Omega}\int_{\Omega} G(x, y) \varphi(x) \varphi(y) dxdy, \end{equation} \begin{equation} \label{oneB} \delta B(u, \eta)= -2\int_{\Omega}\int_{\Omega} G(x, y) u(y)\nabla u(x) \cdot \eta(x) dxdy, \end{equation} and \begin{multline} \delta^2 B(u, \eta, \zeta)= 2\int_{\Omega}\int_{\Omega} G(x, y) (\nabla u(y)\cdot \eta(y)) (\nabla u(x)\cdot \eta(x)) dxdy \\+ 2\int_{\Omega}\int_{\Omega} G(x, y) u(x)X_0(y) dxdy\label{twoB} \end{multline} where we recall from \eqref{Xzero} that $$X_0=(D^2 u \cdot \eta, \eta) + (\nabla u, 2(\eta\cdot\nabla ) \eta -\zeta).$$ \end{lemma} An immediate consequence of Lemma \ref{B_lem} is the following corollary which is a nonlocal counterpart of Corollary \ref{inner_rem}. It establishes the relationship between Gateaux variations and inner variations up to the second order. \begin{cor} \label{B_cor} Assume that $u\in C^3(\overline{\Omega})$, and $\eta,\zeta\in (C^3(\overline{\Omega}))^N$. Let $B(u)$ be defined as in (\ref{B_def}).
Then, one has \begin{equation} \label{Bfirst} \delta B(u, \eta) = d B(u, -\nabla u\cdot \eta), \end{equation} \begin{equation}\delta^2 B(u, \eta, \zeta) = d^2 B(u, -\nabla u\cdot \eta) + dB (u, X_0). \label{Bsecondgen} \end{equation} \end{cor} \begin{proof}[Proof of Lemma \ref{B_lem}] The formulae for $d B(u, \varphi)$ and $d^2 B(u, \varphi)$ can be obtained easily using their definitions \[ dB(u,\varphi)= \left.\frac{d}{dt}\right\rvert_{t=0} B(u + t\varphi),~d^{2} B(u,\varphi) = \left.\frac{d^2}{dt^2}\right\rvert_{t=0} B(u + t\varphi),\] so we skip their derivations. Now we establish the formulae for $\delta B(u,\eta)$ and $\delta^2 B(u,\eta,\zeta)$. Let $\Phi_t(x)= x+ t\eta (x) + \frac{t^2}{2} \zeta(x)$ and $u_{ t}(y):= u(\Phi^{-1}_t(y))$. Then, by (\ref{ut_expand}), we have \begin{equation} u_{ t}(y):= u(\Phi^{-1}_t(y)) = u (y) -t\nabla u(y)\cdot \eta(y) + \frac{t^2}{2} X_0 (y)+ O(t^3).\label{utnew} \end{equation} It follows that \begin{multline*} B(u_t)=\int_{\Omega}\int_{\Omega} G(x, y) u_{t}(y)u_{t}(x)dxdy= \int_{\Omega}\int_{\Omega}G(x, y) u(x) u(y) dxdy\\ -2t\int_{\Omega}\int_{\Omega} G(x, y) u(y)\nabla u(x) \cdot \eta(x) dxdy\\ + t^2 \left(\int_{\Omega}\int_{\Omega} G(x, y) (\nabla u(y)\cdot \eta(y)) (\nabla u(x)\cdot \eta(x)) dxdy + \int_{\Omega}\int_{\Omega} G(x, y) u(x)X_0(y) dxdy\right) + O(t^3). \end{multline*} Recalling (see (\ref{inner1defn}) and (\ref{innerdefn2})) that \begin{equation*} \delta B(u,\eta) = \left.\frac{d}{dt}\right\rvert_{t=0} B(u_t),~ \delta^{2} B(u,\eta,\zeta) = \left.\frac{d^2}{dt^2}\right\rvert_{t=0} B(u_t), \end{equation*} we obtain the first and second inner variations for $B$ as asserted. \end{proof} The next theorem studies the asymptotic limits of the inner variations of the nonlocal functional $B$ under suitable assumptions. It can be viewed as a nonlocal analogue of Theorem \ref{thm-AC2}.
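Before turning to the theorem, we record a formal consistency check, under the assumption that the interface $\Gamma$ is regular enough for the divergence theorem to apply. Since $u_0\in BV(\Omega,\{1,-1\})$ jumps from $1$ to $-1$ across $\Gamma$ in the direction of the unit normal ${\bf n}$ pointing out of $\{u_0=1\}$, its distributional gradient is $\nabla u_0 = -2{\bf n}\,\mathcal{H}^{N-1}\lfloor \Gamma$. Formally substituting $u_0$ for $u$ in (\ref{oneB}) then gives \[ \delta B(u_0,\eta) = -2\int_{\Omega} v_0\, \nabla u_0\cdot \eta\, dx = 4\int_{\Gamma} v_0\, (\eta\cdot {\bf n})\, d\mathcal{H}^{N-1}, \] which is precisely the right hand side of the first limit (\ref{oneB_lim}) established below.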
As in this theorem, in order to pass to the limit the second inner variation $\delta^2 B(u_\e, \eta, \zeta)$, we can focus on a particular choice of the acceleration vector field $\zeta$. Instead of imposing $\zeta= Z:= (\eta\cdot\nabla)\eta$ as in Theorem \ref{thm-AC2}, we find that we can still pass to the limit when the normal components of $\zeta$ and $Z$ coincide on the boundary $\partial\Omega$. \begin{thm}[Limits of inner variations of the nonlocal energy $B$] \label{B_thm} Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of functions that converges in $L^2(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Throughout, we will denote by ${\bf n}$ the unit normal to $\Gamma$ pointing out of the region $\{x:\,u_0(x)=1\}$. Let $G$ be defined as in (\ref{G_def}). Let $B$ be defined as in (\ref{B_def}). Then, for all smooth vector fields $\eta,\zeta\in (C^{3}(\overline{\Omega}))^{N}$ with $\eta\cdot \nu=0$ on $\partial\Omega$, and $\zeta\cdot \nu=Z\cdot\nu$ on $\partial\Omega$ where we recall $Z:= (\eta\cdot\nabla)\eta$, we have \begin{equation} \label{oneB_lim} \lim_{\e\rightarrow 0}\delta B(u_\e, \eta)= 4 \int_{\Gamma} v_0(\eta\cdot {\bf n}) d \mathcal{H}^{N-1} (x) \end{equation} and \begin{multline} \label{twoB_lim} \lim_{\e\rightarrow 0}\delta^2 B(u_\e, \eta, \zeta)= 8\int_{\Gamma}\int_{\Gamma} G(x, y) (\eta(x)\cdot {\bf n}(x))(\eta(y)\cdot {\bf n}(y)) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y) \\+ 4 \int_{\Gamma} (\nabla v_0\cdot\eta) (\eta\cdot {\bf n}) d \mathcal{H}^{N-1} (x) + 4 \int_{\Gamma} v_0 (\zeta-Z + (\div \eta)\eta)\cdot {\bf n} d \mathcal{H}^{N-1} (x). \end{multline} Here we use the following notations: \[ v_\e(x) := \int_{\Omega} G(x, y) u_\e(y) dy\quad\mbox{and}\quad v_0(x) := \int_{\Omega} G(x, y) u_0(y) dy.
\] \end{thm} \begin{proof}[Proof of Theorem \ref{B_thm}] We will apply Lemma \ref{B_lem} where $X_0$ is now replaced by \begin{eqnarray}X_{\e}&:=& (D^2 u_\e \cdot \eta, \eta) + (\nabla u_\e, 2(\eta\cdot\nabla ) \eta -\zeta)\nonumber\\&=&(D^2 u_{\e} \cdot \eta, \eta) + (\nabla u_{\e}, (\eta\cdot\nabla ) \eta + \div (\eta) \eta) + (\nabla u_\e, (\eta\cdot\nabla)\eta- (\div\eta)\eta -\zeta) \nonumber\\ &=& \div \left((\nabla u_\e\cdot \eta)\eta\right) + (\nabla u_\e, Z-\zeta- (\div\eta)\eta)\equiv D_\e + (\nabla u_\e, Z-\zeta- (\div\eta)\eta) \label{Xep} \end{eqnarray} where \begin{equation} \label{Dep} D_\e:= \div \left((\nabla u_\e\cdot \eta)\eta\right). \end{equation} From (\ref{oneB}) in Lemma \ref{B_lem}, we have \begin{equation} \delta B(u_\e, \eta)= -2\int_{\Omega}\int_{\Omega} G(x, y) u_{\e}(y)(\nabla u_{\e}(x)\cdot \eta(x))\, dxdy\label{Bone}. \end{equation} From (\ref{twoB}) in Lemma \ref{B_lem} together with (\ref{Xep}), we obtain \begin{multline} \delta^2 B(u_\e, \eta, \zeta)= 2\int_{\Omega}\int_{\Omega} G(x, y) (\nabla u_{\e}(y)\cdot \eta(y)) (\nabla u_{\e}(x)\cdot \eta(x)) dxdy \\+ 2\int_{\Omega}\int_{\Omega} G(x, y) u_{\e}(y)D_{\e}(x) dxdy+ 2\int_{\Omega}\int_{\Omega} G(x, y) u_{\e}(y)(\nabla u_\e(x), Z(x)-\zeta(x)- (\div\eta(x))\eta(x)) dxdy.\label{Btwo} \end{multline} {\bf Claim 1:} We have \begin{equation*} \lim_{\e\rightarrow 0} \int_{\Omega}\int_{\Omega} G(x, y) u_{\e}(y)(\nabla u_{\e}(x)\cdot \eta(x)) dxdy= -2\int_{\Gamma} v_0(\eta\cdot {\bf n}) d \mathcal{H}^{N-1} (x) \end{equation*} and \begin{multline*} \lim_{\e\rightarrow 0} \int_{\Omega}\int_{\Omega} G(x, y) u_{\e}(y)(\nabla u_\e(x), Z(x)-\zeta(x)- (\div\eta(x))\eta(x)) dxdy\\= -2 \int_{\Gamma} v_0 (Z-\zeta - (\div \eta)\eta)\cdot {\bf n} d \mathcal{H}^{N-1} (x). 
\end{multline*} {\bf Claim 2:} We have \begin{align*} \lim_{\e\rightarrow 0} \int_{\Omega}\int_{\Omega} &G(x, y) (\nabla u_{\e}(y)\cdot \eta(y)) (\nabla u_{\e}(x)\cdot \eta(x)) dxdy \\ &= 4\int_{\Gamma}\int_{\Gamma} G(x, y) (\eta(x)\cdot {\bf n}(x))(\eta(y)\cdot {\bf n}(y)) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y). \end{align*} {\bf Claim 3:} For $D_\e$ as in (\ref{Dep}), we have \[ \lim_{\e\rightarrow 0} \int_{\Omega} \int_{\Omega} G(x, y) u_{\e}(y)D_{\e}(x)\, dxdy=2\int_{\Gamma} (\nabla v_0\cdot\eta) (\eta\cdot {\bf n})\ d \mathcal{H}^{N-1}(x). \] Using the above claims in (\ref{Bone}) and (\ref{Btwo}), we obtain (\ref{oneB_lim}) and (\ref{twoB_lim}) as claimed in the theorem. We now prove the above claims. Let us start with the proof of Claim 3. Using (\ref{Dep}) and $\eta\cdot\nu=0$ on $\partial\Omega$, we find after two integrations by parts that \begin{eqnarray} \label{GDep1} \int_{\Omega}\int_{\Omega}G(x, y) u_{\e}(y)D_{\e}(x) \,dxdy&=& \int_{\Omega} v_{\e}(x) D_{\e}(x)\, dx =\int_{\Omega} v_{\e} \div ((\nabla u_{\e}\cdot \eta)\eta)\,dx\nonumber\\&=&-\int_{\Omega}(\nabla v_{\e}\cdot\eta)(\nabla u_{\e}\cdot\eta)\,dx= \int_{\Omega} \div \left((\nabla v_{\e}\cdot\eta)\eta\right) u_\e\, dx. \end{eqnarray} From $u_\e\rightarrow u_0$ in $L^2(\Omega)$ and the global $W^{2,2}(\Omega)$ estimates for the Poisson equation (\ref{Poisson}), we have \begin{equation} \label{vep_w22} v_\e\rightarrow v_0~\text{in } W^{2,2}(\Omega). \end{equation} In particular $D^2 v_\e\rightarrow D^2 v_0~\text{in } L^{2}(\Omega).$ Thus, when $\e\rightarrow 0$, we have \begin{equation} \label{GDep2} \int_{\Omega} \div \left((\nabla v_{\e}\cdot\eta)\eta\right) u_\e dx \rightarrow \int_{\Omega} \div \left((\nabla v_0\cdot\eta)\eta\right) u_0 dx=2\int_{\Gamma} (\nabla v_0\cdot\eta) (\eta\cdot {\bf n})\ d \mathcal{H}^{N-1}(x). \end{equation} Combining (\ref{GDep1}) and (\ref{GDep2}), we obtain Claim 3. Let us now prove Claim 1. We start with the first limit. 
We have \begin{equation*} \int_{\Omega}\int_{\Omega} G(x, y) u_{\e}(y)(\nabla u_{\e}(x)\cdot \eta(x)) dxdy= \int_{\Omega} v_\e (x) (\nabla u_{\e}(x)\cdot \eta(x)) dx. \end{equation*} Integrating by parts and using the fact that $\eta\cdot\nu=0$ on $\partial\Omega$, we have \begin{eqnarray*} \int_{\Omega} v_\e (x) (\nabla u_{\e}(x)\cdot \eta(x))dx=-\int_{\Omega} \div(v_\e\eta) u_\e dx &\rightarrow& -\int_{\Omega} \div(v_0\eta) u_0 dx\\&=& -2\int_{\Gamma} v_0(\eta\cdot {\bf n}) d \mathcal{H}^{N-1} (x). \end{eqnarray*} In the above convergence, we have used the facts that $u_\e\rightarrow u_0$ in $L^2(\Omega)$ and $\div(v_\e\eta)\rightarrow \div(v_0\eta)$ in $W^{1,2}(\Omega)$, which is a consequence of (\ref{vep_w22}). The first limit of Claim 1 is hence established. The proof of the second limit in Claim 1 is similar; we simply replace $\eta$ in the first limit by $Z-\zeta- (\div\eta)\eta$. For this, we note that from $\zeta\cdot\nu=Z\cdot\nu$ on $\partial\Omega$ and $\eta\cdot\nu=0$ on $\partial\Omega$, we also have $(Z-\zeta- (\div\eta)\eta)\cdot\nu=0$ on $\partial\Omega$. The proof of Claim 1 is thus completed. Finally, we prove Claim 2. To do this, we introduce some notation. Let \begin{equation} \label{aep} a_\e(x)= \nabla u_\e(x)\cdot\eta (x) \in C^2 (\overline{\Omega}). \end{equation} Let $w_\e$ be the solution to the following Poisson equation with Neumann boundary condition: $$-\Delta w_\e = a_\e-\overline{(a_\e)}_{\Omega}~\text{in}~\Omega, \quad\frac{\partial w_\e}{\partial \nu}=0~\text{on}~\partial\Omega, \quad\int_{\Omega} w_\e(x) dx=0.$$ Then $w_\e\in C^{3,\alpha}(\overline{\Omega})$ for all $\alpha\in (0, 1)$ and \begin{equation} \label{wep} w_\e(x) = \int_{\Omega} G(x, y) a_\e(y) dy.
\end{equation} Integrating by parts and using the fact that $\eta\cdot\nu=0$ on $\partial\Omega$, we have \begin{eqnarray} \label{GtwoU} \int_{\Omega}\int_{\Omega} G(x, y) (\nabla u_{\e}(y)\cdot \eta(y)) (\nabla u_{\e}(x)\cdot \eta(x))\, dxdy&=& \int_{\Omega} w_\e(x) (\nabla u_{\e}(x)\cdot \eta(x)) \,dx \nonumber \\&=&- \int_{\Omega} \div(w_\e\eta) u_{\e} dx. \end{eqnarray} To prove Claim 2, we study the convergence properties in $L^p(\Omega)$ of $w_\e$ and $\nabla w_\e$. Integrating by parts and using the fact that $\eta\cdot\nu=0$ on $\partial\Omega$, we have from (\ref{aep}) and (\ref{wep}) \begin{equation} \label{wep2} w_\e(x) = \int_{\Omega} G(x, y) \nabla u_\e(y)\cdot\eta (y) dy =-\int_{\Omega} \div_y (G(x, y)\eta(y)) u_\e(y) dy. \end{equation} Using (\ref{G_Phi}), we find that the most singular term in $\div_y (G(x, y)\eta(y))$ is of the form $\frac{C}{|x-y|^{N-1}}$ which, for a fixed $x$, belongs to $L^p(\Omega)$ for all $p<\frac{N}{N-1}$. Thus, since $u_\e\in L^2(\Omega)$, we have by Young's convolution inequality that $w_\e\in L^q(\Omega)$ for all $q<q_\ast=\frac{2N}{N-2}$, which comes from the relation $$\frac{1}{q_\ast} + 1= \frac{N-1}{N} +\frac{1}{2}.$$ In particular, if $u_\e\rightarrow u_0$ in $L^2(\Omega)$ then from (\ref{wep2}), we have the following convergence in $L^q(\Omega)$ for all $q<\frac{2N}{N-2}$: \begin{equation}w_\e\rightarrow w_0, \label{wlq} \end{equation} where \begin{equation} \label{w0for} w_0(x):= -\int_{\Omega} \div_y (G(x, y)\eta(y)) u_0(y) dy =-2\int_{\Gamma} G(x, y)\eta (y) \cdot {\bf n}(y) d \mathcal{H}^{N-1}(y). \end{equation} For the convergence of $\nabla w_\e$, we observe from (\ref{wep2}) that \begin{equation} \label{dwep} \nabla w_\e(x) = -\int_{\Omega} \div_y (\nabla_x G(x, y)\eta(y)) u_\e(y) dy.
\end{equation} Expanding $\div_y (\nabla_x G(x, y)\eta(y))$ and using (\ref{G_Phi}), we find that the most singular term on the right hand side of (\ref{dwep}) is of the form $$R_{ij} (\eta u_\e)(x):=\int_{\Omega} \frac{(x_i-y_i)(x_j-y_j)}{|x-y|^{N+2}}\eta(y) u_\e(y) dy.$$ Applying the $L^2$-$L^2$ estimates in the Calderon-Zygmund theory of singular integral operators, we find that $$\|R_{ij}(\eta u_\e)\|_{L^2(\Omega)} \leq C(N,\Omega) \|\eta u_\e\|_{L^{2}(\Omega)}.$$ It follows that, if $u_\e\rightarrow u_0$ in $L^2(\Omega)$ then we have the following convergence in $L^2(\Omega)$: \begin{equation}\nabla w_\e(x)\rightarrow \nabla w_0(x) = -\int_{\Omega} \div_y (\nabla_x G(x, y)\eta(y)) u_0(y) dy. \label{dwl2} \end{equation} From (\ref{wlq}) and (\ref{dwl2}), we have \begin{equation} \label{wu_lim} - \int_{\Omega} \div(w_\e\eta) u_{\e} dx\rightarrow -\int_{\Omega} \div(w_0\eta) u_0 dx. \end{equation} Using (\ref{w0for}), we find that \begin{eqnarray} \label{wu0} -\int_{\Omega} \div(w_0\eta) u_0 dx&=& -2\int_{\Gamma} w_0(x) \eta (x)\cdot {\bf n}(x) d \mathcal{H}^{N-1} (x)\nonumber \\ &=& 4\int_{\Gamma}\int_{\Gamma} G(x, y) (\eta(x)\cdot {\bf n}(x))(\eta(y)\cdot {\bf n}(y)) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y). \end{eqnarray} Combining (\ref{GtwoU}), (\ref{wu_lim}) and (\ref{wu0}), we obtain the limit as asserted in Claim 2. This completes the proof of Claim 2 and also the proof of our theorem. \end{proof} \section{Applications of Second Variation Convergence for Ohta-Kawasaki} \label{OK_sec} We now wish to analyze the asymptotic behavior of the first and second inner variations of the Ohta-Kawasaki functional \begin{equation} \mathcal{E}_{\e,\gamma}(u)= E_{\varepsilon}(u) + \frac{4}{3}\gamma B(u)=\int_{\Omega}\left(\frac{\varepsilon \abs{\nabla u}^2}{2} +\frac{(1-u^2)^2}{2\varepsilon}\right) dx + \frac{4}{3}\gamma\int_{\Omega}|\nabla v|^2 dx,\label{OKa}\end{equation} a model for microphase separation in diblock copolymers; see \cite{OK}.
Here $\e>0$ and $\gamma\geq 0$, $u: \Omega\rightarrow \RR$, and we are using the same notation for $B$ as in (\ref{B_def}), so that $v$ is required to satisfy \eqref{Poisson}. The factor of $\frac{4}{3}$ is simply put in for convenience in stating the $\Gamma$-convergence result. These functionals are known to $\Gamma$-converge in $L^1(\Omega)$ to $\frac{4}{3}\mathcal{E}_{\gamma}$ (see \cite{RW}), where \begin{equation} \mathcal{E}_{\gamma}(u_0):= E(u_0) +\gamma\, B(u_0)\label{Egamma}, \end{equation} and we recall that \begin{equation*} E(u_0)=\left\{ \begin{alignedat}{1} \frac{1}{2}\int_{\Omega}|\nabla u_0| ~&~ \text{if} ~u_0\in BV (\Omega, \{1, -1\}), \\ \infty~&~ \text{otherwise}. \end{alignedat} \right. \end{equation*} As in Section \ref{AC_sec}, if the interface $\Gamma:= \partial\{x\in \Omega: u_0(x)=1\}\cap \Omega$ separating the $\pm1$ phases of $u_0\in BV (\Omega, \{1, -1\})$ is sufficiently regular, say $C^1$, then we also identify $$E(u_0)\equiv E(\Gamma)=\mathcal{H}^{N-1}(\Gamma)$$ and \begin{equation} \label{Egamma2} \mathcal{E}_{\gamma}(u_0) \equiv \mathcal{E}_{\gamma}(\Gamma)= E(\Gamma) + \gamma B(u_0) = \mathcal{H}^{N-1}(\Gamma) + \gamma B(u_0). \end{equation} Competitors $u:\Omega\to\R$ in the Ohta-Kawasaki functional are generally required to satisfy a mass constraint \begin{equation} \label{mass} \frac{1}{|\Omega|}\int_{\Omega}u\,dx=m\quad\mbox{for some constant}\; m\in (-1,1). \end{equation} We should mention that all of the analysis of this section applies, in particular, to the special case where $\gamma=0$, that is, to the case of the mass-constrained Allen-Cahn or Modica-Mortola functionals. Under such a constraint, this context is perhaps better known as the equilibrium setting for the Cahn-Hilliard problem. We first establish the following theorem, which is the nonlocal Ohta-Kawasaki analogue of Theorem \ref{thm-AC2}.
It allows us to pass to the limit in the first and second inner variations of the Ohta-Kawasaki functionals, without imposing any criticality conditions. \begin{thm}[Limits of inner variations of the Ohta-Kawasaki functional] \label{SIV_OK} Let $ \mathcal{E}_{\e,\gamma}$ and $ \mathcal{E}_{\gamma}$ be as in (\ref{OKa}) and (\ref{Egamma2}) respectively. Let $G$ be defined as in (\ref{G_def}). Let $\{u_{\e}\}\subset C^3 (\overline{\Omega})$ be a sequence of functions that converges in $L^2(\Omega)$ to a function $u_0\in BV (\Omega, \{1, -1\})$ with an interface $\Gamma=\partial\{u_0=1\}\cap\Omega$ having the property that $\overline{\Gamma}$ is $C^2$. Assume that $\lim_{\varepsilon \rightarrow 0} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}) = \frac{4}{3}\mathcal{E}_\gamma(\Gamma).$ Let $v_0(x) := \int_{\Omega} G(x, y) u_0(y) dy.$ Then, for all smooth vector fields $\eta\in (C^{3}(\overline{\Omega}))^{N}$ with $\eta\cdot \nu=0$ on $\partial\Omega$, we have \begin{equation} \label{1vm_lim} \lim_{\varepsilon\rightarrow 0}\delta\mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta) = \frac{4}{3}\left(\delta E(\Gamma,\eta)+ 4 \gamma\int_{\Gamma} v_0 (\eta\cdot {\bf n}) d \mathcal{H}^{N-1} \right) \end{equation} and for such $\eta$ and for $\zeta\in (C^{3}(\overline{\Omega}))^{N}$ with $\zeta\cdot \nu=Z\cdot\nu$ on $\partial\Omega$ where $Z=(\eta\cdot\nabla)\eta$, we have \begin{multline} \label{2vm_lim} \lim_{\varepsilon\rightarrow 0}\frac{3}{4}\delta^{2} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta, \zeta) = \delta^2 E(\Gamma,\eta, Z) + \int_{\Gamma} ({\bf n},{\bf n}\cdot\nabla\eta)^2 d\mathcal{H}^{N-1} + \int_{\Gamma} \div^{\Gamma} (\zeta-Z) d\mathcal{H}^{N-1}\\ + 8\gamma\int_{\Gamma}\int_{\Gamma} G(x, y) (\eta(x)\cdot {\bf n}(x))(\eta(y)\cdot {\bf n}(y)) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y) \\+ 4 \gamma\int_{\Gamma} (\nabla v_0\cdot\eta) (\eta\cdot {\bf n}) d \mathcal{H}^{N-1} + 4 \gamma\int_{\Gamma} v_0 (\zeta-Z + (\div \eta)\eta)\cdot {\bf n} d \mathcal{H}^{N-1}.
\end{multline} \end{thm} \begin{proof} Let $B(u)$ be defined as in (\ref{B_def}). First, note that from (\ref{OKa}), (\ref{Egamma}) and $\lim_{\varepsilon \rightarrow 0} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}) = \frac{4}{3}\mathcal{E}_\gamma(\Gamma),$ we also have $$\lim_{\varepsilon \rightarrow 0} E_{\e}(u_{\varepsilon}) = \frac{4}{3}E(\Gamma),$$ since the $L^2(\Omega)$-convergence of $\{u_\e\}$ to $u_0$ implies that $B(u_\e)\rightarrow B(u_0).$ This means that all conditions of Theorems \ref{thm-AC2} and \ref{B_thm} are satisfied and we can apply their results to the proof of our theorem. Next, observe that \begin{equation*} \delta \mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta) = \delta E_{\e}(u_{\varepsilon}, \eta) + \frac{4}{3}\gamma\,\delta B(u_{\varepsilon}, \eta). \end{equation*} Therefore, (\ref{1vm_lim}) follows from Theorems \ref{thm-AC2} and \ref{B_thm}. Turning to the proof of (\ref{2vm_lim}), we have from the definition of $\mathcal{E}_{\e,\gamma}$ in (\ref{OKa}) that \begin{equation*} \frac{3}{4}\delta^{2} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta, \zeta) = \frac{3}{4}\delta^{2} E_{\e}(u_{\varepsilon}, \eta, \zeta) + \gamma\,\delta^{2} B(u_{\varepsilon}, \eta, \zeta). \end{equation*} We now apply \eqref{secondgen} to $E_{\e}$ at $u_\e$, first with $X_0$ given by (\ref{Xzero}) with $\zeta$ itself and then with $\zeta=Z$ and subtract to find that \begin{eqnarray*} \delta^{2} E_{\e}(u_{\varepsilon}, \eta, \zeta) &=& \delta^{2} E_{\e}(u_{\varepsilon}, \eta, Z) + dE_{\e}(u_\e, \nabla u_\e \cdot (Z-\zeta))\\ &=& \delta^{2} E_{\e}(u_{\varepsilon}, \eta, Z) + \delta E_{\e}(u_\e, \zeta-Z). \end{eqnarray*} In the last equation, we have used (\ref{first}) relating the first Gateaux variation and the first inner variation. 
It follows that \begin{equation} \label{E_split} \frac{3}{4}\delta^{2} \mathcal{E}_{\e,\gamma}(u_{\varepsilon}, \eta, \zeta) = \frac{3}{4}\left( \delta^{2} E_{\e}(u_{\varepsilon}, \eta, Z) + \delta E_{\e}(u_\e, \zeta-Z)\right) +\gamma\delta^{2} B(u_{\varepsilon}, \eta, \zeta). \end{equation} Letting $\e\rightarrow 0$ in $\delta E_{\e}(u_\e, \zeta-Z)$, we find from Theorem \ref{thm-AC2} and \eqref{FVE} that \begin{equation*} \lim_{\e\rightarrow 0} \frac{3}{4}\delta E_{\e}(u_\e, \zeta-Z) = \delta E(\Gamma,\zeta-Z)= \int_{\Gamma} \div^{\Gamma} (\zeta-Z) d\mathcal{H}^{N-1}. \end{equation*} Letting $\e\rightarrow 0$ in (\ref{E_split}), using the above limit together with Theorems \ref{thm-AC2} and \ref{B_thm}, we obtain (\ref{2vm_lim}). \end{proof} Next we wish to apply Theorem \ref{SIV_OK} to the case of stable critical points of the Ohta-Kawasaki functional $\mathcal{E}_{\e,\gamma}$ subject to a mass constraint, which is the context of Theorem \ref{OK2}. To be clear, we refer to a function $u:\Omega\to\R$ as a critical point of $\mathcal{E}_{\e,\gamma}$ subject to a mass constraint if $d\mathcal{E}_{\e,\gamma}(u,\phi)=0$ whenever $\int_\Omega\phi(y)\,dy=0$, and we say $u$ is a stable critical point of the Ohta-Kawasaki functional $ \mathcal{E}_{\e,\gamma}$ if additionally $d^2\mathcal{E}_{\e,\gamma}(u,\phi)\geq 0$ for such functions $\phi.$ Before proving Theorem \ref{OK2}, we would like to explain the peculiar choices of the velocity and acceleration vector fields $\eta$ and $\zeta$ stated in the theorem. Their choices were explained in \cite[Theorem 1.4]{Le2}. For the reader's convenience, we repeat the argument here in the following remark.
\begin{rem} \label{W_rem} The choice of the velocity and acceleration vector fields $\eta$ and $\zeta$ in $$\Phi_t(x)= x+ t\eta(x) +\frac{t^2}{2}\zeta(x)$$ in applications to the inner variations of the mass-constrained Ohta-Kawasaki functional is motivated by the fact that we wish the family $\Phi_t(E_0)$ of deformations of $E_0:=\{x\in\Omega: u_0(x)=1\}$ to preserve the volume of $E_0$ up to the second order in $t$, that is, \begin{equation} \label{vol_Et} |\Phi_t(E_0)|= |E_0| + o(t^2). \end{equation} For $t$ sufficiently small, we have as in (2.16), \begin{multline*} \abs{\det\nabla\Phi_{t}(x)}=\det\nabla\Phi_{t}(x) =\det (I + t\nabla \eta (x) +\frac{t^2}{2}\nabla \zeta(x))\\= 1 + t\div \eta + \frac{t^2}{2}[ \div\zeta + (\div\eta)^2 - \text{trace}((\nabla\eta)^2)] + O(t^3). \end{multline*} It follows that, for small $t$, we have $$|\Phi_t(E_0)| = \int_{E_0} \abs{\det\nabla\Phi_{t}(x)}dx= \int_{E_0} \{1 + t\div \eta + \frac{t^2}{2}[ \div\zeta + (\div\eta)^2 - \text{trace}\,((\nabla\eta)^2)] + O(t^3)\} \,dx .$$ The requirement (\ref{vol_Et}) is reduced to a set of two equations: \begin{equation} \label{vol_Et2} \int_{E_0} \text{div}\, \eta~ dx =0,~\text{and}~\int_{E_0} [ \text{div}\,\zeta + (\text{div}\,\eta)^2 - \text{trace}\,((\nabla\eta)^2)]\,dx=0. \end{equation} Note that $$ (\div\,\eta)^2 - \text{trace}\,((\nabla\eta)^2) = \div \left((\div\,\eta)\eta - (\eta\cdot\nabla)\eta\right).$$ Thus, for any $\eta$, we can choose $\zeta = W:= - (\div\,\eta)\eta + (\eta\cdot\nabla)\eta$ so that the second equation in (\ref{vol_Et2}) holds. The issue is now reduced to the first equation in (\ref{vol_Et2}). However, when $\int_{\Gamma}\eta\cdot{\bf n}\,d\mathcal{H}^{N-1}=0$ and $\eta\cdot\nu=0$ on $\p\Omega$, an application of the divergence theorem shows that the first equation is also satisfied. \end{rem} We can now present the proof of Theorem \ref{OK2} from the introduction.
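Before doing so, we note for the reader's convenience that the algebraic identity used in Remark \ref{W_rem} can be checked directly in coordinates (with summation over the repeated indices $i, j = 1, \dots, N$): \begin{eqnarray*} \div\left((\div\eta)\eta\right) &=& \partial_i\left((\partial_j\eta^j)\eta^i\right) = (\partial_i\partial_j\eta^j)\eta^i + (\div\eta)^2,\\ \div\left((\eta\cdot\nabla)\eta\right) &=& \partial_i\left(\eta^j\partial_j\eta^i\right) = \text{trace}\,((\nabla\eta)^2) + \eta^j\partial_j\partial_i\eta^i. \end{eqnarray*} Since $\eta^j\partial_j\partial_i\eta^i = (\partial_i\partial_j\eta^j)\eta^i$, subtracting the second line from the first gives $$\div\left((\div\eta)\eta - (\eta\cdot\nabla)\eta\right) = (\div\eta)^2 - \text{trace}\,((\nabla\eta)^2).$$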
\begin{proof}[Proof of Theorem \ref{OK2}] Consider smooth vector fields $\eta\in (C^{3}(\overline{\Omega}))^{N}$ satisfying \begin{equation} \int_\Gamma \eta \cdot {\bf n}d\mathcal{H}^{N-1}(x)=0\quad\mbox{and}\quad \eta\cdot \nu=0\;\mbox{on}\; \partial\Omega.\label{gdeta} \end{equation} As explained in Remark \ref{W_rem}, (\ref{gdeta}) guarantees the preservation of mass up to the first order for the limit problem if we deform the set $E_0:=\{x\in\Omega: u_0(x)=1\}$ using $\Phi_t(E_0)$ where $\Phi_t(x) = x + t\eta (x) + O(t^2).$ Furthermore, with (\ref{gdeta}) in hand, we can choose the acceleration vector field $\zeta:=W= (\eta\cdot\nabla )\eta -(\div\eta)\eta$ so that if we deform the set $E_0$ using $\Phi_t(E_0)$ where $\Phi_t(x) = x + t\eta (x) + \frac{t^2}{2}\zeta (x) + O(t^3),$ the mass is preserved up to second order. Now, we ``lift'' all these to the $\e$-level. Our first task will be to create a perturbation of $u_\e$ in the form of \begin{equation} \label{ue_mass} u_{\e, t}(y)=u_\e(\Phi_{\e, t}^{-1}(y)) \end{equation} that preserves the mass constraint \eqref{mass} to second order for a suitable deformation map $$\Phi_{\e, t}(y)= y + t\eta^{\e}(y) + \frac{t^2}{2} \zeta^{\e}(y) + O(t^3).$$ To this end, first we construct $C^3(\overline{\Omega})$ perturbations $\eta^{\e}$ of $\eta$ such that $\eta^\e\cdot\nu=0$ on $\p\Omega$, and \begin{equation} \label{etae} \lim_{\e\rightarrow 0}\|\eta^{\e}-\eta\|_{C^{3}(\overline{\Omega})}=0,\quad\int_{\Omega} u_{\e}\div \eta^{\e}dx=0. \end{equation} In light of \eqref{ue_mass} and \eqref{ut_expand} with $\eta$ replaced by $\eta^\e$, the integral condition in \eqref{etae} will guarantee that to first order, mass is conserved since \begin{equation} \left.\frac{d}{dt}\right\rvert_{t=0} \int_{\Omega}u_{\e, t}(y)\,dy=-\int_{\Omega}\nabla u_\e\cdot \eta^\e\,dy=0.\label{zeromean} \end{equation} Here is a simple way to construct $\eta^{\e}$. 
Choose any smooth vector field $\beta\in (C^{3}(\overline{\Omega}))^{N}$ satisfying $\beta\cdot\nu=0$ on $\p\Omega$ and $\int_{\Gamma}\beta\cdot {\bf n} d\mathcal{H}^{N-1}(x)\neq 0.$ Let $$h(\e):=\frac{-\int_{\Omega} u_{\e}\div \eta \,dx}{\int_{\Omega} u_{\e}\div \beta \,dx}~\text{and}~\eta^{\e}= \eta (x) + h(\e) \beta(x).$$ Then, the second equation in (\ref{etae}) is satisfied. Let $E_0=\{x\in \Omega: u_0(x)=1\}.$ Then, as $\e\rightarrow 0,$ we have $$h(\e)\rightarrow \frac{-2\int_{E_0}\div\,\eta\, dx}{2\int_{E_0} \div\,\beta \,dx}= \frac{-2\int_{\Gamma}\eta\cdot {\bf n}\, d\mathcal{H}^{N-1}(x)}{2\int_{\Gamma}\beta\cdot {\bf n}\,d\mathcal{H}^{N-1}(x)}=0.$$ Therefore, the first equation in (\ref{etae}) is also satisfied. With \eqref{zeromean} in hand, the function $-\nabla u_\e \cdot\eta^\e$ is admissible in computing the first and second Gateaux variations of $\mathcal{E}_{\e,\gamma}$ with respect to the mass constraint \eqref{mass}. We will first investigate the $\e\rightarrow 0$ limit of the criticality condition $d \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e})=0.$

(i) Using the convergence of $\eta^\e$ to $\eta$ given in \eqref{etae}, along with the uniform boundedness of $\mathcal{E}_{\e,\gamma}(u_{\e})$, a glance at the explicit formulae for $\delta \mathcal{E}_{\e,\gamma} (u_{\e}, \eta^{\e})= \delta E_{\e}(u_\e,\eta^\e) + \frac{4}{3}\gamma \delta B(u_\e,\eta^\e)$ given in \eqref{star} and \eqref{Bone} easily leads to the conclusion that \begin{equation*} \lim_{\e\rightarrow 0}\delta \mathcal{E}_{\e,\gamma} (u_{\e}, \eta^{\e})= \lim_{\e\rightarrow 0}\delta \mathcal{E}_{\e,\gamma} (u_{\e}, \eta) =\frac{4}{3}\left(\delta E(\Gamma,\eta )+ 4 \gamma\int_{\Gamma} v_0 (\eta\cdot {\bf n}) d \mathcal{H}^{N-1} (x)\right), \end{equation*} where the last equality comes from \eqref{1vm_lim} of Theorem \ref{SIV_OK}.
Using (\ref{first}) and (\ref{Bfirst}), we have \begin{equation*} d \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e})= \delta \mathcal{E}_{\e,\gamma} (u_{\e}, \eta^{\e}). \end{equation*} Combining the above equations with $d \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e})=0$, we get \begin{equation*} \delta E(\Gamma,\eta )+ 4 \gamma\int_{\Gamma} v_0 (\eta\cdot {\bf n})\, d \mathcal{H}^{N-1} (x)=0. \end{equation*} Invoking \eqref{FVE} we find that \begin{equation} \int_{\Gamma} (\div^{\Gamma}\eta + 4 \gamma v_0 (\eta\cdot {\bf n})) \,d \mathcal{H}^{N-1} (x)=0. \end{equation} By decomposing $\eta= \eta^{\perp} +\eta^{T}$ where $\eta^{\perp}= (\eta\cdot{\bf n}){\bf n}$, we have \[ 0=\int_{\Gamma}((N-1) H + 4 \gamma v_0)(\eta\cdot{\bf n})\,d \mathcal{H}^{N-1} + \int_{\p\Gamma\cap\p\Omega} \eta^{T}\cdot {\bf n}^{\ast} d \mathcal{H}^{N-2}. \] Here we have used the Divergence Theorem to evaluate $\int_{\Gamma} \div^{\Gamma}\eta $ as in (\ref{Hzero}), and ${\bf n}^{\ast}$ denotes the co-normal vector orthogonal to $\partial\Omega\cap\partial\Gamma.$ Since this relation holds for all $\eta$ satisfying \eqref{gdeta}, it follows that there is a constant $\lambda$ such that $(N-1) H + 4 \gamma v_0 =\lambda$ on $\Gamma$ and $\p\Gamma$ must meet $\p\Omega$ orthogonally, if at all. (See \cite[p. 70]{SZ2} for more details.) Thus, (i) is established.

(ii) Turning to the proof of (ii), we introduce \[ W:= (\eta \cdot\nabla) \eta-(\div \eta)\eta\quad\mbox{and}\quad W^\e:=(\eta^\e \cdot\nabla) \eta^\e-(\div \eta^\e)\eta^\e. \] In light of the $C^3$ convergence of $\eta^\e$ to $\eta$ we note that \[ \lim_{\e\rightarrow 0}\|W^{\e}-W\|_{C^{2}(\overline{\Omega})}=0.
\] Consequently, the uniform energy bound on $\mathcal{E}_{\e,\gamma}(u_{\e})$ and the explicit formulae for $\delta^2 \mathcal{E}_{\e,\gamma} (u_{\e}, \eta^{\e}, W^{\e})= \delta^2 E_{\e}(u_\e,\eta^\e, W^\e) + \frac{4}{3}\gamma \delta^2 B(u_\e,\eta^\e, W^\e)$ given in \eqref{svep-p} and \eqref{Btwo} imply that \begin{equation}\lim_{\e\rightarrow 0}\delta^2 \mathcal{E}_{\e,\gamma} (u_{\e}, \eta^{\e}, W^{\e})= \lim_{\e\rightarrow 0}\delta^2 \mathcal{E}_{\e,\gamma} (u_{\e}, \eta, W). \label{2svme} \end{equation} Now using the relation between the Gateaux and inner second variation of $E_\e$ and $B$ provided by Corollaries \ref{inner_rem} and \ref{B_cor}, we obtain \begin{equation}d^2 \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e})=\delta^2 \mathcal{E}_{\e,\gamma}(u_{\e}, \eta^{\e}, W^{\e})-d\mathcal{E}_{\e,\gamma}(u_{\e}, X_{\e}) \label{2svm} \end{equation} where \[ X_{\e}= (D^2 u_{\e}(y) \cdot \eta^{\e}(y), \eta^{\e} (y)) + (\nabla u_{\e}(y), (\eta^{\e}\cdot\nabla ) \eta^{\e} (y) + \div (\eta^{\e}) \eta^{\e})= \div ((\nabla u_{\e}\cdot \eta^{\e})\eta^{\e}). \] But since $\eta^\e\cdot\nu=0$ on $\p\Omega$, the divergence theorem implies that $\int_\Omega X_\e\,dx=0$ and so by the criticality of $u_\e$ we have $d\mathcal{E}_{\e,\gamma}(u_{\e}, X_{\e})=0$. The fact that the integral of $X_\e$ vanishes is no coincidence. It is precisely related to the fact that our choices of $W$ and $W^\e$ preserve mass to second order. The first order preservation was already guaranteed by \eqref{zeromean}.
For the second order preservation, we note that with $u_{\e, t}$ defined by (\ref{ue_mass}), we can use (\ref{ut_expand}) with $X_0$ replaced by $X_\e$ to get $$ \left.\frac{d^2}{dt^2}\right\rvert_{t=0} \int_{\Omega}u_{\e, t}(y)\,dy=\int_{\Omega}X_\e(y)\,dy =0.$$ At this point we further restrict $\eta$ to additionally satisfy \[ \eta=\xi {\bf n} \quad\mbox{and}\quad ({\bf n},{\bf n}\cdot\nabla\eta) =0\quad \mbox{on} \;\Gamma \] for any smooth function $\xi:\overline{\Omega}\rightarrow\RR$ satisfying $$ \int_{\Gamma} \xi(x) d\mathcal{H}^{N-1}(x)=0.$$ From (\ref{2svme}) and (\ref{2svm}) together with Theorems \ref{PI_ineq} and \ref{SIV_OK}, noting that $W-Z=-(\div \eta)\eta$, we obtain \begin{multline}\frac{3}{4}\lim_{\e\rightarrow 0}d^2 \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e}) = \frac{3}{4}\lim_{\e\rightarrow 0}\delta^2 \mathcal{E}_{\e,\gamma} (u_{\e}, \eta, W)= \delta^2 E(\Gamma,\eta, Z)\\- \int_{\Gamma} \div^{\Gamma} ((\div\eta)\eta) d\mathcal{H}^{N-1}+ 8\gamma\int_{\Gamma}\int_{\Gamma} G(x, y) (\eta(x)\cdot {\bf n}(x))(\eta(y)\cdot {\bf n}(y)) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y) \\+ 4 \gamma\int_{\Gamma} (\nabla v_0\cdot\eta) (\eta\cdot {\bf n}) d \mathcal{H}^{N-1} \\= \int_{\Gamma} \left(|\nabla_{\Gamma}\xi|^2 + (N-1)^2H^2\xi^2 -|A_{\Gamma}|^2|\xi|^2\right) d\mathcal{H}^{N-1} - \int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})|\xi|^2 d\mathcal{H}^{N-2} \\- \int_{\Gamma} \div^{\Gamma} ((\div\eta)\eta) d\mathcal{H}^{N-1}+ 8\gamma\int_{\Gamma}\int_{\Gamma} G(x, y) \xi(x) \xi(y) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y) + 4 \gamma\int_{\Gamma} (\nabla v_0\cdot {\bf n}) \xi^2d \mathcal{H}^{N-1}\\= \delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi) +\int_{\Gamma} \left[ (N-1)^2H^2\xi^2-\div^{\Gamma} ((\div\eta)\eta)\right] d\mathcal{H}^{N-1}(x) . \label{2svmlim} \end{multline} Using $({\bf n},{\bf n}\cdot\nabla\eta) =0$ on $\Gamma$, we find that $\div \eta = \div^\Gamma \eta =(N-1)H \xi$ on $\Gamma$.
Thus, on $\Gamma$ we have $$\div^{\Gamma} ((\div\eta)\eta) = \div^{\Gamma} ((N-1)H \xi^2 {\bf n})= (N-1)^2 H^2\xi^2.$$ Therefore, we get from (\ref{2svmlim}) the following limit \begin{equation} \label{2svmlim2} \frac{3}{4}\lim_{\e\rightarrow 0}d^2 \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e}) = \frac{3}{4}\lim_{\e\rightarrow 0}\delta^2 \mathcal{E}_{\e,\gamma} (u_{\e}, \eta, W) =\delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi). \end{equation} The proof of (\ref{secst}) is complete.

(iii) From (i), we know that $\p\Gamma$ must meet $\p\Omega$ orthogonally, if at all. Thus, for any smooth function $\xi:\overline{\Omega}\to\R$ satisfying $\int_{\Gamma} \xi(x) d\mathcal{H}^{N-1}(x)=0$, we can choose a smooth vector field $\eta$ on $\overline{\Omega}$ such that $\eta=\xi{\bf n}$ on $\Gamma$, $\eta\cdot \nu=0$ on $\partial\Omega$ and such that $({\bf n}, {\bf n}\cdot\nabla \eta)=0$ on $\Gamma$. Let $\eta^\e$ be as in the proofs of (i) and (ii). Then, the stability inequality $\delta^2 \mathcal{E}_{\gamma} (\Gamma,\xi)\geq 0$ follows immediately from the limit (\ref{2svmlim2}) above, since $d^2 \mathcal{E}_{\e,\gamma} (u_{\e}, -\nabla u_{\e}\cdot\eta^{\e})\geq 0$ by the stability of $u_\e$.

(iv) The proof is similar to that of Theorem \ref{eigen_thm}. The most crucial point in the proof of Theorem \ref{eigen_thm} is the identity (\ref{polarident}) between two quadratic forms $Q_\e$ and $Q$ associated with the two eigenvalue problems. Now, in our nonlocal context, we will also obtain a similar identity (\ref{polarident2}). To do so, we first set up the corresponding quadratic forms for our two eigenvalue problems.
We denote by $\mathcal{Q}_{\e,\gamma}(u_\e)$ the quadratic form associated to the operator $$-\e \Delta + 2\e^{-1}(3u_{\e}^2-1) +\frac{8}{3}\gamma (-\Delta)^{-1}$$ with zero Neumann boundary conditions, that is, for $\varphi\in C^{1}(\overline{\Omega})$, we have \begin{eqnarray*}\mathcal{Q}_{\e,\gamma}(u_\e)(\varphi)&=&\int_{\Omega} \left(\e |\nabla \varphi|^2 + \frac{1}{\e}(6u_\e^2-2)\varphi^2\right) dx +\frac{8}{3}\gamma\int_{\Omega}\int_{\Omega} G(x, y)\varphi(x)\varphi(y) dxdy\\ &\equiv& d^2 \mathcal{E}_{\e,\gamma}(u_{\e}, -\varphi).\end{eqnarray*} Similarly, we can define a quadratic form $\mathcal{Q}_{\gamma}$ for the operator \[ -\Delta_{\Gamma} - |A_{\Gamma}|^2+ 8\gamma (-\Delta)^{-1}(\chi_{\Gamma}) + 4\gamma (\nabla v_0\cdot{\bf n}) \] on $\Gamma$ with a Robin condition on $\partial\Gamma \cap\partial\Omega$ for the corresponding eigenfunctions. That is, for $\varphi\in C^{1}(\overline{\Gamma})$, we define \begin{multline*}\mathcal{Q}_{\gamma}(\varphi)= \int_{\Gamma} \left(\abs{\nabla^{\Gamma} \varphi}^2 - \abs{A_{\Gamma}}^2 \varphi^2\right) d\mathcal{H}^{N-1}-\int_{\partial\Gamma\cap\partial\Omega} A_{\partial\Omega}({\bf n}, {\bf n})|\varphi|^2 d\mathcal{H}^{N-2}\\+ 8\gamma\int_{\Gamma}\int_{\Gamma} G(x, y) \varphi(x) \varphi(y) d \mathcal{H}^{N-1} (x) d \mathcal{H}^{N-1}(y) + 4 \gamma\int_{\Gamma} (\nabla v_0\cdot {\bf n}) \varphi^2d \mathcal{H}^{N-1}. \end{multline*} We can naturally extend $\mathcal{Q}_{\gamma}$ to be defined for vector fields in $\overline{\Omega}$ that are generated by functions defined on $\overline{\Gamma}$ as follows. Given $f\in C^{1}(\overline{\Gamma})$, let $\eta = f {\bf n}$ be a normal vector field defined on $\Gamma$. Assuming the smoothness of $\Gamma$, we know from (i) that $\partial\Gamma$ must meet $\partial\Omega$ orthogonally (if at all).
Thus, we can find an extension $\tilde{\eta}$ of $\eta$ to $\overline{\Omega}$ such that it is tangent to $\partial\Omega$, that is, $\tilde\eta\cdot\nu=0$ on $\p\Omega$, and $({\bf n}, {\bf n}\cdot\nabla\tilde{\eta}) =0$ on $\Gamma$. Then, define $\mathcal{Q}_{\gamma}(\tilde{\eta}):= \mathcal{Q}_{\gamma}(f).$ For any vector field $V$ defined on $\overline{\Gamma}$ that is normal to $\Gamma$, we also denote by $V$ its extension to $\overline{\Omega}$ in such a way that it is tangent to $\partial\Omega$ and $({\bf n}, {\bf n}\cdot \nabla V) =0$ on $\Gamma$. Let $\xi=\xi_V:= V\cdot {\bf n}$. Note that, using the stationarity of $u_\e$ with respect to a mass constraint, and (\ref{2svm}), we have for $$\zeta:=(V\cdot \nabla) V-(\div V) V,$$ the identity $$\mathcal{Q}_{\e,\gamma}(\nabla u_{\e}\cdot V) = d^2 \mathcal{E}_{\e,\gamma}(u_{\e},- \nabla u_{\e}\cdot V)= \delta^2 \mathcal{E}_{\e,\gamma}(u_{\e}, V, \zeta).$$ Then, by (ii), we have \begin{eqnarray}\lim_{\e\rightarrow 0} \mathcal{Q}_{\e,\gamma}(\nabla u_{\e}\cdot V) = \lim_{\e\rightarrow 0} \delta^2 \mathcal{E}_{\e,\gamma}(u_{\e}, V, \zeta)&=&\frac{4}{3}\delta^2\mathcal{E}_{\gamma}(\Gamma, \xi)\nonumber\\ &=& \frac{4}{3} \mathcal{Q}_{\gamma}(V). \label{polarident2} \end{eqnarray} Now, arguing as in the proof of Theorem \ref{eigen_thm} starting right after (\ref{polarident}), we obtain the desired result.

(v) The proof of this part is similar to that of (iv). In fact, it is simpler. We use the argument in (iv) for functions $f$ and vector fields $V$ compactly supported in $\Gamma$. \end{proof} {\bf Acknowledgements.} The authors would like to thank the anonymous referee for the careful reading of the paper together with his/her constructive comments.
TITLE: How many even numbers will $99$ dice show if we roll them for eternity under a certain condition? QUESTION [2 upvotes]: Consider a six-sided die with numbers from $1$ to $6$. Imagine you have a jar with $99$ of such dice. You throw all dice on the floor randomly. You look at one of the dice on the floor at a time. For each die, you do the following: If it landed at an even number $(2,4,6)$, you turn the die so that it lands on the number $1$. If the die landed on an odd number $(1,3,5)$, you throw the die up in the air, so it can land on any number. After you finish doing the above for all dice on the floor, you come back to the first die and repeat the entire process again. You keep on doing this until eternity (for a billion years, let’s say). If I come into the room after a billion years, how many dice on the floor will have even numbers up? REPLY [2 votes]: If $X_{n}$ denotes the number of dice landed on even numbers after $n$ rounds and $X_{0}:=0$ then: $$\mathbb{E}X_{n}=\frac{1}{2}\left(99-\mathbb{E}X_{n-1}\right)$$ This equation tells us that: $$\lim_{n\rightarrow\infty}\mathbb{E}X_{n}=33$$
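A quick numerical sanity check of the recursion (not part of the original answer; the function names and trial counts are my own choices) can be written in Python. It iterates $\mathbb{E}X_{n}=\frac{1}{2}\left(99-\mathbb{E}X_{n-1}\right)$ and compares against a Monte Carlo simulation of a single die, which is a two-state Markov chain: an even face is always turned to the odd face $1$, and an odd face is rerolled, coming up even with probability $1/2$:

```python
import random

def expected_even(n_rounds, n_dice=99):
    # Iterate E_n = (n_dice - E_{n-1}) / 2 starting from E_0 = 0.
    e = 0.0
    for _ in range(n_rounds):
        e = (n_dice - e) / 2.0
    return e

def simulated_prob_even(n_rounds, trials=100_000, seed=1):
    # Monte Carlo estimate of P(one die shows an even face after n_rounds).
    rng = random.Random(seed)
    even_total = 0
    for _ in range(trials):
        is_even = False  # X_0 = 0: every die starts counted as odd
        for _ in range(n_rounds):
            if is_even:
                is_even = False  # even faces are turned to show 1 (odd)
            else:
                is_even = rng.random() < 0.5  # reroll: 2, 4, 6 with prob 1/2
        even_total += is_even
    return even_total / trials

print(expected_even(60))             # converges to 33
print(99 * simulated_prob_even(12))  # close to 33
```

Solving the recursion in closed form gives $\mathbb{E}X_n = 33\left(1-(-1/2)^n\right)$, so the convergence to $33$ is geometric.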
Top Guidelines Of Steel Buildings Bending members are also called beams, girders, joists, spandrels, purlins, lintels, and girts. Each of these members has its own structural application, but generally bending members will have bending moments and shear forces as primary loads and axial forces and torsion as secondary loads. Combined force members are commonly known as beam-columns and therefore […]
Published on Mar 15, 2012 'Flutes' is taken from Hot Chip's forthcoming album 'In Our Heads' released on Domino on June 11th - pre-order the vinyl at: or on iTunes 'Best Dance Recording'. With In Our Heads, an unadulterated delight of an album bursting with dynamic floor fillers, euphoric earworms and verbose synth-fuelled love songs, Hot Chip cement their position as the world's most talented electro-romantics. Domino Recording Company is an independent record label founded in 1993.
Junior High Students Lend Support To Stony Brook's Center For Public Health Education For HIV/AIDS Training Gelinas National Junior Honor Society Contributes Donation, Plans Further Support STONY BROOK, N.Y., February 18, 2011 – A group of National Junior Honor Society students from Paul J. Gelinas Junior High School in Setauket pooled their resources to donate $150 to the Center for Public Health Education (CPHE), a Health Sciences program through the School of Health Technology and Management at Stony Brook University. The gift will be used for educational materials needed for the CPHE’s mission of providing timely information on HIV/AIDS to support health and human service professionals. Each month the Honor Society students contribute to or donate their time and talents to various causes in the community. In future months, they hope to build upon their December 2010 donation to the CPHE in recognition of national AIDS Awareness month. “The thoughtful gift from these students will certainly help our program to educate and train hundreds of professionals in our region who care for HIV/AIDS patients,” says Ilvan Arroyo, Associate Director of the CPHE. “Understanding new health concerns and changes regarding treatment and care of these patients are essential for practitioners, and training provides the means necessary for them to stay up-to-date.” Arroyo says that while the CPHE’s HIV/AIDS education and training is ongoing, the Center will hold a special training for physicians who practice at the Suffolk County Department of Health’s Health Centers this coming spring. The training will be held within Stony Brook’s Health Science Center on May 11. Topics of training include HIV and substance abuse, HIV and alcohol abuse, and mental illnesses related to HIV/AIDS.
About the Center for Public Health Education at Stony Brook University Established in 1984, the Center for Public Health Education (CPHE) is a program of the School of Health Technology and Management at Stony Brook University. The Center provides free training on critical and relevant information on HIV/AIDS related topics. The program is funded under three separate grants: the Health Resources and Services Administration grant as part of the New York/New Jersey AIDS Education and Training Center; N.Y. State Department of Health AIDS Institute Regional Training Center Initiative, and the N.Y. State DOH Center of Expertise in HIV Case Management. To date approximately 47,000 individuals have been trained by the CPHE.
Hey – we missed February. oops. Oh well, Happy St. Patrick's Day! It's been quiet in Soundzania lately – that is until last night. We had a “Lets Make Music” night at our church and I (with the help of a couple of other Soundzania alumni, Jeff Pierce and Scott Powers!) led a group of kids in learning some simple instruments. We talked a little about the three simple instruments we brought: Comb + Wax Paper = kazoo Cardboard Tube + Stick = drum Box + Rubber Bands = guitar We taught each group a pattern on the instruments and then brought them together to play the parts as a little orchestra. The kids were awesome! At the end, after everyone had warmed up, we had everyone do a little jamming and solos all around. Excellent. Coming up – Soundzania has a few potential shows. We are hoping to help out the local La Leche League in their picnic. We were also asked back to the Ashland Farmer’s Market (which I really hope we can match our schedules on). And last, we’ve got a birthday party coming up in June. I hope everyone’s doing alright out there. Drop us a line. Like us on Facebook, buy a CD for a friend 🙂 Until next time…. Scott
a 19 year old in london was literally killed by too much bass. that sounds like crazy talk This was posted on Sunday 13th December, 2009 at 7:21 pm Pacific.
TITLE: Product of Distinct $\delta$ functions QUESTION [0 upvotes]: Let $\delta_a$ and $\delta_b$ be delta functions where $a,b \in \mathbb{R}$ and $a < b$. Is it the case that $\delta_a \delta_b = 0$? My idea is that if we take $\varepsilon = |b-a| > 0$, then for every $\phi \in C_0^\infty(\mathbb{R})$ we have the following: \begin{align*} \langle \delta_a \delta_b , \phi \rangle &= \int_{-\infty}^\infty \delta_a \delta_b \phi(x) \ dx \\ &= \int_{-\infty}^{a + \frac{\varepsilon}{2}} \delta_a \cdot\left( 0 \cdot \phi(x) \right) dx \ + \int_{a + \frac{\varepsilon}{2}}^{\infty} \delta_b \cdot \left( 0 \cdot \phi(x) \right) \\ &= 0 + 0 \\ &= 0, \end{align*} and hence, $\delta_a \delta_b \equiv 0$. Is this correct reasoning or is multiplying distinct $\delta$ functions just as nonsensical as squaring a single $\delta$ function? REPLY [2 votes]: Given a sufficiently nice approximate identity $g_n$ e.g. $g_n(x)=\frac{n}{\sqrt{2 \pi}} e^{-n^2 x^2/2}$, $g_n(x-a) g_n(x-b)$ will converge in the sense of distributions to zero if $a \neq b$. In this sense "$\delta_a \delta_b=0$", since we formally identify the limit of one factor with $\delta_a$ and the limit of the other factor with $\delta_b$. But in the strict sense of distribution theory $\delta_a \delta_b$ is not defined.
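Here is a small numerical illustration of this answer (the function names, the truncated integration interval, and the Riemann-sum discretization are my own choices, not part of the answer). With the test function $\phi\equiv 1$ near the relevant points, the pairing $\int g_n(x-a)g_n(x-b)\,dx$ works out exactly to $\frac{n}{2\sqrt{\pi}}e^{-n^2(b-a)^2/4}$: it vanishes as $n\to\infty$ when $a\neq b$, but blows up like $\frac{n}{2\sqrt{\pi}}$ when $a=b$, which is why "$\delta_a\delta_b = 0$" is reasonable for $a\neq b$ while $\delta_a^2$ makes no sense:

```python
import math

def g(n, x):
    # Gaussian approximate identity g_n(x) = n / sqrt(2*pi) * exp(-n^2 x^2 / 2)
    return n / math.sqrt(2.0 * math.pi) * math.exp(-n * n * x * x / 2.0)

def pairing(n, a, b, lo=-5.0, hi=6.0, steps=40001):
    # Riemann sum for \int g_n(x - a) g_n(x - b) dx  (phi = 1 on [lo, hi])
    h = (hi - lo) / (steps - 1)
    return h * sum(g(n, lo + i * h - a) * g(n, lo + i * h - b)
                   for i in range(steps))

print(pairing(2, 0.0, 1.0))  # about 0.208
print(pairing(8, 0.0, 1.0))  # about 2.5e-7: the product of distinct deltas dies out
print(pairing(2, 0.0, 0.0))  # about 0.564
print(pairing(8, 0.0, 0.0))  # about 2.257: the "square" grows like n / (2 sqrt(pi))
```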
\begin{document} \setlength{\baselineskip}{1.2\baselineskip} \title [A CONSTANT RANK THEOREM FOR PARTIAL CONVEX SOLUTIONS] {A CONSTANT RANK THEOREM FOR PARTIAL CONVEX SOLUTIONS OF PARTIAL DIFFERENTIAL EQUATIONS} \author{Chuanqiang Chen} \address{Department of Mathematics\\ University of Science and Technology of China\\ Hefei, 230026, Anhui Province, CHINA} \email{cqchen@mail.ustc.edu.cn} \thanks{Research of the author was supported by Grant 10871187 from the National Natural Science Foundation of China.} \maketitle \begin{abstract} Using the test function of Bian-Guan [2], we obtain a constant rank theorem for partial convex solutions of a class of partial differential equations. This is the microscopic version of the macroscopic partial convexity principle in [1], and it also generalizes the result in [2]. \end{abstract} \section{Introduction} Convex solutions of partial differential equations have been an interesting topic for a long time. As far as we know, there are two important approaches to this problem: the macroscopic and the microscopic methods. On the other hand, many solutions are not convex: for example, the admissible solutions of the Hessian equations were studied in [7,10], power concave solutions in [13,18], and $k$-convex solutions in [11]. In this paper we will consider the partial convex solutions (see [1] or Definition 1.1 below) of elliptic and parabolic equations. The study of macroscopic convexity uses a weak maximum principle, while the study of microscopic convexity uses a strong maximum principle. For the macroscopic convexity argument, Korevaar made breakthroughs in [14,15], where he introduced concavity maximum principles for a class of quasilinear elliptic equations. Later this was improved by Kennington [13] and by Kawohl [12]. The theory was further developed to great generality by Alvarez-Lasry-Lions [1].
The key to the study of microscopic convexity is a method called the \emph{constant rank theorem}, which was discovered in dimension 2 by Caffarelli-Friedman [5] (a similar result was discovered by Singer-Wong-Yau-Yau [19] at the same time). Later the result in [5] was generalized to $\mathbb{R}^n$ by Korevaar-Lewis [17]. Recently the \emph{constant rank theorem} was generalized to fully nonlinear equations in [6] and [2], where the result in [2] is the microscopic version of the macroscopic convexity principle in [1]. The \emph{constant rank theorem} is a very useful tool for producing convex solutions in geometric analysis: by a suitable homotopic deformation, the existence of a convex solution follows from the \emph{constant rank theorem}. For geometric applications of the \emph{constant rank theorem}, the Christoffel-Minkowski problem and the related prescribing Weingarten curvature problems were studied in [8,9,10]. The preservation of convexity for general geometric flows of hypersurfaces was given in [2]. Soon after, the \emph{constant rank theorem} for level sets was established in [3], where [3] is a microscopic version of [4] (this was also studied in [16]). The existence of a $k$-convex hypersurface with prescribed mean curvature was given in [11] recently. In this paper we consider the partial convexity of solutions of the following elliptic equation, and give a constant rank theorem for its partial convex solutions: \begin{equation} F(D^2 u,Du,u,x) = 0, \quad x \in \Omega \subset \mathbb{R}^N, \end{equation} where $ F \in C^{2,1} (\mathcal{S}^N \times \mathbb{R}^N \times \mathbb{R} \times \Omega )$ and $F$ is elliptic in the following sense: \begin{equation} (\frac{{\partial F}} {{\partial u_{ab} }}(D^2 u,Du,u,x))_{N \times N} > 0, \quad \text{for all } x \in \Omega. \end{equation} First, we give the definition of the partial convexity of a function $u$, which can be found in [1].
\begin{definition} Suppose $u \in C^2 (\Omega ) \cap C(\overline\Omega )$, where $ \Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $, and $N'$ and $N''$ are two integers with $N=N'+N''$. Then $u$ is said to be partial convex (with respect to the first variable) if $ x' \mapsto u(x',x'') $ is convex for every $x=(x',x'') \in \overline \Omega$. In particular, if $N''=0$, i.e. $u$ is convex in $\Omega \subset \mathbb{R}^N=\mathbb{R}^{N'}$, $u$ is said to be degenerate partial convex. \end{definition} For simplicity, we introduce some additional notation. As in [1], we denote by $\mathcal {S}^n$ the set of all real symmetric $n \times n$ matrices. We shall write $p \in \mathbb{R}^N$ in the form $(p',p'')$ with $p' \in \mathbb{R}^{N'}$, $p'' \in \mathbb{R}^{N''}$ and split a matrix $A \in \mathcal{S}^N$ into $ \left( {\begin{matrix} a & b \\ {b^T } & c \\ \end{matrix} } \right) $ with $a \in \mathcal{S}^{N'}$, $b \in \mathbb{R}^{N' \times N''}$ and $c \in \mathcal{S}^{N''}$; we also let \begin{equation} F(A,p,u,x)=F(\left( {\begin{matrix} a & b \\ {b^T } & c \\ \end{matrix} } \right),p',p'',u,x',x''). \end{equation} One of our main results is the following theorem. \begin{theorem}\label{TH}(CONSTANT RANK THEOREM) Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $ and $ F(A,p,u,x) \in C^{2,1} (\mathcal{S}^N \times \mathbb{R}^N \times \mathbb{R} \times \Omega )$. Suppose $F$ satisfies (1.2) and the following condition: \begin{equation} F(\left( {\begin{matrix} a^{-1} & a^{-1}b \\ {({a^{-1}b})^T } & c+ b^Ta^{-1}b \\ \end{matrix} } \right),p',p'',u,x',x'') \text{ is locally convex in } (a,b,c,p'',u,x'). \end{equation} If $u \in C^{2,1} (\Omega)$ is a partial convex solution of (1.1), then $(u_{ij})_{N' \times N'}$ has constant rank in $\Omega$. \end{theorem} \begin{remark} If $N''=0$, i.e. for degenerate partial convexity, structure condition (1.4) is the \emph{inverse-convex} condition, and Theorem 1.2 recovers the result of Bian-Guan [2].
For general partial convexity, structure condition (1.4) is strictly stronger than the \emph{inverse-convex} condition. \end{remark} An immediate consequence of Theorem 1.2 concerns partial convex solutions of the following quasilinear second-order elliptic equation: \begin{equation} \sum\limits_{a,b = 1}^N {a^{ab} (x'',u_1 (x), \cdots ,u_{N'} (x))u_{ab} (x)} = f(x,u(x),Du(x)) > 0, \end{equation} where $x \in \Omega \subset \mathbb{R}^N $ and \begin{equation} (a^{ab} (x'',u_1 (x), \cdots ,u_{N'} (x)))_{N \times N} > 0, \quad \text{for all } x \in \Omega. \end{equation} \begin{corollary} Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $, and $u \in C^{2,1} (\Omega) $ is a partial convex solution of (1.5). If \begin{center} $f(x',x'',u,p',p'')$ is locally concave in $(p'',u,x')$, \end{center} then $(u_{ij})_{N' \times N'}$ has constant rank in $\Omega$. \end{corollary} Setting \begin{equation} F(D^2 u,Du,u,x)=\sum\limits_{a,b = 1}^N {a^{ab} (x'',u_1 (x), \cdots ,u_{N'} (x))u_{ab} (x)}-f(x,u(x),Du(x)), \end{equation} we can verify that $F$ satisfies the structure condition (1.4) (see the equivalent condition (3.13) in the third section). A corresponding result holds for the parabolic equation. \begin{theorem} Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $, and $ F(A,p,u,x,t) \in C^{2,1} (\mathcal{S}^N \times \mathbb{R}^N \times \mathbb{R} \times \Omega \times (0,T])$. Suppose $F$ satisfies (1.2) for each $t$ and the following condition: \begin{equation} F(\left( {\begin{matrix} a^{-1} & a^{-1}b \\ {({a^{-1}b})^T } & c+ b^Ta^{-1}b \\ \end{matrix} } \right),p',p'',u,x',x'',t) \text{ is locally convex in } (a,b,c,p'',u,x').
\end{equation} If $u \in C^{2,1} (\Omega \times (0,T]) $ is a partial convex solution of the equation \begin{equation} \frac{{\partial u}} {{\partial t}} = F(D^2 u,Du,u,x,t), \quad (x,t) \in \Omega \times (0,T], \end{equation} then $(u_{ij}(x,t))_{N' \times N'}$ has constant rank in $\Omega$ for each $T \geqslant t >0$. Moreover, let $l(t)$ be the minimal rank of $(u_{ij}(x,t))_{N' \times N'}$ in $\Omega$; then $l(s) \leqslant l(t)$ for all $s \leqslant t \leqslant T$. \end{theorem} The rest of the paper is organized as follows. In section 2, we work on the Laplace equation, a special case of Corollary 1.4. In section 3, using the key auxiliary function $q(x)$ in [2], we do some preliminary calculations for the constant rank theorem. In section 4, we prove Theorem 1.2 using a strong maximum principle. In section 5, we prove Theorem 1.5. And the last section is devoted to a discussion of the structure condition. \textbf{Acknowledgement}. The author would like to express sincere gratitude to Prof. Xi-Nan Ma for his encouragement and many suggestions on this subject. \section{an example} In this section, we give a constant rank theorem for partial convex solutions of the Laplace equation, a special case of Corollary 1.4. We rewrite the result as follows. \begin{theorem} Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $, and $u \in C^{2,1} (\Omega)$ is a partial convex solution of the following equation: \begin{equation} \Delta u(x)=\sum\limits_{a = 1}^N{ u_{aa} (x)} = f(x,u(x),Du(x)) > 0, \quad x \in \Omega. \end{equation} Assume that \begin{align} f(x',x'',u,p',p'') \text{ is locally concave in }(p'',u,x'), \end{align} then $(u_{ij})_{N' \times N'}$ has constant rank in $\Omega$. \end{theorem} Before the proof of Theorem 2.1, we do some preliminaries.
As in [9], we recall the definition of $k$-symmetric functions: For $1 \leqslant k \leqslant N'$, and $ \lambda = (\lambda _1 ,\lambda _2 , \cdots ,\lambda _{N'} ) \in \mathbb{R}^{N'} $, $$ \sigma _k (\lambda ) = \sum\limits_{i_1 < i_2 < \cdots < i_k } {\lambda _{i_1 } \lambda _{i_2 } \cdots \lambda _{i_k } }, $$ and we denote by $\sigma _k (\lambda \left| i \right.)$ the symmetric function with $\lambda_i = 0$ and by $\sigma _k (\lambda \left| ij \right.)$ the symmetric function with $\lambda_i =\lambda_j = 0$. The definition can be extended to symmetric matrices by letting $\sigma_k(W) = \sigma_k(\lambda(W))$, where $ \lambda(W)= (\lambda _1(W),\lambda _2 (W), \cdots ,\lambda _{N'}(W))$ are the eigenvalues of the symmetric matrix $W$. We also set $\sigma_0 = 1$ and $\sigma_k = 0$ for $k > N'$. We need the following standard formulas, which can be found in [9], [2] or [3]. \begin{lemma} Suppose $W=(W_{ij})$ is diagonal, and $m$ is a positive integer, then $$ \frac{{\partial \sigma _m (W)}} {{\partial W_{ij} }} = \left\{ \begin{matrix} \sigma _{m - 1} (W\left| i \right.), \quad if \quad i = j, \hfill\cr 0, \qquad if \quad i \ne j. \hfill \cr \end{matrix} \right. $$ $$ \frac{{\partial ^2 \sigma _m (W)}} {{\partial W_{ij} \partial W_{kl} }} = \left\{ \begin{matrix} \sigma _{m - 2} (W\left| {ik} \right.), \quad if \quad i = j,k = l,i \ne k, \hfill \cr - \sigma _{m - 2} (W\left| {ik} \right.), \quad if \quad i = l,j = k,i \ne j, \hfill \cr 0, \qquad \qquad otherwise. \hfill \cr \end{matrix} \right. $$ \end{lemma} \textbf{Proof of Theorem 2.1}. Under the assumptions of Theorem 2.1, $u$ is automatically in $C^{3,1}$. We denote $W=(u_{ij})_{N' \times N'}$. Let $z_0 \in \Omega$ be a point where $W$ is of minimal rank $l$. We pick a small open neighborhood $\mathcal {O}$ of $z_0$, and we will prove that $W$ has rank $l$ everywhere in $\mathcal {O}$. We shall use the strong minimum principle to prove the theorem.
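As an aside, the first-derivative formula in Lemma 2.2 admits a quick numerical sanity check. The sketch below is my own (the helper names are not from [9]); it verifies $\partial \sigma_m / \partial W_{ii} = \sigma_{m-1}(W|i)$ for a diagonal matrix, using that $\sigma_m$ is affine in each diagonal entry, so a single finite difference is exact:

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    # k-th elementary symmetric function sigma_k of the eigenvalues lam
    if k == 0:
        return 1.0
    if k < 0 or k > len(lam):
        return 0.0
    return sum(prod(c) for c in combinations(lam, k))

# Diagonal W with eigenvalues lam; Lemma 2.2 predicts
#   d sigma_m / d W_ii = sigma_{m-1}(lam | i)   (lam with the i-th entry removed)
lam, m, i = [0.5, 2.0, 3.0], 2, 0

h = 1.0  # sigma_m is affine in W_ii, so this finite difference is exact
bumped = lam.copy()
bumped[i] += h
fd = (sigma(m, bumped) - sigma(m, lam)) / h

print(fd, sigma(m - 1, lam[:i] + lam[i + 1:]))  # both equal sigma_1(lam|0) = 5.0
```

Here $\sigma_2(0.5,2,3)=8.5$ and $\sigma_2(1.5,2,3)=13.5$, so the difference quotient is $5.0=\lambda_2+\lambda_3=\sigma_1(\lambda|1)$, as the lemma states.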
Let \begin{align} \phi (x)= \sigma _{l + 1} (W), \end{align} then $ \phi(z_0)=0 $. We shall show $\phi(x)\equiv 0$ in $\mathcal {O}$. If this holds, it implies that the set $ \left\{ {x \in \Omega|\phi (x) = 0} \right\} $ is open. But it is also closed, so since $\Omega$ is connected we get $ \phi (x) \equiv 0$ in $\Omega$, i.e. $(u_{ij})_{N' \times N'}$ is of constant rank $l$ in $\Omega$. Following Caffarelli and Friedman [5], for two functions $h(y)$ and $k(y)$ defined in an open set $ \mathcal {O} \subset \Omega$, we say that $h(y)\lesssim k(y)$ provided there exist positive constants $c_1$ and $c_2$ such that \begin{equation} (h-k)(y)\leq (c_1 |\nabla \phi| + c_2 \phi)(y). \end{equation} We also write $h(y)\sim k(y)$ if $h(y)\lesssim k(y)$ and $k(y)\lesssim h(y)$. Next, we write $h\lesssim k$ if the above inequality holds in the neighborhood $\mathcal {O}$, with the constants $c_1$ and $c_2$ independent of $y$ in this neighborhood. Finally, $h\sim k$ if $h\lesssim k$ and $k\lesssim h$. We shall show that \begin{equation} \Delta \phi(x) =\sum\limits_{a = 1}^N { \phi _{aa} (x)}\mathop \lesssim 0. \end{equation} Since $ {\phi (x) \geqslant 0} $ in $\Omega$ and $\phi(z_0)=0$, it then follows from the Strong Minimum Principle that $\phi(x) \equiv 0$ in $\mathcal {O}$. For any fixed point $x \in \mathcal {O}$, we rotate the coordinates $e_1, \cdots, e_{N'}$ such that the matrix ${u_{ij}},i,j=1, \cdots, N'$ is diagonal, and without loss of generality we assume $ u_{11} \leqslant u_{22} \leqslant \cdots \leqslant u_{N'N'} $. Then there is a positive constant $C > 0$, depending only on $\left\| u \right\|_{C^{3,1} }$ and $\mathcal {O}$, such that $ u_{N'N'} \geqslant \cdots \geqslant u_{N' - l+1N' - l+1} \geqslant C > 0 $ for all $x \in \mathcal {O}$. For convenience we denote $ G = \{ N' - l+1, \cdots ,N'\} $ and $ B = \{ 1,2, \cdots ,N'- l\} $, which stand for the good and bad indices respectively.
Without confusion we will also simply denote $ B = \{ u_{11} , \cdots ,u_{N'-lN'-l} \} $ and $ G = \{ u_{N' - l+1N' - l+1} , \cdots ,u_{N'N'} \} $. In the following, all the calculations at the point $x$ use the relation $\lesssim$, with the understanding that the constants in (2.4) are under control. Following a direct computation as in [9], and using that $W$ is diagonal, we can get \begin{align} &0 \sim \phi \sim \sigma _l (G)\sum\limits_{i \in B} {u_{ii} } , \text { and } u_{ii} \sim 0 \text{ for each } i \in B;\\ &0 \sim \phi _a \sim \sigma _l (G)\sum\limits_{i \in B} {u_{iia }}; \end{align} then by (2.6), (2.7) and Lemma 2.2, we obtain \begin{align} &\Delta \phi = \sum\limits_{a = 1}^N {\frac{{\partial ^2 \phi }} {{\partial x_a \partial x_a }}} = \sum\limits_{a = 1}^N {[\sum\limits_{i,j = 1}^{N'} {\frac{{\partial \sigma _{l + 1} (W)}} {{\partial u_{ij} }}u_{ijaa} + \sum\limits_{i,j,k,l = 1}^{N'} {\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ij} \partial u_{kl} }}u_{ija} u_{kla} } ]} } \notag \\ &= \sum\limits_{a = 1}^N {[\sum\limits_{i = 1}^{N'} {\frac{{\partial \sigma _{l + 1} (W)}} {{\partial u_{ii} }}u_{iiaa} + \sum\limits_{i,j = 1}^{N'} {\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ii} \partial u_{jj} }}u_{iia} u_{jja} + \sum\limits_{i,j = 1}^{N'} {\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ij} \partial u_{ji} }}u_{ija} u_{jia} } ]} } } \notag \\ & \sim \sum\limits_{a = 1}^N {[\sigma _l (G)\sum\limits_{i \in B} {u_{iiaa} } - 2\sigma _l (G)\sum\limits_{i \in B,j \in G} {\frac{1} {{u_{jj} }}u_{ija} u_{jia} } ]} \\ &\sim \sigma _l (G)\sum\limits_{i \in B} {(\Delta u)_{ii} } - 2\sigma _l (G)\sum\limits_{a = 1}^N {\sum\limits_{i \in B,j \in G} {\frac{1} {{u_{jj} }}u_{ija} ^2 } }.
\notag \end{align} For each $i \in B$, we differentiate (2.1) twice in $x_i$ and obtain \begin{align*} (\Delta u)_{ii}=&[f_{x_i}+f_u u_i+\sum\limits_{a = 1}^N f_{p_a} u_{a i} ]_i \\ =&f_{x_i x_i } + 2f_{u,x_i } u_i + f_{u,u} u_i ^2 \\ &+ 2\sum\limits_{a = 1}^N {f_{x_i ,p_a } u_{a i} } + 2\sum\limits_{a = 1}^N {f_{u,p_a } u_i u_{a i} } + \sum\limits_{a ,b = 1}^N {f_{p_a,p_b } u_{a i} u_{b i} }. \end{align*} Since $W=(u_{ij})_{N' \times N'}$ is diagonal, by (2.6) we get from the above equation \begin{align} (\Delta u)_{ii} \sim& f_{x_i x_i } + 2f_{u,x_i } u_i + f_{u,u} u_i ^2 \notag \\ &+ 2\sum\limits_{\alpha = N' + 1}^N {f_{x_i ,p_\alpha } u_{\alpha i} } + 2\sum\limits_{\alpha = N' + 1}^N {f_{u,p_\alpha } u_i u_{\alpha i} } + \sum\limits_{\alpha ,\beta = N' + 1}^N {f_{p_\alpha ,p_\beta } u_{\alpha i} u_{\beta i} }, \end{align} so we obtain from (2.8) and (2.9) \begin{align} \frac{\Delta \phi}{\sigma _l (G)} \sim &\sum\limits_{i =1}^{N'-l} {(\Delta u)_{ii} } - 2\sum\limits_{a = 1}^N {\sum\limits_{j =N'-l+1}^{N'}{\frac{1} {{u_{jj} }}\sum\limits_{i = 1}^{N' - l} {u_{ija} ^2 } } } \notag \\ \sim &- 2\sum\limits_{a = 1}^N {\sum\limits_{j =N'-l+1}^{N'} {\frac{1} {{u_{jj} }}\sum\limits_{i = 1}^{N' - l} {u_{ija} ^2 } } } +\sum\limits_{i =1}^{N'-l} {[ f_{x_i x_i } + 2f_{u,x_i } u_i + f_{u,u} u_i ^2 } \\ &+ 2\sum\limits_{\alpha = N' + 1}^N {f_{x_i ,p_\alpha } u_{\alpha i} } + 2\sum\limits_{\alpha = N' + 1}^N {f_{u,p_\alpha } u_i u_{\alpha i} } + \sum\limits_{\alpha ,\beta = N' + 1}^N {f_{p_\alpha ,p_\beta } u_{\alpha i} u_{\beta i} } ]. \notag \end{align} By the condition (2.2), we obtain (2.5). The proof of Theorem 2.1 is completed. \begin{remark} In (2.8), we have used Lemma 2.5 in [2]; otherwise the first "$\sim$" would be "$\lesssim$". \end{remark} \begin{remark} By a proof similar to the one above, we can obtain the general case of Corollary 1.4.
\end{remark} \section{preliminary calculations on the constant rank theorem} \subsection{calculations on the test function} Under the assumptions in Theorem 1.2 and Theorem 1.5, $u$ is in $C^{3,1}$. Let $ W= (u_{ij} )_{N' \times N'} $ and $ l = \mathop {\min }\limits_{x \in \Omega } rank(W(x)) $. We may assume $l \leqslant N' - 1 $, otherwise there is nothing to prove. Suppose $z_0 \in \Omega$ is a point where $W$ is of minimal rank $l$. Throughout this paper we assume that $ 1 \leqslant i,j,k,l,m,n \leqslant N' $, $ N'+1 \leqslant \alpha ,\beta ,\gamma ,\eta ,\xi ,\zeta \leqslant N $, $ 1 \leqslant a,b,c,d \leqslant N $ and $ \sigma_j(W) = 0$ if $j < 0$ or $j > N'$. As in Bian-Guan [2], we define for $W=(u_{ij}(x)) \in \mathcal{S}^{N'}$, \begin{equation} q(W) = \left\{ \begin{matrix} \frac{{\sigma _{l + 2} (W)}}{{\sigma _{l + 1} (W)}}, \quad if \quad\sigma _{l + 1} (W) > 0, \hfill \cr 0, \qquad if \quad \sigma _{l + 1} (W) = 0, \hfill \cr \end{matrix} \right. \end{equation} and we consider the following test function \begin{align} \phi = \sigma _{l + 1} (W) + q(W). \end{align} We pick an open neighborhood $\mathcal {O}$ of $z_0$, and for any fixed point $x \in \mathcal {O}$ we rotate the coordinates $e_1, \cdots, e_{N'}$ such that the matrix ${u_{ij}},i,j=1, \cdots, N'$ is diagonal, and without loss of generality we assume $ u_{11} \leqslant u_{22} \leqslant \cdots \leqslant u_{N'N'} $. Then there is a positive constant $C > 0$, depending only on $\left\| u \right\|_{C^{3,1} }$ and $\mathcal {O}$, such that $ u_{N'N'} \geqslant \cdots \geqslant u_{N' - l+1N' - l+1} \geqslant C > 0 $ for all $x \in \mathcal {O}$. For convenience we denote $ G = \{ N' - l+1, \cdots ,N'\} $ and $ B = \{ 1,2, \cdots ,N'- l\} $, which stand for the good and bad indices respectively. Without confusion we will also simply denote $ B = \{ u_{11} , \cdots ,u_{N'-lN'-l} \} $ and $ G = \{ u_{N' - l+1N' - l+1} , \cdots ,u_{N'N'} \} $.
Note that for any $\delta > 0$, we may choose $\mathcal {O}$ small enough such that $u_{jj} < \delta$ for all $j \in B$ and $x \in \mathcal {O}$. We will use the notation $h = O(f)$ if $ \left| {h(x)} \right| \leqslant Cf(x)$ for $x \in \mathcal {O}$ with a positive constant $C$ under control. It is clear that $u_{ii} = O(\phi)$ for all $i \in B$. To get around $\sigma_{l+1}(W) = 0$, for $\varepsilon> 0 $ sufficiently small, we consider \begin{align} q(W_\varepsilon )=\frac{{\sigma _{l + 2} (W_\varepsilon)}}{{\sigma _{l + 1} (W_\varepsilon)}}, \quad \phi _\varepsilon = \sigma _{l + 1} (W_\varepsilon ) + q(W_\varepsilon ), \end{align} where $W_\varepsilon = W + \varepsilon I$. We will also denote $ B_\varepsilon = \{ u_{11} + \varepsilon , \cdots ,u_{N'- lN'-l} + \varepsilon \} $, $ G_\varepsilon = \{ u_{N' - l+1N' - l+1} + \varepsilon , \cdots ,u_{N'N'} + \varepsilon \} $ (see Bian-Guan [2]). Set $u_\varepsilon (x)=u(x)+\frac{\varepsilon}{2} \left| {x'} \right|^2 $; then $ W_\varepsilon= ((u_\varepsilon)_{ij} )_{N' \times N'} $. To simplify the notation, we will write $u$ for $u_\varepsilon$, $q$ for $q_\varepsilon$, $W$ for $W_\varepsilon$, $G$ for $G_\varepsilon$, and $B$ for $B_\varepsilon$, with the understanding that all the estimates will be independent of $\varepsilon$. In this setting, if we pick $\mathcal {O}$ small enough, there is $C > 0$ independent of $\varepsilon$ such that \begin{equation} \phi \geqslant C\varepsilon ,\sigma _1 (B) \geqslant C\varepsilon , \text{ for all } x\in \mathcal {O}. \end{equation} First, we consider the regularity of $q(W(x))$. \begin{proposition} ([2]) Let $u \in C^{3,1} (\Omega) $ be a partial convex function with respect to the first variable and $ W(x)= (u_{ij}(x) )_{N' \times N'} $. Let $ l = \mathop {\min }\limits_{x \in \Omega } rank(W(x)) $; then the function $q(x) = q(W(x))$ defined in (3.1) is in $C^{1,1} (\Omega) $.
\end{proposition} In the following, we denote \begin{align*} & F^{ab} = \frac{{\partial F}} {{\partial u_{ab} }} , F^{p_a } = \frac{{\partial F}} {{\partial u_a }} , F^u = \frac{{\partial F}} {{\partial u}} ,\\ &F^{ab,cd} = \frac{{\partial ^2 F}} {{\partial u_{ab} \partial u_{cd} }}, F^{ab,p_c } = \frac{{\partial ^2 F}} {{\partial u_{ab}\partial u_c }} , F^{ab,u} = \frac{{\partial ^2 F}} {{\partial u_{ab}\partial u}},\\ & F^{p_a p_b } = \frac{{\partial ^2 F}} {{\partial u_a \partial u_b}} , F^{p_a, u} = \frac{{\partial ^2 F}} {{\partial u_a \partial u}} , F^{u,u} = \frac{{\partial ^2 F}} {{\partial u\partial u}}, \end{align*} where $ 1 \leqslant a,b,c \leqslant N $. \begin{theorem} Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $ and $u \in C^{3,1} (\Omega) $ is a partial convex solution of (1.1). Let $l$ be the minimal rank of $ W= (u_{ij} )_{N' \times N'} $ in $\Omega$. Suppose $l$ is attained at $z_0 \in \Omega$, and $\mathcal {O}$ is a small neighborhood of $z_0$ as above. For any fixed point $x \in \mathcal {O}$ we choose the coordinate such that $W(x)$ is diagonal. Then at $x$ we have \begin{eqnarray} &\sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab} }& = \sum\limits_{i \in B} {[\sigma _l (G) + \frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}]\sum\limits_{a,b= 1}^N {F^{ab} u_{iiab} } } \notag \\ &&- 2\sum\limits_{i \in B,j \in G} {[\sigma _l (G) + \frac{{\sigma _1 ^2 (B\left| i \right.) 
- \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}]\frac{1} {{u_{jj} }}\sum\limits_{a,b = 1}^N {F^{ab} u_{ija}u_{ijb} } } \notag \\ &&- \frac{1}{{\sigma _1 ^3 (B)}}\sum\limits_{i \in B} {\sum\limits_{a,b = 1}^N {F^{ab} } [\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } ]} [\sigma _1 (B)u_{iib} - u_{ii} \sum\limits_{j \in B}{u_{jjb} } ]\\ &&- \frac{1}{{\sigma _1 (B)}}\sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {\sum\limits_{a,b = 1}^N {F^{ab} u_{ija} u_{ijb} }} \notag \\ &&+ O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ). \notag \end{eqnarray} In fact, $u$ in (3.5) is $u_\varepsilon (x)=u(x)+\frac{\varepsilon}{2} \left| {x'} \right|^2 $ defined as above (we omit the subscript $\varepsilon$). \end{theorem} \textbf{Proof}. The proof is similar to that in [2]; we give the main steps. Under the assumptions as above, by a computation similar to [2], we have \begin{align} \sigma _1 (B) = O(\phi ), u_{ii} = O(\phi ) \text{ for every } i \in B. \end{align} Since $ \phi (x)= \sigma _{l + 1} (W) + q(W)$, by the chain rule we have \begin{align} \sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab} } =& \sum\limits_{a,b = 1}^N {F^{ab} [\sum\limits_{i,j} {\frac{{\partial \phi }} {{\partial u_{ij} }}u_{ijab} + \sum\limits_{i,j,k,l} {\frac{{\partial ^2 \phi}} {{\partial u_{ij} \partial u_{kl} }}} u_{ija} u_{klb} ]} } \notag \\ = &\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i,j} {[\frac{{\partial \sigma _{l + 1} (W)}} {{\partial u_{ij} }} + \frac{{\partial q(W)}} {{\partial u_{ij}}}]u_{ijab} } } \\ & + \sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i,j,k,l} {[\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ij} \partial u_{kl} }} + } \frac{{\partial ^2 q(W)}} {{\partial u_{ij} \partial u_{kl} }}]u_{ija} u_{klb} }.
\notag \end{align} Since $W$ is diagonal, by Lemma 2.2 the first term on the right hand side of (3.7) is \begin{align} \sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i,j} {\frac{{\partial \sigma _{l + 1} (W)}} {{\partial u_{ij} }}u_{ijab} } } = &\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i} {\frac{{\partial \sigma _{l + 1} (W)}} {{\partial u_{ii} }}u_{iiab} } } \notag \\ = &\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i \in B} {\sigma _l (G)u_{iiab} } } + O(\phi). \end{align} Using Lemma 2.4 in [2], the second term on the right hand side of (3.7) is \begin{align} \sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i,j} {\frac{{\partial q(W)}} {{\partial u_{ij} }}u_{ijab} } } =& \sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_i {\frac{{\partial q(W)}} {{\partial u_{ii} }}u_{iiab} } } \notag \\ = &\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i \in B} {\frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}u_{iiab} } }+ O(\phi ). \end{align} As in [2], the third term on the right hand side of (3.7) is \begin{align} &\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i,j,k,l} {\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ij} \partial u_{kl} }}u_{ija} u_{klb} } } \notag \\ = &\sum\limits_{a,b = 1}^N {F^{ab} [\sum\limits_{i \ne j} {\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ii} \partial u_{jj} }}u_{iia} u_{jjb} + \sum\limits_{i \ne j} {\frac{{\partial ^2 \sigma _{l + 1} (W)}} {{\partial u_{ij} \partial u_{ji} }}u_{ija} u_{jib} ]} } }\\ = &- 2\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i \in B,j \in G} {\sigma _{l - 1} (G\left| j \right.)u_{ija} u_{jib} } } + O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ). \notag \end{align} From Proposition 2.1 in [2], we can get \begin{align} &\sum\limits_{i,j,k,l} {\frac{{\partial ^2 q(W)}} {{\partial u_{ij} \partial u_{kl} }}u_{ija} u_{klb} } = - 2\sum\limits_{i \in B,j \in G} {\frac{{\sigma _1 ^2 (B\left| i \right.)
- \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)u_{jj} }}u_{ija} u_{ijb} } \notag \\ &- \frac{1}{{\sigma _1 ^3 (B)}}\sum\limits_{i \in B} {[\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } ][\sigma _1 (B)u_{iib} - u_{ii} \sum\limits_{j \in B} {u_{jjb} } ]}\\ &- \frac{1}{{\sigma _1 (B)}}\sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {u_{ija} u_{ijb} } + O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ).\notag \end{align} So the fourth term on the right hand side of (3.7) is \begin{align} &\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i,j,k,l} {\frac{{\partial ^2 q(W)}} {{\partial u_{ij} \partial u_{kl} }}u_{ija} u_{klb} } } \notag \\ = &- 2\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i \in B,j \in G} {\frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)u_{jj}}}u_{ija} u_{ijb} } } \notag \\ & - \frac{1} {{\sigma _1 ^3 (B)}}\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{i \in B} {[\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } ][\sigma _1 (B)u_{iib} - u_{ii} \sum\limits_{j \in B} {u_{jjb} } ]} } \\ & - \frac{1} {{\sigma _1 (B)}}\sum\limits_{a,b = 1}^N {F^{ab} \sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {u_{ija} u_{ijb} } } + O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ). \notag \end{align} Substituting (3.8), (3.9), (3.10) and (3.12) into (3.7), we obtain (3.5). \subsection{calculations on the structure condition} Now we discuss the structure condition (1.4). We write $ A= \left( {\begin{matrix} a & b \\ {b^T } & c \\ \end{matrix} } \right) $ and $a^{-1}=(a^{ij})$, where $a=(a_{ij}) \in \mathcal{S}^{N'}$, $b=(b_{k \alpha}) \in \mathbb{R}^{N' \times N''}$ and $c=(c_{\alpha \beta}) \in \mathcal{S}^{N''}$.
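Before turning to the equivalent differential form, here is a hedged numerical illustration (my own sketch, not part of the paper) of condition (1.4) in the simplest case $F(A)=\operatorname{tr} A$ with $N'=2$, $N''=1$: then the transformed function is $G(a,b,c)=\operatorname{tr}(a^{-1})+c+b^{T}a^{-1}b$, which is convex because $\operatorname{tr}(a^{-1})$ is convex on positive definite matrices and $(a,b)\mapsto b^{T}a^{-1}b$ is the jointly convex matrix fractional function. A random midpoint-convexity test:

```python
import random

def G(a, b, c):
    # G(a, b, c) = tr(a^{-1}) + c + b^T a^{-1} b for a 2x2 symmetric
    # positive definite matrix a, a vector b in R^2 and a scalar c;
    # this is condition (1.4) specialized to F(A) = tr(A), N' = 2, N'' = 1.
    (p, q), (r, s) = a
    d = p * s - q * r
    tr_inv = (p + s) / d
    quad = (s * b[0] ** 2 - (q + r) * b[0] * b[1] + p * b[1] ** 2) / d
    return tr_inv + c + quad

def rand_point():
    # random (a, b, c) with a symmetric positive definite by construction
    v = random.uniform(-0.5, 0.5)
    a = [[1 + random.uniform(0, 2), v], [v, 1 + random.uniform(0, 2)]]
    b = [random.uniform(-1, 1), random.uniform(-1, 1)]
    return a, b, random.uniform(-1, 1)

def midpoint(x, y):
    a = [[(x[0][i][j] + y[0][i][j]) / 2 for j in range(2)] for i in range(2)]
    b = [(x[1][k] + y[1][k]) / 2 for k in range(2)]
    return a, b, (x[2] + y[2]) / 2

random.seed(0)
violations = sum(
    G(*midpoint(x, y)) > (G(*x) + G(*y)) / 2 + 1e-12
    for x, y in ((rand_point(), rand_point()) for _ in range(1000))
)
print(violations)  # 0 -- midpoint convexity holds on every sampled pair
```

No violations occur, consistent with the claim that the Laplacian satisfies (1.4); this is only a spot check on sampled points, not a proof.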
\begin{lemma} The condition (1.4) is equivalent to \begin{align} &\sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} (A,p,u,x)X_{ab} X_{cd} } + 2\sum\limits_{a,b = 1}^N {\sum\limits_{k,l = 1}^{N'} {F^{ab} a^{kl} X_{ka} X_{lb} } } + 2\sum\limits_{a,b = 1}^N {\sum\limits_{\alpha = N' + 1}^N {F^{ab,p_\alpha } X_{ab} X_\alpha } } \\ &+ 2\sum\limits_{a,b = 1}^N {F^{ab,u} X_{ab} Y} + 2\sum\limits_{a,b = 1}^N {\sum\limits_{i = 1}^{N'} {F^{ab,x_i } X_{ab} Z_i } } + \sum\limits_{\alpha ,\beta = N' + 1}^N {F^{p_\alpha ,p_\beta } X_\alpha X_\beta } + 2\sum\limits_{\alpha = N' + 1}^N {F^{p_\alpha ,u} X_\alpha Y} \notag \\ &+ 2\sum\limits_{\alpha = N' + 1}^N {\sum\limits_{i = 1}^{N'} {F^{p_\alpha ,x_i } X_\alpha Z_i } } + F^{u,u} Y^2 + 2\sum\limits_{i = 1}^{N'} {F^{u,x_i } YZ_i } + \sum\limits_{i,j = 1}^{N'} {F^{x_i ,x_j } Z_i Z_j } \geqslant 0, \notag \end{align} for every $ \widetilde X = ((X_{ab} ),(X_\alpha ),Y,(Z_i )) \in \mathcal{S}^N \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'}$. \end{lemma} Proof. We denote $G(a,b,c,p'',u,x')=F(\left( {\begin{matrix} a^{-1} & a^{-1}b \\ {({a^{-1}b})^T } & c+ b^Ta^{-1}b \\ \end{matrix} } \right),p',p'',u,x',x'') $, and we let $ 1 \leqslant i,j,k,l,m,n,s,t \leqslant N' $, $ N'+1 \leqslant \alpha ,\beta ,\gamma ,\eta ,\xi ,\zeta \leqslant N $, and $ 1 \leqslant a,b,c,d \leqslant N $.
Then condition (1.4) is equivalent to \begin{align} &\sum\limits_{i,j,k,l} {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial a_{kl} }}X_{ij} X_{kl} } + 2\sum\limits_{i,j,k,\alpha } {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial b_{k\alpha } }}X_{ij} Y_{k\alpha } } + 2\sum\limits_{i,j,\alpha ,\beta } {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial c_{\alpha \beta } }}X_{ij} Z_{\alpha \beta } } + 2\sum\limits_{i,j,\alpha } {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial p_\alpha }}X_{ij} X_\alpha } \\ & + 2\sum\limits_{i,j} {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial u}}X_{ij} Y} + 2\sum\limits_{i,j,k} {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial x_k }}X_{ij} Z_k } + \sum\limits_{k,\alpha ,l,\beta } {\frac{{\partial ^2 G}} {{\partial b_{k\alpha } \partial b_{l\beta } }}Y_{k\alpha } Y_{l\beta } } + 2\sum\limits_{k,\alpha ,\gamma ,\eta } {\frac{{\partial ^2 G}} {{\partial b_{k\alpha } \partial c_{\gamma \eta } }}Y_{k\alpha } Z_{\gamma \eta } } \notag \\ &+ 2\sum\limits_{k,\alpha ,\beta } {\frac{{\partial ^2 G}} {{\partial b_{k\alpha } \partial p_\beta }}Y_{k\alpha } X_\beta } + 2\sum\limits_{k,\alpha } {\frac{{\partial ^2 G}} {{\partial b_{k\alpha } \partial u}}Y_{k\alpha } Y} + 2\sum\limits_{k,\alpha ,l} {\frac{{\partial ^2 G}} {{\partial b_{k\alpha } \partial x_l }}Y_{k\alpha } Z_l } + \sum\limits_{\alpha ,\beta ,\gamma ,\eta } {\frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial c_{\gamma \eta } }}Z_{\alpha \beta } Z_{\gamma \eta } } \notag \\ &+ 2\sum\limits_{\alpha ,\beta ,\gamma } {\frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial p_\gamma }}Z_{\alpha \beta } X_\gamma } + 2\sum\limits_{\alpha ,\beta} {\frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial u}}Z_{\alpha \beta } Y} + 2\sum\limits_{\alpha ,\beta ,l} {\frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial x_l }}Z_{\alpha \beta } Z_l + } \sum\limits_{\alpha ,\gamma } {\frac{{\partial ^2 G}} {{\partial p_\alpha \partial p_\gamma }}X_\alpha X_\gamma 
} \notag \\ & + 2\sum\limits_\alpha {\frac{{\partial ^2 G}} {{\partial p_\alpha \partial u}}X_\alpha Y} + 2\sum\limits_{\alpha,l}{\frac{{\partial ^2 G}} {{\partial p_\alpha \partial x_l }}X_\alpha Z_l } + \frac{{\partial ^2 G}} {{\partial u\partial u}}Y^2 + 2\sum\limits_l {\frac{{\partial ^2 G}} {{\partial u\partial x_l }}YZ_l } + \sum\limits_{k,l} {\frac{{\partial ^2 G}} {{\partial x_k \partial x_l }}Z_k Z_l } \geqslant 0, \notag \end{align} for every $ ((X_{ij}), (Y_{k\alpha }), (Z_{\alpha \beta }), (X_\alpha), Y, (Z_i) ) \in \mathcal{S}^{N'} \times \mathbb{R}^{N' \times N''} \times \mathcal{S}^{N''} \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} $. To get the equivalent condition (3.13), we shall represent all the derivatives of $G$ in (3.14) by the derivatives of $F$. Suppose $ a^{ - 1} b = (B_{k\alpha } ) = (\sum\limits_l {a^{kl} b_{l\alpha } } ) $, and $ c + b^T a^{ - 1} b = (C_{\alpha \beta } ) = (c_{\alpha \beta } + \sum\limits_{k,l} {b_{k\alpha } a^{kl} b_{l\beta } } ) $, then $G(a,b,c,p'',u,x')=F(\left( {\begin{matrix} a^{-1} & (B_{k\alpha} ) \\ {{(B_{k\alpha}) }^T } & (C_{\alpha \beta } ) \\ \end{matrix} } \right),p',p'',u,x',x'') $.\\ A direct computation yields \begin{align} &\frac{{\partial G}} {{\partial a_{ij} }} = \sum\limits_{k,l} {F^{kl} \frac{{\partial a^{kl} }} {{\partial a_{ij} }} + } \sum\limits_{k,\beta } {F^{k\beta } \frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,l} {F^{\alpha l} \frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta } \frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}}, \\ &\frac{{\partial G}} {{\partial b_{k\beta } }} = \sum\limits_{m,\alpha } {F^{m\alpha } \frac{{\partial B_{m\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\alpha ,n} {F^{\alpha n} \frac{{\partial B_{n\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\gamma ,\eta } {F^{\gamma \eta } \frac{{\partial C_{\gamma \eta } }} {{\partial b_{k\beta } }}}. 
\end{align} So we have the second derivatives of $G$ in (3.14) as follows. The derivatives of $G$ in the last ten terms are simple, \begin{align*} & \frac{{\partial ^2 G}} {{\partial x_k \partial x_l }} = F^{x_k ,x_l }, \frac{{\partial ^2 G}} {{\partial u\partial x_l }} = F^{u,x_l}, \frac{{\partial ^2 G}} {{\partial u\partial u}} = F^{u,u},\\ & \frac{{\partial ^2 G}} {{\partial p_\alpha \partial x_l }} = F^{p_\alpha ,x_l }, \frac{{\partial ^2 G}} {{\partial p_\alpha \partial u}} = F^{p_\alpha ,u}, \frac{{\partial ^2 G}} {{\partial p_\alpha \partial p_\gamma }} = F^{p_\alpha ,p_\gamma },\\ & \frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial x_l }} = F^{\alpha \beta ,x_l } ,\frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial u}} = F^{\alpha \beta ,u},\\ & \frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial p_\gamma }} = F^{\alpha \beta ,p_\gamma } ,\frac{{\partial ^2 G}} {{\partial c_{\alpha \beta } \partial c_{\gamma \eta } }} = F^{\alpha \beta ,\gamma \eta }. 
\end{align*} From (3.15), we can get the derivatives of $G$ in the third-sixth terms of (3.14) $$ \frac{{\partial ^2 G}} {{\partial a_{ij} \partial c_{\gamma \eta } }} = \sum\limits_{k,l} {F^{kl,\gamma \eta } \frac{{\partial a^{kl} }} {{\partial a_{ij} }} + } \sum\limits_{k,\beta } {F^{k\beta ,\gamma \eta } \frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,l} {F^{\alpha l,\gamma \eta } \frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta ,\gamma \eta } \frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}}, $$ $$ \frac{{\partial ^2 G}} {{\partial a_{ij} \partial p_\gamma }} = \sum\limits_{k,l} {F^{kl,p_\gamma } \frac{{\partial a^{kl} }} {{\partial a_{ij} }} + } \sum\limits_{k,\beta } {F^{k\beta ,p_\gamma } \frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,l} {F^{\alpha l,p_\gamma } \frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta ,p_\gamma } \frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}}, $$ $$ \frac{{\partial ^2 G}} {{\partial a_{ij} \partial u}} = \sum\limits_{k,l} {F^{kl,u} \frac{{\partial a^{kl} }} {{\partial a_{ij} }} + } \sum\limits_{k,\beta } {F^{k\beta ,u} \frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,l} {F^{\alpha l,u} \frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta ,u} \frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}}, $$ $$ \frac{{\partial ^2 G}} {{\partial a_{ij} \partial x_m }} = \sum\limits_{k,l} {F^{kl,x_m } \frac{{\partial a^{kl} }} {{\partial a_{ij} }} + } \sum\limits_{k,\beta } {F^{k\beta ,x_m } \frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,l} {F^{\alpha l,x_m } \frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta ,x_m } \frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}}. 
$$ From (3.16), we can get the derivatives of $G$ in the eighth-eleventh terms of (3.14) $$ \frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial c_{\xi \zeta } }} = \sum\limits_{m,\alpha } {F^{m\alpha ,\xi \zeta } \frac{{\partial B_{m\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\alpha ,n} {F^{\alpha n,\xi \zeta } \frac{{\partial B_{n\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\gamma ,\eta } {F^{\gamma \eta ,\xi \zeta } \frac{{\partial C_{\gamma \eta } }} {{\partial b_{k\beta } }}}, $$ $$ \frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial p_\zeta }} = \sum\limits_{m,\alpha } {F^{m\alpha ,p_\zeta } \frac{{\partial B_{m\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\alpha ,n} {F^{\alpha n,p_\zeta } \frac{{\partial B_{n\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\gamma ,\eta } {F^{\gamma \eta ,p_\zeta } \frac{{\partial C_{\gamma \eta } }} {{\partial b_{k\beta } }}}, $$ $$ \frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial u}} = \sum\limits_{m,\alpha } {F^{m\alpha ,u} \frac{{\partial B_{m\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\alpha ,n} {F^{\alpha n,u} \frac{{\partial B_{n\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\gamma ,\eta } {F^{\gamma \eta ,u} \frac{{\partial C_{\gamma \eta } }} {{\partial b_{k\beta } }}}, $$ $$ \frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial x_i }} = \sum\limits_{m,\alpha } {F^{m\alpha ,x_i } \frac{{\partial B_{m\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\alpha ,n} {F^{\alpha n,x_i } \frac{{\partial B_{n\alpha } }} {{\partial b_{k\beta } }}} + \sum\limits_{\gamma ,\eta } {F^{\gamma \eta ,x_i } \frac{{\partial C_{\gamma \eta } }} {{\partial b_{k\beta } }}}. 
$$ Also from (3.15) we can get the derivative of $G$ in the first term of (3.14) \begin{align*} \frac{{\partial ^2 G}} {{\partial a_{ij} \partial a_{mn} }}& = \sum\limits_{k,l} {F^{kl} \frac{{\partial ^2 a^{kl} }} {{\partial a_{ij} \partial a_{mn} }} + } \sum\limits_{k,\beta } {F^{k\beta } \frac{{\partial ^2 B_{k\beta } }} {{\partial a_{ij} \partial a_{mn} }}} + \sum\limits_{\alpha ,l} {F^{\alpha l} \frac{{\partial ^2 B_{l\alpha } }} {{\partial a_{ij} \partial a_{mn} }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta } \frac{{\partial ^2 C_{\alpha \beta } }} {{\partial a_{ij} \partial a_{mn} }}}\\ &+ \sum\limits_{k,l} {\frac{{\partial a^{kl} }} {{\partial a_{ij} }}[\sum\limits_{s,t} {F^{kl,st} \frac{{\partial a^{st} }} {{\partial a_{mn} }} + } \sum\limits_{s,\eta } {F^{kl,s\eta } \frac{{\partial B_{s\eta } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma ,t} {F^{kl,\gamma t} \frac{{\partial B_{t\gamma } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma, \eta } {F^{kl,\gamma \eta } \frac{{\partial C_{\gamma \eta } }} {{\partial a_{mn} }}} } ]\\ &+ \sum\limits_{k,\beta } {\frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}[\sum\limits_{s,t} {F^{k\beta ,st} \frac{{\partial a^{st} }} {{\partial a_{mn} }} + } \sum\limits_{s,\eta } {F^{k\beta ,s\eta } \frac{{\partial B_{s\eta } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma ,t} {F^{k\beta ,\gamma t} \frac{{\partial B_{t\gamma } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma,\eta } {F^{k\beta ,\gamma \eta } \frac{{\partial C_{\gamma \eta } }} {{\partial a_{mn} }}} ]}\\ & + \sum\limits_{\alpha ,l} {\frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}[\sum\limits_{s,t} {F^{\alpha l,st} \frac{{\partial a^{st} }} {{\partial a_{mn} }} + } \sum\limits_{s,\eta } {F^{\alpha l,s\eta } \frac{{\partial B_{s\eta } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma ,t} {F^{\alpha l,\gamma t} \frac{{\partial B_{t\gamma } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma, \eta } {F^{\alpha l,\gamma \eta } \frac{{\partial C_{\gamma \eta } }} 
{{\partial a_{mn} }}} ]}\\ &+ \sum\limits_{\alpha ,\beta } {\frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}[\sum\limits_{s,t} {F^{\alpha \beta ,st} \frac{{\partial a^{st} }} {{\partial a_{mn} }} + } \sum\limits_{s,\eta } {F^{\alpha \beta ,s\eta } \frac{{\partial B_{s\eta } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma ,t} {F^{\alpha \beta ,\gamma t} \frac{{\partial B_{t\gamma } }} {{\partial a_{mn} }}} + \sum\limits_{\gamma, \eta } {F^{\alpha \beta ,\gamma \eta } \frac{{\partial C_{\gamma \eta } }} {{\partial a_{mn} }}} ]}, \end{align*} and the derivative of $G$ in the second term in (3.14) \begin{align*} \frac{{\partial ^2 G}} {{\partial a_{ij} \partial b_{m\eta } }} &= \sum\limits_{k,\beta } {F^{k\beta } \frac{{\partial ^2 B_{k\beta } }} {{\partial a_{ij} \partial b_{m\eta } }}} + \sum\limits_{\alpha ,l} {F^{\alpha l} \frac{{\partial ^2 B_{l\alpha } }} {{\partial a_{ij} \partial b_{m\eta } }}} + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta } \frac{{\partial ^2 C_{\alpha \beta } }} {{\partial a_{ij} \partial b_{m\eta } }}}\\ & + \sum\limits_{k,l} {\frac{{\partial a^{kl} }} {{\partial a_{ij} }}[\sum\limits_{s,\zeta } {F^{kl,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,t} {F^{kl,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,\zeta } {F^{kl,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{m\eta } }}} } ]\\ & + \sum\limits_{k,\beta } {\frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}[\sum\limits_{s,\zeta } {F^{k\beta ,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,t} {F^{k\beta ,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,\zeta } {F^{k\beta ,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{m\eta } }}} ]}\\ & + \sum\limits_{\alpha ,l} {\frac{{\partial B_{l\alpha } }} {{\partial a_{ij} }}[\sum\limits_{s,\zeta } {F^{\alpha l,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{m\eta } 
}}} + \sum\limits_{\xi ,t} {F^{\alpha l,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,\zeta } {F^{\alpha l,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{m\eta } }}} ]}\\ & + \sum\limits_{\alpha ,\beta } {\frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}[\sum\limits_{s,\zeta } {F^{\alpha \beta ,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,t} {F^{\alpha \beta ,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{m\eta } }}} + \sum\limits_{\xi ,\zeta } {F^{\alpha \beta ,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{m\eta } }}} ]}. \end{align*} From (3.16), we can get the derivative of $G$ in the seventh term of (3.14) \begin{align*} \frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial b_{l\alpha } }} &= \sum\limits_{\gamma ,\eta } {F^{\gamma \eta } \frac{{\partial ^2 C_{\gamma \eta } }} {{\partial b_{k\beta } \partial b_{l\alpha } }}}\\ &+ \sum\limits_{m,\eta } {\frac{{\partial B_{m\eta } }} {{\partial b_{k\beta } }}} [\sum\limits_{s,\zeta } {F^{m\eta ,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{l\alpha } }}} + \sum\limits_{\xi ,t} {F^{m\eta ,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{l\alpha } }}} + \sum\limits_{\xi ,\zeta } {F^{m\eta ,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{l\alpha } }}} ]\\ & + \sum\limits_{\gamma ,n} {\frac{{\partial B_{n\gamma } }} {{\partial b_{k\beta } }}} [\sum\limits_{s,\zeta } {F^{\gamma n,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{l\alpha } }}} + \sum\limits_{\xi ,t} {F^{\gamma n,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{l\alpha } }}} + \sum\limits_{\xi ,\zeta } {F^{\gamma n,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{l\alpha } }}} ]\\ & + \sum\limits_{\gamma ,\eta } {\frac{{\partial C_{\gamma \eta } }} {{\partial b_{k\beta } }}} [\sum\limits_{s,\zeta } {F^{\gamma \eta ,s\zeta } \frac{{\partial B_{s\zeta } }} {{\partial b_{l\alpha } }}} + \sum\limits_{\xi ,t} 
{F^{\gamma \eta ,\xi t} \frac{{\partial B_{t\xi } }} {{\partial b_{l\alpha } }}} + \sum\limits_{\xi ,\zeta } {F^{\gamma \eta ,\xi \zeta } \frac{{\partial C_{\xi \zeta } }} {{\partial b_{l\alpha } }}} ]. \end{align*} So we denote \begin{align} &\widetilde X_{kl} = \sum\limits_{i,j} {\frac{{\partial a^{kl} }} {{\partial a_{ij} }}X_{ij} } ,\widetilde X_{\beta k} = \widetilde X_{k\beta } = \sum\limits_{i,j} {\frac{{\partial B_{k\beta } }} {{\partial a_{ij} }}X_{ij} } ,\widetilde X_{\alpha \beta } = \sum\limits_{i,j} {\frac{{\partial C_{\alpha \beta } }} {{\partial a_{ij} }}X_{ij} },\\ & \widetilde Y_{kl} = 0,\widetilde Y_{\beta k} = \widetilde Y_{k\beta } = \sum\limits_{m,\eta } {\frac{{\partial B_{k\beta } }} {{\partial b_{m\eta } }}Y_{m\eta } } ,\widetilde Y_{\alpha \beta } = \sum\limits_{m,\eta } {\frac{{\partial C_{\alpha \beta } }} {{\partial b_{m\eta } }}Y_{m\eta } },\\ & \widetilde Z_{kl} = 0,\widetilde Z_{\beta k} = \widetilde Z_{k\beta } = 0,\widetilde Z_{\alpha \beta } = Z_{\alpha \beta }. 
\end{align} From the above calculation, and (3.17)-(3.19), we can get the first term of (3.14) \begin{align} \sum\limits_{i,j,m,n} &{\frac{{\partial ^2 G}} {{\partial a_{ij} \partial a_{mn} }} X_{ij} X_{mn} } \notag \\ =& \sum\limits_{k,l} {F^{kl} \sum\limits_{i,j,m,n} {\frac{{\partial ^2 a^{kl} }} {{\partial a_{ij} \partial a_{mn} }}} X_{ij} X_{mn} + } \sum\limits_{k,\beta } {F^{k\beta } \sum\limits_{i,j,m,n} {\frac{{\partial ^2 B_{k\beta } }} {{\partial a_{ij} \partial a_{mn} }}} } X_{ij} X_{mn} \notag \\ &+ \sum\limits_{\alpha ,l} {F^{\alpha l} \sum\limits_{i,j,m,n} {\frac{{\partial ^2 B_{l\alpha } }} {{\partial a_{ij} \partial a_{mn} }}X_{ij} X_{mn} } } + \sum\limits_{\alpha ,\beta } {F^{\alpha \beta } \sum\limits_{i,j,m,n} {\frac{{\partial ^2 C_{\alpha \beta } }} {{\partial a_{ij} \partial a_{mn} }}} X_{ij} X_{mn} } \notag \\ & + \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde X_{ab} \widetilde X_{cd} } \notag \\ =& 2\sum\limits_{a,b} {F^{ab} } \sum\limits_{ij} {a_{ij} } \widetilde X_{ia} \widetilde X_{jb}+ \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde X_{ab} \widetilde X_{cd} }, \end{align} the second term of (3.14) \begin{align} \sum\limits_{i,j,m,\eta } &{\frac{{\partial ^2 G}} {{\partial a_{ij} \partial b_{m\eta } }}X_{ij} Y_{m\eta } } \notag \\ =& \sum\limits_{k,\beta } {F^{k\beta } \sum\limits_{i,j,m,\eta } {\frac{{\partial ^2 B_{k\beta } }} {{\partial a_{ij} \partial b_{m\eta } }}X_{ij} Y_{m\eta } } } + \sum\limits_{\alpha ,l} {F^{\alpha l} \sum\limits_{i,j,m,\eta } {\frac{{\partial ^2 B_{l\alpha } }} {{\partial a_{ij} \partial b_{m\eta } }}X_{ij} Y_{m\eta } } } \notag \\ &+ \sum\limits_{\alpha ,\beta } {F^{\alpha \beta } \sum\limits_{i,j,m,\eta } {\frac{{\partial ^2 C_{\alpha \beta } }} {{\partial a_{ij} \partial b_{m\eta } }}X_{ij} Y_{m\eta } } } + \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde X_{ab} \widetilde Y_{cd} } \notag \\ =& \sum\limits_{k,\beta } {F^{k\beta } \sum\limits_{i,j} {a_{ij}\widetilde X_{ik} \widetilde Y_{j\beta } } }+ 
\sum\limits_{\alpha ,l} {F^{\alpha l} \sum\limits_{i,j} {a_{ij} \widetilde X_{il } \widetilde Y_{j\alpha} } }\notag \\ &+ \sum\limits_{\alpha ,\beta } {F^{\alpha \beta } \sum\limits_{i,j} {a_{ij} (\widetilde X_{i\alpha } \widetilde Y_{j\beta } + \widetilde X_{i\beta } \widetilde Y_{j\alpha } )} } + \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd}\widetilde X_{ab} \widetilde Y_{cd} }\notag \\ = &\sum\limits_{a,b} {F^{ab} \sum\limits_{i,j} {a_{ij} (\widetilde X_{ia} \widetilde Y_{jb} + \widetilde X_{ib} \widetilde Y_{ja} )} } + \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde X_{ab} \widetilde Y_{cd} }, \end{align} and the seventh term of (3.14) \begin{align} \sum\limits_{k,\beta ,l,\alpha } {\frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial b_{l\alpha } }}Y_{k\beta } Y_{l\alpha } } = &\sum\limits_{\gamma ,\eta } {F^{\gamma \eta } \sum\limits_{k,\beta ,l,\alpha } {\frac{{\partial ^2 C_{\gamma \eta } }} {{\partial b_{k\beta } \partial b_{l\alpha } }}Y_{k\beta } Y_{l\alpha } } } + \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde Y_{ab} \widetilde Y_{cd} } \notag \\ =& 2\sum\limits_{\gamma ,\eta } {F^{\gamma \eta } \sum\limits_{i,j} {a_{ij} \widetilde Y_{i\gamma } \widetilde Y_{j\eta}}} +\sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde Y_{ab} \widetilde Y_{cd}} \notag \\ = &2\sum\limits_{a,b} {F^{ab} \sum\limits_{i,j} {a_{ij} \widetilde Y_{ia} \widetilde Y_{jb} } } + \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde Y_{ab} \widetilde Y_{cd} }. 
\end{align} Also we obtain the third-sixth terms in (3.14) \begin{align} &\sum\limits_{i,j,\gamma ,\eta } {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial c_{\gamma \eta } }}X_{ij} Z_{\gamma \eta } } = \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde X_{ab} \widetilde Z_{cd} },\\ & \sum\limits_{i,j,\gamma } {\frac{{\partial ^2 G}} {{\partial a_{ij}\partial p_\gamma }}X_{ij} X_\gamma } = \sum\limits_{a,b = 1}^N {\sum\limits_\gamma {F^{ab,p_\gamma } \widetilde X_{ab} X_\gamma }},\\ &\sum\limits_{i,j} {\frac{{\partial ^2 G}} {{\partial a_{ij} \partial u}}X_{ij} Y} = \sum\limits_{a,b = 1}^N {F^{ab,u} \widetilde X_{ab} Y}, \\ &\sum\limits_{i,j,k}{\frac{{\partial ^2 G}} {{\partial a_{ij} \partial x_k }}X_{ij} Z_k } = \sum\limits_{a,b = 1}^N {\sum\limits_k {F^{ab,x_k } \widetilde X_{ab} Z_k } }, \end{align} and the eighth-eleventh terms in (3.14) \begin{align} & \sum\limits_{k,\beta ,\xi ,\zeta } {\frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial c_{\xi \zeta } }}Y_{k\beta } Z_{\xi \zeta } } = \sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} \widetilde Y_{ab} \widetilde Z_{cd} },\\ &\sum\limits_{k,\beta ,\zeta } {\frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial p_\zeta }}Y_{k\beta } X_\zeta } = \sum\limits_{a,b = 1}^N {\sum\limits_\zeta {F^{ab,p_\zeta } \widetilde Y_{ab} X_\zeta } },\\ & \sum\limits_{k,\beta } {\frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial u}}Y_{k\beta } Y} = \sum\limits_{a,b = 1}^N {F^{ab,u} \widetilde Y_{ab} Y},\\ & \sum\limits_{k,\beta ,i} {\frac{{\partial ^2 G}} {{\partial b_{k\beta } \partial x_i }}Y_{k\beta } Z_i } = \sum\limits_{a,b = 1}^N {\sum\limits_i {F^{ab,x_i } \widetilde Y_{ab} Z_i } }. \end{align} So, letting $ \widetilde X = ((\widetilde X_{ab} + \widetilde Y_{ab} + \widetilde Z_{ab} ),(X_\alpha ),Y,(Z_i ))$, we obtain (3.13). Hence the two conditions are equivalent. \section{structure condition and the proof of theorem 1.2} In this section, we prove Theorem 1.2 using a strong maximum principle and Lemma 3.3.
Also Corollary 1.4 holds directly from the proof. We denote by $\mathcal{S}^n$ the set of all real symmetric $n \times n$ matrices, and by $\mathcal{S}_+^n \subset \mathcal{S}^n$ the set of all positive definite symmetric $n \times n$ matrices. Let $\mathbb{O}_n$ be the space consisting of all $n \times n$ orthogonal matrices and $I_{N''}$ be the $N'' \times N''$ identity matrix. We define $$ \mathcal{S}_{N' - 1} = \{ \left( {\begin{matrix} {Q\left( {\begin{matrix} 0 & 0 \\ 0 & B \\ \end{matrix} } \right)Q^T } & {Qb} \\ {b^TQ^T } & c \\ \end{matrix} } \right) \in \mathcal{S}^N \left| {\forall b \in \mathbb{R}^{N' \times N''} , \forall c \in \mathcal{S}^{N''} ,\forall Q \in \mathbb{O}_{N'} ,\forall B \in \mathcal{S}^{N' - 1} } \right.\}, $$ and for given $ Q \in \mathbb{O}_{N'}$ \begin{align*} &\mathcal{S}_{N' - 1} (Q) = \{ \left( {\begin{matrix} {Q\left( {\begin{matrix} 0 & 0 \\ 0 & B \\ \end{matrix} } \right)Q^T } & {Qb} \\ {b^TQ^T} & c \\ \end{matrix} } \right) \in \mathcal{S}^N \left| {\forall b \in \mathbb{R}^{N' \times N''} , \forall c \in \mathcal{S}^{N''} ,\forall B \in \mathcal{S}^{N' - 1} } \right.\}\\ = & \{ \left( {\begin{matrix} Q & 0 \\ 0 & {I_{N''} } \\ \end{matrix} } \right)\left( {\begin{matrix} {\left( {\begin{matrix} 0 & 0 \\ 0 & B \\ \end{matrix} } \right)} & b \\ {b^T } & c \\ \end{matrix} } \right)\left( {\begin{matrix} {Q^T } & 0 \\ 0 & {I_{N''} } \\ \end{matrix} } \right)\left| {\forall b \in \mathbb{R}^{N' \times N''} , \forall c \in \mathcal{S}^{N''} ,\forall B \in \mathcal{S}^{N' - 1} } \right.\}. \end{align*} Therefore $ \mathcal{S}_{N' - 1} (Q) \subset \mathcal{S}_{N' - 1} \subset \mathcal{S}^N $.
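To fix ideas, here is a small illustration (the choices $N' = 2$, $N'' = 1$ and $Q = I_2$ are ours, purely for illustration): the elements of $ \mathcal{S}_1 (I_2 ) $ are exactly the symmetric matrices $$ \left( {\begin{matrix} 0 & 0 & {b_1 } \\ 0 & \beta & {b_2 } \\ {b_1 } & {b_2 } & c \\ \end{matrix} } \right), \quad \beta ,b_1 ,b_2 ,c \in \mathbb{R}, $$ whose upper-left $N' \times N'$ block is degenerate in the $e_1$ direction; a general $Q \in \mathbb{O}_{N'}$ rotates this degenerate direction, and $\mathcal{S}_{N' - 1}$ collects the matrices obtained from all such rotations.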
For any fixed $(p',x'')$, $ Q \in \mathbb{O}_{N'}$ and $(A,p'',u,x') \in \mathcal{S}_{N' - 1} (Q)\times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} $, we set $$ X_F^* = ((F^{ab}(A, p,u,x) ),F^{p_{N' + 1} } , \cdots ,F^{p_N } ,F^u ,F^{x_1 } , \cdots ,F^{x_{N'} } ) $$ as a vector in $\mathcal{S}^N \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} $. Set \begin{equation} \Gamma _{X_F^* }^ \bot = \{ \widetilde X \in \mathcal{S}_{N' - 1} (Q) \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} |\left\langle {\widetilde X, X_F^* } \right\rangle = 0\}. \end{equation} Let $ B \in \mathcal{S}_+^{N' - 1} $, $ A = B^{ - 1} $, and \begin{center} $\widetilde B = \left( {\begin{matrix} 0 & 0 \\ 0 & B \\ \end{matrix} } \right) $, $ \widetilde A = \left( {\begin{matrix} 0 & 0 \\ 0 & A \\ \end{matrix} } \right) $. \end{center} For any given $ Q \in \mathbb{O}_{N'}$ and $ \widetilde X = ((X_{ab} ),(X_\alpha ),Y,(Z_i )) \in \mathcal{S}_{N'-1}(Q) \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} $, we define a quadratic form \begin{align} Q^* (\widetilde X,\widetilde X)=&\sum\limits_{a,b,c,d = 1}^N {F^{ab,cd} X_{ab} X_{cd} } + 2\sum\limits_{a,b = 1}^N {\sum\limits_{k,l = 1}^{N'} {F^{ab} [ {Q\widetilde A Q^T}]_{kl} X_{ka} X_{lb} } } \notag \\ &+ 2\sum\limits_{a,b = 1}^N {\sum\limits_{\alpha = N' + 1}^N {F^{ab,p_\alpha } X_{ab} X_\alpha } } + 2\sum\limits_{a,b = 1}^N {F^{ab,u} X_{ab} Y} + 2\sum\limits_{a,b = 1}^N {\sum\limits_{i = 1}^{N'} {F^{ab,x_i } X_{ab} Z_i } } \\ &+ \sum\limits_{\alpha ,\beta = N' + 1}^N {F^{p_\alpha ,p_\beta } X_\alpha X_\beta } + 2\sum\limits_{\alpha = N' + 1}^N {F^{p_\alpha ,u} X_\alpha Y} + 2\sum\limits_{\alpha = N' + 1}^N {\sum\limits_{i = 1}^{N'}{F^{p_\alpha ,x_i } X_\alpha Z_i } } \notag \\ &+ F^{u,u} Y^2 + 2\sum\limits_{i = 1}^{N'} {F^{u,x_i } YZ_i } + \sum\limits_{i,j = 1}^{N'} {F^{x_i ,x_j } Z_i Z_j }, \notag \end{align} where the derivative functions of $F$ are evaluated at $(\left( {\begin{matrix}
{Q\widetilde B Q^T } & {Qb} \\ {b^TQ^T} & c \\ \end{matrix} } \right),p,u,x) $. From Lemma 3.3, we can get \begin{lemma} If $F$ satisfies condition (1.4), then for each $(p',x'')$ \begin{align} F( {\left( {\begin{matrix} 0 & b \\ b^T & c \\ \end{matrix} } \right)}, p, u, x) \text{ is locally convex in } (c,p'',u,x'), \text{ and }Q^* (\widetilde X,\widetilde X) \geqslant 0, \forall \widetilde X \in \Gamma _{X_F^* }^ \bot, \end{align} where $Q^*$ is defined in (4.2). \end{lemma} Proof. Take $\varepsilon >0$ small enough that $a={Q\left( {\begin{matrix} \varepsilon & 0 \\ 0 & B+\varepsilon I_{N'-1} \\ \end{matrix} } \right)Q^T}$ is invertible. Applying (3.13) with $ \widetilde X = ((X_{ab} ),(X_\alpha ),Y,(Z_i ))\in \Gamma _{X_F^* }^ \bot $ and letting $\varepsilon \to 0$, we obtain (4.3). Theorem 1.2 is a direct consequence of the following theorem and Lemma 4.1. \begin{theorem} Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $ and $ F(A,p,u,x) \in C^{2,1} (\mathcal{S}^N \times \mathbb{R}^N \times \mathbb{R} \times \Omega )$ satisfies (1.2) and (1.4). Let $u \in C^{3,1} (\Omega) $ be a partial convex solution of (1.1). If $ W(x)=(u_{ij}(x))_{N' \times N'}$ attains minimum rank $l$ at a certain point $z_0 \in \Omega$, then there is a neighborhood $\mathcal {O}$ of $z_0$ and a positive constant $C$ independent of $\phi$ (defined in (3.2)), such that \begin{equation} \sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab} } \leqslant C(\phi + \left| {\nabla \phi } \right|), \quad \forall x \in \mathcal {O}. \end{equation} In turn, $W(x)$ is of constant rank in $\mathcal {O}$. \end{theorem} \textbf{Proof of Theorem 4.2}. Let $u \in C^{3,1}(\Omega)$ be a partial convex solution of equation (1.1) and $ W(x)=(u_{ij}(x))_{N' \times N'}$. Suppose $W$ attains its minimal rank $l$ at some point $z_0 \in \Omega$. We may assume $l \leqslant N'-1$, otherwise there is nothing to prove.
As in the previous section, we pick an open neighborhood $\mathcal {O}$ of $z_0$, and for any $x \in \mathcal {O}$ we let $ G = \{ N' - l+1, \cdots ,N'\} $ and $ B = \{ 1,2, \cdots ,N'- l\} $ denote the index sets of the ``good'' and ``bad'' eigenvalues of $W(x)$, respectively. Setting $\phi$ as in (3.2), we see from Proposition 3.1 that $$ \phi \in C^{1,1}(\mathcal {O}) ,\quad \phi(x) \geqslant 0, \quad \phi(z_0) = 0, $$ and there is a constant $C > 0$ such that for all $x \in \mathcal {O}$, \begin{equation} \frac{1} {C}\sigma _1 (B)(x) \leqslant \phi (x) \leqslant C\sigma _1 (B)(x), \quad \frac{1} {C}\sigma _1 (B)(x) \leqslant \sigma _{l + 1} (W(x)) \leqslant C\sigma _1 (B)(x). \end{equation} We shall fix a point $x \in \mathcal {O}$ and prove (4.4) at $x$. For each fixed $x \in \mathcal {O}$, we rotate the coordinates $e_1, \cdots, e_{N'}$ so that the matrix $(u_{ij}), i,j=1, \cdots, N'$, is diagonal, and without loss of generality we assume $ u_{11} \leqslant u_{22} \leqslant \cdots \leqslant u_{N'N'} $. Then there is a positive constant $C > 0$ depending only on $\left\| u \right\|_{C^{3,1} }$ and $\mathcal {O}$, such that $ u_{N'N'} \geqslant \cdots \geqslant u_{N' - l+1N' - l+1} \geqslant C > 0 $ for all $x \in \mathcal {O}$. Without confusion we will also simply denote $ B = \{ u_{11} , \cdots ,u_{N'- lN'- l} \} $ and $ G = \{ u_{N' - l+1N' - l+1} , \cdots ,u_{N'N'} \} $. Note that for any $\delta > 0$, we may choose $\mathcal {O}$ small enough such that $u_{jj} < \delta$ for all $j \in B$ and $x \in \mathcal {O}$. Again, as in Section 3, we avoid dealing with the possibility that $\sigma_{l+1}(W) = 0$ by considering $W_\varepsilon = W + \varepsilon I$ and $u_\varepsilon(x)=u(x)+ \frac{\varepsilon } {2}\left| x' \right|^2 $ for $\varepsilon >0$ sufficiently small.
Thus $u_\varepsilon(x)$ satisfies the equation \begin{equation} F(D^2 u_\varepsilon,Du_\varepsilon,u_\varepsilon,x) = R_\varepsilon(x), \end{equation} where $R_\varepsilon(x)=F(D^2 u_\varepsilon,Du_\varepsilon,u_\varepsilon,x)-F(D^2 u,Du,u,x)$. Since $u \in C^{3,1}$, we have \begin{equation} \left| {R_\varepsilon (x)} \right| \leqslant C\varepsilon , \quad \left| {\nabla R_\varepsilon (x)} \right| \leqslant C\varepsilon , \quad \left| {\nabla ^2 R_\varepsilon (x)} \right| \leqslant C\varepsilon ,\quad \forall x \in \mathcal {O}. \end{equation} We will work on equation (4.6) to obtain the differential inequality (4.4) for $\phi_\varepsilon (x)$ defined in (3.3), with constants $C_1$, $C_2$ independent of $\varepsilon$. Theorem 4.2 then follows by letting $ \varepsilon \to 0$. In the following, we omit the subscript $\varepsilon$ for convenience. We note that by (3.4), we have $$ \varepsilon \leqslant C\phi (x), \quad \forall x \in \mathcal {O}, $$ with $R(x)$ under control as follows, \begin{equation} \left|{D^jR_\varepsilon (x)} \right| \leqslant C\varepsilon, \text{ for all } j = 0, 1, 2, \text{ and for all } x \in \mathcal {O}. \end{equation} Differentiating (4.6) once in $x_i$ for $i \in B$, we get $$ \sum\limits_{a,b = 1}^N {F^{ab} u_{abi} } + \sum\limits_{a = 1}^N {F^{p_a } u_{ai} } + F^u u_i + F^{x_i } = O(\phi ), $$ i.e. \begin{align} \sum\limits_{a,b = N' - l+1}^N {F^{ab} u_{abi} } + \sum\limits_{a = N' +1}^N {F^{p_a } u_{ai} } + F^u u_i + F^{x_i } = O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ).
\end{align} Differentiating (4.6) twice in $x_i$ for $i \in B$, we obtain \begin{align} &\sum\limits_{a,b = 1}^N {F^{ab} u_{abii} } + \sum\limits_{a,b = 1}^N {u_{abi} [\sum\limits_{c,d = 1}^N {F^{ab,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{ab,p_c } u_{ci} } + F^{ab,u} u_i + F^{ab,x_i } ]} \notag \\ & + \sum\limits_{a = 1}^N {F^{p_a } u_{aii} } + \sum\limits_{a = 1}^N {u_{ai} [\sum\limits_{c,d = 1}^N {F^{p_a ,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{p_a ,p_c } u_{ci} } + F^{p_a ,u} u_i + F^{p_a ,x_i } ]} \\ & + F^u u_{ii} + u_i [\sum\limits_{c,d = 1}^N {F^{u,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{u,p_c } u_{ci} } + F^{u,u} u_i + F^{u,x_i } ] \notag \\ &+ \sum\limits_{c,d = 1}^N {F^{x_i ,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{x_i ,p_c } u_{ci} } + F^{x_i ,u} u_i + F^{x_i ,x_i } = O(\phi ), \notag \end{align} i.e. \begin{align} &\sum\limits_{a,b = 1}^N {F^{ab} u_{abii} } + \sum\limits_{a,b,c,d = N' - l+1}^N {F^{ab,cd} u_{abi} u_{cdi} } + 2\sum\limits_{a,b = N' -l+1}^N {\sum\limits_{c = N' + 1}^N {F^{ab,p_c } u_{abi} u_{ci} } } \notag \\ & + 2\sum\limits_{a,b = N' - l+1}^N {F^{ab,u} u_{abi} u_i } + 2\sum\limits_{a,b = N' - l+1}^N {F^{ab,x_i } u_{abi} } + \sum\limits_{a,c = N' + 1}^N {F^{p_a ,p_c } u_{ai} u_{ci} }\\ & + 2\sum\limits_{a = N' + 1}^N {F^{p_a ,u} u_{ai} u_i } + 2\sum\limits_{a = N' + 1}^N {F^{p_a ,x_i } u_{ai} } + F^{u,u} u_i ^2 + 2F^{u,x_i } u_i + F^{x_i ,x_i } \notag \\ = &O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ).
\notag \end{align} So for each $i \in B$, let \begin{align} J_i= &\sum\limits_{a,b,c,d = N' - l+1}^N {F^{ab,cd} u_{abi} u_{cdi} } + 2\sum\limits_{a,b = N' -l+1}^N {\sum\limits_{c = N' + 1}^N {F^{ab,p_c } u_{abi} u_{ci} } } + 2\sum\limits_{a,b = N' - l+1}^N {F^{ab,u} u_{abi} u_i }\notag \\ & + 2\sum\limits_{j \in G} {\frac{1} {{u_{jj} }}\sum\limits_{a,b = N' - l+1}^N {F^{ab} u_{ija} u_{ijb} } }+2\sum\limits_{a,b = N' - l+1}^N {F^{ab,x_i } u_{abi} } +\sum\limits_{a,c = N' + 1}^N {F^{p_a ,p_c } u_{ai} u_{ci} }\notag \\ & + 2\sum\limits_{a = N' + 1}^N {F^{p_a ,u} u_{ai} u_i } + 2\sum\limits_{a = N' + 1}^N {F^{p_a ,x_i } u_{ai} } + F^{u,u} u_i ^2 + 2F^{u,x_i } u_i + F^{x_i ,x_i }. \end{align} Substituting (4.11) and (4.12) into (3.5), we obtain \begin{eqnarray} &\sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab} }& = -\sum\limits_{i \in B} {[\sigma _l (G) + \frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}]}J_i \notag \\ &&- \frac{1}{{\sigma _1 ^3 (B)}}\sum\limits_{i \in B} {\sum\limits_{a,b = 1}^N {F^{ab} } [\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } ]} [\sigma _1 (B)u_{iib} - u_{ii} \sum\limits_{j \in B}{u_{jjb} } ]\\ &&- \frac{1}{{\sigma _1 (B)}}\sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {\sum\limits_{a,b = 1}^N {F^{ab} u_{ija} u_{ijb} }} \notag \\ &&+ O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ). \notag \end{eqnarray} Since $u \in C^{3,1}$, we have $F^{ab} \in C^{0,1}$; hence, by the ellipticity condition (1.2), for $ \overline {\mathcal {O}} \subset \Omega $ there exists a constant $ \delta_0 > 0$ such that \begin{equation} (F^{ab} ) \geqslant \delta _0 I_N, \quad \forall x \in \mathcal {O}. \end{equation} \textbf{Case (i): } $l=0$.
Then $G= \emptyset$ and \begin{align} J_i= &\sum\limits_{a,b,c,d = N' + 1}^N {F^{ab,cd}(D^2u,Du,u,x) u_{abi} u_{cdi} } + 2\sum\limits_{a,b = N' + 1}^N {\sum\limits_{c = N' + 1}^N {F^{ab,p_c } u_{abi} u_{ci} } } \notag \\ & + 2\sum\limits_{a,b =N' + 1}^N {F^{ab,u} u_{abi} u_i } +2\sum\limits_{a,b = N' + 1}^N {F^{ab,x_i } u_{abi} } +\sum\limits_{a,c = N' + 1}^N {F^{p_a ,p_c } u_{ai} u_{ci} } \\ &+ 2\sum\limits_{a = N' + 1}^N {F^{p_a ,u} u_{ai} u_i } +2\sum\limits_{a = N' + 1}^N {F^{p_a ,x_i } u_{ai} } + F^{u,u} u_i ^2 + 2F^{u,x_i } u_i + F^{x_i ,x_i }, \notag \end{align} where all the derivative functions of $F$ are evaluated at $(D^2u,Du,u,x)$. Since $F \in C^{2,1}$ and $ \left\| {W(x)} \right\|_{C^0 } = O(\phi ) $, by Taylor's formula and condition (4.3), we can get \begin{align} J_i = & O(\phi)+\sum\limits_{a,b,c,d = N' + 1}^N {F^{ab,cd} u_{abi} u_{cdi} } + 2\sum\limits_{a,b = N' + 1}^N {\sum\limits_{c = N' + 1}^N {F^{ab,p_c } u_{abi} u_{ci} } } \notag \\ & + 2\sum\limits_{a,b =N' + 1}^N {F^{ab,u} u_{abi} u_i } +2\sum\limits_{a,b = N' + 1}^N {F^{ab,x_i } u_{abi} } +\sum\limits_{a,c = N' + 1}^N {F^{p_a ,p_c } u_{ai} u_{ci} } \notag \\ &+ 2\sum\limits_{a = N' + 1}^N {F^{p_a ,u} u_{ai} u_i } +2\sum\limits_{a = N' + 1}^N {F^{p_a ,x_i } u_{ai} } + F^{u,u} u_i ^2 + 2F^{u,x_i } u_i + F^{x_i ,x_i } \notag \\ \geqslant& -C\phi, \end{align} where all the derivative functions of $F$ are evaluated at $(\left( {\begin{matrix} 0 & {(u_{k \alpha})} \\ {(u_{\alpha k})} & {(u_{\alpha \beta})} \\ \end{matrix} } \right),p,u,x) $. \textbf{Case (ii): } $1 \leqslant l \leqslant N'-1$ \\ Now we set $ X_{ab} = 0 $ for $a \in B $ or $b \in B $, \begin{equation} X_{N'N'} =u_{N'N'i} - \frac{1} {{F^{N'N'} }}[\sum\limits_{a,b = N' - l+1}^N {F^{ab} u_{abi} } + \sum\limits_{a = N' +1}^N {F^{p_a } u_{ai} } + F^u u_i + F^{x_i }], \end{equation} $ X_{ab} = u_{abi} $ otherwise, $Y=u_i$ and $Z_k=\delta_{ki}$.
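Let us sketch why this choice lies in $\Gamma _{X_F^* }^ \bot $; here we write $X_\alpha = u_{\alpha i}$, which is implicit in the choice above. By the definition (4.17) of $X_{N'N'}$, the term $F^{N'N'} X_{N'N'}$ cancels the remaining second-order sum $\sum\limits_{a,b = N' - l+1}^N {F^{ab} u_{abi} } $ exactly, so that \begin{align*} \left\langle {\widetilde X,X_F^* } \right\rangle &= \sum\limits_{a,b = 1}^N {F^{ab} X_{ab} } + \sum\limits_{\alpha = N' + 1}^N {F^{p_\alpha } X_\alpha } + F^u Y + \sum\limits_{k = 1}^{N'} {F^{x_k } Z_k } \\ &= \Big[ - \sum\limits_{a = N' + 1}^N {F^{p_a } u_{ai} } - F^u u_i - F^{x_i } \Big] + \sum\limits_{\alpha = N' + 1}^N {F^{p_\alpha } u_{\alpha i} } + F^u u_i + F^{x_i } = 0. \end{align*}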
We can verify that $(X_{ab}) \in \mathcal{S}_{N'-1}(I_{N'})$ and $ \widetilde X = ((X_{ab} ),(X_\alpha ),Y,(Z_i )) \in \Gamma _{X_F^* }^ \bot $. Again by condition (4.3), we infer that \begin{align} J_i \geqslant -C(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ). \end{align} Since $C>\sigma _l (G) + \frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}>0$, we thus obtain \begin{eqnarray} \sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab} } &\leqslant& C(\sum\limits_{i,j \in B}{\left| {\nabla u_{ij} } \right|} + \phi ) \notag \\ &&- \frac{1}{{\sigma _1 ^3 (B)}}\sum\limits_{i \in B} {\sum\limits_{a,b = 1}^N {F^{ab} } [\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } ]} [\sigma _1 (B)u_{iib} - u_{ii} \sum\limits_{j \in B}{u_{jjb} } ] \notag \\ &&- \frac{1}{{\sigma _1 (B)}}\sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {\sum\limits_{a,b = 1}^N {F^{ab} u_{ija} u_{ijb} } } \notag \\ &\leqslant& C(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ) - \frac{\delta_0}{{\sigma _1 ^3 (B)}}\sum\limits_{i \in B} {\sum\limits_{a = 1}^N \widetilde V_{i a}^2}- \frac{\delta_0}{{\sigma _1 (B)}}\sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {\sum\limits_{a = 1}^N u_{ija}^2 }, \end{eqnarray} where $\widetilde V_{i a}=\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } $. Referring to Lemma 3.3 in [2], we can control the term $\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|}$ by the remaining terms on the right-hand side of (4.19) together with $ \phi+\left| {\nabla \phi } \right| $, where \begin{equation} \phi_a=O(\phi)+ \sum\limits_{i \in B}[\sigma _l (G) + \frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}]u_{iia}.
\end{equation} So there exist positive constants $C_1$, $C_2$ independent of $\varepsilon$, such that \begin{equation} \sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab} } \leqslant C_1 (\phi + \left| {\nabla \phi } \right|)-C_2\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|}, \quad \forall x \in \mathcal {O}. \end{equation} Taking $\varepsilon \to 0$, (4.19) is proved for $u$. By the Strong Maximum Principle, $\phi (x) \equiv 0 $ in $\mathcal {O}$, and hence $W$ is of constant rank in $\mathcal {O}$. The proof of Theorem 4.2 is completed. \begin{remark} In the above proof we used the weaker condition (4.3). Alternatively, we can directly use condition (1.4), i.e. (3.13). We set $ X_{ab} = 0 $ for $a \in B $ or $b \in B $, $ X_{ab} = u_{abi} $ otherwise, $Y=u_i$ and $Z_k=\delta_{ki}$. Then we have $ \widetilde X = ((X_{ab} ),(X_\alpha ),Y,(Z_i )) \in \mathcal{S}^N \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'}$, and by (3.13), $J_i \geqslant 0$ for every $i \in B$. So (4.19) holds, and Theorem 4.2 follows as above. \end{remark} \begin{remark} In particular, for $N'=1$ we only need the following structure condition: \begin{equation} F(\left( {\begin{matrix} 0 & b \\ {b^T } & c \\ \end{matrix} } \right),p',p'',u,x',x'') \text{ is locally convex in } (c,p'',u,x'), \end{equation} and then $(u_{ij})_{N' \times N'}$ is of constant rank in $\Omega$. Indeed, when $N'=1$ the minimum rank $l$ has only two possible values: $l=1$ and $l=0$. If $l=1$ we are done; and if $l=0$, (4.16) and (4.19) hold by condition (4.22). Then the result follows as in the proof of Theorem 4.2. \end{remark} \section{the proof of theorem 1.5} In this section we give the proof of Theorem 1.5. It is similar to the proof of Theorem 1.2, with only minor modifications. Following the notations of Theorem 1.5, suppose $W(x,t_0)=(u_{ij}(x,t_0))_{N' \times N'}$ attains the minimal rank $l=l(t_0)$ at some point $z_0 \in \Omega$. We may assume $l\leqslant N'-1$, otherwise there is nothing to prove.
As in Section 4, there is a neighborhood $\mathcal {O}\times (t_0-\delta, t_0+\delta]$ of $(z_0, t_0)$ (in place of $\mathcal {O}$), such that $ u_{N'N'} \geqslant \cdots \geqslant u_{N' - l+1N' - l+1} \geqslant C > 0 $ for all $(x,t) \in \mathcal {O}\times (t_0-\delta, t_0+\delta]$, and we can denote $ B = \{ u_{11} , \cdots ,u_{N'- lN'- l} \} $ and $ G = \{ u_{N' - l+1N' - l+1} , \cdots ,u_{N'N'} \} $. If $t_0=T$, the neighborhood should be $\mathcal {O}\times (t_0-\delta, t_0]$. Setting $\phi$ as in (3.2) (with $W(x,t)$ in place of $W(x)$), we see from Proposition 3.1 that $$ \phi \in C^{1,1}(\mathcal {O}\times (t_0-\delta, t_0+\delta]) ,\quad \phi(x,t) \geqslant 0, \quad \phi(z_0,t_0) = 0. $$ Moreover, when we choose $\mathcal {O}$ and $\delta>0$ small enough, the corresponding (3.4), (4.5) and (4.8) hold. Then Theorem 1.5 is a consequence of the following theorem and the method of continuity. \begin{theorem} Suppose $\Omega$ is a domain in $\mathbb{R}^N = \mathbb{R}^{N'} \times \mathbb{R}^{N'' } $ and $ F(A,p,u,x,t) \in C^{2,1} (\mathcal{S}^N \times \mathbb{R}^N \times \mathbb{R} \times \Omega \times (0,T])$ satisfies (1.2) for each $t$ and (1.8). Let $u \in C^{3,1}$ be a partially convex solution of (1.9). For each $t_0 \in (0,T]$, if $ W(x,t_0)=(u_{ij}(x,t_0))_{N' \times N'}$ attains minimum rank $l$ at some point $z_0 \in \Omega$, then there is a neighborhood $\mathcal {O}\times (t_0-\delta, t_0+\delta] $ of $(z_0, t_0)$ as above and a positive constant $C$ independent of $\phi$ (defined in (3.2)), such that \begin{equation} \sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab}(x,t)-\phi_t(x,t) } \leqslant C(\phi (x,t) + \left| {\nabla \phi(x,t) } \right|), \quad \forall (x,t) \in \mathcal {O} \times (t_0-\delta, t_0+\delta]. \end{equation} Consequently, $W(x,t)$ has constant rank $l$ in $\mathcal {O} \times (t_0-\delta, t_0]$, where $l=l(t_0)$.
\end{theorem} \textbf{Proof of Theorem 5.1}. The proof is similar to the proof of Theorem 4.2, so we only outline its main steps. With $u_t = F(D^2u,Du, u, x, t)$, using the same notations as in the proof of Theorem 4.2, we set $u_\varepsilon(x,t)=u(x,t)+ \frac{\varepsilon } {2}\left| x' \right|^2 $ for $\varepsilon >0$ sufficiently small. Then $u_\varepsilon(x,t)$ satisfies the equation \begin{equation} (u_\varepsilon)_t =F(D^2u_\varepsilon,Du_\varepsilon,u_\varepsilon,x,t) - R_\varepsilon(x,t), \end{equation} where $R_\varepsilon(x,t)=F(D^2 u_\varepsilon,Du_\varepsilon,u_\varepsilon,x,t)-F(D^2 u,Du,u,x,t)$. As in the proof of Theorem 4.2, we omit the subindex $\varepsilon$. Differentiating (5.2) once with respect to $x_i$ for $i \in B$, we get $$ \sum\limits_{a,b = 1}^N {F^{ab} u_{abi} } + \sum\limits_{a = 1}^N {F^{p_a } u_{ai} } + F^u u_i + F^{x_i } = O(\phi )+u_{i,t} \text{ ,} $$ i.e. \begin{align} \sum\limits_{a,b = N' - l+1}^N {F^{ab} u_{abi} } + \sum\limits_{a = N' +1}^N {F^{p_a } u_{ai} } + F^u u_i + F^{x_i } = O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi )+u_{i,t} \text{ .} \end{align} Differentiating (5.2) twice with respect to $x_i$ for $i \in B$, we get \begin{align} &\sum\limits_{a,b = 1}^N {F^{ab} u_{abii} } + \sum\limits_{a,b = 1}^N {u_{abi} [\sum\limits_{c,d = 1}^N {F^{ab,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{ab,p_c } u_{ci} } + F^{ab,u} u_i + F^{ab,x_i } ]} \notag \\ & + \sum\limits_{a = 1}^N {F^{p_a } u_{aii} } + \sum\limits_{a = 1}^N {u_{ai} [\sum\limits_{c,d = 1}^N {F^{p_a ,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{p_a ,p_c } u_{ci} } + F^{p_a ,u} u_i + F^{p_a ,x_i } ]} \notag \\ & + F^u u_{ii} + u_i [\sum\limits_{c,d = 1}^N {F^{u,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{u,p_c } u_{ci} } + F^{u,u} u_i + F^{u,x_i } ] \\ &+ \sum\limits_{c,d = 1}^N {F^{x_i ,cd} u_{cdi} } + \sum\limits_{c = 1}^N {F^{x_i ,p_c } u_{ci} } + F^{x_i ,u} u_i + F^{x_i ,x_i } = O(\phi )+u_{ii,t} \text{ ,} \notag \end{align} i.e.
\begin{align} &\sum\limits_{a,b = 1}^N {F^{ab} u_{abii} } + \sum\limits_{a,b,c,d = N' - l+1}^N {F^{ab,cd} u_{abi} u_{cdi} } + 2\sum\limits_{a,b = N' -l+1}^N {\sum\limits_{c = N' + 1}^N {F^{ab,p_c } u_{abi} u_{ci} } } \notag \\ & + 2\sum\limits_{a,b = N' - l+1}^N {F^{ab,u} u_{abi} u_i } + 2\sum\limits_{a,b = N' - l+1}^N {F^{ab,x_i } u_{abi} } + \sum\limits_{a,c = N' + 1}^N {F^{p_a ,p_c } u_{ai} u_{ci} }\\ & + 2\sum\limits_{a = N' + 1}^N {F^{p_a ,u} u_{ai} u_i } + 2\sum\limits_{a = N' + 1}^N {F^{p_a ,x_i } u_{ai} } + F^{u,u} u_i ^2 + 2F^{u,x_i } u_i + F^{x_i ,x_i } \notag \\ = &O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi)+u_{ii,t} \text{ .}\notag \end{align} Note that $$ \phi _t = \sum\limits_{i,j = 1}^{N'} {\frac{{\partial \phi }} {{\partial u_{ij} }}u_{ij,t} = \sum\limits_{i = 1}^{N'} {\frac{{\partial \phi }} {{\partial u_{ii} }}u_{ii,t} } } \text{ ,} $$ so we can obtain from (3.5), (4.12) and (5.5), \begin{eqnarray} &&\sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab}(x,t)-\phi_t(x,t)} = -\sum\limits_{i \in B} {[\sigma _l (G) + \frac{{\sigma _1 ^2 (B\left| i \right.) - \sigma _2 (B\left| i \right.)}} {{\sigma _1 ^2 (B)}}]}J_i \notag \\ &&- \frac{1}{{\sigma _1 ^3 (B)}}\sum\limits_{i \in B} {\sum\limits_{a,b = 1}^N {F^{ab} } [\sigma _1 (B)u_{iia} - u_{ii} \sum\limits_{j \in B} {u_{jja} } ]} [\sigma _1 (B)u_{iib} - u_{ii} \sum\limits_{j \in B}{u_{jjb} } ]\\ &&- \frac{1}{{\sigma _1 (B)}}\sum\limits_{\scriptstyle i,j \in B \hfill \atop \scriptstyle i \ne j \hfill} {\sum\limits_{a,b = 1}^N {F^{ab} u_{ija} u_{ijb} }} \notag \\ &&+ O(\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|} + \phi ) .\notag \end{eqnarray} Now the right hand side of (5.6) is the same as the right hand side of (4.13). From Remark 4.3, we set $ X_{ab} = 0 $ for $a \in B $ or $b \in B $, $ X_{ab} = u_{abi} $ otherwise, $Y=u_i$ and $Z_k=\delta_{ki}$.
Then we have $ \widetilde X = ((X_{ab}),(X_\alpha ),Y,(Z_i )) \in \mathcal{S}^N \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'}$, and by (3.13), $J_i \geqslant 0$ for every $i \in B$. So (4.19) holds. A similar analysis as in the proof of Theorem 4.2 for the right hand side of equation (4.19) yields \begin{equation} \sum\limits_{a,b = 1}^N {F^{ab} \phi _{ab}(x,t)-\phi_t(x,t) } \leqslant C_1(\phi (x,t) + \left| {\nabla \phi(x,t) } \right|)-C_2\sum\limits_{i,j \in B} {\left| {\nabla u_{ij} } \right|}, \end{equation} where the positive constants $C_1$, $C_2$ are independent of $\varepsilon$, and $(x,t) \in \mathcal {O} \times (t_0-\delta, t_0+\delta]$. Then $W(x,t)$ has constant rank $l$ for each $(x,t) \in \mathcal {O} \times (t_0-\delta, t_0]$ by the Strong Maximum Principle for parabolic equations. This completes the proof of Theorem 5.1. \section{discussion of structure condition } In this section, we discuss conditions (4.3) and (1.4). For any given $ Q \in \mathbb{O}_{N'}$, we define $$ \widetilde{F_Q }(A,b,c,p'',u,x') = F(\left( {\begin{matrix} {Q\left( {\begin{matrix} 0 & 0 \\ 0 & {A^{ - 1} } \\ \end{matrix} } \right)Q^T } & {Q\left( {\begin{matrix} 0 & 0 \\ 0 & {A^{ - 1} } \\ \end{matrix} } \right)b} \\ {b^T \left( {\begin{matrix} 0 & 0 \\ 0 & {A^{ - 1} } \\ \end{matrix} } \right)Q^T } & {c + b^T \left( {\begin{matrix} 0 & 0 \\ 0 & {A^{ - 1} } \\ \end{matrix} } \right)b} \\ \end{matrix} } \right),p,u,x), $$ for $ (A,b,c,p'',u,x') \in \mathcal{S}_+^{N' - 1} \times \mathbb{R}^{N' \times N''} \times \mathcal{S}^{N''} \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} $ and fixed $ (p',x'') \in \mathbb{R}^{N'} \times \mathbb{R}^{N''} $. Condition (1.4) implies the following condition \begin{equation} \widetilde {F}_Q (A,b,c,p'',u,x') \text{ is locally convex in } (A,b,c,p'',u,x'), \end{equation} for any fixed $N' \times N'$ orthogonal matrix $Q$. \begin{proposition} Let $Q \in \mathbb{O}_{N'}$.
Condition (6.1) is equivalent to \begin{equation} Q^* (\widetilde X,\widetilde X) \geqslant 0, \end{equation} for any $ \widetilde X = ((X_{ab} ),(X_\alpha ),Y,(Z_i )) \in \mathcal{S}_{N'-1}(Q) \times \mathbb{R}^{N''} \times \mathbb{R} \times \mathbb{R}^{N'} $, where $Q^*$ is defined in (4.2). \end{proposition} \textbf{Proof}. By approximation, Proposition 6.1 follows from Lemma 3.2. \begin{remark} Condition (1.4) is equivalent to (3.13), and (1.4) implies (6.1) for any fixed $N' \times N'$ orthogonal matrix $Q$. Condition (6.1) is equivalent to (6.2), and Lemma 4.1 is a consequence of Proposition 6.1. Moreover, condition (6.1) is weaker than condition (1.4). \end{remark} There is a large class of functions satisfying (1.4). By a direct calculation using (3.13), we get \begin{proposition} If $g$ is a non-decreasing and convex function and $F_1$, $\cdots$, $F_m$ satisfy condition (1.4), then $F = g(F_1,\cdots, F_m)$ also satisfies condition (1.4). In particular, if $F_1$ and $F_2$ are in the class, so are $F_1 +F_2$ and $F_1^\alpha$ (where $F_1 > 0$) for any $\alpha \geqslant 1$. \end{proposition} \begin{remark} This paper was finished in April 2009, and B. Bian and P. Guan gave a better structural condition (an equivalent condition to (4.3)) in their paper "A Structural Condition for Microscopic Convexity Principle", which appeared in Discrete and Continuous Dynamical Systems, Volume 28, Number 2, 2010, pp. 789-807. \end{remark}
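For the reader's convenience, here is a sketch of the computation behind Proposition 6.3: it is the chain rule for the full Hessian of $F=g(F_1,\cdots,F_m)$ in all of its arguments, where we write $g_i=\partial g/\partial F_i$ and $g_{ij}=\partial^2 g/\partial F_i\partial F_j$. For any direction $\widetilde X$,

```latex
\begin{align*}
\langle D^2F\,\widetilde X,\widetilde X\rangle
  &= \sum_{i=1}^{m} g_i\,\langle D^2F_i\,\widetilde X,\widetilde X\rangle
   + \sum_{i,j=1}^{m} g_{ij}\,\bigl(DF_i(\widetilde X)\bigr)\bigl(DF_j(\widetilde X)\bigr)
  \;\geqslant\; 0,
\end{align*}
```

since $g_i\geqslant 0$ ($g$ is non-decreasing), each term of the first sum is non-negative when $F_i$ satisfies (3.13), and the second sum is non-negative because the Hessian $(g_{ij})$ of the convex function $g$ is positive semi-definite.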
A wedding is all about celebrating a couple’s future together. This is why it’s important to find fresh, new ways to decorate and celebrate the big day. As brides and grooms begin planning their 2019 ceremonies and receptions, here are some of the biggest wedding trends for 2019 we’ll be seeing this upcoming year.

Colors
While blush tones dominated the wedding scene in past years, varying shades of purples will be taking over in 2019. Purple is a great color choice for wedding details as it can be bold and dramatic or light and airy. Other popular color choices in 2019 will be silver sage, dusty rose, dusty orange, sunset, sun yellow, berry pink, dusty blue, champagne gold, and cinnamon. As far as metallic accents go, copper and rust will continue to be the go-to options.

Favors
More brides and grooms are hoping to make their weddings a memorable experience not just for themselves, but for their guests as well. Personalization is key in today’s weddings. The biggest trend for wedding favors is to have them be related to the couple’s hometowns or current towns. Look forward to seeing food-related favors at weddings in 2019, such as a popular dessert from the couple’s hometown.

Flowers
In past years, we saw flowers hanging in the air at weddings. This year, florals will be moving back to the reception tables as centerpieces. Greenery will be a popular choice for bouquets and centerpieces with a few bold flowers as accents. Another popular flower trend we’ll be seeing in 2019 is floral arches.

Rings
While traditional diamonds will always be a timeless, popular option for brides, 2019 brides will be incorporating color into their engagement and wedding rings. Some brides will be adding gemstones to their bands to add a little color to their ring finger, while other brides will be ditching the traditional white diamond center stone in favor of a large colored gemstone such as a ruby, sapphire, or emerald. Online jeweler Blue Nile offers a colorful collection of engagement rings directly on their website, allowing you to shop the latest trends without having to leave your home.

Tables
Greenery won’t be the only thing decorating reception tables this upcoming year. Velvet, black accents, and lucite are other elements that brides and grooms will be using to create gorgeous, modern tablescapes. On the other hand, traditional tablescapes will be making a comeback as well. Over-the-top tablescapes with monogrammed napkins and tall silver candle holders will be popular choices for the more traditional couples.
Future Students: Welcome to The Collins College of Hospitality Management! Contact the college at collins@csupomona.edu or (909) 869-2275 for questions or more information.
Our analysis of the UltiPro login page has found it safe to use. It is a strategic end-to-end HRMS payroll and HR talent management solution for businesses. We scanned UltiPro.com; the report below contains a full technical scan of the login page hosted at IP address 72.35.67.191. The reverse DNS matches the registrar's records for the UltiPro login. There are 482 other domains linking to it and 87 internal links from the page.

IP Address: 72.35.67.191
Hostname: whatmakesyouperfect
ASN: AS19271
ASN Owner: Peak 10
Continent: North America
Country Code: (US) United States
Latitude / Longitude: 26.1223 / -80.1434
City: Fort Lauderdale
Region: Florida
HTTP Status Code: 200
Connect Time: 0.225557 seconds
Download Speed: 49.95 KB/s
Header Size: 527.00 B
Download Size: 77.91 KB

UltiPro Login Analysis:
Running on: Microsoft-IIS/7.5
SOA record: Primary nameserver: ns1.intersourcing.com. Hostmaster E-mail address: admin.ultimatesoftware.com. Serial Number: 2443 Refresh: 10800 Retry: 3600 Expire: 1814400 Default TTL: 900.
MX record: ns1.intersourcing.com. admin.ultimatesoftware.com. 2443 10800 3600 1814400 900. [TTL: 3600]

Whois record for UltiPro.com is:
Registrant Name: The Ultimate Software Group, Inc.
Registrant Organization: The Ultimate Software Group, Inc.
Registrant Street: 2000 Ultimate Way
Registrant City: Weston
Registrant State/Province: FL
Registrant Postal Code: 33326
Registrant Country: US
Registrant Phone: +1.9999999999
Registrant Phone Ext:
Registrant Fax: +1.9999999999
Registrant Fax Ext:
Registrant Email: [email protected]
Registry Admin ID:
Admin Name: Rhoden, Christopher
Admin Organization: The Ultimate Software Group
Admin Street: 2000 Ultimate Way
Admin City: Weston
Admin State/Province: FL
Admin Postal Code: 33326
Admin Country: US
Admin Phone: +1.9543317000
Admin Phone Ext:
Admin Fax: +1.9999999999
Admin Fax Ext:
Admin Email: [email protected]
Registry Tech ID:
Tech Name: Rhoden, Christopher
Tech Organization: The Ultimate Software Group
Tech Street: 2000 Ultimate Way
Tech City: Weston
Tech State/Province: FL
Tech Postal Code: 33326
Tech Country: US
Tech Phone: +1.9543317000
Tech Phone Ext:
Tech Fax: +1.9999999999
Tech Fax Ext:
Tech Email: [email protected]
Name Server: NS1.INTERSOURCING.COM
Name Server: NS3.ULTIMATESOFTWARE.COM
Name Server: NS2.INTERSOURCING.COM

We scanned 690,043 files and detected no malware on the page; it took 5.86 seconds to load. The UltiPro login is safe to use based on our analysis and is scored at a 92% uptime availability. Sign in at:
Rafael Nadal and Maria Sharapova headlined most of the action at Barcelona and Stuttgart. One of them is holding a trophy, and the other is still searching for answers. What are their French Open prospects in the upcoming weeks? We also pay our respects to the Barcelona Open and its great Spanish dominance. Who would have thought this monopoly would be halted by someone born in the Far East? The WTA showcased high-octane tennis at Stuttgart, but Ana Ivanovic really let one slip away. Finally, who else is willing to step up on the ATP Tour? The top players have shown their mortality, but it still takes assailants who are ready to seize the moment. As always, we will hand out two awards as we look at the unusual, disappointing and triumphant happenings in tennis. These are the "winners and losers" of tennis.
@Thisfunktional talks with cast and crew of I AM GANGSTER Writer-director Morits Rechenberg, producer Ralf Weinfurtner, and actors Marlene Forte, Gilberto Ortiz and Rick Mancia talk with Jesus, @Thisfunktional of thisfunktional.com, about "I am Gangster." "I AM GANGSTER" tells the story of three young men in East Los Angeles trying to survive on a daily basis. David, an idealistic correctional officer able to keep his family safe, deals with excessive use-of-force incidents at the jail and is challenged to make a fateful decision between the lesser of two evils. At the same time, hardcore gang member Lito seeks retaliation for the shooting of a fellow gang member by the police, while gang matriarch Tia refuses to let him do so and rejects his request for more organizational responsibility. This pressures Lito to begin seeking more power on his own. Meanwhile, rebellious and easily influenced teenager Rio faces bullies at school and pressure at home, when he meets up with his old childhood friend who is an active gang member. Now, Rio could be easily swayed to enter the dangerous life that looks so inviting. "I AM GANGSTER" was birthed from a collaboration with members of the Teen Club of Hazard Park in East Los Angeles, when Rechenberg and these teenagers created a short film called TICKED back in 2008. And now, eight years later, he has come full circle with this powerful feature-length film that comes straight from the heart. Rechenberg was inspired by their stories, as well as by legendary Los Angeles filmmaker Charles Burnett and his classic KILLER OF SHEEP. "I AM GANGSTER" brings a non-judgmental/non-apologetic portrait of life on the streets of the City of Angels. For more info: Follow and interact with Jesus: Twitter – @Thisfunktional Instagram – @Thisfunktional Facebook – ThisfunktionalLA Tumblr – Thisfunktional AudioBoom – Thisfunktional YouTube – Thisfunktional
TITLE: Axiomatization of Exterior Algebras ? Rotman QUESTION [3 upvotes]: Rotman makes the following axiomatization. Definition: If $V$ is a free $k$-module of rank $n$, then a Grassmann algebra on $V$ is a $k$-algebra $G(V)$ with identity element, $e_0$, such that $G(V)$ contains $\langle e_0 \rangle \oplus V$ as a submodule, where $\langle e_0 \rangle \cong k$. $G(V)$ is generated, as a $k$-algebra, by $\langle e_0 \rangle \oplus V$. $v^2=0$ for all $v \in V$. $G(V)$ is a free $k$-module of rank $2^n$. We have Theorem 9.139, pg 747. The statement asserts that "the Grassmann algebra" is graded. But the proof requires the model of a Grassmann algebra constructed from part 1. In the rest of the chapter, he also only refers to the Grassmann algebra. So the question is: under the given axioms, are Grassmann algebras unique up to isomorphism? In particular, why can Rotman use "the" Grassmann algebra? Or is he only using this model? REPLY [1 votes]: Yes, the Grassmann algebra for $V_k$, where $V$ is an $n$ dimensional $k$ module, is unique. In brief, it can be shown that the construction of the Grassmann algebra has a universal property, so that anything else satisfying the same axioms is necessarily isomorphic. If you don't believe in the universal property of the Grassmann algebra yet, but are willing to believe in the universal property of the tensor algebra $T(V)$ of $V$ over $k$, then consider this: Suppose $V$ and $W$ are finite rank free $k$ modules. Then of course if they have different rank, $G(V)$ and $G(W)$ have to be nonisomorphic, because according to the axioms given they have different $k$ rank. If they have the same $k$ rank, then of course $V$ and $W$ are isomorphic, say by $\theta:V\to W$. By the universal property of tensor algebras, this lifts to an isomorphism $T(V)\cong T(W)$. Then their quotients by the ideals generated by elements of the form $x\otimes x$ are also isomorphic, but those quotients are just $G(V)$ and $G(W)$.
Also on page 750: An astute reader will have noticed that our construction of a Grassmann algebra $G(V)$ depends not only on the free $k$-module $V$ but also on a choice of basis of $V$. Had we chosen a second basis of $V$, would the second Grassmann algebra be isomorphic to the first one? He goes on to answer the question in Corollary 9.142. Let $V$ be a free $k$-module, and let $B$ and $B'$ be bases of $V$. If $G(V)$ is the Grassmann algebra defined using $B$ and if $G'(V)$ is the Grassmann algebra defined using $B'$, then $G(V) \cong G'(V)$ as graded $k$-algebras. So this, along with the fact that we shouldn't distinguish between free modules of the same rank over $k$, should convince you there is only one such algebra for a given $V_k$.
Last week the Democrats and their fake news media cohorts were exposed after transcripts revealed every Obama administration guest the mainstream media had on to spew their Russian Collusion Delusions, under oath, testified that they saw NO EVIDENCE of any collusion between the Trump campaign and the Kremlin. With another Bombshell expected to drop by THIS FRIDAY (May 15), the new round of documents will reportedly prove that Nancy Pelosi, Chuck Schumer, and Adam Schiff moved ahead with their attempted coup knowing from the start that THERE WAS NO COLLUSION. Louisiana Rep. Steve Scalise is demanding that the Durham investigation pick up the pace and start handing out indictments against the FBI agents and Obama officials who perpetrated this hoax on America. During an interview on Fox News, with regards to Durham’s investigation, Scalise said, “I want to see it sped up. I sure have been vocal about that.” He said it is because the election is coming up, and concerns have been raised about the appearances of such an inquiry reaching a climax right before the big event. Per the Washington Examiner, Pirro added, “Biden was very much a part of this. This is the Obama-Biden administration. … This was something that was discussed constantly.” If Pelosi wins re-election, it just shows what MENTAL Retards live in California.
Black Pearl Crystal Bracelet (bridesmaid gifts, Bridesmaid Jewelry, Evening Party Accessory): $20.00. Shop rated 5 out of 5 stars.
\begin{document} \title{On computational complexity of Cremer Julia sets.} \author{Artem Dudko\thanks{A. Dudko acknowledges the support by the National Science Centre, Poland, grant 2016/23/P/ST1/04088 under POLONEZ programme which has received funding from the EU\;\protect\includegraphics[width=.03\linewidth]{Flag_of_Europe.pdf} Horizon 2020 research and innovation programme under the MSCA grant agreement No. 665778} and Michael Yampolsky\thanks{M. Yampolsky was partially supported by NSERC Discovery Grant}} \date{} \abovedisplayskip 8pt \abovedisplayskip 8pt \belowdisplayskip 9pt \maketitle \begin{abstract} We find an abundance of Cremer Julia sets of arbitrarily high computational complexity. \end{abstract} \section{Introduction.} Most of us have seen pictures of quadratic Julia sets on a computer screen. A program to visualize such a set seems easy to write based on its definition. Let us start by noting that a linear change of coordinates transforms every quadratic polynomial into the form $$p_c(z)=z^2+c,$$ and no two functions $p_c(z)$ with different values of $c$ are linearly conjugate. Thus when we study the dynamics of quadratics, it is sufficient to restrict ourselves to maps of this form. If we iterate $p_c$ starting with a point $z_0\in\mathbb C$, we will obtain an infinite orbit $$z_0,\;z_1=p_c(z_0),\;z_2=p_c(z_1)=p_c(p_c(z_0)),\ldots,z_n=p_c^{\circ n}(z_0),\ldots$$ Clearly, if $z_0\gg 1$ is large enough in relation to $c$, then $z_1\approx z_0^2$, $z_n\approx z_0^{2^n}$, and so $z_n\to\infty$. Thus, the set $K_c$ consisting of initial points $z_0$ whose orbits {\it do not} converge to infinity is bounded. It is known as the {\it filled Julia set} of $p_c$, and the Julia set $J_c$ is defined as its boundary: $$J_c=\partial K_c.$$ Alternatively, we can define $J_c$ as the {\it repeller} of the dynamical system $p_c:\mathbb C\to\mathbb C$.
That is, $J_c$ is the limit set of the inverse images $p_c^{-n}(z_0)$ for all $z_0\in\mathbb C$ except at most one value (the unique exception happens when $c=0$ and $z_0=0$). This definition suggests what is perhaps the simplest approach to computing $J_c$: use the set $p_c^{-n}(z_0)$ for some $z_0\in\mathbb C$ and a very large $n$ to approximate $J_c$. Often, a program like this produces a satisfactory image. Its principal shortcoming, however, is evident: how can we tell what $n$ to choose to get an approximation of the picture of $J_c$ (more formally, for which $n$ will the distance between $J_c$ and $p_c^{-n}(z_0)$ be less than one pixel size for the desired screen resolution)? An alternative approach, based on the first definition, is also common: iterate points in the plane (centers of pixels with the given screen resolution) $n$ times, and see if the modulus of an iterate exceeds some fixed bound $M=M(c)$ such that, for instance, $|p_c(z)|>2|z|$ for $|z|>M$. Remove such points from the picture; what is left is an approximation of $K_c$, and $J_c$ is its boundary. A similar problem arises here: how do we know what $n$ to choose, as some points $z$ whose orbits escape to $\{|z|>M\}$ may take an arbitrarily long time to do so? The above questions are two instances of the celebrated Halting Problem, which is an example of an algorithmically unsolvable problem given by Turing in \cite{Tur}. In fact, M.~Braverman and the second author have shown that for some Julia sets obtaining a faithful computer image is {\it as hard as} the Halting Problem (see \cite{BY06,BY08}). So those are pictures that we can never hope to see. However, in a way, the {\it non-computable} examples of Julia sets are well understood. They belong to a class of Siegel quadratic polynomials (see \S~\ref{sec:cremer} for the definition), and at least for some of them (the locally connected ones produced in \cite{BY09}) we know what they would look like.
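The escape-test approach just described is easy to sketch in code; the cutoff radius $2$ suffices for $|c|\leqslant 2$, while the iteration bound is exactly the truncation whose choice is problematic. The function name and parameters below are illustrative only:

```python
def escapes(z0, c, max_iter=1000, radius=2.0):
    """Escape-time test for p_c(z) = z^2 + c.

    If |z| ever exceeds max(|c|, 2), the orbit provably tends to
    infinity, so for |c| <= 2 the cutoff radius 2 gives no false
    positives.  The bound max_iter is the unavoidable truncation:
    points that escape very slowly may be misclassified as bounded.
    """
    z = z0
    for _ in range(max_iter):
        if abs(z) > radius:
            return True
        z = z * z + c
    return False
```

Classifying the centers of all pixels this way yields the familiar pictures of $K_c$; the point made in the text is that no uniform choice of `max_iter` certifies the accuracy of the resulting picture.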
There is another class of Julia sets that has long baffled computational practitioners: Cremer quadratic Julia sets. We discuss the definition of such Julia sets in \S~\ref{sec:cremer}. We do not know what their picture would look like and, so far, no one has been able to produce an informative image of such $J_c$. Counter-intuitively, all Cremer quadratic Julia sets are computable, at least in theory. This was proven in \cite{BBY}, where an explicit algorithm for computing accurate images of Cremer quadratic Julia sets was given. The algorithm is, however, not practical. Its running time on a screen with a reasonable resolution would be enormous. However, its existence allows us to formulate the following questions: \begin{itemize} \item[(I)] Does there exist an algorithm to compute at least one Cremer Julia set with a practical running time (for instance, polynomial in $n$, where $2^{-n}$ is the size of the pixel on the computer screen)? \item[(II)] Does there exist at least one Cremer Julia set for which every algorithm will have an impractical running time (i.e. can we prove that there is at least one such $J_c$ with a high, for instance, non-polynomial, lower complexity bound)? \end{itemize} Note that the two questions are not mutually exclusive. In the present paper we show that the second one has an emphatically positive answer: \medskip \noindent {\sl For every lower bound $t(n)$ there exists an abundance of Cremer quadratic Julia sets whose computational complexity is not lower than $t(n)$}. \medskip \noindent The structure of the paper is as follows. In \S~\ref{sec:cremer} we briefly review the definitions of Cremer Julia sets. In \S~\ref{sec:comput} we formalize the concept of computational complexity of $J_c$, and in \S~\ref{sec:result} we formulate our main theorem. In \S~\ref{sec:lavaurs} we introduce the main tools of Complex Dynamics used in the proof. We present the proof in \S~\ref{sec:proof}.
\subsection{Cremer quadratic Julia sets} \label{sec:cremer} We refer the reader to the classical book of Milnor \cite{Mil} for a detailed introduction to the basic concepts of Complex Dynamics. We will assume familiarity with the standard definitions, and will only briefly recall a few facts about Cremer quadratics below. Let $f$ be a holomorphic map defined on an open domain $U$. Let $z_0\in U$ be a periodic point of period $p$ for $f$. Denote by $\lambda=Df^p(z_0)$ the multiplier of $z_0$. The Fatou--Shishikura bound implies that a quadratic polynomial $p_c$ can have at most one periodic point whose multiplier satisfies $\lambda\in\{|z|\leq 1\}$. We are principally interested in the case of irrationally indifferent periodic points, that is, $\lambda=e^{2\pi i\theta}$ with $\theta\in\mathbb R \setminus \mathbb Q$. The map $f^p$ is called linearizable on a neighborhood of $z_0$ if there exist a neighborhood $V$ of $z_0$ and a conformal map $\phi$ from $V$ to a neighborhood of $0$ such that $$\phi\circ f^p\circ\phi^{-1}(z)=\lambda z\;\;\text{for all}\;\;z\in\phi(V).$$ The point $z_0$ with $\lambda=\exp(2\pi i\theta)$, $\theta\in\mathbb R\setminus\mathbb Q$ is called a Cremer periodic point if $f^p$ is not linearizable near $z_0$; it is called a Siegel point otherwise. It is known that the property of being Cremer is directly related to the Diophantine properties of $\theta\notin\mathbb Q$. Namely, let $$\theta=\frac{1}{a_1+\frac{1}{a_2+\frac{1}{a_3+\ldots}}}, \;a_i\in \mathbb N,\;\text{ and let }\;\frac{p_n}{q_n}=\frac{1}{a_1+\frac{1}{a_2+\ldots\frac{1}{a_n}}}$$ be the $n$-th continued fraction convergent of $\theta$. Brjuno \cite{Brjuno-71} showed that if the following condition is satisfied $$\sum\limits_{n=1}^\infty \frac{\log q_{n+1}}{q_n}<\infty $$ (such $\theta$ is called a \emph{Brjuno number}) then the map $f^{p}(z)$ is linearizable near $z_0$. Let us now specialize to the case of quadratic polynomials.
A quadratic map has two fixed points, counted with multiplicity. It will be convenient for us to consider quadratic polynomials of the form \begin{equation}\label{quadf2} z\mapsto \lambda z+z^2, \end{equation} with a fixed point at the origin, whose multiplier is equal to $\lambda$. The more familiar formula $p_c(z)=z^2+c$ is transformed into (\ref{quadf2}) with $$c=\lambda/2-\lambda^2/4$$ by the linear change of coordinates $$w=z-\lambda/2.$$ Since we are specifically interested in the case $\lambda=e^{2\pi i\theta}\in S^1$, let us set $$f_\theta(z)=e^{2\pi i\theta}z+z^2\text{, for }\theta\in\mathbb R;$$ this is a one real parameter family of quadratic polynomials. Yoccoz \cite{Yoccoz-95} proved a famous converse of Brjuno's Theorem for this family, that is, if $\theta$ is not a Brjuno number, then $0$ is a Cremer point of $f_\theta$. In what follows, we will mostly restrict our attention to the family $f_\theta$. Where it does not lead to confusion, we will write $J_\theta$, $K_\theta$, etc., for the Julia set and the filled Julia set of $f_\theta$. Yoccoz showed that quadratic maps $f_\theta$ with a Cremer fixed point have the {\it Small Cycle property}, \ie there are periodic cycles contained in arbitrarily small neighborhoods of the Cremer fixed point, different from the Cremer point itself \cite{Yoccoz-95}. Conversely, if a fixed point of a holomorphic map has the Small Cycle property, it is necessarily a Cremer fixed point, since it plainly cannot be of any other type (attracting, repelling, Siegel or parabolic).
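The coordinate change above admits a quick numeric sanity check: with $h(z)=z-\lambda/2$ one has $h\circ p_c = f_\theta\circ h$ when $c=\lambda/2-\lambda^2/4$. The sketch below verifies this for a few sample points (purely illustrative):

```python
import cmath

theta = 0.3                     # any rotation number will do here
lam = cmath.exp(2j * cmath.pi * theta)
c = lam / 2 - lam ** 2 / 4

f = lambda z: lam * z + z * z   # the normal form (2.1): lambda*z + z^2
p = lambda z: z * z + c         # the familiar form p_c
h = lambda z: z - lam / 2       # the linear conjugacy w = z - lambda/2

# h conjugates p_c to f_theta:  h(p_c(z)) == f_theta(h(z))
for z in (0.1 + 0.2j, -0.3j, 1 + 1j, lam):
    assert abs(h(p(z)) - f(h(z))) < 1e-12
```

In particular $h$ sends the fixed point $\lambda/2$ of $p_c$ to the fixed point $0$ of $f_\theta$, matching the multipliers.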
A precise description of a Turing Machine is quite technical and we do not give it here, instead referring the reader to any text on Computability Theory (e.g. \cite{Pap} and \cite{Sip}). The computational power of a Turing Machine is provably equivalent to that of a computer program running on a RAM computer with an unlimited memory. \begin{Def}\label{comp fun def} A function $f:\mn\rightarrow \mn$ is called computable if there exists a TM which takes $x$ as an input and outputs $f(x)$. \end{Def} Note that Definition \ref{comp fun def} can be naturally extended to functions on arbitrary countable sets, using a convenient identification with $\mathbb{N}$. The following definition of a computable real number is due to Turing \cite{Tur}: \begin{Def} A real number $\alpha$ is called computable if there is a computable function $\phi:\mathbb{N}\rightarrow \mathbb{Q}$, such that for all $n$ $$\left|\alpha-\phi(n)\right|<2^{-n}.$$ \end{Def} The set of computable reals is denoted by $\mathbb{R}_\mathcal{C}$. Trivially, $\mq\subset\mrc$. Irrational numbers such as $e$ and $\pi$, which can be computed with arbitrary precision, also belong to $\mrc$. However, since there exist only countably many algorithms, the set $\mrc$ is countable, and hence a typical real number is not computable. The set of computable complex numbers is defined by $\mathbb{C}_\mathcal{C}=\mathbb{R}_\mathcal{C}+i\mathbb{R}_\mathcal{C}$. Note that $\mathbb{R}_\mathcal{C}$ (as well as $\mathbb{C}_\mathcal{C}$) considered with the usual arithmetic operations forms a field. To define computability of functions of a real or complex variable we need to introduce the concept of an oracle: \begin{Def} A function $\phi:\mn\to\mq+i\mq$ is an oracle for $c\in\mc$ if for every $n\in\mn$ we have $$|c-\phi(n)|<2^{-n} .$$ \end{Def} A TM equipped with an oracle (or simply an {\it oracle TM}) may query the oracle by reading the value of $\phi(n)$ for an arbitrary $n$. \begin{Def} Let $S\subset \mc$.
A function $f:S\to \mc$ is called computable if there exists an oracle TM $M^\phi$ with a single natural input $n$ such that if $\phi$ is an oracle for $z\in S$ then $M^\phi$ outputs $w\in \mq+i\mq$ such that $$|w-f(z)|<2^{-n} .$$ \end{Def} We say that a function $f$ is {\it poly-time computable} if in the above definition the algorithm $M^\phi$ can be made to run in time bounded by a polynomial in $n$, independently of the choice of a point $z\in S$ or an oracle representing this point. Note that when calculating the running time of $M^\phi$, querying $\phi$ with precision $2^{-m}$ counts as $m$ time units. In other words, it takes $m$ ticks of the clock to read the argument of $f$ with a precision of $m$ dyadic digits. Let $d(\cdot,\cdot)$ stand for the Euclidean distance between points or sets in $\mathbb{R}^2$. Recall the definition of the {\it Hausdorff distance} between two sets: $$d_H(S,T)=\inf\{r>0:S\subset U_r(T),\;T\subset U_r(S)\},$$ where $U_r(T)$ stands for the $r$-neighborhood of $T$: $$U_r(T)=\{z\in \mathbb{R}^2:d(z,T)\leqslant r\}.$$ We call a set $T$ a {\it $2^{-n}$ approximation} of a bounded set $S$ if $d_H(S,T)\leqslant 2^{-n}$. When we try to draw a $2^{-n}$ approximation $T$ of a set $S$ using a computer program, it is convenient to let $T$ be a finite collection of disks of radius $2^{-n-2}$ centered at points of the form $(i/2^{n+2},j/2^{n+2})$ for $i,j\in \mathbb{Z}$. We will call such a set {\it dyadic}. A dyadic set $T$ can be described using a function \begin{eqnarray}\label{comp fun}h_S(n,z)=\left\{\begin{array}{ll}1,&\text{if}\;\;d(z,S)\leqslant 2^{-n-2}, \\0,&\text{if}\;\;d(z,S)\geqslant 2\cdot 2^{-n-2},\\ 0\;\text{or}\;1&\text{otherwise}, \end{array}\right.\end{eqnarray} where $n\in \mathbb{N}$ and $z=(i/2^{n+2},j/2^{n+2}),\;i,j\in \mathbb{Z}.$\\ \\ Using this function, we define computability and computational complexity of a set in $\mathbb{R}^2$ in the following way. \begin{Def}\label{DefComputeSet} Let $t:\mathbb N\to \mathbb N$.
A bounded set $S\subset \mathbb{R}^2$ has time complexity bounded by $t(n)$ if there exist $n_0\in\mathbb N$ and a TM which computes values of a function $h(n,\bullet)$ of the form (\ref{comp fun}) in time $t(n)$, for all $n\geq n_0$. We say that $S$ is poly-time computable if there exists a polynomial $p(n)$ such that $S$ is computable in time $p(n)$. \end{Def} \subsection{The main result} \label{sec:result} \begin{Th}\label{ThMain} For any function $t:\mathbb N\to \mathbb N$ there exists a dense $G_\delta$ subset of Cremer parameters $S_t\subset S^1$ such that for any $\theta\in S_t$ the Julia set of $f_\theta$ has time complexity not lower than $t(n)$. \end{Th} \noindent Let us explain the meaning of the statement of Theorem \ref{ThMain} in more detail. Given a function $t(n)$ and a parameter $\theta\in S_t$ the map $f_\theta$ has a Cremer fixed point. The Theorem states that for any Turing machine $M=M(n)$ with an oracle for $\theta$ there exists a sequence of integers $n_i\to\infty$ such that $M(n_i)$ will not produce a correct $2^{-n_i}$-approximation of $J_\theta$ in time less than or equal to $t(n_i)$. \section{Lavaurs maps} \label{sec:lavaurs} Our exposition of Douady-Lavaurs theory \cite{Lav} of parabolic implosion follows the lecture notes \cite{Zin} (see, in particular, Theorem~2.3.2 there). Let $\theta=p/q$. Then $f_\theta$ has a parabolic fixed point at the origin with exactly $q$ attracting and $q$ repelling directions. Denote these directions by $\nu_a^i$ and $\nu_r^i$, $i=0,\ldots,q-1$, so that $\nu_{a/r}^{i+1\mod q}$ is obtained from $\nu_{a/r}^i$ by rotating by the angle $2\pi /q$ and $\nu_r^i$ is obtained from $\nu_a^i$ by rotating by the angle $\pi/q$. Fix corresponding $q$ attracting and $q$ repelling petals $P_a^i,P_r^i$.
Let $\Phi_a^i$ and $\Phi_r^i$ be the attracting and the repelling Fatou coordinates for $f_\theta^q$ defined on the unions of these petals, \ie $$\Phi_{a/r}^i(f_\theta^q(z))=\Phi_{a/r}^i(z)+1,\;\;z\in P_{a/r}^i.$$ The interior of the filled Julia set $K_\theta$ can be written as a disjoint union $$\mathrm{int}\, K_\theta= \sqcup_{i=0}^{q-1} U_i\;\; \text{for}\;\; U_i=\{z:f_\theta^{qr}(z)\in P_a^i\;\;\text{for some}\;\;r\geqslant 0\}.$$ Fix $i$. Extend $\Phi_a^i$ to $U_i$ by $$\Phi_a^i(z)=\Phi_a^i(f_\theta^{qn}(z))-n$$ whenever $f_\theta^{qn}(z)\in P_a^i$. The inverse of the repelling Fatou coordinate $\Phi_r^i$ extends to a holomorphic map $\Psi_r^i$ on $\mathbb C$. For $\sigma\in\mathbb C$ let $T_\sigma(z)=z+\sigma$ be the shift map. The {\it Lavaurs map} $L_\sigma$ is defined on $U_i$ by $$L_\sigma(z)=\Psi_r^i\circ T_\sigma\circ \Phi_a^i(z).$$ Notice that each Fatou coordinate is defined up to an additive constant. Changing this constant transforms $L_\sigma$ into $L_{\sigma+\tau}$ for some $\tau\in\mathbb C$. \begin{Th}\label{ThDouady} Let $\theta=p/q$, $p,q$ coprime. For an appropriate choice of Fatou coordinates for $f_\theta$ the following is true. Assume that $\epsilon_k\to 0$, $|\Arg(\epsilon_k)|<\pi/4$, and $N_k\in\mathbb Z$ are such that \begin{equation}\label{EqEpsN}-\frac{\pi}{\epsilon_k q^2}+N_k\rightarrow \sigma\in\mathbb C,\;\;k\to\infty.\end{equation} Then $f_{\theta+\epsilon_k}^{N_k}$ converges uniformly on compact subsets of the interior of $K_{\theta}$ to the map $L_\sigma$. \end{Th} Given a rational number $\theta$, fix Fatou coordinates of the corresponding parabolic map $f_\theta$ as in Theorem \ref{ThDouady}.
For a complex number $\sigma$ the {\it filled Lavaurs Julia set} is defined as follows: $$K_{\theta,\sigma}=\overline{\{z\in K_\theta:L_\sigma^n(z)\in K_\theta\;\;\forall\;\;n\in\mathbb N \}},$$ and the {\it Lavaurs Julia set} is its boundary: $$J_{\theta,\sigma}=\partial K_{\theta,\sigma}.$$ Using Lavaurs maps, Douady \cite{Douady-94} showed that the correspondence $$\theta\mapsto J_\theta$$ is discontinuous with respect to the Hausdorff metric on compact sets at parabolic parameters (\ie rational $\theta$). In particular, given a rational $\theta$, $\sigma\in\mathbb C$ and a sequence $\epsilon_n$ as in \eqref{EqEpsN} one has: \begin{equation}\label{EqRelationsJulias}J_\theta\subsetneq J_{\theta,\sigma}\subset\liminf J_{\theta+\epsilon_n}\subset\limsup K_{\theta+\epsilon_n}\subset K_{\theta,\sigma}\subsetneq K_\theta.\end{equation} For $\sigma\in\mathbb R$ and $\epsilon=\{\epsilon_n\}$ as above assume, in addition, that $\theta+\epsilon_n$ is Cremer for each $n$ and that the limit $\lim\limits_{n\to \infty} J_{\theta+\epsilon_n}$ exists. Denote this limit by $\widetilde J_{\sigma,\epsilon}$. Let $\mathcal J(\sigma)$ be the set of all possible limits $\widetilde J_{\sigma,\epsilon}$. \begin{figure}[ht] \centerline{\includegraphics[width=0.7\textwidth]{prop8.pdf}} \caption{\label{fig1}Illustration to the proof of Proposition~\ref{PropDistLim}} \end{figure} \begin{Prop}\label{PropDistLim} For every rational number $p/q$ and every $\sigma_0\in\mathbb R$ there exist at most countably many $\sigma\in\mathbb R$ such that $\mathcal J(\sigma)\cap\mathcal J(\sigma_0)\neq\varnothing$. \end{Prop} \begin{proof} See Figure~\ref{fig1} for an illustration. Let $\theta$ be a rational number and $\sigma\in\mathbb C$. Let $\epsilon=\{\epsilon_n\}$ be a sequence as in \eqref{EqEpsN}. Passing to a subsequence if necessary we may assume that $$\widetilde J_{\theta,\epsilon}:=\lim J_{\theta+\epsilon_n}$$ exists.
Notice that a priori $\widetilde J_{\theta,\epsilon}$ does not need to have an empty interior. Observe that \begin{equation}\label{EqInclPreim}\partial\widetilde J_{\theta,\epsilon}\supset L_\sigma^{-1}(J_\theta)\;\;\text{and}\;\;\widetilde J_{\theta,\epsilon}\subset L_\sigma^{-1}(K_\theta).\end{equation} Indeed, from \eqref{EqRelationsJulias} we have: $$\widetilde J_{\theta,\epsilon}\supset J_{\theta,\sigma}\supset L_\sigma^{-1}(J_\theta).$$ On the other hand, arbitrarily close to each point of $L_\sigma^{-1}(J_\theta)$ there is a disk $D$ such that $$f_{\theta+\epsilon_k}^{N_k}(D)\cap K_\theta=\varnothing$$ for sufficiently large $k$. Thus, $$L_\sigma^{-1}(J_\theta)\subset \overline{K_\theta\setminus \widetilde J_{\theta,\epsilon}},$$ which proves \eqref{EqInclPreim}. Now, let us restrict our attention to the case $\sigma\in\mathbb R$ and $\epsilon_n\in\mathbb R$. Fix $\sigma_0\in\mathbb R$. Let $\zeta_0\in L_{\sigma_0}(K_\theta)\cap J_\theta$ be such that $f_\theta^m(\zeta_0)=0$ for some $m\in\mathbb N$. Let $z_0\in \partial\widetilde J_{\theta,\epsilon}$ be such that $L_{\sigma_0}(z_0)=\zeta_0$. Without loss of generality we may assume that $$DL_{\sigma_0}(z_0)\neq 0\;\;\text{and}\;\;\frac{dL_{\sigma_0+\delta}(z_0)}{d\delta}\neq 0\;\;\text{at}\;\;\delta=0.$$ Assume for simplicity that $\theta=p/q$ ($p\in\mathbb Z$, $q\in\mathbb N$, $p,q$ coprime) with $q\geqslant 3$. Notice that $f_\theta^m$ maps a neighborhood of $\zeta_0$ conformally onto a neighborhood of $0$. For each repelling direction $\nu_r^i$ of $f_\theta$ let $\gamma_i$ be the external ray at $\zeta_0$ such that $f_\theta^m(\gamma_i)$ is tangent to $\nu_r^i$. For sufficiently small $\delta_0$ there exists an inverse branch $\phi_\delta$ of $L_{\sigma_0+\delta}$ defined on $U_{\delta_0}(\zeta_0)$ for $|\delta|<\delta_0$ such that $\phi_\delta$ depends analytically on $\delta$.
Then there exists $0<\delta_1<\delta_0$ such that for $\delta\neq 0,|\delta|<\delta_1$, one has $$\phi_\delta(\cup\gamma_i\cap U_\delta(\zeta_0))\cap \phi_0(J_\theta\cap U_\delta(\zeta_0))\neq\varnothing.$$ Using \eqref{EqInclPreim} we obtain that $\widetilde J_{\theta,\epsilon}\notin \mathcal J(\sigma_0+\delta)$. It follows that $$\mathcal J(\sigma_0)\cap \mathcal J(\sigma_0+\delta)=\varnothing.$$ Since this is true for any $\sigma_0$, the statement of the proposition follows. For $q=1$ and $q=2$ the proof is similar and we leave it to the reader as an exercise. \end{proof} \noindent An immediate consequence of Proposition \ref{PropDistLim} is the following: \begin{Co}\label{CoDistLim} For every rational number $\theta$ there exists a continuum $\Upsilon$ of sequences $\gamma=\{\gamma_i\}_{i\in\mathbb N}$ of Cremer parameters such that $\gamma_i\to \theta$, the limit $\lim J_{\gamma_i}$ exists for every $\gamma\in\Upsilon$, and the limits $\lim J_{\gamma_i}$ are pairwise distinct for distinct $\gamma\in\Upsilon$. \end{Co} \section{Constructing Cremer Julia sets of high complexity} \label{sec:proof} First, let us prove an auxiliary technical statement. \begin{Prop}\label{PropCremerSeq} For any oracle TM $M^\phi$, any $\theta_0\in\mathbb T$, any $\epsilon_0>0$ and any $n_0\in\mathbb N$ there exists a Cremer parameter $\theta_1\in U_{\epsilon_0}(\theta_0)$, a number $\epsilon_1>0$ and an integer $n_1>n_0$ such that the following conditions are satisfied: \begin{itemize} \item[$1)$] $\overline{U_{\epsilon_1}(\theta_1)}\subset U_{\epsilon_0}(\theta_0)$; \item[$2)$] for every $\theta\in U_{\epsilon_1}(\theta_1)$ the polynomial $f_\theta$ has a non-zero periodic point in $U_{2^{-{n_1}}}(0)$; \item[$3)$] for every $\theta\in U_{\epsilon_1}(\theta_1)$ the Turing Machine $M(n_1)$ does not produce a correct $2^{-{n_1}}$-approximation of $J_\theta$ in time $t(n_1)$.
\end{itemize} \end{Prop} \begin{proof}[Proof of Proposition \ref{PropCremerSeq}] Since rational numbers are dense in $\mathbb T$, without loss of generality we may assume that $\theta_0$ is rational. Corollary \ref{CoDistLim} implies that there exist two sequences of Cremer parameters $\{\gamma^1_j\},\{\gamma^2_j\}$ convergent to $\theta_0$ such that the limits of the Julia sets $$J^s:=\lim\limits_{j\to\infty} J_{\gamma^s_j},\;\; s=1,2,$$ exist with respect to the Hausdorff metric and are distinct. Choose $N$ such that $d_H(J^1,J^2)>2^{2-N}$. Set $n_{1}=\max\{N,n_0+1\}$. Choose $j$ sufficiently large so that \begin{itemize} \item{} $\gamma^1_j,\gamma^2_j\in U_{\epsilon_0}(\theta_0)$; \item{} $d_H(J_{\gamma^1_j},J_{\gamma^2_j})>2^{1-N}$; \item{} the first $t(n_{1})$ dyadic digits of $\gamma^1_j$ and $\gamma^2_j$ coincide. \end{itemize} There are two possibilities. \vskip 0.2cm \noindent $a)$ The Turing Machine $M$ given the input $n_{1}$ and an oracle for $\theta=\gamma^s_j$ for $s=1$ or $2$ either runs for longer than $t(n_{1})$ time units or does not produce a finite set of complex numbers. Then we set $\theta_{1}=\gamma^s_j$. \vskip 0.2cm \noindent $b)$ Otherwise, the Turing Machine $M$ with an input $n_{1}$ in time $t(n_{1})$ is not able to distinguish between $\gamma^1_j$ and $\gamma^2_j$, and therefore produces the same collection of complex points $S$ for these two values. Choose $s\in\{1,2\}$ such that $S$ is not a $2^{-n_{1}}$ approximation of $J_{\gamma_j^s}$. Set $\theta_{1}=\gamma^s_j$. Further, since the correspondence $$\theta\mapsto J_\theta$$ is continuous at Cremer parameters with respect to the Hausdorff metric \cite{Douady-94}, for sufficiently small $\epsilon_{1}$, for any $\theta\in U_{\epsilon_{1}}(\theta_{1})$ the Turing Machine $M(n_{1})$ does not produce a correct $2^{-n_{1}}$ approximation of $J_\theta$. By the Small Cycle property, $f_{\theta_{1}}$ has a periodic cycle in the punctured $2^{-n_{1}}$-neighborhood of the origin.
From the Implicit Function Theorem it follows that for sufficiently small $\epsilon_{1}$ the condition $2)$ holds. Finally, to satisfy $1)$ we make $\epsilon_{1}$ smaller if necessary. \end{proof} Now we are ready to prove Theorem \ref{ThMain}. Let $\mathfrak M$ be the set of all Turing Machines. Notice that $\mathfrak M$ is countable. Fix a sequence $\{M_i\}_{i\geqslant 1}$ of Turing Machines such that each $M\in\mathfrak M$ appears in this sequence infinitely many times. Using Proposition \ref{PropCremerSeq} we construct a countable collection $\Omega_1$ of triples $(\theta_1,\epsilon_1,n_1)$, where $\theta_1\in\mathbb T$ is rational, $\epsilon_1>0$ and $n_1\in\mathbb N$, such that \begin{itemize} \item[$1)$] the sets $U_{\epsilon_1}(\theta_1)$ are pairwise disjoint for $(\theta_1,\epsilon_1,n_1)\in\Omega_1$ and their union is dense in $\mathbb T$; \item[$2)$] given $(\theta_1,\epsilon_1,n_1)\in\Omega_1$ for every $\theta\in U_{\epsilon_1}(\theta_1)$ the polynomial $f_\theta$ has a non-zero periodic point in $U_{2^{-{n_1}}}(0)$ and the Turing Machine $M_1(n_1)$ does not produce a correct $2^{-{n_1}}$-approximation of $J_\theta$ in time $t(n_1)$. \end{itemize} Moreover, using Proposition \ref{PropCremerSeq}, by induction we construct a sequence $\{\Omega_i\}$, where $\Omega_i$ is a collection of triples $(\theta_i,\epsilon_i,n_i)$ satisfying the conditions for $\Omega_1$ with $M_1$ replaced by $M_i$ and, in addition, the following conditions: \begin{itemize} \item[$3)$] for $i\in\mathbb N$ if $(\theta_i,\epsilon_i,n_i)\in\Omega_i$ and $(\theta_{i+1},\epsilon_{i+1},n_{i+1})\in\Omega_{i+1}$ then either $U_{\epsilon_i}(\theta_i)\cap U_{\epsilon_{i+1}}(\theta_{i+1})=\varnothing$ or $U_{\epsilon_i}(\theta_i)\supset U_{\epsilon_{i+1}}(\theta_{i+1})$; \item[$4)$] if $U_{\epsilon_i}(\theta_i)\supset U_{\epsilon_{i+1}}(\theta_{i+1})$ then $n_{i+1}>n_i$.
\end{itemize} Let $A_i$ be the union of the sets $U_{\epsilon_i}(\theta_i)$ over triples $(\theta_i,\epsilon_i,n_i)\in\Omega_i$. Then $A=\cap A_i$ is a dense $G_\delta$ subset of $\mathbb T$. Let $\theta_\infty\in A$. Then for every $i\in\mathbb N$ there exists a unique $(\theta_i,\epsilon_i,n_i)\in\Omega_i$ such that $\theta_\infty\in U_{\epsilon_i}(\theta_i)$. By condition $2)$, the polynomial $f_{\theta_\infty}$ has the Small Cycle property; therefore, $\theta_\infty$ is a Cremer parameter. Moreover, for every Turing Machine $M$ there exist infinitely many positive integers $n$ such that $M$ does not produce a correct $2^{-n}$-approximation of $J_{\theta_\infty}$ in time $t(n)$. This finishes the proof of Theorem \ref{ThMain}.
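To make the approximation function \eqref{comp fun} from Section \ref{sec:comput} concrete, here is a toy Python sketch (ours, not from the text above) of $h_S$ for the closed unit disk, a set for which all the required distances are exactly computable; in the "gray zone" between the two thresholds either answer is acceptable, and we simply return $1$:

```python
import math

def h_disk(n, i, j, center=(0.0, 0.0), radius=1.0):
    """h_S(n, z) for S a closed disk, at the dyadic point
    z = (i/2^{n+2}, j/2^{n+2}): return 1 if d(z,S) <= 2^{-n-2},
    0 if d(z,S) >= 2 * 2^{-n-2}, and either value in between."""
    step = 2.0 ** -(n + 2)
    x, y = i * step, j * step
    dist_to_center = math.hypot(x - center[0], y - center[1])
    dist_to_S = max(0.0, dist_to_center - radius)  # distance to the closed disk
    if dist_to_S <= step:
        return 1
    if dist_to_S >= 2 * step:
        return 0
    return 1  # gray zone: either value is allowed by the definition
```

Collecting the grid points where the answer is $1$ and drawing the disks of radius $2^{-n-2}$ around them produces a dyadic $2^{-n}$ approximation of $S$ in the sense of Definition \ref{DefComputeSet}.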
TITLE: Inverse without computing the inverse QUESTION [0 upvotes]: Let's assume I have the identity matrix $I\in \Re^{n\times n}$ and two matrices $H,B\in \Re^{n\times n}$. I want to find an alternative way to write the following expression: $$(I+HB)^{-1}$$ Specifically, I don't want to have any term in which I need to invert something where $B$ appears. For example, I tried to use the Matrix Inversion Lemma, obtaining: $$(I+HB)^{-1}=I-IH(I+BIH)^{-1}BI=I-H(I+BH)^{-1}B$$ But as you can see, in the middle term $(I+BH)^{-1}$ there is an inversion where $B$ appears. Any suggestions? REPLY [1 votes]: You can write it using the geometric series, at least formally: $$(I+HB)^{-1}=\sum_{j=0}^{\infty}(-HB)^j,$$ but you'll definitely have some convergence issues to work out (the series converges when the spectral radius of $HB$ is less than $1$). Here, also, as a convention, $(-HB)^0:=I$.
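To make the convergence caveat in the answer concrete: when the spectral radius of $HB$ is below $1$ (e.g. when some induced norm of $HB$ is below $1$), truncating the geometric series gives a usable approximation. A small pure-Python sketch (illustrative only; the helper names and the $2\times 2$ test matrix are made up):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def neumann_inverse(M, terms):
    """Approximate (I + M)^{-1} by sum_{j=0}^{terms-1} (-M)^j.
    Converges when the spectral radius of M is < 1."""
    n = len(M)
    acc = identity(n)        # j = 0 term
    power = identity(n)      # running power (-M)^j
    neg_M = [[-x for x in row] for row in M]
    for _ in range(1, terms):
        power = mat_mul(power, neg_M)
        acc = mat_add(acc, power)
    return acc
```

Multiplying the truncation by $I+M$ (here $M$ stands in for $HB$) should return something close to the identity when $M$ has small norm; the truncation error is exactly $(-M)^{\text{terms}}$.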
TITLE: Convergence in statistics QUESTION [0 upvotes]: Let $X_1, \ldots , X_n$ be iid random variables with common pdf $f(x) = e^{-(x-\theta)}$ when $x>\theta$ and $f(x)=0$ elsewhere. Here $\theta$ is a fixed parameter. This pdf is called the shifted exponential. Let $Y_n = \min\{X_1, \ldots ,X_n\}$. Prove that $Y_n \rightarrow \theta$ in probability, by first obtaining the cdf of $Y_n$. My answer: The cdf is . . . $F_{Y_n}(x) = -e^{-x+\theta}$ when $x>\theta$ and $F_{Y_n}(x) = 0$ elsewhere $$P[|Y_n - \theta| < \epsilon] = P[\theta - \epsilon < Y_n < \theta + \epsilon] = F_{Y_n}(\theta + \epsilon) - F_{Y_n}(\theta - \epsilon) = -e^{-\epsilon}$$ But now I have to show that $-e^{-\epsilon}$ approaches 1 as $n$ approaches infinity, right? But I'm confused, because there is no $n$, so it just stays the same no matter what $n$ is... Thanks in advance REPLY [1 votes]: When you write the cdf of $Y_n$, that is your mistake. You are interested in the minimum of a sequence of random variables. Think about what it means for $P(Y_n\leq x)$, that means that there is at least one $X_i$ for $1\leq i\leq n$ with $X_i\leq x$. Alternatively, $P(Y_n\geq x)$ means $X_i\geq x$ for each $i=1,2,\ldots,n$. The $X_i$ are independent so...
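To see the reply's point empirically (a simulation sketch we add for illustration; the seed and sample sizes are arbitrary): the minimum of $n$ shifted exponentials has cdf $1-e^{-n(x-\theta)}$ for $x>\theta$, so $Y_n$ piles up at $\theta$ as $n$ grows.

```python
import random

def sample_min_shifted_exp(theta, n, rng):
    """Y_n = min of n iid draws from the shifted exponential
    f(x) = e^{-(x - theta)}, x > theta."""
    return min(theta + rng.expovariate(1.0) for _ in range(n))

rng = random.Random(0)
theta = 2.0
for n in (10, 100, 10000):
    y = sample_min_shifted_exp(theta, n, rng)
    # Y_n - theta is Exp(n)-distributed, so the gap shrinks like 1/n
    print(n, y - theta)
```

This is exactly what convergence in probability predicts: $P[|Y_n-\theta|>\epsilon]=e^{-n\epsilon}\to 0$ for every fixed $\epsilon>0$.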
TITLE: Do photons make the universe expand? QUESTION [8 upvotes]: I have a problem understanding the ideas behind a basic assumption of cosmology. The Friedmann equations follow from Newtonian mechanics and conservation of Energy-momentum $(E_{kin}+E_{pot}=E_{tot})$ or equally from Einstein's field equations with a Friedmann-Lemaitre-Robertson-Walker metric. In a radiation-dominated, flat universe, standard cosmology uses the result from electrodynamics for radiation pressure $P_{rad}=\frac{1}{3}\rho_{rad}$, where $\rho_{rad}$ is the energy density of radiation in the universe and $P_{rad}$ is the associated pressure. It then puts the second Friedmann equation into the first to derive the standard result $\rho_{rad} \propto a^{-4}$, where $a$ is the scale factor on the metric. Putting this result into the first Friedmann equation yields $$a\propto\sqrt{t}$$ where $t$ is the proper time. Therefore we used the standard radiation pressure of classical electrodynamics to derive an expression for the expansion of the universe. My problem is in understanding why this is justified. Is the picture that photons crash into the walls of the universe to drive the expansion really valid? Certainly not (what would the walls of the universe be, and what are they made of ;)?), but at least this is how the radiation equation of state is derived. Is there any further justification for this? Be aware that I'm talking here about a radiation-dominated universe, so matter and dark energy can be neglected. Therefore, can we not derive a certain rate of expansion for the universe without anything mysterious like Dark Energy? REPLY [1 votes]: It is not necessary to assume that the universe has walls in order for the matter content of the universe to have nonzero pressure. The standard assumption is that the matter/radiation content of the universe is infinite, but at a finite volume density.
Also note that there could be some sort of "end of matter" at some radius beyond the cosmological horizon that would be, in principle, completely undetectable, which would enable you to get rid of the "infinite matter" assumption. Irrespective of that, it's the initial state of the universe that produces the pressure, not some equilibrium state with an external wall. Also, note that most of the point of the thermodynamic machinery involving walls or reservoirs is to remove the dependence of the final result on these walls or reservoirs.
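The scaling quoted in the question can be checked numerically. With $\rho_{rad}\propto a^{-4}$, the first Friedmann equation $(\dot a/a)^2\propto\rho$ reduces to $\dot a = k/a$ for some constant $k$, whose solution through $a(0)=0$ is $a(t)=\sqrt{2kt}$, i.e. $a\propto\sqrt t$. A sketch (our illustration; the constant $k$ and step counts are arbitrary, and we start slightly after $t=0$ to avoid the singular initial condition):

```python
def integrate_scale_factor(k, t_end, steps):
    """Integrate da/dt = k / a (radiation domination) with a classical RK4
    stepper, starting from the exact value of a(t) = sqrt(2 k t) at t0 > 0."""
    t0 = 0.01
    a = (2 * k * t0) ** 0.5      # exact solution at the starting time
    h = (t_end - t0) / steps
    f = lambda a: k / a          # right-hand side of the ODE
    for _ in range(steps):
        k1 = f(a)
        k2 = f(a + h * k1 / 2)
        k3 = f(a + h * k2 / 2)
        k4 = f(a + h * k3)
        a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return a
```

Comparing the numerical value at $t=1$ against $\sqrt{2k}$ confirms the $a\propto\sqrt t$ law that the question derives analytically.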
Taking Away Her Confidence .. As a young woman I am full of experiences from this main category. My mom, she was a hard-working woman that took care of my little brother and I. But this one guy she started dating, I guess you can say he put on an act in front of our family. For my little brother and I knew him behind closed doors. He told my mom he didn't want her working any more, because he wanted her home when he got home from work. Well, as time passed my mom started drinking to ease her nerves, I guess that is how I took it. And her boyfriend lost his job and took his anger out on my mom. Every night they would sit in the living room and drink. Then he would beat on her. I would witness it every night. I would try to save my mom from it, only to get a bruised cheek. I begged my mother to leave him, but she just didn't have the confidence to be an accomplished single mom like she used to be; she grew dependent on her boyfriend. It hurt me to see my mother go through that. When she did want to leave she paid the price. Now she is suffering from arthritis. She can't walk on her own, and uses crutches. She was pushed down two flights of stairs the night she decided to leave him. I love my mother with all my heart. Nobody understands her like I do. I am scared that someday my spouse will end up like that man. I hate hearing stories of men killing their spouse over idiotic reasons. I hope that one day there will be a non-violent place. I wouldn't want my kids to see that, however the world we live in now is full of violence. To me my mom's ex-boyfriend took away my mother; I can't do most of the things other people do with their mothers. He robbed that from me. What a coward, to hit a woman. I'm sorry that happened to you and your brother/mother. I am sure she is a fine lady, but nothing was your fault; you were little, and there was not much you could have done at the time if your mother wasn't willing to leave him before.
I know you may never forget it, but you should try to make a little peace with it, because it's over now. You just have to remember that and not let part of your past hold you back, because it's nothing but just that, a past. Anyways, I wish you the best. I did try therapy, however I could never seem to overcome this. She did leave him a while back; actually, he was put in jail. She is remarried now and her new husband treats her better than her last. You have to go to therapy to work out why this is happening to you. Your mom needs therapy too. But first she needs to leave that guy before he kills her.
\begin{document} \begin{abstract} The main aim of this short paper is to advertise the Koosis theorem in the mathematical community, especially among those who study orthogonal polynomials. We (try to) do this by proving a new theorem about asymptotics of orthogonal polynomials for which the Koosis theorem seems to be the most natural tool. Namely, we consider the case when a Szeg\"o measure on the unit circumference is perturbed by an arbitrary measure inside the unit disk and an arbitrary Blaschke sequence of point masses outside the unit disk. \end{abstract} \maketitle \section{Introduction and statement of results} \label{Intr} Consider a measure $\mu$ on the complex plane $\C$ of the form $\mu=\nu+w\,dm+\sum_k \mu_k\delta_{z_k}$ where $\nu$ is an arbitrary finite measure in the open unit disk $\D=\{z\in\C\,:\,|z|<1\}$, $m$ is the Haar measure on the unit circumference $\T$, $w\in L^1(m)$ is a strictly positive function satisfying the Szeg\"o condition $\int_{\T}\log w\,dm>-\infty$, $\mu_k>0$ satisfy $\sum_k\mu_k<+\infty$, and, at last, the points $z_k$ are taken in the exterior of the unit disk, i.e., $|z_k|>1$ for each $k$, and satisfy the Blaschke condition $\sum_k(|z_k|-1)<+\infty$. Let $$ p_n(z)=\tau_n z^n+\dots $$ be the $n$-th orthogonal polynomial with respect to the measure $\mu$, normalized by the conditions $\|p_n\|\ci{L^2(\mu)}=1$, $\tau_n>0$. \begin{theorem} \label{tau} $$\lim_{n\to\infty}\tau_n=\exp\left\{-\frac12\int_{\T}\log w\,dm\right\}\prod_k\frac{1}{|z_k|}\,.$$ \end{theorem} Let us introduce two auxiliary functions: the outer function $\psi$ in the exterior of the unit disk such that $|\psi|^2=\dfrac 1w$ on $\T$ and $\psi(\infty)>0$, and the Blaschke product $\ds B(z)=\prod_k \frac{\bar z_k}{|z_k|}\frac{z-z_k}{z\bar z_k-1}$. \begin{corollary} \label{asymp} For every $z\in\C$ with $|z|>1$, we have $$ \lim_{n\to \infty} \frac{p_n(z)}{z^n} = (B\psi)(z)\,. $$ \end{corollary} A few words about the history of the problem.
For finitely many point masses lying on the real line the theorem was proved by Nikishin \cite{Niki}. Nikishin's result has been generalized in various ways by Benzin and Kaliagin \cite{BK} and by Li and Pan in \cite{LP}. Peherstorfer and Yuditskii \cite{PY} seem to be the first to consider the case of infinitely many point masses. They proved the theorem for the case when all masses lie on the real line. An interesting attempt to deal with the general case was made by Peherstorfer, Volberg, and Yuditskii in \cite{PVY}. There the analog of Theorem \ref{tau} was proved for the asymptotic of orthogonal system of {\it rational functions} (in the spirit of CMV matrices). Whether this approach can give the asymptotic of orthogonal {\it polynomials} is not clear to us at this moment. \section{Proof of Theorem} \label{taun} Let us show first that $\limsup_{n\to\infty}\tau_n$ does not exceed the right hand side. To this end, let us observe that the right hand side can be rewritten as $\psi(\infty)B(\infty)$ where, as before, $\psi$ is the outer function in the exterior of the unit disk such that $|\psi|^2=\dfrac 1w$ on $\T$ and $\psi(\infty)>0$, and $\ds B(z)=\prod_k \frac{\bar z_k}{|z_k|}\frac{z-z_k}{z\bar z_k-1}$ is the Blaschke product with zeroes $z_k$. Now fix $\ell>0$ and put $\ds B_\ell(z)=\prod_{k\,:\,k\le\ell} \frac{\bar z_k}{|z_k|}\frac{z-z_k}{z\bar z_k-1}$. Consider the integral $\ds\int_{\T}\frac{p_n}{z^n\psi B_\ell}\,dm$. On one hand, its absolute value does not exceed $$ \int_{\T}\frac{|p_n|}{|\psi|}\,dm\le \Bigl(\int_{\T}\frac{|p_n|^2}{|\psi|^2}\,dm\Bigr)^{\frac12} =\|p_n\|\ci{L^2(w\,dm)}\le \|p_n\|\ci{L^2(\mu)}=1\,. $$ On the other hand, this integral can be easily computed using the residue theorem. It equals $$ \frac{\tau_n}{\psi(\infty)B_\ell(\infty)}-\sum_{k\,:\,k\le \ell}\frac{p_n(z_k)}{z_k^{n+1}\psi(z_k)B'_\ell(z_k)}\,.
$$ Note now that $|p_n(z_k)|\le \mu_k^{-\frac12}\|p_n\|\ci{L^2(\mu)}=\mu_k^{-\frac12}$ for any $n$ and $z_k^{n+1}\to\infty$ as $n\to\infty$. Therefore, $\ds \sum_{k\,:\,k\le \ell}\frac{p_n(z_k)}{z_k^{n+1}\psi(z_k)B'_\ell(z_k)}\to 0$ as $n\to\infty$ and we conclude that $\limsup_{n\to\infty}\tau_n\le \psi(\infty)B_\ell(\infty)$. Since this is true for any $\ell$, we can pass to the limit as $\ell\to\infty$ and get $\limsup_{n\to\infty}\tau_n\le \psi(\infty)B(\infty)$. Now let us prove that $\liminf_{n\to\infty}\tau_n\ge \psi(\infty)B(\infty)$. To this end, observe that $$ \tau_n=\sup\{\tau\,:\,\text{there exists }p(z)=\tau z^n+\dots\text{ with }\|p\|\ci{L^2(\mu)}\le 1\}\,. $$ This means that it would suffice to construct a sequence of polynomials $q_n$ such that the leading coefficients of $q_n$ are arbitrarily close to $\psi(\infty)B(\infty)$ and $\limsup_{n\to\infty}\|q_n\|\ci{L^2(\mu)}\le 1$. The construction is extremely easy and well known when $\psi, B\in C^\infty(\T)$, which corresponds to the case when $w\in C^\infty(\T)$ and $B$ is a finite Blaschke product. In this case all one needs to do is to expand the analytic (in the exterior of the unit disk) function $F(z)=\psi(z)B(z)$ into its Taylor series at infinity: $F(z)=\tau_0+\tau_1 z^{-1}+\tau_2 z^{-2}+\dots$ and put $q_n(z)=z^n S_n(z)$ where $S_n(z)=\sum_{j=0}^n \tau_j z^{-j}$ is the $n$-th partial sum of this series. Clearly, the leading coefficient of $q_n$ is exactly $\tau_0=F(\infty)=\psi(\infty)B(\infty)$ for all $n$. On the other hand, since $F\in C^{\infty}(\T)$, the partial sums $S_n$ converge to $F$ uniformly on $\T$, which allows us to estimate the norms $\|q_n\|\ci{L^2(\mu)}$ as follows. First of all, $$ \int_{\T} |q_n|^2w\,dm=\int_{\T} |S_n|^2w\,dm\to \int_{\T} |F|^2w\,dm=1\,.
$$ Secondly, for each $k$, $$ |q_n(z_k)|=|z_k^nS_n(z_k)|=|z_k^n(S_n(z_k)-F(z_k))|\le \max_\T |S_n-F|\to 0\text{ as }n\to\infty $$ (the last inequality is just the maximum principle for the bounded and analytic in the exterior of the unit disk function $z^n(S_n(z)-F(z))=-\tau_{n+1}z^{-1}-\tau_{n+2}z^{-2}-\dots$). Thus, $\sum_k\mu_k|q_n(z_k)|^2\to 0$ as $n\to\infty$ (let us remind the reader that the sum is assumed to be finite here). Thirdly and finally, for every $z\in\D$, we have $|q_n(z)|\le \max_\T|q_n|=\max_\T |S_n|\to\max_\T|F|$ as $n\to\infty$, so the functions $q_n$ are uniformly bounded in $\D$. On the other hand, it is fairly easy to see that, for any $\ell^2$ sequence $\{\tau_j\}_{j\ge 0}$ and any $z\in\D$, the sequence $\sum_{j=0}^n\tau_j z^{n-j}$ ($n=1,2,\dots$) tends to $0$ as $n\to\infty$ and, moreover, this convergence is uniform on any compact subset of $\D$. Indeed, we have \begin{multline*} \Bigl|\sum_{j=0}^n\tau_j z^{n-j}\Bigr|\le \sum_{0\le j\le \frac n2}|\tau_j|\cdot|z|^{n-j}+ \sum_{\frac n2< j\le n}|\tau_j|\cdot|z|^{n-j} \\ \le \Bigl(\sum_{j\ge 0}|\tau_j|^2\Bigr)^{\frac12}\cdot\Bigl(\sum_{0\le j\le \frac n2}|z|^{2n-2j}\Bigr)^{\frac 12} +\Bigl(\sum_{j> \frac n2}|\tau_j|^2\Bigr)^{\frac12}\cdot\Bigl(\sum_{0\le j\le n}|z|^{2n-2j}\Bigr)^{\frac 12} \\ \le \Bigl(\sum_{j\ge 0}|\tau_j|^2\Bigr)^{\frac12}\cdot\left(\frac{|z|^n}{1-|z|^2}\right)^{\frac 12}+ \Bigl(\sum_{j> \frac n2}|\tau_j|^2\Bigr)^{\frac12}\cdot\left(\frac{1}{1-|z|^2}\right)^{\frac 12} \end{multline*} It remains to note that $|z|^n\to 0$ and $\sum_{j>\frac n2}|\tau_j|^2\to 0$ as $n\to\infty$. Thus, $q_n(z)\to 0$ uniformly on compact subsets of $\D$ as $n\to\infty$ and, by the dominated convergence theorem, $\int_\D |q_n|^2\,d\nu\to 0$ as $n\to\infty$. Combining these $3$ estimates, we conclude that as $n\to\infty$ $$ \|q_n\|^2\ci{L^2(\mu)}=\int_{\T} |q_n|^2\,dm+\int_\D |q_n|^2\,d\nu+\sum_k\mu_k|q_n(z_k)|^2\to 1+0+0=1\,. $$ Now we would like to do something similar in the general case. 
The main difficulty is that the Taylor series of the function $F$ in general does not converge to $F$ uniformly on $\T$. Fortunately, we do not really need the uniform convergence here. Let us find out what kind of convergence it would be appropriate to ask for. Firstly, in order to have $\int_{\T}|q_n|^2w\,dm\to \int_{\T}|F|^2w\,dm$, it suffices to ensure that $S_n\to F$ in $L^2(w\,dm)$. Secondly, let us estimate the (now possibly infinite) sum $\sum_k \mu_k|q_n(z_k)|^2$. Assuming for a moment that $F\in H^2$, we can try to estimate $|q_n(z_k)|^2$ by $\int_{\T} P_{z_k}|S_n-F|^2\,dm$ where $P_{z_k}$ is the Poisson kernel corresponding to the point $z_k$ (instead of the maximum principle, we use the subharmonicity of $|z^n(S_n(z)-F(z))|^2$ in the exterior of the unit disk). Therefore, to conclude that this sum goes to $0$, it suffices to ensure that $S_n\to F$ in $L^2(w_1\,dm)$ where $w_1=\sum_k\mu_k P_{z_k}\in L^1(m)$. At last, to estimate $\int_\D |q_n|^2\,d\nu$, let us observe that the assumption $F\in H^2$ is sufficient to ensure that $q_n(z)\to 0$ uniformly on compact subsets of $\D$ as $n\to\infty$ (the proof is exactly the same as before). The dominated convergence theorem is somewhat difficult to employ now because it would require an $L^2$ estimate for the $\sup_n|S_n|$ on $\T$, i.e., a Carleson type theorem, but, fortunately, other boundedness conditions are available to ensure that uniform convergence to $0$ on compact subsets of $\D$ implies convergence to $0$ in $L^2(\nu)$. The simplest such condition is uniform boundedness of the integrals $\int_\D|q_n|^2\,d\widetilde{\nu}$ where $\widetilde{\nu}$ is any finite measure of the kind $d\widetilde{\nu}=\varphi(|z|)\,d\nu$ with some positive function $\varphi(r)$ increasing to $+\infty$ as $r\to 1-$. 
Indeed, then, for any $r\in(0,1)$, we can write $$ \int_\D|q_n|^2\,d\nu=\int_{r\D}|q_n|^2\,d\nu+\int_{\D\setminus r\D}|q_n|^2\,d\nu \le \max_{r\D}|q_n|^2\nu(\D)+\frac{1}{\varphi(r)}\int_\D|q_n|^2\,d\widetilde{\nu} $$ and observe that the first term tends to $0$ for any fixed $r\in(0,1)$ as $n\to\infty$ while the second one can be made arbitrarily small by choosing $r$ sufficiently close to $1$. Using the subharmonicity of $|q_n|^2$ in $\D$, we see that to get $\int_\D|q_n|^2\,d\nu\to 0$, it would suffice to have a uniform bound for $\int_{\T} |S_n|^2w_2\,dm$ where $w_2(z)=\int_\D P_\zeta(z)\,d\widetilde{\nu}(\zeta)$ is the ``harmonic sweeping" of the measure $\widetilde{\nu}$ to the unit circle $\T$. Note that, again, we have $w_2\in L^1(m)$. The moral of the story is that it would suffice to ensure convergence of $S_n$ to $F$ in $L^2(W\,dm)$ where $W=1+w+w_1+w_2$ is a certain $L^1$ function on $\T$ (we added $1$ just to ensure that $L^2(W\,dm)\subset L^2(m)$). Of course, we cannot hope for that kind of convergence if the function $F$ itself is not in the space and, if we define it exactly as before by $F=\psi B$, it will most likely fail to be there. So let us see whether we can modify the definition of $F$. Apparently, we cannot do anything with the second factor: we need $F$ to vanish at all points $z_k$ in order to carry out our trick in the estimate of $\sum_k\mu_k|q_n(z_k)|^2$. On the other hand, after some thought, one can realize that we do not need the first factor to be exactly $\psi$: any outer function $\widetilde{\psi}$ with $|\widetilde{\psi}|\le|\psi|$ and $\widetilde{\psi}(\infty)\approx\psi(\infty)$ will do just as well. This freedom allows us to make $F$ belong to any given weighted space $L^2(V\,dm)$ with $V\ge 1$ (again, this condition is imposed just to get $F\in H^2$ for sure) satisfying $\int_{\T}\log V\,dm<+\infty$. 
Indeed, just define $\widetilde{\psi}$ by $|\widetilde{\psi}|^2=\min\left\{\frac 1w, \frac AV\right\}$ on $\T$, $\widetilde{\psi}(\infty)>0$. By choosing $A$ sufficiently large, we can ensure that $\widetilde{\psi}(\infty)$ is as close to $\psi(\infty)$ as we wish. On the other hand, we shall always have $\|F\|\ci{L^2(V\,dm)}= \|\widetilde{\psi}\|\ci{L^2(V\,dm)}\le \sqrt A<+\infty$. Since $W\in L^1(m)$ implies $\int_{\T}\log W\,dm<+\infty$, we, indeed, can make $F\in L^2(W\,dm)$ by choosing $V$ equal to $W$ or any larger weight with integrable logarithm. Unfortunately, this is not enough. We need more, namely, that $S_n\to F$ in $L^2(W\,dm)$. In particular, it implies that we must have a uniform bound for the norms $\|S_n\|\ci{L^2(W\,dm)}$. Since $S_n=z^{-n}\cP_+(z^n F)$, where $\cP_+$ is the orthogonal projection from $L^2(m)$ to $H^2$, we are naturally led to the question for which integrable weights $W\ge 1$ one can find another weight $V\ge W$ with $\log V\in L^1(m)$ such that $\cP_+$ is bounded as an operator from $L^2(V\,dm)$ to $L^2(W\,dm)$. The answer is given by the celebrated Theorem of Koosis: \begin{Koosisthm} \label{Koosis} For every integrable weight $W\ge 1$ one can find another weight $V\ge W$ with $\log V\in L^1(m)$ such that $\cP_+$ is bounded as an operator from $L^2(V\,dm)$ to $L^2(W\,dm)$. \end{Koosisthm} \medskip This is a truly remarkable theorem that deserves to be known much better than it currently seems to be. For the reader's convenience, we include its proof in the Appendix. Now let us finish the proof of our theorem. The only remaining difficulty is that we need convergence of $S_n$ to $F$ in $L^2(W\,dm)$ rather than mere boundedness of $\|S_n\|\ci{L^2(W\,dm)}$, which is all the Koosis theorem provides us with if we apply it directly to the weight $W$. 
The (fairly standard) trick is to apply the Koosis theorem to another weight $\widetilde{W}=W\varphi(W)$ where the increasing function $\varphi:[1,+\infty)\to[1,+\infty)$ is chosen so that $\lim_{x\to+\infty}\varphi(x)=+\infty$ and the weight $\widetilde{W}$ is still integrable. Let now $V$ be the weight corresponding to $\widetilde{W}$ instead of just $W$. We claim that $\|S_n-F\|\ci{L^2(W\,dm)}\to 0$ as $n\to\infty$ for all $F\in L^2(V\,dm)$. Indeed, since $V\ge \widetilde{W}\ge 1$, we know that $F\in L^2(m)$ and, thereby, $\|S_n-F\|\ci{L^2(m)}\to 0$. On the other hand, the norms $\|S_n-F\|\ci{L^2(\widetilde{W}\,dm)}$ are uniformly bounded. Hence, for every $M>0$, we can write $$ \int_{\T}|S_n-F|^2W\,dm=\int_{\{W\le M\}}|S_n-F|^2W\,dm + \int_{\{W> M\}}|S_n-F|^2W\,dm $$ $$ \le M\int_{\T}|S_n-F|^2\,dm+\frac1{\varphi(M)}\int_{\T}|S_n-F|^2\widetilde{W}\,dm\,. $$ Now, the first term tends to $0$ as $n\to\infty$ for any fixed $M>0$ while the second one can be made arbitrarily small by choosing $M$ large enough. The theorem is thus completely proved. \section{Proof of Corollary} \label{proofasymp} Again, denote by $B_\ell$ the partial Blaschke product with the zeros $z_k$, $k\le\ell$. Consider the integral $$ \int_{\T} \left| 1 - \frac{p_n(z)}{z^n} \frac{1}{\psi(z)B_\ell(z)}\right|^2\, dm\,. $$ Using the residue theorem, we conclude that it equals $$ 1+\|p_n\|^2\ci{L^2(w\,dm)} - 2 \frac{ \tau_n }{\psi(\infty) B_\ell(\infty)} - 2 \Re \sum_{k\,:\,k\le\ell} \frac{p_n(z_k)}{z_k^{n+1} \psi(z_k) B_\ell'(z_k)}\,. $$ We have already seen that $\ds \sum_{k\,:\,k\le\ell} \frac{p_n(z_k)}{z_k^{n+1} \psi(z_k) B_\ell'(z_k)}\to 0$ as $n\to\infty$. Also, $\|p_n\|\ci{L^2(w\,dm)}\le \|p_n\|\ci{L^2(\mu)}=1$. 
Thus, $$ \limsup_{n\to\infty}\int_{\T} \left|B_\ell(z) - \frac{p_n(z)}{z^n\psi(z)}\right|^2 \,{dm} \le 2\left(1-\frac{B(\infty)}{B_\ell(\infty)}\right)\,, $$ whence \begin{multline*} \limsup_{n\to\infty}\int_{\T} \left| B(z) - \frac{p_n(z)}{z^n\psi(z)}\right|^2 \,{dm} \\ \le 2\|B_\ell-B\|^2\ci{L^2(m)}+4\left(1-\frac{B(\infty)}{B_\ell(\infty)}\right)\,. \end{multline*} Since the right hand side of the last inequality tends to $0$ as $\ell\to\infty$, we conclude that $$ \lim_{n\to\infty}\int_{\T} \left|B(z) - \frac{p_n(z)}{z^n\psi(z)}\right|^2 \,{dm}=0\,. $$ It remains to recall that, for the functions $\ds g_n(z)=B(z) - \frac{p_n(z)}{z^n\psi(z)}$, which are analytic in the exterior of the unit disk, convergence to $0$ in $H^2$ is stronger than pointwise convergence to $0$ in the exterior of the unit disk. \section{Appendix: Proof of the Koosis theorem.} \label{proofKoosis} We shall outline the original proof from \cite{K} here. First of all, note that for any two weights $V\ge W\ge 1$, the boundedness of $\cP_+$ as an operator from $L^2(V)$ to $L^2(W)$ is implied by (actually, equivalent to) its boundedness as an operator from $L^2(w)$ to $L^2(v)$ where $w=\dfrac 1W$, $v=\dfrac 1V$. The latter is understood in the sense that there exists a finite constant $C>0$ such that $\int_{\T}|\cP_+g|^2v\,dm\le C\int_{\T}|g|^2w\,dm$ for any function $g\in L^2(m)\cap L^2(w\,dm)$. This can be seen by a standard duality argument. Using the density of trigonometric polynomials in $L^2(m)$, we also see that it is enough to check this estimate for the case when $g$ is a real trigonometric polynomial. Secondly, let us note that $\cP_+g=\dfrac12(\widehat g(0)+g+i\widetilde{g})$ where $\widetilde{\cdot}$ is the operator of harmonic conjugation, i.e., the operator that maps $\sum_k c_k z^k$ to $\frac1i\sum_k (\operatorname{sgn} k) c_k z^k$. Note also that the identity operator is bounded from $L^2(w\,dm)$ to $L^2(v\,dm)$ for any $v\le w$. 
Since $|\widehat g(0)|=\left|\int_{\T} g\,dm\right|\le \left(\int_{\T} |g|^2w\,dm\right)^{\frac12} \left(\int_{\T} W\,dm\right)^{\frac12}=\sqrt{\|W\|\ci{L^1(m)}}\|g\|\ci{L^2(w\,dm)}$, we see that the operator that maps $g$ to the constant function $\widehat g(0)$ is bounded in $L^2(w\,dm)$ and, thereby, from $L^2(w\,dm)$ to $L^2(v\,dm)$ for any $v\le w$. These two remarks show that it is enough to construct a weight $v\le w$ with integrable logarithm such that $\int_{\T} |\widetilde{g}|^2v\,dm\le C \int_{\T} |g|^2w\,dm$ for any real trigonometric polynomial $g$ with $\widehat g(0)=0$. To this end, consider an outer function $\Omega(z)$ with $\Re\Omega=W$ on $\T$. Note that $$ \left|1-\frac W{\Omega}\right|=\left|\frac{\Omega-\Re\Omega}{\Omega}\right|=\left|\frac{\Im\Omega}{\Omega}\right|<1\text{ almost everywhere on }\T\,. $$ Let $\rho=1-\left|1-\frac W{\Omega}\right|$. Consider the analytic polynomial $P(z)=g(z)+i\widetilde{g}(z)$. Since $P(0)=0$, we have $$ \int_{\T} \frac{P^2}\Omega\,dm=\frac{P(0)^2}{\Omega(0)}=0\,. $$ Let us rewrite it as $$ \int_{\T} {P^2}w\,dm =\int_{\T} {P^2}\left(1-\frac W\Omega\right)w\,dm $$ and take minus the real part of the left hand side and the absolute value of the right hand side. We shall get the inequality $$ \int_{\T}(\widetilde{g}^2-g^2)w\,dm\le \int_{\T} |P|^2\left|1-\frac W\Omega\right|w\,dm= \int_{\T} (g^2+\widetilde{g}^2)(1-\rho)w\,dm\,, $$ which is equivalent to $$ \int_{\T} {\widetilde{g}^2}\rho w\,dm\le \int_{\T} {g^2}(2-\rho) w\,dm\le 2\int_{\T} {g^2}w\,dm\,. $$ Thus, we can choose $v=\rho w$. The only thing that remains to check is that $\int_{\T}\log v\,dm>-\infty$. To this end, note that $$ \rho=1-\left|\frac{\Im \Omega}{\Omega}\right|\ge \frac12\left(1-\left|\frac{\Im \Omega}{\Omega}\right|^2\right) =\frac{|\Re \Omega|^2}{2|\Omega|^2}=\frac{W^2}{2|\Omega|^2}\,. $$ So $v=\rho w\ge \dfrac{W}{2|\Omega|^2}$. It remains to note that $W\ge 1$ while $\log|\Omega|\in L^1(m)$. 
The Koosis theorem is thus completely proved. \markboth{}{}
Practice Statement At East Lake Medical Clinic, we provide urgent care for acute illnesses and injuries as well as chronic disease management. Please visit our website for more details. Dr. Dustin Ly, MD is Board Certified by the American Board of Internal Medicine. He takes pride in his work and personally sees all of his patients. For continuity of care, he also sees patients in local hospitals and nursing homes. Urgent Care Services Primary Care Medical Weight Loss Injuries from Work, Sports, and Motor Vehicle Accidents STD Testing and Treatment DOT Physicals Pre-employment Exams School and Sports Physical Exams Travel Medicine Vaccinations TB Testing Wound Care Lab Tests in Office. Location - 4737 Old Canoe Creek Road St Cloud, FL 34769
TITLE: Knot Recognition as a Proof of Work QUESTION [23 upvotes]: Currently bitcoin has a proof of work (PoW) system using SHA256. Other proof-of-work systems use graphs or partial hash function inversion. Is it possible to use a decision problem in knot theory, such as knot recognition, and make it into a proof of work function? Also, has anyone done this before? Also, if we had this proof of work function, would it be more useful than what is currently being computed? REPLY [8 votes]: If there is an Arthur-Merlin protocol for knottedness similar to the [GMW85] and [GS86] Arthur-Merlin protocols for Graph Non Isomorphism, then I believe such a cryptocurrency proof-of-work could be designed, wherein each proof-of-work shows that two knots are not likely to be equivalent/isotopic. In more detail, as is well known, in the Graph Non Isomorphism protocol of [GMW85], Peggy the prover wishes to prove to Vicky the verifier that two (rigid) graphs $G_0$ and $G_1$ on $V$ vertices are not isomorphic. Vicky may secretly toss a random coin $i\in\{0,1\}$, along with other coins to generate a permutation $\pi\in S_V$, and may present to Peggy a new graph $\pi(G_i)$. Peggy must deduce $i$. Clearly Peggy is only able to do this if the two graphs are not isomorphic. Similarly, and more relevantly for the purposes of a proof-of-work, as taught by [GS86], an Arthur-Merlin version of the same protocol includes Arthur agreeing with Merlin on $G_0$, $G_1$, given as, for example, adjacency matrices. Arthur randomly picks a hash function $H:\{0,1\}^*\rightarrow\{0,1\}^k$, along with an image $y$. Arthur provides $H$ and $y$ to Merlin. Merlin must find a $(i,\pi)$ such that $H(\pi(G_i))=y$. That is, Merlin looks for a preimage of the hash $H$, the preimage being a permutation of one of the two given adjacency matrices. 
As long as $k$ is chosen correctly, if the two graphs $G_0$ and $G_1$ are not isomorphic then there will be a higher chance that a preimage will be found, because the number of adjacency matrices in $G_0 \cup G_1$ may be twice as large as if $G_0\cong G_1$. In order to convert the above [GS86] protocol to a proof-of-work, identify miners as Merlin, and identify other nodes as Arthur:

- Agree on a hash $H$, which, for all purposes, may be the $\mathsf{SHA256}$ hash used in Bitcoin. Similarly, agree that $y$ will always be $0$, similar to the Bitcoin requirement that the hash begins with a certain number of leading $0$’s.
- The network agrees to prove that two rigid graphs $G_0$ and $G_1$ are not isomorphic. The graphs may be given by their adjacency matrices.
- A miner uses the link back to the previous block, along with her own Merkle root of financial transactions, call it $B$, along with her own nonce $c$, to generate a random number $Z=H(c\Vert B)$.
- The miner calculates $W= Z\:mod\: 2V!$ to pick $(i,\pi)$.
- The miner confirms that $\pi(G_i)\neq G_{1-i}$, that is, that the randomly chosen $\pi$ is not a proof that the graphs are isomorphic.
- If the check passes, the miner calculates a hash $W=H(\pi(G_i))$.
- If $W$ begins with the appropriate number of $0$’s, then the miner “wins” by publishing $(c,B)$.
- Other nodes can verify that $Z=H(c\Vert B)$ to deduce $(i,\pi)$, and can verify that $W=H(\pi(G_i))$ begins with the appropriate difficulty of $0$’s.

The above protocol is not perfect; some kinks, I think, would need to be worked out. For example, it's not clear how to generate two random graphs $G_0$ and $G_1$ that satisfy good properties of rigidity, nor is it clear how to adjust the difficulty other than by testing for graphs with more or fewer vertices. However, I think these are probably surmountable. 
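The mining loop described above can be sketched in Python. This is only a toy: SHA-256 stands in for $H$, the two 3-vertex graphs, the 8-byte nonce encoding, and the folding of $Z$ into the final hash (needed here only because the toy search space is tiny) are all illustrative assumptions, not part of the protocol itself.

```python
import hashlib
from itertools import permutations

# Toy sketch of the [GS86]-style proof of work: G0, G1 are adjacency
# matrices of two small non-isomorphic graphs, SHA-256 plays the role
# of H, and "y = 0" is encoded as a number of leading zero hex digits.
G0 = ((0, 1, 0), (1, 0, 1), (0, 1, 0))  # path on 3 vertices
G1 = ((0, 1, 1), (1, 0, 1), (1, 1, 0))  # triangle

PERMS = list(permutations(range(3)))    # S_3, so 2 * 3! = 12 choices of (i, pi)

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def apply_perm(G, pi):
    n = len(G)
    return tuple(tuple(G[pi[i]][pi[j]] for j in range(n)) for i in range(n))

def mine(prev_link: bytes, merkle_root: bytes, difficulty: int):
    nonce = 0
    while True:
        # Z = H(c || B): the nonce and block data pick (i, pi) for us.
        Z = H(nonce.to_bytes(8, "big") + prev_link + merkle_root)
        W = int.from_bytes(Z, "big") % (2 * len(PERMS))
        i, k = divmod(W, len(PERMS))
        g = apply_perm((G0, G1)[i], PERMS[k])
        if g != (G1, G0)[i]:  # pi is not an isomorphism witness
            # NOTE: we fold Z into the final hash only so this tiny toy
            # has enough entropy; in the protocol above the final hash
            # is of pi(G_i) alone.
            if H(Z + repr(g).encode()).hex().startswith("0" * difficulty):
                return nonce, i, PERMS[k]
        nonce += 1

nonce, i, pi = mine(b"prev-block", b"merkle-root", difficulty=1)
```

Anyone can replay the winning nonce to recover $(i,\pi)$ and recheck both the non-isomorphism condition and the leading zeros, which is exactly the verification step in the list above.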
But for a similar protocol on knottedness, replace random permutations on the adjacency matrix of one of the two graphs $G_0$ and $G_1$ with some other random operations on knot diagrams or grid diagrams... or something. I don’t think random Reidemeister moves work, because the space becomes too unwieldy too quickly. [HTY05] proposed an Arthur-Merlin protocol for knottedness, but unfortunately there was an error and they withdrew their claim. [Kup11] showed that, assuming the Generalized Riemann Hypothesis, knottedness is in $\mathsf{NP}$, and mentions that this also puts knottedness in $\mathsf{AM}$, but I’ll be honest: I don’t know how to translate this into the above framework; the $\mathsf{AM}$ protocol of [Kup11] I think involves finding a rare prime $p$ modulo which a system of polynomial equations is $0$. The prime $p$ is rare in that $H(p)=0$, and the system of polynomial equations corresponds to a representation of the knot complement group. Of note, see this answer to a similar question on a sister site, which also addresses the utility of such "useful" proofs-of-work. References: [GMW85] Oded Goldreich, Silvio Micali, and Avi Wigderson. Proofs that Yield Nothing but their Validity, 1985. [GS86] Shafi Goldwasser, Michael Sipser. Private Coins versus Public Coins in Interactive Proof Systems, 1986. [HTY05] Masao Hara, Seiichi Tani, and Makoto Yamamoto. UNKNOTTING is in $\mathsf{AM} \cap \mathsf{coAM}$, 2005. [Kup11] Greg Kuperberg. Knottedness is in $\mathsf{NP}$, modulo GRH, 2011.
We need to sell the house. DH will not talk about it. Yesterday I found out he had not paid last year's Real Estate Taxes...we are now fined and charged interest which we can't afford. AND this year's real estate is coming up! A couple weeks ago our electricity was turned off because he did not pay the electric bill for 4 months....not telling me. He has his messy hoarding in (what once was) 2 beautiful sheds, 1 garage and a large office. How do I get the house ready to sell? This accumulation of mess is WAY beyond my ability to clean up. He rents 2 warehouses/machine shops that are draining profits from his not so profitable business. They are filled to ceiling with dirty messy stuff. ie: He told me he took his truck to bring our old carpet to be disposed of a few years ago....instead I found out he is "storing" it at his shop so that he didn't have to pay for the disposal. He is not willing/able to clean it up himself....he doesn't SEE it? Any suggestions? Are there agencies/professionals/people who can help with this type of thing? PLUS the fact that he will not admit that he is not paying even HALF of our expenses? PLUS the fact that he has a separate checking account in his name only that he uses as a cash clearing house for himself? I am in a financial pickle. I feel like such a fool. I don't know how to get him out of his mess....and now it is MY mess to pay for his messes. Melissa, tell me. How much am I supposed to do/not do for/with him? What professionals can I find to have some peace of mind and help me clear things up? I am drowning and it feels like he is using me as a floating device! How do you partner financially with a person who is not able to partner with you? AND not become their mother or co-dependent? ALL my boundaries have been crossed multiple times. I can't retire by myself because of social security inequities for moms/wives. so sorry Submitted by lynninny on Hi Jenna, Just read this and want to tell you, first: I am so sorry. 
I know that you are going through a pretty tough go of it. So your DH, his finances, and the mess sound pretty out of control. I hope you can extricate yourself before he takes you down the financial tubes--would it be worth the retirement (SS) to have him completely destroy your credit and plunge you into debt? Or what about a legal separation (my understanding is that it would allow you to share health coverage and benefits but designate which financial responsibility is whose)? Can you go see an attorney for advice? No one has to know until you make a decision. It may help you understand fully what your legal options are. My opinion is, if you have been trying for this long to get him to listen, and he won't, then you have more of a need to take care of yourself, and your future, than you have an obligation to help him straighten this out. If you want to get your house or shared spaces ready to sell, can you hire a few people from a moving company or a college or something to just come in and do that? Put it in bins and take it away--to a storage facility, whatever? Thinking of you and best of luck to you. Hugs. Legal separation Submitted by lynnie70 on From the little I have explored, in a legal separation I think the judge determines who gets to live in the house (so he would HAVE to leave) and how money is appropriated for you and the kids to live on. Maybe he can be in charge of his warehouses and other expenses that you don't want to carry? Let him lose them, and then you don't have to clean them up. And that goes for anything else he won't clean up. Possibly you could ask a charity to come weekly and carry away things you set out on the curb if he won't come get them? At least you'll get a tax deduction. I couldn't get one ex out for two years after we divorced. Finally I just started setting his stuff by the curb and told him I wasn't going to be responsible for what happened to it. He came and got everything. 
Sounds like you are really in deep, but to get things straightened out, you may have to get him out first legally. Good luck. Maybe you could call your Submitted by PoisonIvy on Maybe you could call your city or county social services agency and talk to a social worker. I'm wondering if getting the involvement of a governmental "helping" agency might (1) take off some of the legal pressure for the taxes; (2) force help on your husband. For JennaLemon Submitted by MelissaOrlov on I think you are getting some good advice here. My opinion is: Another resource might be Stephanie Sarkis, who specializes in helping those with ADHD manage their finances. She has a book out on the topic and would likely send you in the direction of other resources if you contacted her. Make sure you have a good "support net" of friends or family in place. I suspect you are going to find out some more things about what your husband has been doing financially that will be a shock and may need the moral support.
The inquiry We gathered information and promoted awareness of the issue by— February 2012 round-table session – Official Report (transcript) June 2012 parliamentary debate on Women and Work – Official Report Scrutiny of the Draft Budget 2013-14 February 2012 to January 2014 We launched our call for evidence on 10 December 2012 and began taking oral evidence on 28 March 2013. A summary paper of evidence received is available, along with all individual evidence Evidence received The Committee published its report on 18 June 2013— 4th Report, 2013 (Session 4): Women and work The Committee received the Scottish Government's response on 20 September 2013— 28th November 2013 16 January 2014 Contact: for further information on the work of the Committee you can contact the Clerks by telephone on 0131 348 5408, by email at: Equal.Opportunities@scottish.parliament.uk or by post c/o the Clerk, Equal Opportunities Committee, TG.01, The Scottish Parliament, EDINBURGH, EH99 1SP.
10102 Fingerboard Rd Ijamsville, MD 21754 USA The petting farm is open for visitors! Our Petting Farm in Urbana, MD is conveniently located in south Frederick County near Montgomery Co., Howard Co., Baltimore, Washington, D.C. and Northern Virginia. Book a field trip or bring the family and come out to the farm for some Farm Fun. Get up close and hands-on with our 200 plus animals as well as go on a hay ride, watch a pig race and get a free feed for our sheep & goats! In October, admission includes a pumpkin! No reservation needed, just buy a ticket at the gate. Admission is charged for everyone ages 2 to 92 (children 23 months and younger are free). Admission is $14.00 cash or $14.50 for card payments. Check out our Urbana Petting Farm Page for more details or visit us on Facebook! NO reservations required!
\begin{document} \title[Permanence for nonuniform hyperbolicity]{Permanence of nonuniform nonautonomous hyperbolicity for infinite-dimensional differential equations} \author[T. Caraballo]{Tom\'as Caraballo$^1$} \email{caraball@us.es} \thanks{$^1$ Departamento de Ecuaciones Diferenciales y An\'alisis Num\'erico, Universidad de Sevilla, Spain.} \author[A. N. Carvalho]{Alexandre N. Carvalho$^2$} \thanks{$^2$ Instituto de Ci\^encias Matem\'aticas e de Computa\c c\~ao, Universidade de S\~ ao Paulo, Brazil.} \email{andcarva@icmc.usp.br} \author[J. A. Langa]{Jos\'e A. Langa$^1$} \email{langa@us.es} \author[A. N. Oliveira-Sousa]{Alexandre N. Oliveira-Sousa$^{2}$} \email{alexandrenosousa@gmail.com} \subjclass[2010]{Primary 37B55, 37B99, 34D09, 93D09} \keywords{nonuniform exponential dichotomy, robustness, permanence of hyperbolic equilibria} \date{} \dedicatory{} \begin{abstract} In this paper, we study stability properties of nonuniform hyperbolicity for evolution processes associated with differential equations in Banach spaces. We prove a robustness result of nonuniform hyperbolicity for linear evolution processes, that is, we show that the property of admitting a nonuniform exponential dichotomy is stable under perturbation. Moreover, we provide conditions to obtain uniqueness and continuous dependence of projections associated with nonuniform exponential dichotomies. We also present an example of evolution process in a Banach space that admits nonuniform exponential dichotomy and study the permanence of the nonuniform hyperbolicity under perturbation. Finally, we prove persistence of nonuniform hyperbolic solutions for nonlinear evolution processes under perturbations. \end{abstract} \maketitle \section{Introduction} \par In the framework of dynamical systems, hyperbolicity plays a fundamental role (see, e.g. \cite{Katok,C-Robinson,Shub} and the references therein). It is the key property for most of the results on permanence under perturbations. 
The permanence, on the other hand, is an essential property for dynamical systems that model real life phenomena. That importance is related to the fact that modelling always comes with approximations (due to the empirical nature that it carries) and/or with simplifications (introduced to make models tractable or simply because the complete set of variables that are related to the phenomenon is not known). Therefore, in order that the mathematical model reflects, in some way, the phenomenon modelled, it is essential that its dynamical structures are robust under perturbation. It starts with the robustness under perturbation of hyperbolicity itself. Here, we are concerned with the robustness of nonautonomous nonuniform hyperbolicity. \par In the discrete case, \textit{hyperbolic dynamical systems} $x_{n+1}=Bx_n$ appear when the spectrum of the bounded linear operator $B$ does not intersect the unit circle in the complex plane. This implies the existence of a \textit{hyperbolic decomposition} of the space, which means that there exist two main directions: one where the evolution of the dynamical system decays exponentially and another where it grows exponentially. This property can be interpreted as a complete understanding of local or global dynamics. The set of operators that have such a decomposition is an open set in the space of bounded linear operators and the operators in this set are called \textit{hyperbolic operators}. In other words, if $B$ is hyperbolic there is a neighborhood of $B$ such that every operator in this neighborhood is hyperbolic. For autonomous differential equations, when $A$ is a bounded linear operator, $\dot{x}=Ax$, by the spectral mapping theorem \cite{Kato}, hyperbolicity is associated with linear operators such that the spectrum does not intersect the imaginary line. \par Generally, in nonautonomous differential equations, the notion of hyperbolicity is referred to as \textit{exponential dichotomy}. 
More precisely, consider the following differential equation in a Banach space $X$, \begin{equation}\label{introdution-eq-standart-linear-equation} \dot{x}=A(t)x, \ \ x(s)=x_s\in X. \end{equation} Under appropriate conditions, the solutions $x(t,s;x_s)$, $t\geq s$, of this initial value problem define an evolution process $\mathcal{S}:= \{S(t,s)\,; \, t\geq s \}$, where $S(t,s)x_s=x(t,s;x_s)$. We say that the evolution process $\mathcal{S}$ admits an \textit{(uniform) exponential dichotomy} if there exists a family of projections, $\{Q(t)\, ;\, t\in \mathbb{R}\}$ such that for each $t\geq s$ we have that $S(t,s)Q(s)=Q(t)S(t,s)$, $S(t,s)$ is an isomorphism from $R(Q(s))$ onto $R(Q(t))$, and \begin{eqnarray}\label{introdution-exp-dichotomy-def-eq1} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)}&\leq &K e^{-\alpha(t-s)}, \ \ t\geq s;\\\label{introdution-exp-dichotomy-def-eq2} \|S(t,s)Q(s)\|_{\mathcal{L}(X)}&\leq& Ke^{\alpha(t-s)}, \ \ t< s, \end{eqnarray} for some constants $K\geq 1$ and $\alpha>0$. Note that, since the vector field is changing in time, it is natural to think that for each initial time we have a hyperbolic decomposition that resembles the properties in the autonomous case. There is a long list of works through these last decades about existence of exponential dichotomies and their stability properties, for instance: \cite{Carvalho-Langa,Chow-Leiva-1,Chow-Leiva-2,Hale-Zhang,Henry-1,Henry-2,Pliss-Sell}. \par If we replace the constant $K$ in the above definition by a continuous function $K(s)$ in \eqref{introdution-exp-dichotomy-def-eq1} and \eqref{introdution-exp-dichotomy-def-eq2}, we say that \eqref{introdution-eq-standart-linear-equation} admits a \textit{nonuniform exponential dichotomy} (for an introduction see \cite{Barreira-Valls-Sta}). Usually, the nonuniform bound is given by $K(s)\leq De^{\nu|s|}$ for some $\nu>0$. 
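A standard scalar example (see, e.g., \cite{Barreira-Valls-Sta}; we recall it here only as an illustration) shows that the nonuniform factor can be genuinely present. Consider $\dot{x}=(-\lambda-at\sin t)x$ with $\lambda>a>0$, whose evolution process is
$$
S(t,s)=\exp\bigl(-\lambda(t-s)+at\cos t-as\cos s-a\sin t+a\sin s\bigr)\,,
$$
so that $|S(t,s)|\leq De^{-(\lambda-a)(t-s)}e^{2a|s|}$ for $t\geq s$, with $D=e^{2a}$. Taking $t=2k\pi$ and $s=(2l+1)\pi$ shows that the factor $e^{2a|s|}$ cannot be removed, so this is a nonuniform contraction with exponent $\alpha=\lambda-a$ and nonuniform exponent $\nu=2a$.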
If $\alpha>\nu$, the properties of the nonuniform behavior resemble those of the uniform case, so this is the usual requirement to prove results about robustness and admissibility. As in the uniform case, there are many works concerning issues of existence and robustness for nonuniform exponential dichotomies \cite{Alhalawa,Barreira-Dragicevi-Valls,BarreiraValls-Non,Barreira-Valls-Sta,Barreira-Valls-R,Barreira-Valls-Existence,Barreira-Valls-Robustness-noninvertible,Zhou-Lu-Zhang-1}. In this paper, we study robustness and permanence of global hyperbolic solutions. \par The robustness of nonuniform exponential dichotomy for equation \eqref{introdution-eq-standart-linear-equation} can be interpreted as follows: suppose that the associated solution operator (evolution process) admits a nonuniform exponential dichotomy. The problem is to know for which families of bounded linear operators $\{B(t): \, t\in \mathbb{R}\}$ the perturbed problem \begin{equation}\label{introdution-eq-standart-linear-equation-perturbed} \dot{x}=A(t)x+B(t)x, \end{equation} admits a nonuniform exponential dichotomy. \par Barreira and Valls \cite{Barreira-Valls-R} studied under which conditions the nonuniform exponential dichotomy is robust in the case of invertible evolution processes. Later, Zhou \textit{et al.} \cite{Zhou-Lu-Zhang-1} proved a similar result for random difference equations for linear operators without the invertibility requirement. More recently, Barreira and Valls \cite{Barreira-Valls-Robustness-noninvertible} proved that nonuniform exponential dichotomy is robust for continuous evolution processes, also without invertibility. They consider an evolution process that admits a nonuniform exponential dichotomy with a general growth rate $\rho(\cdot)$. 
They proved that if $\alpha >2\nu$ and $B:\mathbb{R}\to \mathcal{L}(X)$ is continuous satisfying $\|B(t)\|_{\mathcal{L}(X)}\leq \delta e^{-3\nu |\rho(t)| }\rho^\prime(t)$, for all $t\in \mathbb{R}$, then the perturbed problem \eqref{introdution-eq-standart-linear-equation-perturbed} admits a $\rho$-nonuniform exponential dichotomy. \par We provide an interpretation of the robustness result as an \textit{open property}. In fact, if an evolution process $\mathcal{S}$ admits a nonuniform exponential dichotomy, there is an open neighborhood $N(\mathcal{S})$ of $\mathcal{S}$ such that every evolution process in $N(\mathcal{S})$ also admits a nonuniform exponential dichotomy. Our proof of the robustness result is inspired by the ideas of Henry \cite{Henry-1}. We prove that if a continuous evolution process admits a nonuniform exponential dichotomy, then each discretization also admits it. Then we use the \textit{roughness} of the nonuniform exponential dichotomy for discrete evolution processes, obtained by Zhou \textit{et al.} \cite{Zhou-Lu-Zhang-1}, to conclude that each discretization of the perturbed evolution process also admits a nonuniform exponential dichotomy. Thus, to obtain our robustness result, we have to guarantee that if each discretization of a continuous evolution process $\mathcal{S}$ admits a nonuniform exponential dichotomy, then $\mathcal{S}$ also admits it. \par By this method, we obtain uniqueness and continuous dependence of projections, and explicit expressions for the bound and exponent of the perturbed evolution process. Besides, since we preserve the condition $\alpha>\nu$ of Zhou \textit{et al.} \cite{Zhou-Lu-Zhang-1}, we obtain an improvement of the result of Barreira and Valls in the particular case of Theorem 1 of \cite{Barreira-Valls-Robustness-noninvertible}, when $\rho(t)=t$. 
Moreover, we do not assume that the evolution processes are invertible, so it is possible to apply our result to evolutionary differential equations in Banach spaces, such as the ones that appear in \cite{Carvalho-Langa,Carvalho-Langa-Robison-book,Chow-Leiva-2,Henry-1}. \par An important consequence of the robustness result regarding nonlinear evolution processes is the persistence under perturbation of \textit{hyperbolic solutions}. More precisely, consider a semilinear differential equation \begin{equation}\label{introdution-eq-standart-semilinear-equation} \dot{x}=A(t)x+f(t,x), \ \ x(s)=x_s\in X, \end{equation} and suppose that for each $s\in \mathbb{R}$ and $x_s\in X$ there exists a solution $x(\cdot,s;x_s):[s,+\infty)\to X$; then there exists a nonlinear evolution process $\mathcal{S}_f=\{S_f(t,s):t\geq s\}$ defined by $S_f(t,s)x_s=x(t,s;x_s)$. A map $\xi:\mathbb{R}\to X$ is called a \textit{global solution} for $\mathcal{S}_f$ if $S_f(t,s)\xi(s)=\xi(t)$ for every $t\geq s$, and we say that $\xi$ is a \textit{nonuniform hyperbolic solution} if the linearized evolution process over $\xi$ admits a nonuniform exponential dichotomy. This notion also appears in Barreira and Valls \cite{Barreira-Valls-Sta} as \textit{nonuniformly hyperbolic solutions}. In the uniform case, Carvalho and Langa \cite{Carvalho-Langa} studied the existence of hyperbolic solutions obtained from nonautonomous perturbations of hyperbolic equilibria of an autonomous semilinear differential equation. \par Inspired by Carvalho \textit{et al.} \cite{Carvalho-Langa-Robison-book}, we prove a result on the persistence of nonuniform hyperbolic solutions under perturbations. In fact, if $\xi$ is a nonuniform hyperbolic solution for $\mathcal{S}_f$ and $g$ is a map ``close" to $f$, then there exists a nonuniform hyperbolic solution for $\mathcal{S}_g$ ``close" to $\xi$.
Additionally, we prove that bounded nonuniform hyperbolic solutions are \textit{isolated} in the space of bounded continuous functions $C_b(\mathbb{R},X)$, i.e., if $\xi$ is a bounded nonuniform hyperbolic solution, then there exists a neighborhood of $\xi$ in $C_b(\mathbb{R},X)$ such that $\xi$ is the only bounded solution for $\mathcal{S}_f$ in this neighborhood. \par Therefore, the aim of this work is to establish: uniqueness and continuity for the family of projections associated with a nonuniform exponential dichotomy; a robustness result under the condition $\alpha>\nu$; and persistence of nonuniform hyperbolic solutions. To that end, in Section \ref{section-robustness-discrete-case}, we summarize some important facts for discrete evolution processes with nonuniform exponential dichotomy. We prove uniqueness and continuity of projections and briefly recall the robustness result of Zhou \textit{et al.} \cite{Zhou-Lu-Zhang-1} in our framework. Then, in Section \ref{section-robustness-continuous-case}, we prove uniqueness and continuous dependence for the family of projections, and a robustness result of the nonuniform exponential dichotomy for continuous evolution processes. In Section \ref{subsection-a-general-example}, we present a class of examples of evolution processes in a Banach space that admit a nonuniform exponential dichotomy. Finally, in Section \ref{section-persistence}, we consider \textit{nonuniform hyperbolic solutions} for evolution processes associated with semilinear differential equations. We prove that these solutions are isolated in $C_b(\mathbb{R},X)$ and that they persist under perturbations. \section{Nonuniform exponential dichotomy: discrete case}\label{section-robustness-discrete-case} \par In this section, we present some basic facts about nonuniform exponential dichotomies for discrete evolution processes.
We briefly recall some results of Zhou \textit{et al.} \cite{Zhou-Lu-Zhang-1} (without considering a random parameter) that we will use to prove our results for the continuous case. Their most important result is a robustness theorem for the nonuniform exponential dichotomy. Moreover, as a consequence of the results in \cite{Zhou-Lu-Zhang-1}, we obtain uniqueness and continuous dependence of the projections associated with the nonuniform exponential dichotomy. \par We start with the definition of a \textit{discrete evolution process} in a Banach space $(X,\|\cdot\|_X)$, in the particular case where the family consists of bounded linear operators in $X$. \begin{definition} Let $\mathcal{S}=\{S_{n,m}\, : \, n\geq m \hbox{ with } n,m\in \mathbb{Z}\}$ be a family of bounded linear operators in a Banach space $X$. We say that $\mathcal{S}$ is a \textbf{discrete evolution process} if \begin{enumerate} \item $S_{n,n}=Id_X$, for all $n\in \mathbb{Z}$; \item $S_{n,m}S_{m,k}=S_{n,k}$, for all $n\geq m\geq k$. \end{enumerate} To simplify the notation, we only write $\mathcal{S}=\{S_{n,m}\, : n\geq m \}$ for an evolution process whenever it is clear that we are dealing with discrete ones. \end{definition} \begin{remark} It is always possible to associate a discrete evolution process $\mathcal{S}=\{S_{n,m}\, : \, n\geq m \}$ with a one-parameter family, i.e., $S_n:=S_{n+1,n}$ for all $n\in \mathbb{Z}$. Conversely, if we have a family $\{S_n\, :\, n\in \mathbb{Z}\}\subset \mathcal{L}(X)$, we define $S_{n,m}:=S_{n-1}\cdots S_m$ for $n>m$ and $S_{n,n}:=Id_X$, and obtain a linear evolution process $\mathcal{S}=\{S_{n,m}\, : \, n\geq m \}$. Therefore, when we refer to a discrete evolution process $\mathcal{S}$, we can use both notations. \end{remark} \par The definition above is natural in the study of \textit{difference equations}.
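\par For instance, on $X=\mathbb{R}$, given $\alpha>\nu>0$, the operators $S_n x:=e^{-\alpha+\nu(-1)^{n+1}(2n+1)}x$ generate, since $(-1)^{k+1}(2k+1)=(-1)^{k+1}(k+1)-(-1)^k k$ telescopes, the discrete evolution process
\begin{equation*}
S_{n,m}=e^{-\alpha(n-m)+\nu((-1)^n n-(-1)^m m)}, \quad n\geq m.
\end{equation*}
Using $(-1)^n n-(-1)^m m\leq |n|+|m|\leq 2|m|+(n-m)$, we obtain $|S_{n,m}|\leq e^{2\nu|m|}e^{-(\alpha-\nu)(n-m)}$, while $|S_{m+1,m}|=e^{-\alpha+\nu(2m+1)}$ for every odd $m$, so no bound independent of $m$ is possible. This elementary example, with trivial projections $Q_n\equiv 0$, anticipates the nonuniform bounds in the definition below.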
Given a discrete linear evolution process $\mathcal{S}=\{S_n:n\in \mathbb{Z}\}$, it is possible to consider the following difference equation \begin{equation}\label{discrete-equation-associated-with-the-process} x_{n+1}=S_n x_n, \ \ x_n\in X, \ \ n\in \mathbb{Z}. \end{equation} Now, we present the definition of \textit{nonuniform exponential dichotomy}. \begin{definition} Let $\mathcal{S}=\{S_{n,m} \, : \, n\geq m\}\subset \mathcal{L}(X)$ be an evolution process in a Banach space $X$. We say that $\mathcal{S}$ admits a \textbf{nonuniform exponential dichotomy} if there is a family of continuous projections $\{Q_n\, : \, n\in \mathbb{Z}\}$ in $\mathcal{L}(X)$ such that \begin{enumerate} \item $Q_nS_{n,m}=S_{n,m}Q_m$, for $n\geq m$; \item $S_{n,m}:R(Q_m ) \to R(Q_n )$ is an isomorphism, for $n\geq m$, and we define $S_{m,n}$ as its inverse; \item there exist a function $K:\mathbb{Z}\rightarrow [1,+\infty)$ with $K(n)\leq De^{\nu |n|}$, for some $D\geq 1$ and $\nu>0$, and a constant $\alpha>0$ such that \begin{equation*} \|S_{n,m}(Id_X-Q_m )\|_{\mathcal{L}(X)}\leq K(m) e^{-\alpha(n-m)}, \ \ \forall n\geq m; \end{equation*} and \begin{equation*} \|S_{n,m}Q_m\|_{\mathcal{L}(X)}\leq K(m) e^{\alpha(n-m)}, \ \ \forall n\leq m. \end{equation*} \end{enumerate} \end{definition} \par In this theory, $K$ and $\alpha$ are usually called the \textbf{bound} and the \textbf{exponent} of the exponential dichotomy, respectively. \par We now recall the definition of the \textit{Green function}. \begin{definition} Let $\mathcal{S}=\{S_n:n\in \mathbb{Z}\}$ be a discrete evolution process that admits a nonuniform exponential dichotomy with family of projections $\{Q_n\}_{n\in \mathbb{Z}}$. The \textbf{Green function} associated with the evolution process $\mathcal{S}$ is given by \begin{equation*} G_{n,m}= \left\{ \begin{array}{l l} S_{n,m}(Id_X-Q_m), & \quad \hbox{if } n\geq m, \\ -S_{n,m}Q_m, \, & \quad \hbox{if } n<m. \end{array} \right.
\end{equation*} \end{definition} \par A space that appears naturally when dealing with nonuniform exponential dichotomies is \begin{equation*} l_{1/K}^\infty(\mathbb{Z}):=\big\{f:\mathbb{Z}\to X: \, \sup_{n\in \mathbb{Z}} \big\{\|f_n\|_X K(n+1)\big\}=M_f <+\infty \big\}, \end{equation*} where $K:\mathbb{Z}\to \mathbb{R}$ is such that $K(n)\geq 1$ for all $n\in \mathbb{Z}$. \par As in the uniform case, the next result shows that it is possible to obtain a bounded solution of \begin{equation}\label{discrete-equation-associated-with-the-process-perturbed} x_{n+1}=S_n x_n+f_n \end{equation} using the \textit{Green function}. \begin{theorem}\label{th-admissibility-pair-discrete-case} Assume that the evolution process $\mathcal{S}=\{S_n:n\in \mathbb{Z}\}$ admits a nonuniform exponential dichotomy with bound $K(n)\leq De^{\nu|n|}$ and exponent $\alpha>\nu$. If $f\in l_{1/K}^\infty(\mathbb{Z})$, then \eqref{discrete-equation-associated-with-the-process-perturbed} possesses a bounded solution given by \begin{equation*} x_n=\sum_{k=-\infty}^{+\infty}G_{n,k+1}f_k, \ \ \forall \ n\in \mathbb{Z}. \end{equation*} \end{theorem} \par For the proof of Theorem \ref{th-admissibility-pair-discrete-case} see Zhou \textit{et al.} \cite{Zhou-Lu-Zhang-1}. Note that in \cite{Zhou-Lu-Zhang-1} they consider tempered exponential dichotomies, but their proof holds true under the condition $\alpha>\nu$. \par As a consequence of Theorem \ref{th-admissibility-pair-discrete-case}, we obtain uniqueness of the family of projections associated with the nonuniform exponential dichotomy. \begin{corollary}[Uniqueness of projections] \label{cor-uniqueness-projection-discrete} If $\mathcal{S}=\{S_n:n\in \mathbb{Z}\}$ admits a nonuniform exponential dichotomy with bound $K(n)\leq De^{\nu|n|}$ and exponent $\alpha>\nu$, then the family of projections is uniquely determined.
\end{corollary} \begin{proof} Let $\{Q_n^{(i)}\, ; \, n\in \mathbb{Z}\}$, for $i=1,2$, be families of projections associated with the evolution process $\mathcal{S}$. Given $x\in X$ and $m\in \mathbb{Z}$ fixed, define $f_n=0$, for all $n\neq m-1$, and $f_{m-1}=K(m)^{-1}x$. Thus, $f\in l_{1/K}^\infty(\mathbb{Z})$ and from Theorem \ref{th-admissibility-pair-discrete-case} there exists a unique bounded solution $\{x_n\}_{n\in \mathbb{Z}}$ of \begin{equation*} x_{n+1}=S_n x_n +f_n, \ \ n\in \mathbb{Z}. \end{equation*} Hence, $x_m=\sum_{k=-\infty}^{+\infty}G_{m,k+1}^{(i)}f_k=G_{m,m}^{(i)}f_{m-1}=K(m)^{-1}(Id_X-Q_m^{(i)})x$, for $i=1,2$. Therefore, $Q_m^{(1)}=Q_m^{(2)}$. \end{proof} \par Next, we establish a result on continuous dependence of projections. \begin{theorem}[Continuous dependence of projections] \label{th-continuity-projection} Suppose that $\{T_n\}_{n\in {\mathbb Z}}$ and $\{S_n\}_{n\in {\mathbb Z}}$ admit nonuniform exponential dichotomies with projections $\{Q_n^{\mathcal{T}}\}_{n\in {\mathbb Z}}$ and $\{Q_n^{\mathcal{S}}\}_{n\in {\mathbb Z}}$ and exponents $\alpha_\mathcal{T}$ and $\alpha_\mathcal{S}$, respectively, and with the same bound $K(n)\leq De^{\nu|n|}$. If $\nu<\min\{\alpha_\mathcal{T},\alpha_\mathcal{S}\}$ and \begin{equation*} \sup_{n\in \mathbb{Z}} \big\{K(n+1)\|T_n-S_n\|_{{\mathcal L}(X)}\big\}\leq \epsilon, \end{equation*} then $$ \sup_{n\in \mathbb{Z}} \big\{K(n)^{-1} \, \|Q_n^{\mathcal{T}}-Q_n^{\mathcal{S}}\|_{{\mathcal L}(X)}\big\} \leq\frac{e^{-\alpha_\mathcal{S}} + e^{-\alpha_\mathcal{T}}}{1-e^{-(\alpha_\mathcal{S}+\alpha_\mathcal{T})}}\, \epsilon . $$ \end{theorem} \begin{proof} Let $z\in X$ and $m\in \mathbb{Z}$ be fixed and consider \begin{equation*} f_n= \left\{ \begin{array}{l l} 0, & \quad \hbox{if } n\neq m-1, \\ K(m)^{-1}z, \, & \quad \hbox{if } n=m-1. \end{array} \right.
\end{equation*} Thus, by Theorem \ref{th-admissibility-pair-discrete-case}, there exist bounded solutions $x^k=\{x_n^k\}_{n\in\mathbb{Z}}$ given by $x_n^{k}:=G_{n,m}^{k} zK(m)^{-1}$ for $k=\mathcal{T}, \mathcal{S}$. Note that, for $n\in \mathbb{Z}$, \begin{equation*} x_{n+1}^{\mathcal{T}} - S_n x_n^{\mathcal{T}}= T_n x_n^{\mathcal{T}}-S_n x_n^{\mathcal{T}}+f_n \end{equation*} and $ x_{n+1}^{\mathcal{S}} - S_n x_n^{\mathcal{S}}=f_n$. Then, setting $z_n:=x_n^{\mathcal{T}}-x_n^{\mathcal{S}}$, we obtain that $z_{n+1}=S_n z_n + y_n$, where $y_n=(T_n-S_n)x_n^{\mathcal{T}}$ for all $n\in \mathbb{Z}$. Thanks to the boundedness of the sequence $\{x_n^{\mathcal{T}}\}_{n\in\mathbb{Z}}$ and the hypothesis on $T_n-S_n$, we have that $\{y_n K(n+1)\}_{n\in {\mathbb Z}}$ is bounded, and by Theorem \ref{th-admissibility-pair-discrete-case} we have that \begin{equation*} z_n=\sum_{k=-\infty}^\infty G_{n,k+1}^{\mathcal{S}} (T_k-S_k)G_{k,m}^{\mathcal{T}} zK(m)^{-1}, \end{equation*} and therefore, by the hypothesis on $\mathcal{T}-\mathcal{S}$, we deduce \begin{eqnarray*} \|z_m\|_X&\leq&\sum_{k=-\infty}^\infty K(k+1) e^{-\alpha_\mathcal{S}|m-k-1|} \|T_k-S_k\|_{\mathcal{L}(X)}e^ {-\alpha_\mathcal{T}|k-m|}\|z\|_{X} \\ &\leq& \frac{e^{-\alpha_\mathcal{S}} + e^{-\alpha_\mathcal{T}}}{1-e^{-(\alpha_\mathcal{S}+\alpha_\mathcal{T})}}\, \epsilon \, \|z\|_X. \end{eqnarray*} Evaluating $z_n$ at $n=m$ yields $$ z_m=x_m^{\mathcal{T}}-x_m^{\mathcal{S}}=(G_{m,m}^{\mathcal{T}}-G_{m,m}^{\mathcal{S}})K(m)^{-1}z=(Q_m^{\mathcal{S}}-Q_m^{\mathcal{T}})K(m)^{-1}z. $$ Consequently, \begin{equation*} \|(Q_m^{\mathcal{S}}-Q_m^{\mathcal{T}})K(m)^{-1}z\|_X\leq \frac{e^{-\alpha_\mathcal{S}} + e^{-\alpha_\mathcal{T}}}{1-e^{-(\alpha_\mathcal{S}+\alpha_\mathcal{T})}}\, \epsilon \, \|z\|_X, \end{equation*} which concludes the proof of the theorem. \end{proof} \par Finally, we state a robustness result for discrete evolution processes with nonuniform exponential dichotomies.
\begin{theorem}[Robustness for discrete evolution processes] \label{th-roughness-discrete-TED} Let $\mathcal{S}=\{S_n:n\in \mathbb{Z}\}$, $\mathcal{B}=\{B_n:n\in \mathbb{Z}\} \subset \mathcal{L}(X)$ be discrete evolution processes. Assume that $\mathcal{S}$ admits a nonuniform exponential dichotomy with bound $K(n)\leq De^{\nu|n|}$ and exponent $\alpha>\nu$, and that $\mathcal{B}$ satisfies \begin{equation*} \|B_k\|_{\mathcal{L}(X)} \leq \delta K(k+1)^{-1}, \ \forall k\,\in \mathbb{Z}, \end{equation*} where $\delta>0$ is such that $\delta<(1-e^{-\alpha})/(1+e^{-\alpha})$. Then the perturbed evolution process $\mathcal{T}=\mathcal{S}+\mathcal{B}$ admits a nonuniform exponential dichotomy with exponent \begin{equation*} \tilde{\alpha}=-\ln(\cosh \alpha - [\cosh ^2 \alpha-1-2\delta\sinh \alpha]^{1/2}), \end{equation*} and bound \begin{equation*} \tilde{K}(n)=K(n)\left[1+\frac{\delta}{(1-\rho)(1-e^{-\alpha})}\right]\max\{D_1,D_2\}, \end{equation*} where $\rho:=\delta(1+e^{-\alpha})/(1-e^{-\alpha})$, $D_1:=[1-\delta e^{-\alpha}/(1-e^{-\alpha-\tilde{\alpha}})]^{-1}$, $D_2:=[1-\delta e^{-\tilde{\beta}}/(1-e^{-\alpha-\tilde{\beta}})]^{-1}$ and $\tilde{\beta}:=\tilde{\alpha}+\ln(1+2\delta\sinh\alpha)$. \end{theorem} \par The proof of Theorem \ref{th-roughness-discrete-TED} follows by the same arguments as the proof of Theorem 1 of \cite{Zhou-Lu-Zhang-1}, with minimal changes. It is important to notice that all the arguments of their proof still hold under the assumption $\alpha>\nu$. We emphasize that one of our goals is to prove a robustness result for the nonuniform exponential dichotomy of \textit{continuous evolution processes} under this same condition on the exponents ($\alpha>\nu$). \section{Nonuniform exponential dichotomy: continuous case} \label{section-robustness-continuous-case} In this section, we consider evolution processes with parameters in $\mathbb{R}$.
Inspired by the ideas of Henry \cite{Henry-1}, we prove theorems that allow us to obtain continuous versions of the results presented in Section \ref{section-robustness-discrete-case}. The main theorem of this section is our robustness result for nonuniform exponential dichotomies, namely Theorem \ref{th-roughness-continuous-TED}, and we also provide a version of it suited for applications to differential equations, Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq}. In addition, we establish results on the uniqueness and continuous dependence of the projections associated with a nonuniform exponential dichotomy, Corollary \ref{cor-uniqueness-projection-continuous} and Theorem \ref{th-continuous-depende-projections}, respectively. \par We define a \textit{continuous evolution process} in $X$ as follows. \begin{definition} Let $\mathcal{S}:=\{S(t,s):X\to X\, ; \, t\geq s, \ t,s\in \mathbb{R}\}$ be a family of continuous operators in a Banach space $X$. We say that $\mathcal{S}$ is a \textbf{continuous evolution process} in $X$ if \begin{enumerate} \item $S(t,t)=Id_X$, for all $t\in \mathbb{R}$; \item $S(t,s)S(s,\tau)=S(t,\tau)$, for $t\geq s\geq \tau$; \item $\{(t,s)\in \mathbb{R}^2 ; \, t\geq s\}\times X\ni (t,s,x)\mapsto S(t,s)x$ is continuous. \end{enumerate} \par To simplify, we usually say that $\mathcal{S}=\{S(t,s):t\geq s\}$ is an \textbf{evolution process} whenever it is implicit that $\mathcal{S}$ is a continuous evolution process. \end{definition} \begin{remark} Note that the operators $S(t,s):X\to X$ in the definition above do not need to be linear. In fact, in Section \ref{section-persistence}, we study the permanence of nonuniform hyperbolic behavior for nonlinear evolution processes. \end{remark} \par We also recall the notion of a \textit{global solution} for an evolution process. \begin{definition}\label{def-global-solution} Let $\mathcal{S}=\{S(t,s):t\geq s\}$ be an evolution process.
We say that $\xi:\mathbb{R}\to X$ is a \textbf{global solution} for $\mathcal{S}$ if $S(t,s)\xi(s)=\xi(t)$ for every $t\geq s$. \par We say that a global solution $\xi$ is \textbf{backwards bounded} if there exists $t_0\in \mathbb{R}$ such that $\xi((-\infty, t_0])=\{\xi(t): t\leq t_0\}$ is bounded. \end{definition} \par Now, we present the definition of \textit{nonuniform exponential dichotomy} for linear evolution processes: \begin{definition} Let $\mathcal{S}=\{S(t,s) \, ; \, t\geq s\}\subset \mathcal{L}(X)$ be an evolution process. We say that $\mathcal{S}$ admits a \textbf{nonuniform exponential dichotomy} if there exists a family of continuous projections $\{Q(t)\, : \, t\in \mathbb{R}\}$ such that \begin{enumerate} \item $Q(t) S(t,s)= S(t,s) Q(s)$, for all $t\geq s$; \item $S(t,s):R( Q(s) ) \to R( Q(t) )$ is an isomorphism, for $t\geq s$, and we define $S(s,t)$ as its inverse; \item there exist a continuous function $K:\mathbb{R}\rightarrow [1,+\infty)$ and constants $\alpha>0$, $D\geq 1$ and $\nu\geq 0$ such that $K(s)\leq De^{\nu|s|}$ and \begin{eqnarray*} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)}&\leq& K(s) e^{-\alpha(t-s)}, \ \ t\geq s;\\ \|S(t,s)Q(s)\|_{\mathcal{L}(X)}&\leq& K(s) e^{\alpha(t-s)}, \ \ t< s. \end{eqnarray*} \end{enumerate} \end{definition} \begin{remark} This definition also includes \textit{uniform exponential dichotomies}, when $K$ is bounded, and \textit{tempered exponential dichotomies}, when $t\mapsto K(t)$ has sub-exponential growth, see \cite{Barreira-Dragicevi-Valls,Zhou-Lu-Zhang-1}. \end{remark} \par In the following result we study each ``discretization at instant $t$" of an evolution process that admits a nonuniform exponential dichotomy. \begin{theorem}\label{th-continuousTED-implies-discreteTED} Let $\mathcal{S}$ be a continuous evolution process that admits a nonuniform exponential dichotomy with bound $K(t)=De^{\nu |t|}$ and exponent $\alpha>0$.
Then for each $t\in \mathbb{R}$ and $l>0$ the discrete evolution process \begin{equation*} \{S_{m,n}(t)\, :\,m,n\in \mathbb{Z}\, \hbox{with } m\geq n \}:=\{S(t+ml,t+nl)\, :\,m,n\in \mathbb{Z}\,\hbox{with } m\geq n \} \end{equation*} admits a nonuniform exponential dichotomy with bound $\tilde{K}_t(m):=K(t+ml)$ and exponent $\tilde{\alpha}=\alpha l$. \end{theorem} \begin{proof} Define, for each $t\in \mathbb{R}$, the family of projections $\{Q_m(t)=Q(t+ml)\, : \, m\in \mathbb{Z}\}$; then \begin{eqnarray*} Q_m(t)S_{m, n}(t)&=&Q(t+ml)S(t+ml,t+nl)\\ &=&S(t+ml,t+nl)Q(t+nl)\\ &=&S_{m, n}(t)Q_n(t), \end{eqnarray*} and the first property is proved. Note that, for $m\geq n$, \begin{equation*} S_{m,n}(t)|_{R(Q_n(t) )} =S(t+ml,t+nl)|_{R(Q(t+nl) )} \end{equation*} and the right-hand side of the equation is an isomorphism, so we define the inverse $S_{n,m}(t):R(Q(t+ml))\to R(Q(t+nl))$. \par Finally, for $n\geq m$, \begin{eqnarray*} \|S_{n,m}(t)(Id_X-Q_m(t))\|_{\mathcal{L}(X)} &=& \|S(t+nl,t+ml)(Id_X-Q(t+ml))\|_{\mathcal{L}(X)}\\ &\leq& K(t+ml) e^{-\alpha l(n-m)}, \end{eqnarray*} and, for $n<m$, \begin{eqnarray*} \|S_{n,m}(t)Q_m(t)\|_{\mathcal{L}(X)} &=&\|S(t+nl,t+ml)Q(t+ml)\|_{\mathcal{L}(X)} \\ &\leq& K(t+ml) e^{\alpha l(n-m)}. \end{eqnarray*} Therefore, $\{S_{n,m}(t):n\geq m\}$ admits a discrete nonuniform exponential dichotomy with exponent $\tilde{\alpha}=\alpha l$ and bound $\tilde{K}_t(m)=K(t+ml)\leq De^{\nu |t|} e^{\nu l |m|}$, which concludes the proof. \end{proof} \begin{remark} In Theorem \ref{th-continuousTED-implies-discreteTED}, for a fixed $t\in \mathbb{R}$, the discretized evolution process $\{S_n(t)\, : \, n\in \mathbb{Z}\}$ possesses a bound $\tilde{K}_t$ depending on the time $t$, while the exponent $\tilde{\alpha}$ is independent of $t$. This is an expected difference from the case of uniform exponential dichotomy, where both the bound and the exponent of the discretization are independent of $t$, see Henry \cite{Henry-1}.
\end{remark} \par Now, as a consequence of Theorem \ref{th-continuousTED-implies-discreteTED} and Corollary \ref{cor-uniqueness-projection-discrete}, we obtain the uniqueness of the family of projections. \begin{corollary}[Uniqueness of the family of projections] \label{cor-uniqueness-projection-continuous} Let $\mathcal{S}$ be an evolution process that admits a nonuniform exponential dichotomy with bound $K(t)\leq De^{\nu|t|}$ and exponent $\alpha>\nu$. Then the family of projections is unique. \end{corollary} \par As another application of Theorem \ref{th-continuousTED-implies-discreteTED}, we prove a result on the continuous dependence of projections. \begin{theorem}[Continuous dependence of projections] \label{th-continuous-depende-projections} Suppose that $\mathcal{S}$ and $\mathcal{T}$ are linear evolution processes admitting nonuniform exponential dichotomies with projections $\{Q^\mathcal{S}(t):t\in \mathbb{R}\}$ and $\{Q^\mathcal{T}(t):t\in \mathbb{R}\}$ and exponents $\alpha_\mathcal{T},\alpha_\mathcal{S}$, respectively, and with the same bound $K$. If $\nu<\min\{\alpha_\mathcal{T},\alpha_\mathcal{S}\}$ and \begin{equation}\label{eq-th-continuous-depende-projections} \sup_{0\leq t-s\leq 1}\big\{K(t)\|T(t,s)-S(t,s)\|_{\mathcal{L}(X)}\big\}\leq \epsilon, \end{equation} then \begin{equation*} \sup_{t\in \mathbb{R}}\big\{K(t)^{-1}\|Q^\mathcal{T}(t)-Q^\mathcal{S}(t)\|_{\mathcal{L}(X)}\big\} \leq \frac{e^{-\alpha_\mathcal{S}} + e^{-\alpha_\mathcal{T}}}{1-e^{-(\alpha_\mathcal{S}+\alpha_\mathcal{T})}} \epsilon. \end{equation*} \end{theorem} \begin{proof} From Theorem \ref{th-continuousTED-implies-discreteTED}, for each $t_0\in\mathbb{R}$ and $0<l\leq 1$ we have that both $\{T_n(t_0):n\in \mathbb{Z}\}$ and $\{S_n(t_0):n\in \mathbb{Z}\}$ admit a nonuniform exponential dichotomy with exponents $\alpha_\mathcal{T} l$ and $\alpha_\mathcal{S} l$ and the same bound $K_{t_0}(n):=K(t_0+nl)$.
Now, from Theorem \ref{th-continuity-projection} we conclude that \begin{equation*} K(t_0+nl)^{-1}\|Q^\mathcal{T}(t_0+nl)-Q^\mathcal{S}(t_0+nl)\|_{\mathcal{L}(X)} \leq \frac{e^{-\alpha_\mathcal{S}} + e^{-\alpha_\mathcal{T}}}{1-e^{-(\alpha_\mathcal{S}+\alpha_\mathcal{T})}} \epsilon. \end{equation*} To conclude the proof, note that for any $t\in\mathbb{R}$ it is possible to find $t_0$ and $0<l\leq 1$ such that $t=t_0+nl$. \end{proof} \par Uniqueness and continuous dependence of projections were simple consequences of Theorem \ref{th-continuousTED-implies-discreteTED} and, of course, of the results in the discrete case. However, to prove our robustness result, we will need a sort of converse of Theorem \ref{th-continuousTED-implies-discreteTED}. \begin{theorem}\label{th-discrete-dichotomy-implies-continuous-dichotomy} Let $\mathcal{S}=\{S(t,s):t\geq s\}\subset\mathcal{L}(X)$ be a continuous evolution process. Suppose that \begin{enumerate} \item there exist $l>0$ and $\nu \geq 0$ such that \begin{equation*} L(\nu,l):=\sup_{0\leq t-s\leq l}\big\{ \|S(t,s)\|_{\mathcal{L}(X)} \,e^{-\nu |t|} \big\} < +\infty, \end{equation*} \item for each $t\in \mathbb{R}$ the discretized process \begin{equation*} \{T_{n,m}(t) ,\, n\geq m\}=\{S(t+nl,t+ml),\, n\geq m\} \end{equation*} possesses a nonuniform exponential dichotomy with bound $K_t(\cdot):\mathbb{Z}\rightarrow [1,+\infty)$, where $K_t(m)\leq De^{\nu |t+m|}$, and exponent $\alpha>0$ independent of $t$. \end{enumerate} If $\nu \, l<\alpha $, the evolution process $\mathcal{S}$ admits a nonuniform exponential dichotomy with exponent $\hat{\alpha}=(\alpha-\nu l)/l$ and bound \begin{equation*} \hat{K}(s)= D^2 e^{2\alpha} \max\{L(\nu,l), L(\nu,l)^2\} \, e^{2\nu|s|}. \end{equation*} \end{theorem} \begin{proof} First, we fix $t\in \mathbb{R}$ and define the linear operators $T_n(t):=T_{n+1,n}(t)$, for each $n\in \mathbb{Z}$.
Then for each discrete evolution process $\{T_n(t)\, : \, n\in \mathbb{Z}\}$ there exists a family of projections $\{Q_n(t)\, : \, n\in \mathbb{Z}\}$ such that the nonuniform exponential dichotomy conditions are satisfied. \par For each fixed $k\in \mathbb{Z}$ we have \begin{equation*} T_{n+k}(t) = T_n(t+kl), \ \ \forall n \in \mathbb{Z}. \end{equation*} Then these linear operators generate the same evolution process, with associated projections $\{Q_{n+k}(t)\}_{n\in\mathbb{Z}}$ and $\{Q_{n}(t+kl)\}_{n\in\mathbb{Z}}$. Thus, by uniqueness of the projections in the discrete case, namely Corollary \ref{cor-uniqueness-projection-discrete}, we obtain that, for all $n,k\in \mathbb{Z}$, \begin{equation*} Q_{n+k}(t)=Q_n(t+kl). \end{equation*} Now, for all $t\in \mathbb{R}$ we define $Q(t):=Q_0(t)$. These projections are the candidates to yield the nonuniform exponential dichotomy. \par Let us now prove the boundedness in the case $t\geq s$. \par \textbf{Claim 1:} If $t\geq s$, then \begin{equation*} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)} \leq \hat{K}(s) e^{-\hat{\alpha}(t-s)}, \end{equation*} where $\hat{K}$ is defined in the statement of the theorem. \par In fact, choose $n\in \mathbb{N}$ such that $nl+s\leq t<(n+1)l+s$; then we write \begin{equation*} S(t,s)(Id_X-Q(s))= S(t,s+nl) S(s+nl,s)(Id_X-Q_0(s)). \end{equation*} Thus, by hypothesis, \begin{equation*} \|S(s+nl,s)(Id_X-Q_0(s))\|_{\mathcal{L}(X)}= \|T_{n,0}(s)(Id_X-Q_0(s))\|_{\mathcal{L}(X)} \leq K_s(0)e^{-\alpha n}, \end{equation*} which implies that \begin{eqnarray*} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)}&\leq& \|S(t,s+nl)\|_{\mathcal{L}(X)} K_s(0) e^{-\alpha n}\\ &=& K_s(0) e^{\alpha(t-s-nl)/l} \|S(t,s+nl)\|_{\mathcal{L}(X)} e^{-\alpha(t-s)/l}\\ &\leq& De^{\nu |s|}\, e^{\alpha} \, e^{\nu|t|} \, L(\nu,l) \, e^{-\alpha(t-s)/l}, \end{eqnarray*} where we used the fact that $0\leq t-s-nl<l$.
\par Now, note that, if $t\geq s\geq 0$, we have \begin{equation*} \nu|t|-\alpha(t-s)/l= -(\alpha-\nu l)(t-s)/l +\nu |s|, \end{equation*} and, for $s\leq t\leq 0$, \begin{equation*} \nu|t|-\alpha(t-s)/l= -(\alpha+\nu l)(t-s)/l +\nu |s|, \end{equation*} so we choose $\hat{\alpha}=(\alpha-\nu l)/l$. Thus, we obtain for $t\geq s\geq 0$ and $s\leq t\leq 0$ that \begin{eqnarray*} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)} &\leq& De^{\nu |s|}\, e^{\alpha} \, e^{\nu|t|} \, L(\nu,l) \, e^{-\alpha(t-s)/l}\\ &\leq & DL(\nu,l)\, e^\alpha \, e^{2\nu |s|} e^{-\hat{\alpha}(t-s)}. \end{eqnarray*} Finally, for $t\geq 0\geq s$ we have \begin{eqnarray*} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)} &=& \|S(t,s)(Id_X-Q(s))^2\|_{\mathcal{L}(X)}\\ &\leq & \|S(t,0)(Id_X-Q(0))\|_{\mathcal{L}(X)}\, \|S(0,s)(Id_X-Q(s))\|_{\mathcal{L}(X)}\\ &\leq & D^2L(\nu,l)^2\, e^{2\alpha} \, e^{2\nu |s|} e^{-\hat{\alpha}(t-s)}. \end{eqnarray*} Therefore, for $t\geq s$, \begin{equation*} \|S(t,s)(Id_X-Q(s))\|_{\mathcal{L}(X)}\leq \ D^2 e^{2\alpha} \max\{L(\nu,l), L(\nu,l)^2\} e^{2\nu|s|} e^{-\hat{\alpha}(t-s)} \end{equation*} and the first claim is proved. \par Now, to prove the other inequality, for $t<s$, we take $n\leq 0$ such that $s+nl\leq t<s+(n+1)l$, and define for $z\in R(Q(s))$ the linear operator \begin{equation*} S(t,s)z:=S(t,s+nl)\circ [T_{0,n}(s)|_{R(Q_n(s))}]^{-1}z. \end{equation*} In other words, \begin{equation*} S(t,s)z=S(t,s+nl)\circ T_{n,0}(s)z. \end{equation*} \textbf{Claim 2:} If $t<s$, we have \begin{equation*} \|S(t,s)Q(s)\|_{\mathcal{L}(X)} \leq \hat{K}(s) e^{\hat{\alpha}(t-s)}. \end{equation*} Indeed, for $x\in X$ and $s+nl\leq t<s+(n+1)l$, with $n\leq 0$, by hypothesis, \begin{equation*} \|T_{n,0}(s)Q_0(s)x\|_X\leq K_s(0) e^{\alpha n} \|x\|_X. \end{equation*} Hence, by an argument similar to that in the proof of Claim 1, we obtain \begin{equation*} \|S(t,s)Q(s)x\|_X \leq \|S(t,s+nl)\|_{\mathcal{L}(X)} De^{\nu |s|} e^{\alpha n} \|x\|_X \leq \hat{K}(s) e^{\hat{\alpha}(t-s)}\|x\|_X.
\end{equation*} Now, to conclude the assertion, we take the supremum over $\|x\|_X=1$. \par \textbf{Claim 3:} For all $t_0\in \mathbb{R}$ we characterize the kernel of $Q(t_0)$, $N(Q(t_0))=\{z\in X: Q(t_0)z=0 \}$, as \begin{equation*} N(Q(t_0))=\{z\in X\, : [t_0,+\infty) \ni t \mapsto S(t,t_0)z \hbox{ is bounded} \}. \end{equation*} Let $z\in N(Q(t_0))$, so by definition $Q(t_0)z=0$, and for $t\geq t_0$ we can use Claim 1 to obtain \begin{equation*} \|S(t,t_0)z\|_X = \|S(t,t_0)(Id_X-Q(t_0))z\|_X \leq \hat{K}(t_0)e^{-\hat{\alpha} (t - t_0)}\|z\|_X. \end{equation*} Therefore, $[t_0,+\infty) \ni t \mapsto S(t,t_0)z$ is bounded. \par On the other hand, if $z\notin N( Q(t_0) )$ and $n>0$, \begin{eqnarray*} \|Q(t_0)z\|_X &\leq & \|T_{0,n}(t_0) Q_n(t_0)\|_{\mathcal{L}(X)} \|T_{n,0}(t_0)z\|_X\\ &\leq& De^{\nu |t_0|} e^{\nu |n|}e^{-\alpha n}\|S(t_0+nl,t_0)z\|_X. \end{eqnarray*} Thus, we obtain \begin{equation*} \|Q(t_0)z\|_X D^{-1} e^{-\nu |t_0|} e^{n(\alpha -\nu)} \leq \|S(t_0+nl,t_0)z\|_X. \end{equation*} Consequently, as $\nu<\alpha$, we have that $[t_0,+\infty) \ni t \mapsto S(t,t_0)z$ is not bounded. \par Note that the last assertion implies that \begin{equation*} S(t,t_0)N(Q(t_0)) \subset N(Q(t)). \end{equation*} \par \textbf{Claim 4:} The linear operator \begin{equation*} S(t,t_0): R(Q(t_0))\rightarrow X \end{equation*} is injective for all $t\geq t_0$. \par Indeed, let $z\in R(Q(t_0))$ with $S(t,t_0)z=0$. Choose $n\in \mathbb{N}$ so that $t\leq nl+t_0$; then \begin{equation*} 0=S(t_0+nl,t)0=S(t_0+nl,t)S(t,t_0)z=T_{n,0}(t_0)z, \end{equation*} which implies that $z\in N(T_{n,0}(t_0)|_{R(Q_0(t_0))})=\{0\}$. \par \textbf{Claim 5:} For all $t_0\in \mathbb{R}$ the range of $Q(t_0)$ is \begin{equation*} R(Q(t_0))=\{z\in X\,: \hbox{ there exists a backwards bounded solution }\xi\hbox{ with } \xi(t_0)=z\}.
\end{equation*} Let $z\in R(Q(t_0))$ and $t<t_0$; then take $n\in \mathbb{Z}$ such that $t\in [t_0+nl,t_0+(n+1)l]$ and define \begin{equation*} \xi(t):=S(t,t_0+nl)T_{n,0}(t_0)z=S(t,t_0)z. \end{equation*} Now, choose $x\in X$ so that $z=Q(t_0)x$; thus, by Claim 2, \begin{equation*} \|\xi(t)\|_X \leq \hat{K}(t_0) e^{\hat{\alpha}(t-t_0)} \|x\|_X. \end{equation*} Thus, $\xi$ is a backwards bounded solution such that $\xi(t_0)=z$. Conversely, suppose that $z\notin R(Q(t_0))$ and that there exists a global solution $\xi: \mathbb{R}\rightarrow X$ such that $\xi(t_0)=z$. For $n\leq 0$ we can write $z=S(t_0,t_0+nl)\xi(t_0+nl)$; thus \begin{eqnarray*} \|(Id_X-Q(t_0)) z\|_X &\leq& \|S(t_0,t_0+nl)(Id_X-Q(t_0+nl))\|_{\mathcal{L}(X)} \, \|\xi(t_0+nl)\|_X \\ &\leq& De^{\nu |t_0|} e^{\nu |n|} e^{\alpha n} \|\xi(t_0+nl)\|_X. \end{eqnarray*} Therefore, \begin{equation*} \|(Id_X-Q(t_0)) z\|_X D^{-1}e^{-\nu |t_0|}e^{n(\nu-\alpha)} \leq\|\xi(t_0+nl)\|_X. \end{equation*} Since $\nu<\alpha$, it follows that $\xi$ is not backwards bounded, and the proof of Claim 5 is complete. \par \textbf{Claim 6:} $S(t,t_0)R(Q(t_0))= R(Q(t))$. \par Indeed, if $z\in R(Q(t_0))$, then there exists a backwards bounded solution $\xi$ through $z$ at $t=t_0$. Thus, $\xi$ is also a solution through $S(t,t_0)z$ at time $t$, and we see that $S(t,t_0)z\in R(Q(t))$. On the other hand, if $z\in R(Q(t))$, there is a backwards bounded solution $\xi$ with $\xi(t)=z$. Therefore, if $n\in \mathbb{Z}$ is such that $nl+t\leq t_0\leq t$, define \begin{equation*} x=S(t_0,nl+t)S(nl+t,t)z\in R(Q(t_0)). \end{equation*} Therefore, $S(t,t_0)x=z$ and we conclude that $S(t,t_0)|_{R(Q(t_0))}$ is an isomorphism. \par Finally, we prove that the family of projections commutes with the evolution process. \par \textbf{Claim 7:} $Q(t)S(t,s)=S(t,s)Q(s)$. For $z\in X$, we have that \begin{equation*} S(t,t_0)z=S(t,t_0)(Id_X-Q(t_0))z + S(t,t_0)Q(t_0)z.
\end{equation*} Now, as $(Id_X-Q(t_0))z\in N(Q(t_0))$ and $S(t,t_0)Q(t_0)z\in R(Q(t))$, applying $Q(t)$ we obtain \begin{equation*} Q(t)S(t,t_0)z=S(t,t_0)Q(t_0)z. \end{equation*} \end{proof} \par We are ready to present the main result of this section. \begin{theorem}[Robustness for continuous evolution processes]\label{th-roughness-continuous-TED} Let $\mathcal{S}=\{S(t,s):t\geq s\}\subset \mathcal{L}(X)$ be an evolution process that admits a nonuniform exponential dichotomy with bound $K(s)=De^{\nu |s|}$ and exponent $\alpha>\nu$. Assume that \begin{equation}\label{th-roughness-continuous-TED-hypothesis1} L_\mathcal{S}(\nu):=\sup_{0\leq t-s\leq 1} \big\{e^{-\nu |t|} \|S(t,s)\|_{\mathcal{L}(X)}\big\}<+\infty. \end{equation} Then there exists $\epsilon>0$ such that if $\mathcal{T}=\{T(t,s)\, :\, t\geq s\}$ is an evolution process such that \begin{equation}\label{th-roughness-continuous-TED-hypothesis2} \sup_{0\leq t-s\leq 1}\big\{ K(t) \|S(t,s)-T(t,s)\|_{\mathcal{L}(X)} \big\}<\epsilon, \end{equation} then $\mathcal{T}$ admits a nonuniform exponential dichotomy with exponent $\hat{\alpha}:=\tilde{\alpha}-\nu$ and bound \begin{equation}\label{th-hat-K} \hat{K}(s)= \tilde{D}^2 e^{2\tilde{\alpha}} \max\{L_\mathcal{T}(\nu), L_\mathcal{T}(\nu)^2\} \, e^{2\nu|s|}, \end{equation} where $\tilde{D}:=D\big(1+\epsilon/[(1-\rho)(1-e^{-\alpha})]\big)\max\{D_1,D_2\}$, and $\rho,\tilde{\alpha}, D_1$ and $D_2$ are the same as in Theorem \ref{th-roughness-discrete-TED}. \end{theorem} \begin{proof} Let $t_0\in \mathbb{R}$. By Theorem \ref{th-continuousTED-implies-discreteTED}, the discrete evolution process $\{S_{n}(t_0):=S(t_0+n+1,t_0+n) \, : \, n\in \mathbb{Z}\}$ admits a nonuniform exponential dichotomy with bound $K_{t_0}(n)\leq De^{\nu |t_0+n|}$ and exponent $\alpha>0$. Let $\epsilon>0$ be such that $\epsilon<(1-e^{-\alpha})/(1+e^{-\alpha})$ and let $\mathcal{T}=\{T(t,s): t\geq s\}$ be an evolution process that satisfies \eqref{th-roughness-continuous-TED-hypothesis2}.
Let $\{T_n(t_0):n\in \mathbb{Z}\}$ be the discretization of $\mathcal{T}$ at $t_0$ and define, for each $n\in\mathbb{Z}$ and $t_0\in \mathbb{R}$, the linear bounded operator $$B_n(t_0):=T_n(t_0)- S_n(t_0).$$ Hence, from \eqref{th-roughness-continuous-TED-hypothesis2}, we have that \begin{equation*} \|B_n(t_0)\|_{\mathcal{L}(X)}< \epsilon K_{t_0}(n+1)^{-1}. \end{equation*} Therefore, by Theorem \ref{th-roughness-discrete-TED}, the discrete evolution process $T_n(t_0)=S_n(t_0)+B_n(t_0)$ admits a nonuniform exponential dichotomy with exponent \begin{equation*} \tilde{\alpha}:=-\ln(\cosh \alpha - [\cosh ^2 \alpha-1-2\epsilon\sinh \alpha]^{1/2}), \end{equation*} and bound \begin{equation*} \tilde{K}_{t_0}(n):=K_{t_0}(n)\left[1+\frac{\epsilon}{(1-\rho)(1-e^{-\alpha})}\right] \max\{D_1,D_2\}, \end{equation*} where $D_1,D_2$ and $\rho$ are the constants from Theorem \ref{th-roughness-discrete-TED}. \par Since every discretization has the same exponent $\alpha>0$, we see that $\epsilon$ can be chosen independently of $t$. Thus, for each $t\in \mathbb{R}$, the discrete evolution process $\{T_n(t): n\in \mathbb{Z}\}$ admits a nonuniform exponential dichotomy with bound $\tilde{K}_{t}(n)$ and exponent $\tilde{\alpha}$ defined above. Then condition (2) of Theorem \ref{th-discrete-dichotomy-implies-continuous-dichotomy} holds true for $\mathcal{T}$. \par Moreover, from \eqref{th-roughness-continuous-TED-hypothesis2}, $\mathcal{T}$ satisfies \begin{eqnarray*} \|T(t,s)\|_{\mathcal{L}(X)}&\leq& \epsilon K(t)^{-1} +\|S(t,s)\|_{\mathcal{L}(X)}\\ &\leq &\epsilon +\|S(t,s)\|_{\mathcal{L}(X)}, \hbox{ for } 0\leq t-s\leq 1, \end{eqnarray*} hence $\sup_{ 0\leq t-s\leq 1}\{e^{-\nu |t|}\|T(t,s)\|_{\mathcal{L}(X)}\}$ is finite. Finally, note that it is possible to choose $\epsilon>0$ small such that $\tilde{\alpha}>\nu$.
Therefore, Theorem \ref{th-discrete-dichotomy-implies-continuous-dichotomy} implies that $\mathcal{T}$ admits a nonuniform exponential dichotomy with bound $\hat{K}$ defined in \eqref{th-hat-K} and exponent $\hat{\alpha}=\tilde{\alpha}-\nu>0$. \end{proof} \begin{remark} Assumption \eqref{th-roughness-continuous-TED-hypothesis1} on the growth of $\mathcal{S}$ is expected for evolution processes that admit nonuniform exponential dichotomies, see Barreira and Valls \cite{Barreira-Valls-Sta} or Example \ref{example-nonuniformED} in Section \ref{subsection-a-general-example}. \end{remark} \begin{remark} Theorem \ref{th-roughness-continuous-TED} allows us to see robustness as an \textit{open property}. In fact, let $\mathfrak{S}_\nu$ be the space of all evolution processes that satisfy \eqref{th-roughness-continuous-TED-hypothesis1} and define a distance in $\mathfrak{S}_\nu$ as \begin{equation*} d_\nu(\mathcal{S},\mathcal{T}):=\sup_{0\leq t-s\leq 1}\big\{ e^{\nu|t|} \|S(t,s)-T(t,s)\|_{\mathcal{L}(X)} \big\}. \end{equation*} Then, from Theorem \ref{th-roughness-continuous-TED}, we see that if $\mathcal{S}\in \mathfrak{S}_\nu$ admits a nonuniform exponential dichotomy with bound $K(t)=De^{\nu|t|}$ and exponent $\alpha>\nu$, then there exists $\epsilon>0$ such that every evolution process $\mathcal{T}$ in an $\epsilon$-neighborhood of $\mathcal{S}$ admits a nonuniform exponential dichotomy with bound and exponent given in Theorem \ref{th-roughness-continuous-TED}. \end{remark} \par Now, we present another formulation of Theorem \ref{th-roughness-continuous-TED} that allows us to apply the result to differential equations in Banach spaces. \begin{theorem}\label{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} Let $\mathcal{S}=\{S(t,s):t\geq s\}\subset \mathcal{L}(X)$ be an evolution process that admits a nonuniform exponential dichotomy with bound $K(s)=De^{\nu |s|}$ and exponent $\alpha>\nu$.
Assume that \begin{equation*}\label{th-roughness-continuous-TED-H1} L_\mathcal{S}(\nu):=\sup_{0\leq t-s\leq 1} \big\{e^{-\nu |t|} \|S(t,s)\|_{\mathcal{L}(X)}\big\}<+\infty. \end{equation*} Let $\{B(t)\, :\, t\in \mathbb{R}\}\subset \mathcal{L}(X)$ be such that $\mathbb{R} \ni t\mapsto B(t)x$ is continuous for all $x\in X$ and \begin{equation*} \|B(t)\|_{\mathcal{L}(X)}<\delta e^{-3\nu |t|}. \end{equation*} Then any evolution process that satisfies the integral equation \begin{equation}\label{eq-th-perturbed-equation-VCF} T(t,s)=S(t,s)+\int_{s}^{t}S(t,\tau)B(\tau) T(\tau,s)d\tau \in \mathcal{L}(X), \ \ t\geq s, \end{equation} admits a nonuniform exponential dichotomy for suitably small $\delta>0$, with bound and exponent given in Theorem \ref{th-roughness-continuous-TED}. \end{theorem} \begin{proof} Let $\mathcal{T}=\{T(t,s): t\geq s\}$ be an evolution process satisfying \eqref{eq-th-perturbed-equation-VCF}. Then \begin{equation*} \|T(t,s)\|_{\mathcal{L}(X)}\leq \|S(t,s)\|_{\mathcal{L}(X)}+\int_{s}^{t} \|S(t,\tau)\|_{\mathcal{L}(X)} \|B(\tau)\|_{\mathcal{L}(X)} \, \|T(\tau,s)\|_{\mathcal{L}(X)}d\tau. \end{equation*} Thus, fix $s$ and define the function $\phi(t)=e^{-\nu|t|}\|T(t,s)\|_{\mathcal{L}(X)}$; for $s\leq t\leq s+1$, \begin{equation*} \phi(t)\leq L_\mathcal{S}(\nu)+ L_\mathcal{S}(\nu)\int_{s}^{t} \|B(\tau)\|_{\mathcal{L}(X)}e^{\nu|\tau|}\phi(\tau) d\tau. \end{equation*} By Gronwall's inequality, we obtain that \begin{equation*} \phi(t)\leq L_\mathcal{S}(\nu) e^{L_\mathcal{S}(\nu) \int_s^t\|B(\tau)\|_{\mathcal{L}(X)}e^{\nu|\tau|}d\tau}, \hbox{ for } s\leq t\leq s+1. \end{equation*} Therefore, \begin{equation*} L_\mathcal{T}(\nu):=\sup_{ 0\leq t-s\leq 1}\big\{e^{-\nu|t|}\|T(t,s)\|_{\mathcal{L}(X)}\big\} <+\infty.
\end{equation*} Now, for $0\leq t-s\leq 1$, \begin{eqnarray*} \|S(t,s)-T(t,s)\|_{\mathcal{L}(X)} &\leq& \int_{s}^{t} e^{\nu(|t|+|\tau|)} L_\mathcal{S}(\nu)\|B(\tau)\|_{\mathcal{L}(X)} L_\mathcal{T}(\nu) d\tau\\ &= &L_\mathcal{T}(\nu) L_\mathcal{S}(\nu) \, e^{\nu|t|}\int_{s}^{t}e^{\nu|\tau|}\|B(\tau)\|_{\mathcal{L}(X)}d\tau . \end{eqnarray*} Then, since $|\tau|\geq |t|-1$ for $\tau\in [s,t]$, \begin{equation*} K(t)\|S(t,s)-T(t,s)\|_{\mathcal{L}(X)} \leq L_\mathcal{T}(\nu) L_\mathcal{S}(\nu) D\, e^{2\nu}\delta, \end{equation*} and we choose $\delta >0$ suitably small in order to use Theorem \ref{th-roughness-continuous-TED} and conclude the proof. \end{proof} \par Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} is very useful when dealing with differential equations. In fact, let $\{A(t) : \, t\in \mathbb{R}\}$ be a family of linear operators, bounded or unbounded, and consider \begin{equation}\label{eq-standart-linear-equation} \dot{x}=A(t)x, \ \ x(s)=x_s\in X. \end{equation} Suppose that for each $s\in \mathbb{R}$ and $x_s\in X$ there exists a unique solution $x(\cdot,s,x_s):[s,+\infty)\to X$. Then there exists an evolution process $\mathcal{S}=\{S(t,s):t\geq s\}$ defined by $S(t,s)x_s:=x(t,s,x_s)$ for each $t\geq s$. \par To study robustness of the nonuniform exponential dichotomy for problem \eqref{eq-standart-linear-equation}, we suppose that $\mathcal{S}$ admits a nonuniform exponential dichotomy and we want to know for which class of $\{B(t): \, t\in \mathbb{R}\}\subset \mathcal{L}(X)$ the perturbed problem \begin{equation}\label{eq-standart-linear-equation-perturbed} \dot{x}=A(t)x+B(t)x, \ \ x(s)=x_s\in X, \end{equation} admits a nonuniform exponential dichotomy with bound $K(t)=De^{\nu|t|}$ and exponent $\alpha>0$. In this way, Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} ensures that the nonuniform hyperbolicity is preserved under exponentially small perturbations.
In other words, if the norm of the perturbation $B$ is dominated by $e^{-3\nu |t|}$ with $\nu<\alpha$, then the perturbed problem \eqref{eq-standart-linear-equation-perturbed} admits a nonuniform exponential dichotomy. \begin{remark} \par Barreira and Valls \cite{Barreira-Valls-Robustness-noninvertible} also provide a version of Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} under different assumptions. They considered a general growth rate $\rho(t)$ for the nonuniform exponential dichotomy and proved that if $\alpha >2\nu$ and $B:\mathbb{R}\to \mathcal{L}(X)$ is continuous satisfying $\|B(t)\|_{\mathcal{L}(X)}\leq \delta e^{-3\nu |\rho(t)| }\rho^\prime(t)$, for all $t\in \mathbb{R}$, then the perturbed problem \eqref{eq-standart-linear-equation-perturbed} admits a $\rho$-nonuniform exponential dichotomy. We note that our method does not work for general growth rates $\rho(t)$. On the other hand, for $\rho(t)=t$, since our condition on the exponents is only $\alpha>\nu$, we obtain an improvement of their robustness result (in this particular case). \par Considering invertible evolution processes, Barreira and Valls proved in \cite{Barreira-Valls-R} a result similar to Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq}. It is assumed that $A(t)$ is bounded, so that the evolution process $\mathcal{S}$ is invertible (which means that $S(t,s)$ is invertible for every $t\geq s$), and that the perturbation $B$ satisfies $\|B(t)\|_{\mathcal{L}(X)}\leq \delta e^{-2\nu|t|}$ for all $t$. In their proof, thanks to invertibility, they can write explicit expressions for the projections of the perturbed evolution process. However, in many situations, it is not expected to have $A(t)\in \mathcal{L}(X)$, see \cite{Carvalho-Langa-Robison-book,Henry-1}.
\end{remark} \section{An application in infinite-dimensional differential equations}\label{subsection-a-general-example} \par In this section, we show an application of the robustness result in order to obtain examples of evolution processes that admit nonuniform exponential dichotomies. Inspired by an example of \cite{Barreira-Valls-Sta}, we provide an evolution process defined on a Banach space that admits a nonuniform exponential dichotomy. Then we apply Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} to study for which class of perturbations the nonuniform hyperbolicity is preserved. \par Let $X$ and $Y$ be two Banach spaces. Suppose that $A$ is the generator of a $C_0$-semigroup $\{e^{At}: t\geq 0\}$ in $X$ and $B\in \mathcal{L}(Y)$ with $\sigma(A)\subset (-\infty,-\omega)$ and $\sigma(B)\subset (\omega,+\infty)$, for some $\omega >0$, and that there exists $M\geq 1$ such that \begin{eqnarray*} \|e^{A (t-s)}\|_{\mathcal{L}(X)} &\leq& M e^{-\omega(t-s)}, \, t\geq s; \\ \|e^{B (t-s)}\|_{\mathcal{L}(Y)} &\leq& M e^{\omega(t-s)},\, t<s. \end{eqnarray*} \begin{remark} Let $\mathcal{C}$ be the generator of a \textit{hyperbolic $C_0$-semigroup} $\{e^{\mathcal{C}t}: t\geq 0\}$, i.e., the associated evolution process $\{e^{\mathcal{C}(t-s)}: t\geq s\}$ admits a uniform exponential dichotomy with a single projection $Q(t)=Q\in \mathcal{L}(X)$ for every $t\in \mathbb{R}$. Then, there is a decomposition $X=X^u\oplus X^s$ such that $A:=\mathcal{C}|_{X^s}$ and $B:=\mathcal{C}|_{X^u}$ satisfy the conditions above over $X^s$ and $X^u$, respectively, see \cite{Carvalho-Langa-Robison-book,Chow-Leiva-1,Henry-1}. \end{remark} \par Let $\omega>a>0$ and define the linear operator in $Z=X\times Y$ \begin{equation*} \mathcal{A}(t):=\left[ {\begin{array}{cc} A -at \sin(t)Id_{X} & 0 \\ 0 & B + at\sin(t)Id_{Y} \\ \end{array} } \right].
\end{equation*} Consider the differential equation \begin{equation}\label{example-nonuniformED} \dot{z}=\mathcal{A}(t)z, \ \ z(s)=z_s\in Z. \end{equation} \par Then, the evolution process associated with problem \eqref{example-nonuniformED} is defined by \begin{equation*} T(t,s)=(U(t,s), V(t,s)) \end{equation*} where \begin{eqnarray*} U(t,s)&=&e^{A (t-s)} \exp\bigg\{-\int_{s}^{t}a\tau \sin(\tau ) d\tau \bigg\} \hbox{ and}\\ V(t,s)&=&e^{B (t-s)} \exp\bigg\{\int_{s}^{t}a\tau \sin(\tau ) d\tau \bigg\} \end{eqnarray*} are evolution processes in $X$ and $Y,$ respectively. \par The proof of the next result is inspired by Proposition 2.3 of \cite{Barreira-Valls-Sta}. \begin{proposition}\label{th-general-example-of-non-uniform-exp-dichotomy} Let $\mathcal{T}=\{T(t,s):t\geq s\}$ be the evolution process defined above. Then $\mathcal{T}$ admits a nonuniform exponential dichotomy with bound $K(t)=Me^{2a(1+|t|)}$ and exponent $\alpha=\omega-a>0$. \end{proposition} \begin{proof} \par Define the linear operators $P(t)=P_X$ and $Q(t)=P_Y$ for all $t\in \mathbb{R}$, where $P_X$ and $P_Y$ are the canonical projections onto $X$ and $Y$, respectively. Then $T(t,s)P(s)=U(t,s)$ and $T(t,s)Q(s)=V(t,s)$ for all $t\geq s$. \par In this way we have that $P_X$ commutes with $T(t,s)$ for all $t\geq s$, and since $B\in\mathcal{L}(Y)$ generates a group in $Y$ we have that $V(t,s)$ is an isomorphism over $Y.$ Note that, since $\int_{s}^{t}\tau \sin(\tau)\, d\tau=\sin(t)-t\cos(t)-\sin(s)+s\cos(s)$, \begin{eqnarray*} \|U(t,s)\|_{\mathcal{L}(X)}&=&\exp\bigg\{-\int_{s}^{t}a\tau \sin(\tau ) d\tau \bigg\} \|e^{A (t-s)}\|_{\mathcal{L}(X)}\\ &\leq& Me^{-\omega(t-s)+at\cos(t) -as\cos(s) -a\sin(t) +a\sin(s)}. \end{eqnarray*} Now, proceeding as in Proposition 2.3 of \cite{Barreira-Valls-Sta}, we obtain \begin{equation}\label{eq-estimate-for-U} \|U(t,s)\|_{\mathcal{L}(X)}\leq Me^{(-\omega+a)(t-s)+2a|s|+2a}, \hbox{ for } t\geq s. \end{equation} Similarly, we obtain that \begin{equation}\label{eq-estimate-for-V} \|V(t,s)\|_{\mathcal{L}(Y)} \leq Me^{2a+2a|s|} e^{(\omega+a)(t-s)} \hbox{ for } t<s.
\end{equation} Therefore, $\mathcal{T}$ admits a nonuniform exponential dichotomy with bound $K(t)=Me^{2a(1+|t|)}$ and exponent $\alpha=\omega-a>0$. \end{proof} \par We now apply Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} to Example \ref{example-nonuniformED}. \begin{theorem}\label{th-example-application-of-robustness} Consider for each $\epsilon>0$ the operator $B_\epsilon(t)\in\mathcal{L}(Z)$ such that $\|B_\epsilon(t)\|\leq \epsilon e^{-6a |t|}$, and define the operator \begin{equation*} \mathcal{A_\epsilon}(t):=\mathcal{A}(t) +B_\epsilon(t). \end{equation*} If $\omega>3a$, there exists $\epsilon>0$ such that the evolution process associated with the problem \begin{equation}\label{eq-application-section-an-application} \dot{x}=\mathcal{A}_\epsilon(t)x, \ \ x(s)=x_s\in Z, \end{equation} admits a nonuniform exponential dichotomy. \end{theorem} \begin{proof} Let us first prove that the evolution process associated with \eqref{example-nonuniformED} satisfies \begin{equation}\label{eq-norm-evolution-process-last-theorem} \sup_{ 0\leq t-\tau\leq 1} \big\{e^{-\nu |t|}\|T(t,\tau)\|_{\mathcal{L}(Z)} \big\} <+\infty, \end{equation} where $\nu:=2a$. In fact, we have for $t\geq s$ that \begin{equation*} \|T(t,s)\|_{\mathcal{L}(Z)}\leq \|U(t,s)\|_{\mathcal{L}(X)} + \|V(t,s)\|_{\mathcal{L}(Y)}, \end{equation*} where $U$ and $V$ are the evolution processes defined in the proof of Proposition \ref{th-general-example-of-non-uniform-exp-dichotomy}. Then it is enough to prove that each evolution process satisfies \eqref{eq-norm-evolution-process-last-theorem} in the corresponding space. From \eqref{eq-estimate-for-U} we have that \begin{equation*} e^{-2a|t|}\|U(t,s)\|_{\mathcal{L}(X)} \leq Me^{2a+2a(|s|-|t|)} e^{-(\omega-a)(t-s)}. \end{equation*} Thus, we have to analyze the term $e^{2a(|s|-|t|)}$ for $0\leq t-s\leq 1$.
If $t\geq s\geq 0$ or $0\geq t\geq s$ we have $|s|-|t|\leq |s-t|$, so $e^{2a(|s|-|t|)}\leq e^{2a|s-t|}$, which is bounded on the set $\{(t,s): 0\leq t-s\leq 1\}$. Also, if $t\geq 0\geq s$ we have $|s|-|t|=-s-t=(t-s)-2t\leq 1$, so $e^{2a(|s|-|t|)}\leq e^{2a}$, and therefore \begin{equation*} \sup_{ 0\leq t-s\leq 1}\{e^{-2a|t|}\|U(t,s)\|_{\mathcal{L}(X)}\} <+\infty. \end{equation*} Note that $\|e^{B(t-s)}\|_{\mathcal{L}(Y)}\leq \tilde{M}e^{\beta(t-s)}$ for some $\tilde{M}\geq 1$ and $\beta>0$, for every $t\geq s$. Then \begin{equation*} \|V(t,s)\|_{\mathcal{L}(Y)}= \exp\bigg\{\int_{s}^{t}a\tau \sin(\tau ) d\tau \bigg\} \|e^{B (t-s)}\|_{\mathcal{L}(Y)} \leq \tilde{M}^2e^{4a+2a|t|} e^{(\beta+a)(t-s)}, \end{equation*} which implies that \begin{equation*} \sup_{ 0\leq t-s\leq 1}\{ e^{-2a|t|}\|V(t,s)\|_{\mathcal{L}(Y)}\}<+\infty. \end{equation*} \par Now, from Proposition \ref{th-general-example-of-non-uniform-exp-dichotomy}, $\mathcal{T}$ admits a nonuniform exponential dichotomy with bound $K(s)=Me^{2a+2a|s|}$ and exponent $\alpha=\omega-a>0$. Since $\omega>3a$, the exponent $\nu:=2a$ satisfies $\alpha>\nu$, and we apply Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} to conclude that the evolution process generated by \eqref{eq-application-section-an-application} admits a nonuniform exponential dichotomy. \end{proof} \begin{remark} Note that the assumption $\alpha>\nu$ of Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} is expressed in Theorem \ref{th-example-application-of-robustness} by $\omega>3a$. On the other hand, to apply Theorem 1 of \cite{Barreira-Valls-Robustness-noninvertible} the hypothesis must be $\omega>5a$, because their condition is $\alpha>2\nu$. \end{remark} \section{Persistence of nonuniform hyperbolic solutions}\label{section-persistence} \par In this section, we study nonlinear evolution processes associated with a semilinear differential equation.
Inspired by \cite{Carvalho-Langa-Robison-book}, we study persistence of \textit{nonuniform hyperbolic solutions} under perturbation for evolution processes in Banach spaces. More precisely, we use \textit{Green's function} to characterize bounded global solutions for semilinear differential equations and conclude that nonuniform hyperbolic solutions are \textit{isolated} in the set of bounded continuous functions, see Theorem \ref{lemma-non-uniform-hperbolic-solutions-second}. Finally, in Theorem \ref{th-persistence}, we provide conditions to prove that nonuniform hyperbolic solutions persist under perturbations. \par Consider a semilinear differential equation \begin{equation}\label{nonuniform-hyperbolic-solutions-definition} \dot{y}=A(t)y +f(t,y), \ \ y(s)=y_s. \end{equation} Assume that $f$ is continuous in the first variable and locally Lipschitz in the second, and that $A(t)$ is associated with a linear bounded evolution process $\mathcal{T}=\{T(t,s): t\geq s\}$, i.e., for each $s\in\mathbb{R}$ and $x_s\in X$ the mapping $[s,+\infty)\ni t\to T(t,s)x_s$ is the solution of $$\dot{x}=A(t)x, \ x(s)=x_s.$$ Then we have a \textit{local mild solution} for problem \eqref{nonuniform-hyperbolic-solutions-definition}, that is, for each $(s,y_s)\in \mathbb{R}\times X$ there exist $\sigma=\sigma(s,y_s)>0$ and a solution $y$ of the integral equation \begin{equation} y(t,s;y_s)=T(t,s)y_s+\int_{s}^{t}T(t,\tau)f(\tau,y(\tau,s;y_s)) d\tau, \end{equation} for all $t\in [s, s+\sigma)$. \par If for each $(s,y_s)\in \mathbb{R}\times X$ we have $\sigma(s,y_s)=+\infty$, we can consider the evolution process $S_f(t,s)y_s=y(t,s;y_s)$. We refer to $\mathcal{S}_f=\{S_f(t,s):t\geq s\}$ as the evolution process obtained by a nonlinear perturbation $f$ of $\mathcal{T}$. \par Suppose additionally that $f:\mathbb{R}\times X \rightarrow X$ is differentiable with continuous derivatives.
Let $\xi$ be a global solution of $\mathcal{S}_f$ (see Definition \ref{def-global-solution}), and let $\mathcal{L}_f=\{L_f(t,s)\,: t\geq s \}$ be the linearized evolution process of $\mathcal{S}_f$ along $\xi$. Thus $\mathcal{L}_f$ satisfies \begin{equation*} L_f(t,s)=T(t,s)+\int_{s}^{t} T(t,\tau)D_xf(\tau,\xi(\tau)) L_f(\tau,s)d\tau. \end{equation*} \begin{definition} If $\mathcal{L}_f$ admits a nonuniform exponential dichotomy, we say that $\xi$ is a \textbf{nonuniform hyperbolic solution} for $\mathcal{S}_f$. \end{definition} \par In Barreira and Valls \cite{Barreira-Valls-Sta} this notion is called \textit{nonuniformly hyperbolic trajectories}. \begin{remark} We point out that the existence of a nonuniform hyperbolic solution can be obtained by an application of Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq}. For instance, suppose that $\mathcal{T}$ admits a nonuniform exponential dichotomy with bound $K(s)=De^{\nu|s|}$ such that \begin{equation*} \sup_{ 0\leq t-s\leq 1}\big\{e^{-\nu|t|} \|T(t,s)\|_{\mathcal{L}(X)} \big\}<+\infty, \end{equation*} and that $f_\epsilon$ is such that $f_\epsilon(t,\cdot)$ is differentiable with continuous derivatives and $$\sup_{t\in \mathbb{R}}\|e^{3\nu|t|}f_\epsilon(t,\cdot)\|_{C^1(X)}\to 0 \hbox{ as }\epsilon\to 0.$$ Then, for each suitably small $\epsilon>0$, it is possible to obtain a nonuniform hyperbolic solution $\xi_\epsilon$ for $\mathcal{S}_{f_\epsilon}$. \end{remark} \begin{remark}\label{lemma-non-uniform-hperbolic-solutions-first-lemma} Let $\varphi$ be a global solution for $\mathcal{S}_f$. Then \begin{equation}\label{eq-lemma-non-uniform-hperbolic-solutions-first-lemma} \varphi(t)=L_f(t,s)\varphi(s)+\int_{s}^{t}L_f(t,\tau) [f(\tau,\varphi(\tau)) - D_xf(\tau,\xi(\tau)) \varphi(\tau)] d\tau, \ \ t\geq s. \end{equation} In particular, the global solution $\xi$ satisfies the integral equation \eqref{eq-lemma-non-uniform-hperbolic-solutions-first-lemma}.
\end{remark} \par The next result allows us to characterize bounded nonuniform hyperbolic solutions. \begin{theorem} \label{lemma-non-uniform-hperbolic-solutions-second} Assume that there is a global nonuniform hyperbolic solution $\xi$ for $\mathcal{S}_f$ and that $\mathcal{L}_f$ admits a nonuniform exponential dichotomy with bound $K(s)=De^{\nu|s|}$, for all $s\in \mathbb{R}$, and exponent $\alpha>\nu$. If $\varphi$ is a bounded global solution for $\mathcal{S}_f$, then $\varphi$ satisfies \begin{equation*} \varphi(t)=\int_{-\infty}^{+\infty} G_f(t,\tau) [f(\tau,\varphi(\tau)) - D_xf(\tau,\xi(\tau)) \varphi(\tau)] d\tau, \end{equation*} where $G_f$ is the Green function associated with the evolution process $\mathcal{L}_f$, \begin{equation*} G_{f}(t,s)= \left\{ \begin{array}{l l} L_f(t,s)(Id_X-Q(s)), & \quad \hbox{if } t\geq s, \\ -L_f(t,s)Q(s), \, & \quad \hbox{if } t<s. \end{array} \right. \end{equation*} Moreover, if $\xi$ is a bounded nonuniform hyperbolic solution of $\mathcal{S}_f$ and \begin{equation}\label{equation-condition-to-obtain-persisntence-on-f} \rho(\epsilon)= \sup_{\|x\| \leq \epsilon}\sup_{t\in \mathbb{R}} \frac{e^{\nu |t|} \, \|f(t,\xi(t)+x) -f(t,\xi(t) ) -D_xf(t,\xi(t) )x\| }{\|x\|} \to 0, \hbox{ as } \epsilon \to 0, \end{equation} then $\xi$ is \textbf{isolated} in the set of bounded and continuous functions $C_b(\mathbb{R}, X)$, i.e., there is a neighborhood of $\xi$ such that $\xi$ is the only bounded global solution of $\mathcal{S}_f$ with trace inside this neighborhood. \end{theorem} \begin{proof} \par If $\tau> t$ we have that \begin{equation} \varphi(\tau)=L_f(\tau,t)\varphi(t)+\int_{t}^{\tau}L_f(\tau,s) [f(s,\varphi(s)) - D_xf(s,\xi(s)) \varphi(s)] ds. \end{equation} Thus, applying $Q(\tau)$ in the previous equation we obtain \begin{equation} Q(\tau)\varphi(\tau)=L_f(\tau,t)Q(t)\varphi(t)+\int_{t}^{\tau}L_f(\tau,s) Q(s)[f(s,\varphi(s)) - D_xf(s,\xi(s)) \varphi(s)] ds.
\end{equation} Now, using that $L_f(\tau, t)|_{R(Q(t))}$ is invertible, with inverse denoted by $L_f(t,\tau)$, we obtain \begin{equation} L_f(t,\tau)Q(\tau)\varphi(\tau)=Q(t)\varphi(t)+\int_{t}^{\tau}L_f(t,s) Q(s)[f(s,\varphi(s)) - D_xf(s,\xi(s)) \varphi(s)] ds. \end{equation} By the dichotomy estimates we have, for $\tau>t$, \begin{equation*} \|L_f(t,\tau)Q(\tau)\varphi(\tau)\|\leq D e^{\nu |\tau|} e^{\alpha(t-\tau)} \sup_{s\in \mathbb{R}}\|\varphi(s)\|\to 0, \hbox{ as } \tau\to +\infty. \end{equation*} Then \begin{equation} Q(t)\varphi(t)=-\int_{t}^{+\infty}L_f(t,s) Q(s)[f(s,\varphi(s)) - D_xf(s,\xi(s)) \varphi(s)] ds. \end{equation} Similarly, for $t>\tau$, since \begin{equation*} \|L_f(t,\tau)(Id_X-Q(\tau))\varphi(\tau)\|\leq D e^{\nu |\tau|} e^{-\alpha(t-\tau)} \sup_{s\in \mathbb{R}}\|\varphi(s)\| \to 0, \hbox{ as } \tau\to -\infty, \end{equation*} we obtain \begin{equation} (Id_X-Q(t))\varphi(t)=\int_{-\infty}^{t}L_f(t,s) (Id_X-Q(s))[f(s,\varphi(s)) - D_xf(s,\xi(s)) \varphi(s)] ds. \end{equation} Therefore, the result follows by writing $\varphi(t)=(Id_X-Q(t))\varphi(t)+Q(t)\varphi(t)$ and using the previous expressions. \par Finally, if $\varphi$ is a bounded solution of $\mathcal{S}_f$ with $\sup_{t\in \mathbb{R}}\|\varphi(t)-\xi(t)\|\leq \epsilon$, then \begin{equation*} \sup_{t\in \mathbb{R}} \|\varphi(t) -\xi(t)\| \leq 2 D\rho(\epsilon) \alpha^{-1} \sup_{t\in \mathbb{R}} \|\varphi(t) -\xi(t)\|. \end{equation*} For $\epsilon>0$ such that $2 D\rho(\epsilon)\alpha^{-1}<1 $ we see that $\varphi(t)=\xi(t)$ for all $t\in \mathbb{R}$. \end{proof} \begin{remark} The Green function $G_f$ satisfies \begin{equation*} \|G_f(t,s)\|_{\mathcal{L}(X)}\leq De^{\nu|s|} e^{-\alpha|t-s|}, \end{equation*} for all $t,s\in \mathbb{R}$. \end{remark} \par Now, as an application of Theorem \ref{th-roughness-continuous-TED}, we prove a result on the \textit{persistence of nonuniform hyperbolic solutions}.
\begin{theorem}[Persistence of nonuniform hyperbolic solutions] \label{th-persistence} Let $f:\mathbb{R}\times X\rightarrow X$ be continuous with continuous first derivatives, let $\mathcal{T}$ be a linear evolution process, and let $\mathcal{S}_f$ be the evolution process generated by $f$ and $\mathcal{T}$. Assume that \begin{enumerate} \item $\mathcal{T}$ satisfies \begin{equation}\label{th-hypothesis-1} \sup_{ 0\leq t-s\leq 1}\{e^{-\nu|t|} \|T(t,s)\|_{\mathcal{L}(X)} \}<+\infty, \end{equation} \item there is a global nonuniform hyperbolic solution $\xi$ for $\mathcal{S}_f$, i.e., $\mathcal{L}_f$ admits a nonuniform exponential dichotomy with bound $K(s)=De^{\nu|s|}$, for all $s\in \mathbb{R}$, and exponent $\alpha>\nu$; \item $\xi$ is bounded with $\sup_{t\in \mathbb{R}}\|\xi(t)\|\leq M$; \item $f$ satisfies Condition \eqref{equation-condition-to-obtain-persisntence-on-f}; \item the derivative of $f$ is controlled along $\xi$, i.e., \begin{equation*} \sup_{s\in \mathbb{R}}\sup_{\|x\|\leq M}\{e^{\nu |s|} \|D_xf(s,x)\|_{\mathcal{L}(X)}\} <+\infty; \end{equation*} \item $g:\mathbb{R} \times X \rightarrow X$ is differentiable with continuous derivatives and such that \begin{equation}\label{{th-persistence-hypothesis-4}} \sup_{\|x\|\leq M} \|f(t,x)-g(t,x)\|_X + \|D_xf(t,x)-D_xg(t,x)\|_{\mathcal{L}(X)}<e^{-3\nu |t|} \min\bigg\{\, \frac{\epsilon}{4D\alpha^{-1}},\delta\, \bigg\}, \end{equation} where $\delta>0$ is the same as in Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq}. \end{enumerate} Then there exists a unique nonuniform hyperbolic solution $\psi$ for $\mathcal{S}_g$ such that \begin{equation*} \sup_{t\in \mathbb{R}} \|\xi(t)-\psi(t)\|<\epsilon.
\end{equation*} \end{theorem} \begin{proof} If $y$ is a global bounded solution for $\mathcal{S}_g$, then, as in Remark \ref{lemma-non-uniform-hperbolic-solutions-first-lemma}, we have that \begin{equation} \begin{split} y(t)=L_f(t,s)y(s)+\int_{s}^{t}L_f(t,\tau) [g(\tau,y(\tau)) - D_xf(\tau,\xi(\tau)) y(\tau)] d\tau,\\ \xi(t)=L_f(t,s)\xi(s)+\int_{s}^{t}L_f(t,\tau) [f(\tau,\xi(\tau)) - D_xf(\tau,\xi(\tau)) \xi(\tau)] d\tau. \end{split} \end{equation} Thus $\phi(t)=y(t)-\xi(t)$ satisfies the following integral equation \begin{equation}\label{equation-5112} \phi(t)=L_f(t,s)\phi(s)+\int_{s}^{t}L_f(t,\tau) h(\tau,\phi(\tau)) d\tau, \end{equation} where $h(t,\phi(t))=g(t,\phi(t)+\xi(t)) -f(t,\xi(t))-D_xf(t,\xi(t))\phi(t)$. \par Then, by Theorem \ref{lemma-non-uniform-hperbolic-solutions-second}, there exists a bounded solution of \eqref{equation-5112} in \begin{equation*} B_\epsilon:=\{\phi :\mathbb{R}\rightarrow X \, : \phi \hbox{ is continuous and } \sup_{t\in \mathbb{R}} \|\phi(t)\|<\epsilon\} \end{equation*} if and only if the operator \begin{equation*} (\mathcal{F}\varphi)(t)=\int_{-\infty}^{+\infty} G_f(t,s) h(s,\varphi(s)) ds \end{equation*} has a fixed point in the space $B_\epsilon$. \par Now, we use the fact that $\mathcal{L}_f$ admits a nonuniform exponential dichotomy to show that $\mathcal{F}$ has a unique fixed point in $B_\epsilon$ for suitably small $\epsilon>0$. In order to use the Banach fixed point theorem, we have to prove that $\mathcal{F}$ is a contraction and that $\mathcal{F}B_\epsilon\subset B_\epsilon$.
\par First, for $\phi \in B_\epsilon$ we have \begin{eqnarray*} \| (\mathcal{F} \phi)(t)\|_X &\leq& D \int_{-\infty}^{+\infty} e^{\nu |s|} e^{-\alpha|t-s|} \|h(s,\phi(s))\|_X ds\\ &\leq& 2D\alpha^{-1} \sup_{t\in \mathbb{R}} e^{\nu|t|} \, \|g(t,\xi(t)+\phi(t))-f(t,\xi(t)+\phi(t))\|_X\\ &+&2D\alpha^{-1} \epsilon \sup_{\|x\|\leq \epsilon} \sup_{t\in \mathbb{R}} \frac{ e^{\nu|t|} \,\|f(t,\xi(t)+x)-f(t,\xi(t))-D_xf(t,\xi(t))x\|_X }{\|x\|_X} \\ &\leq & \epsilon/2+2\alpha^{-1}D\rho(\epsilon) \epsilon. \end{eqnarray*} Thus, choosing $\epsilon>0$ such that $4\alpha^{-1}D\rho(\epsilon)<1$, we see that $\mathcal{F}\phi\in B_\epsilon$. Now, we show that $\mathcal{F}$ is a contraction. In fact, with similar computations we are able to prove for $\phi_1,\phi_2\in B_\epsilon$ that \begin{equation*} \| (\mathcal{F} \phi_1)(t)-(\mathcal{F} \phi_2)(t)\|_X \leq \frac{1}{2} \sup_{t\in \mathbb{R}} \|\phi_1(t)-\phi_2(t)\|_X. \end{equation*} Therefore, there is a unique fixed point $\phi$ in $B_\epsilon$, and $\psi:=\phi+\xi$ is a global solution of $\mathcal{S}_g$. \par Finally, we prove that $\psi$ is a nonuniform hyperbolic solution, that is, that the linear evolution process $\mathcal{L}_g:=\{L_g(t,s): t\geq s\}$ that satisfies \begin{equation*} L_g(t,\tau)=T(t,\tau)+\int_{\tau}^{t} T(t,s) D_xg(s,\psi(s)) L_g(s,\tau) ds \end{equation*} admits a nonuniform exponential dichotomy. \par To that end, we show that $\mathcal{L}_f$ satisfies the conditions of Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} and view $\mathcal{L}_g$ as a small perturbation of $\mathcal{L}_f$. Indeed, since $\mathcal{T}$ satisfies \eqref{th-hypothesis-1} and \begin{equation*} L_f(t,s)=T(t,s)+\int_{s}^{t} T(t,\tau)D_xf(\tau,\xi(\tau)) L_f(\tau,s)d\tau, \end{equation*} from Gronwall's inequality and assumption (5) we see that \begin{equation*} \sup_{ 0\leq t-\tau\leq 1} \{e^{-\nu|t|}\,\|L_f(t,\tau)\|_{\mathcal{L}(X)}\} <+\infty.
\end{equation*} \par To complete the proof, note that \begin{equation*} L_g(t,\tau)=L_f(t,\tau)+\int_{\tau}^{t} L_f(t,s) [D_xf(s,\xi(s))-D_xg(s,\psi(s))] L_g(s,\tau)ds. \end{equation*} Now, define $B(s):=D_xf(s,\xi(s))-D_xg(s,\psi(s))$ for all $s\in \mathbb{R}$. Then, by hypothesis \eqref{{th-persistence-hypothesis-4}}, we use Theorem \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq} to conclude that $\psi$ is a nonuniform hyperbolic solution of $\mathcal{S}_g$. \end{proof} \section{Conclusions} \par The discretization method, Theorems \ref{th-continuousTED-implies-discreteTED} and \ref{th-discrete-dichotomy-implies-continuous-dichotomy}, allowed us to compare continuous and discrete dynamical systems that exhibit nonuniform hyperbolicity. This approach was already known in the case of uniform exponential dichotomies, see for example \cite{Chow-Leiva-1,Henry-1}, and in this work we established it for the nonuniform case. Through this procedure we obtain: \begin{enumerate} \item Uniqueness of the family of projections: Corollary \ref{cor-uniqueness-projection-continuous}. \item Continuous dependence of projections: Theorem \ref{th-continuous-depende-projections}. \item Robustness of nonuniform exponential dichotomies: Theorems \ref{th-roughness-continuous-TED} and \ref{th-perturation-of-nununiform-exp-dcihotomy-for-differential-eq}. \end{enumerate} \par A disadvantage of the discretization method is that it is not possible to consider nonlinear growth rates $\rho(t)$, as in Barreira and Valls \cite{Barreira-Valls-Robustness-noninvertible}. On the other hand, it was possible to prove the robustness result under the assumption $\alpha>\nu$, which seems to be the sharpest one. In fact, if the exponent $\nu$ in the bound $De^{\nu|s|}$ is greater than or equal to the exponent $\alpha>0$, we do not know whether the robustness results hold true.
\par The continuous dependence of projections and the persistence of hyperbolic solutions play an important role in the study of continuity of local unstable and stable manifolds for an associated nonlinear evolution process. In \cite{Carvalho-Langa} the authors use these permanence results to conclude continuity of pullback attractors under perturbation. On the other hand, it is not yet clear how to apply the results on stability of nonuniform hyperbolicity in the theory of attractors. However, the persistence of nonuniform hyperbolic solutions and the continuous dependence of projections should be important to study the continuity of invariant manifolds associated to nonuniform hyperbolic solutions. This, in turn, will be crucial in a possible application to the theory of attractors. \section*{Acknowledgments} \par This work was carried out while Alexandre Oliveira-Sousa visited the Dpto. Ecuaciones Diferenciales y An\'alisis Num\'erico (EDAN), Universidad de Sevilla and he wants to acknowledge the warm reception from people of EDAN. We acknowledge the financial support from the following institutions: T. Caraballo and J. A. Langa by Ministerio de Ciencia, Innovaci\'on y Universidades (Spain), FEDER (European Community) under grant PGC2018-096540-B-I00, and by Proyecto I+D+i Programa Operativo FEDER Andaluc\'{\i}a US-1254251; A. N. Carvalho by S\~ao Paulo Research Foundation (FAPESP) grant 2018/10997-6, CNPq grant 306213/2019-2, and FEDER - Andalucía P18-FR-4509; and A. Oliveira-Sousa by S\~ao Paulo Research Foundation (FAPESP) grants 2017/21729-0 and 2018/10633-4, and CAPES grant PROEX-9430931/D. \bibliographystyle{abbrv} \bibliography{Bibliografia3} \end{document}
Climate Action in the Humanist Community A post by Javan Lev Poblador for "Humanist Voices", the blog of Young Humanists International Javan Lev Poblador is the Young Humanists International Coordinator and the Chief Executive of the Humanist Alliance Philippines, International. Before joining Humanists International, Javan had been active in community work focusing on environmental conservation, climate justice, human rights, and civic youth engagement advocacies since 2012. This blog was originally published on Humanistically Speaking. Events in the month of November 2013 still continue to haunt Filipinos. I still remember strong winds buffeting the roof of our house, rivers overspilling and flooding roads, and a blackout that lasted for five days. But it wasn't until the power came back and I turned on the TV that I realized how horrendous it was for other parts of the Philippines. Bodies piling up on the sides of the roads, homes submerged in floodwater, and lost family members swept away by the storm surge. The climate crisis has already progressed from a scientific observation to a real, everyday phenomenon that affects how we live or die. It's not a question anymore of whether it's real or not, but whether we will do anything about it. Although the climate crisis is the greatest threat to face modern humans, not everyone is affected in the same way. The sad reality is that some of us will have to fight harder than others, simply for being on the front line of the effects of climate change, all in the pursuit of climate justice. And it's not hard to see why, in recent years, young people all around the world have begun to fight back on a never-before-seen magnitude. As humanists, we have a duty of care to all of humanity including future generations, and we recognize our dependence on and responsibility for the natural world (Amsterdam Declaration 2002). 
Humanists International has also put on record at the UN that the world must wake up to the science and curb the impacts of the climate crisis through its Reykjavik Declaration on the Climate Change Crisis. If there's one thing I learned from my work as an environmental journalist, it's that environmental rights are also human rights and all rights are interconnected. But I'm not the only one to have this view. I've asked young humanists in other countries about Humanism and climate change. Rebekka Hill from Young Humanists UK said that, as humanists, "…we trust science and fight unnecessary suffering." Wonderful Mkhutche from Humanists Malawi and Gerardo Chaparro from Humanists of Puerto Rico both agreed that we owe it to future generations to leave a liveable planet, and they emphasized the importance of taking urgent action on climate change. And Sasa AguilaAguire from Humanist Alliance Philippines, International (HAPI) concluded that all the 'progress' we have made as a civilization will soon be swept away if we do nothing now to mitigate the climate crisis. Unwilling to sit idly by, we are taking all this rage, frustration, and passion and turning it into climate action. As the Young Humanists International Coordinator, together with my team and the support of other humanist organizations, we have launched Young Humanist Climate Action. This is a long-running campaign of Young Humanists International in the pursuit of climate justice which aims: This doesn't mean that the humanist movement has not done anything prior to this project. This campaign will only further solidify the existing climate advocacies of other humanist organizations, with a greater focus on young humanists. To name a few: Belgian humanists deMens.nu, through #BackToTheClimate, call for more climate policies and for states to limit global temperature rise to 1.5°C. Humanist Society Scotland launched EcoHumanism, which champions present and future generations that have no voice. 
And Think School has released fourteen open-license videos in a series on the climate crisis. The recent IPCC report released in August paints a bleak picture for our planet's future that left many of us overwhelmed. But hope is not lost. The Paris Agreement's 1.5°C warming limit is still achievable if there is strong global action and we act fast. A huge piece of the climate solution lies in cutting down our greenhouse gas emissions and divesting from fossil fuels, but there are many pieces of the jigsaw: addressing agriculture, environmental conservation, and the choices we make every day. Yes, it's a long shot, and it calls for the cooperation of every person on the planet. However, our response to the COVID-19 pandemic has shown us that drastic changes for the better are possible. But we can no longer afford any more delay.
ModSecurity is an open source, embeddable web application firewall and intrusion detection and prevention engine for web applications. It protects against a range of attacks on web applications and allows HTTP traffic monitoring and real-time analysis with no changes to existing infrastructure, operating either as the Apache web server module mod_security or standalone, thereby increasing web application security.
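For orientation, here is a minimal sketch of what embedding ModSecurity in an Apache configuration can look like. The module path, the rule ID (1001), and the rule pattern below are illustrative assumptions for the example, not taken from this text; real deployments typically load a full rule set instead of a single rule:

```apache
# Load the ModSecurity module (the exact path varies by distribution).
LoadModule security2_module modules/mod_security2.so

<IfModule security2_module>
    # Turn the rule engine on; "DetectionOnly" would log without blocking.
    SecRuleEngine On

    # Hypothetical example rule: deny requests whose query arguments
    # contain a <script> tag, answering with HTTP 403.
    SecRule ARGS "@contains <script>" "id:1001,phase:2,deny,status:403,msg:'XSS probe'"
</IfModule>
```

With a configuration like this, ModSecurity inspects each request inline as an Apache module, which is what lets it protect an application without changing the application itself.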
It may be lovely, but Sony's under-powered P-series costs too much and suffers from poor battery life. Sony insists that its P-series isn't a netbook but, apart from the price tag, it's hard to deny that there are some striking similarities. The processor, for instance, is an Intel Atom Z520 running at 1.33GHz, and only the 2GB of memory sets it apart from the rest of the pack. The extra memory does little to help performance, however. The combination of Vista Home Premium and the low-powered Atom left our benchmarks limping to the lowest score on test: a pitiful 0.16. Spending a few minutes with the Sony was enough to make the XP-powered netbooks feel turbo-charged. The P-series is incredibly slim and light - weighing just 618g - and even manages to feel relatively sturdy despite its insubstantial figure. That fine figure doesn't, however, leave much room for the essentials. As the least expensive model in the P-series family, the VGN-P13GH/Q has only a tiny three-cell battery - a choice that, despite the modest processor, provides just 3hrs 11mins of light use. The provision of a 4200rpm, 60GB hard disk is pretty meagre, too, and partly responsible for the poor performance. But there are some highlights: draft-n wireless, Gigabit Ethernet and a 3G modem give the P-series the lead when it comes to networking, for example. It isn't hard to be a little impressed by the P-series. The keyboard doesn't have much in the way of feel, and the 1600 x 768 display leaves text painfully small on occasion, but the fact that Sony has managed to squeeze in a usable keyboard and high-resolution display is pretty impressive. Alas, it just isn't enough to make us want to spend nearly $1400 for the privilege. This review appeared in November 2009.
More and more you'll see yourself exposed to a conversation where you meet with a techno architect, a visual artist, a musician or an industrial designer. For a start, you and your tech knowledge will allow you to understand some of the jargon and somehow be accepted by the new age professionals until they start with their generative design trade-offs. For those out there who don't know, generative design is a field of CGI where images or animations are generated by a computer algorithm. Sounds familiar? Maybe the golden number, Pi the movie, or Fibonacci rings a bell. Well, in few words: a set of rules with some sort of variations that, if controlled, can generate amazing patterns impossible to replicate without the help of an "ordinateur". In a way, many of the functions of our 3D software or even AE are designed through algorithms that replicate certain "natural" functions in digital. Say the physics of MoGraph, Trapcode, you name it. Check out this awesome lab specialized in generative design: Onformative. You'll trip with their achievements. If this is your thing check out our code and interactive tags, many posts about this type of design. Peace.
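To make the "set of rules with variations" idea concrete, here's a tiny sketch (in Python, my pick for the example, not from the post) of one classic generative rule: placing points by the golden angle, the phyllotaxis pattern you see in sunflower heads. The function name and parameters are just for this illustration; nudge `angle` by a fraction and the whole pattern reorganizes, which is exactly the controlled-variation game:

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~137.5 degrees, in radians

def phyllotaxis(n_points, angle=GOLDEN_ANGLE, spread=1.0):
    """Generate (x, y) points by the sunflower rule:
    point i sits at angle i*angle and radius spread*sqrt(i)."""
    points = []
    for i in range(n_points):
        r = spread * math.sqrt(i)
        theta = i * angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = phyllotaxis(500)
# Feed these coordinates to any drawing library (or your 3D/AE scripting
# of choice) and vary `angle` slightly to explore the family of patterns.
print(len(pts), pts[0])
```

The whole "pattern" is three lines of rule; everything interesting comes from sweeping the parameters.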
While giving birth is a relatively safe procedure in the US, it still is not without risk to both the mother and the infant. One of the most frightening risks for soon-to-be-parents is the risk of a birth injury. Birth injuries are any type of injury or trauma that may occur during labor and delivery. By definition a birthmark is a type of birth injury. While birthmarks are harmless, other types of birth injuries are not and may adversely affect the health of the infant. Some examples of more serious birth injuries include: - Birth asphyxia: occurs when there is inadequate intake of oxygen, which can be caused during birth if the umbilical cord becomes wrapped around the baby's neck, cutting off the oxygen supply. Birth asphyxia can cause long-term damage to the brain and other internal organs and even death, depending on how long the oxygen supply is cut off. - Cerebral palsy: a non-progressive disorder that affects the parts of the brain that control muscle movement and coordination. Cerebral palsy can be caused before, during or after birth when oxygen is cut off to the baby's brain. - Meconium aspiration syndrome: occurs when the baby inhales meconium (i.e. the baby's first bowel movement) and amniotic fluid into the lungs during birth. It is the leading cause of illness and death in newborns in the US and normally happens in cases when the fetus is stressed during delivery, which can occur if the baby is past his or her due date. - Erb's palsy: nerve damage that causes paralysis in the baby's arm, hand and/or fingers. It occurs when the network of nerves near the baby's neck is stretched, which can occur during birth if too much force is used to remove the baby from the birth canal. It also can occur if the baby becomes lodged behind the mother's pelvis, requiring the use of forceps or vacuum to remove him or her. 
Most children will fully recover from the nerve damage, but some children will suffer permanent injury. - Fractures: clavicle, or collarbone, fractures are the most common in newborns, but other bones also can be broken during delivery. Birth Injuries and Medical Malpractice While most birth injuries are relatively minor and will not result in long-term harm to the baby, some of them are very serious and may impair the baby's physical and mental development, and in some cases, even result in death. It is estimated that as many as 2% of all birth injuries are the result of medical malpractice. Doctors who provide prenatal care to pregnant women and oversee their labor and delivery are responsible for taking reasonable steps to protect the health of the mother and baby. This includes taking appropriate action to protect and treat the mother for infection and illness while pregnant and monitoring the vital signs of mother and fetus during labor and delivery. When the treating physician is alerted of changes in the vital signs, then the physician has a responsibility to take timely, appropriate steps to protect the mother and infant. This may include inducing labor or performing a C-Section, for example. 
Obstetricians and the attending medical staff may commit medical malpractice when they: - Fail to treat the mother for infection during pregnancy - Fail to anticipate birth complications and take appropriate steps to correct or minimize the harm - Fail to respond to bleeding before or during labor - Fail to monitor the baby's vital signs, including heart rate and oxygen - Fail to respond to signs of fetal distress - Fail to take appropriate action when the fetus becomes entangled in the umbilical cord - Fail to perform or delay performance of an emergency c-section - Give the mother too much pitocin, a drug used to induce labor, causing harm to the baby - Use too much force removing the baby from the birth canal, either manually or with the aid of a vacuum or forceps, resulting in injury to the baby Obstetricians are not the only ones who may act negligently and cause a birth injury. Midwives, nurses and other hospital staff responsible for monitoring the mother during labor and the child after delivery also can cause birth injuries. For more information on your legal rights following a birth injury, contact an experienced medical malpractice attorney.
TITLE: linear algebra - eigenvalues/vectors & diagonalization QUESTION [0 upvotes]: $$R(θ) = \begin{pmatrix} \cosθ & -\sinθ \\ \sinθ & \cosθ \end{pmatrix}$$ $0 < θ < π$ Now, I understand that this matrix does not have any eigenvectors/eigenvalues over $\mathbb R$ (but it does over the complex field), but how do I show this by geometry? And one more question - is this matrix an orthogonal matrix? Thank you. REPLY [1 votes]: First of all, the matrix does have eigenvalues, but they are not real numbers. If you are comfortable with complex numbers you can use the usual methods to show that the eigenvalues are $e^{\pm i\theta}$. The matrix $R(\theta)$ represents the rotation of a vector by an angle $\theta$ with $0<\theta<\pi$. If it has a real eigenvalue $\lambda$ with eigenvector $\bf v$, then by definition $R(\theta)\bf v=\lambda \bf v$. That is, rotating $\bf v$ through the angle $\theta$ gives the same result as multiplying $\bf v$ by a real scalar. If you draw a diagram, it is geometrically obvious that this is impossible. The matrix is orthogonal because if you multiply it by its transpose you get the identity matrix.
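As a quick numerical sanity check of both points in the answer (this uses NumPy and is an addition, not part of the original exchange):

```python
import numpy as np

theta = np.pi / 3  # any angle with 0 < theta < pi works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The eigenvalues come out as the complex conjugate pair e^{+i*theta},
# e^{-i*theta} -- neither is real, matching the geometric argument.
eigvals = np.linalg.eigvals(R)
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
print(np.sort_complex(eigvals))

# Orthogonality: R^T R equals the 2x2 identity matrix.
print(np.allclose(R.T @ R, np.eye(2)))
```

Running this for any angle in $(0, \pi)$ shows a nonzero imaginary part in both eigenvalues, which is the algebraic shadow of "a rotation can't just scale a real vector".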
Midwest Veterinary Supply is a family/employee-owned, full-line distributor with 6 branch locations in Minnesota, Iowa, Wisconsin, Indiana, Pennsylvania and Texas. Our broad range of services has everything you need to run a successful practice. Stop by our booth and find out what we can start doing for you. 21467 Holyoke Avenue Lakeville MN, 55044 United States Website:
\begin{document} \title{Hodge filtrations on tempered Hodge modules} \date{June 18, 2022} \author{Dougal Davis}\address{School of Mathematics, University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United Kingdom.} \email{dougal.davis@ed.ac.uk} \thanks{DD was supported by the EPSRC programme grant EP/R034826/1} \author{Kari Vilonen}\address{School of Mathematics and Statistics, University of Melbourne, VIC 3010, Australia, also Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland} \email{kari.vilonen@unimelb.edu.au, kari.vilonen@helsinki.fi} \thanks{KV was supported in part by the ARC grants DP180101445, FL200100141 and the Academy of Finland} \subjclass[2020]{14F10; 22E46; 32S35} \maketitle \section{Introduction} In this companion paper to~\cite{DV} we show that the Hodge filtration of a tempered Hodge module is generated by the lowest piece of its Hodge filtration, i.e., it is, in some sense, as simple as possible. To explain the statement in more detail and to put it in some context, let us recall that in~\cite{schmid-vilonen} Schmid and the second author initiated a program and formulated a series of conjectures postulating the existence of a (mixed) Hodge structure on irreducible and standard Harish-Chandra modules. These Hodge structures should arise by taking global sections of the appropriate mixed Hodge modules. One motivation for the conjectures is that they are expected to help in understanding the unitary dual of reductive Lie groups. In~\cite{DV} we confirmed that results about unitarity can indeed be obtained from the mixed Hodge modules by deriving a key theorem in~\cite{ALTV} as a consequence of a stronger result. By Beilinson-Bernstein localization we can associate to Harish-Chandra modules their corresponding Harish-Chandra sheaves. Thus we can talk about tempered sheaves etc. 
As is explained in~\cite{DV}, for example, when the infinitesimal character is real the irreducible modules carry a canonical Hodge module structure. In general this Hodge structure can be complicated. However, in this paper we prove the following result, which is Theorem~\ref{main} in the main body of the paper: \begin{thm*} The Hodge filtration of a tempered Hodge module is generated by the lowest piece of its Hodge filtration as a filtered $\cD$-module. \end{thm*} As is explained in section~\ref{thc}, tempered Hodge modules are a mixture of tempered Hodge modules for spherical principal series of split groups and Hodge modules associated to closed orbits. In the latter case, the proof follows easily from the definition of push-forward for filtered $\cD$-modules. In the former case a crucial ingredient is the fact that the minimal $K$-types lie in the lowest piece of the Hodge filtration, one of the main results in~\cite{DV}. As a consequence of the theorem above we obtain \begin{thm*} The conjectures in~\cite{schmid-vilonen} hold in the tempered case. \end{thm*} This result is stated as Theorem~\ref{signature} and we refer the reader there for a more precise statement. The authors thank Wilfried Schmid for his contributions to this paper. \section{Tempered Harish-Chandra sheaves} \label{thc} It is convenient for us to follow the basic set up of~\cite{DV}, which we recall here briefly. We will work in the context of Harish-Chandra modules. Let us fix a complex reductive group $G$ and an involution $\theta$ of $G$. We write $K=G^\theta$ for the fixed point group. We will always use lower case Gothic letters to denote the corresponding Lie algebras. On the level of Lie algebras we have the Cartan decomposition $\fg = \fk \oplus \fp$ into eigenspaces of $\theta$. If $B$ is a Borel and $N$ is its unipotent radical then $H=B/N$, the universal Cartan, is independent of the choice of $B$ and it comes equipped with a canonical root system. 
In what follows we will always consider the roots in $B$ to be negative. We write $\HC(\fg,K)$ for the category of Harish-Chandra modules of the pair $(\fg,K)$, and $\HC(\fg,K)_\l$ for the full subcategory of Harish-Chandra modules with infinitesimal character $\chi_\l$, which is associated to $\l\in\fh^*$ under the Harish-Chandra homomorphism. For convenience we deviate from the notation in~\cite{DV} and use Harish-Chandra's notation. In particular, $\HC(\fg,K)_\rho$ contains the trivial representation and $\l=0$ corresponds to the most singular infinitesimal character and is the center of the action of the Weyl group $W$. Let us write $\cB=G/B$ for the flag manifold of $G$. Associated to $\l$ we have the sheaf of twisted differential operators $\cD_\l$ on $\cB$. We write $\HC(\cD_\lambda,K)$ for the category of Harish-Chandra sheaves, i.e., for the category of $K$-equivariant $\cD_\l$-modules on $\cB$. If the parameter $\lambda$ is dominant then, according to Beilinson-Bernstein, each irreducible Harish-Chandra module $M$ is obtained as global sections of a unique irreducible Harish-Chandra sheaf $\cM$. We call an irreducible Harish-Chandra sheaf tempered if the associated representation is. The tempered Harish-Chandra modules were first classified in~\cite{KZ1982}. A geometric classification, which we recall below, is given in~\cite{HMSWII}. An irreducible Harish-Chandra sheaf $\cM$ is an intermediate extension of a rank one $\lambda$-twisted local system $\gamma$ on a $K$-orbit $Q$, i.e., $\cM=j_{!*}\gamma$ for $j:Q \to \cB$ the inclusion. Let us further recall that we call $\cM=j_{!*}\gamma$ {\it clean} if it coincides with $j_!\gamma$ and hence with $j_*\gamma$. Let us recall that associated to the orbit $Q$ there is a $\theta$-stable Cartan $T$ such that $T$ has a fixed point in $Q$. In this way we can also think of $\gamma$ as a one dimensional Harish-Chandra module for the pair $(\fh,T^\theta)$ once we identify $T$ with $H$ using the chosen fixed point. 
We decompose $\ft$ under $\theta$ into its eigenspaces as $\ft=\fa\oplus\fc$ with $\fa$ being the $(-1)$-eigenspace and $\fc$ the $(+1)$-eigenspace. Then such a one dimensional module consists of a pair $\lambda\in\fh^*$, $\Lambda:T^\theta\to \bC^*$ such that $d\Lambda + \rho= \lambda|_\fc$. We now impose the condition that the infinitesimal character $\lambda\in\fh^*_\bR$ is real. This is the basic case as the other tempered representations can be obtained by simply moving the parameter $\nu$ to the imaginary direction. Under this condition, for $\cM$ to be tempered we must have \begin{equation*} \la|_\fa = 0 \ \ \ \text{and} \ \ \ \text{$\cM=j_{!*}\gamma$ is clean}\,. \end{equation*} Thus, we have $d\Lambda = \lambda$. Furthermore, cleanness poses a condition on the orbit $Q$: for any complex positive root $\alpha$ the root $\theta\alpha$ has to also be positive. Now, let $\operatorname{Z}_\fg(\fc)=\fl\oplus \fc$ and let us write \begin{equation} \fv\ = \ \bigoplus_{\substack{\alpha \in \Phi^+(\fg,\ft) \\ \alpha|_\fc\neq 0}}\fg_\alpha\,. \end{equation} This gives us a parabolic \begin{equation*} \fq_L\ = \ \fl\oplus \fc\oplus \fv\,. \end{equation*} Let us first consider the projection \begin{equation*} p:\cB \to \cB_{Q_L}\,, \ \ \text{$\cB_{Q_L}$ the generalized flag manifold of parabolics of type $\fq_L$}\,. \end{equation*} The parabolic $\fq_L$ is $\theta$-stable and hence the image of $Q$ in $\cB_{Q_L}$ is closed. The fiber of $p$ is the flag manifold $\cB_L$ of $L$. Furthermore, $\fa$ is a maximal torus in $L$ and hence $(L,\theta)$ is split and the orbit $Q\cap \cB_L$ is the open orbit in $\cB_L$. \section{Tempered Hodge modules} As in~\cites{DV,schmid-vilonen}, we work in this section in the context of twisted mixed Hodge modules. In the general set-up in~\cite{DV} we work in the context of complex Hodge theory as in~\cite{SS}, but in the tempered case we treat here working with mixed Hodge modules of Saito~\cites{S1,S2} would also suffice. 
The category $\HC(\cD_\l,K )$ has a mixed Hodge module version $\HCH(\cD_\l,K )$ (denoted by $\mrm{MHM}_{\lambda - \rho}(K \bslash \mc{B})$ in \cite{DV}) if the parameter $\l\in\fh^*_\bR=\bR\otimes_\bZ \mb{X}^*(H)$, which we assume from now on; here $\mb{X}^*(H)=\Hom(H,\bC^*)$ is the character lattice. An irreducible Harish-Chandra sheaf $\cM$ is an intermediate extension of a rank one $\lambda$-twisted local system $\gamma$ on a $K$-orbit $Q$, i.e., $\cM=j_{!*}\gamma$ for $j:Q \to \cB$ the inclusion. It has a unique lift to $\HCH(\cD_\l,K )$ where we declare it to be of weight $\dim Q$ and the Hodge filtration is given by \begin{equation*} \begin{aligned} F_p \gamma \ &= \ \begin{cases}\ 0\ \ &p<0\ , \\ \ \gamma \ \ \ &p\geq 0\ .\end{cases} \end{aligned} \end{equation*} Assume that the infinitesimal character $\la$ is dominant and real. We can now state our main result: \begin{thm} \label{main} Let us write $c=\codim Q$. The Hodge filtration of the irreducible tempered Harish-Chandra sheaf $j_{!*}(\gamma)$ is generated by $F_cj_{!*}(\gamma)$. \end{thm} We also prove the following result, which is a special case of a conjecture of Schmid and the second author \cite{schmid-vilonen}. \begin{thm} \label{signature} In the context of Theorem \ref{main}, let $S$ be the polarization on $j_{!*}\gamma$ and $\Gamma(S)$ the induced Hermitian form on the irreducible tempered $(\fg, K)$-module $V := \Gamma(j_{!*}\gamma)$ (see e.g., \cite[\S 4.3]{DV}). Then the form \[ \Gamma(S)|_{F_p V \cap (F_{p-1} V)^\perp} \] is $(-1)^{p - c}$-definite for all $p$. \end{thm} \section{Tempered Hodge modules for split groups} \label{split} In this section we consider a special case of tempered Hodge modules $\cM$ of spherical principal series associated to split groups. Then $\cM$ comes from a local system on the open orbit $Q$ and by the considerations in section~\ref{thc} we know that the infinitesimal character is zero, i.e., $\cM$ is a $\cD_0$-module and that it is clean. 
This forces $\cM$ to be self dual as a $\cD_0$-module and hence it is self dual as a Hodge module. Here we recall that $\cD_0^{op}=\cD_0$. The corresponding spherical Harish-Chandra module $M=\Gamma(\cB,\cM)$ is isomorphic, as a $(\fg,K)$-module, to $U(\fg)_0 \otimes_{U(\fk)} \bC_0$ where $U(\fg)_0$ stands for the quotient of $U(\fg)$ on which the center acts by the infinitesimal character zero and $\bC_0$ is the trivial representation of $K$. We have a corresponding description of $\cM$ as \begin{equation*} \cM \ = \ \cD_0 \otimes_{U(\fk)} \bC_0\,. \end{equation*} Using this description we view $\cM$ as a filtered module with the filtration induced from $\cD_0$. We denote this filtration by $F'_\bullet \cM$. This filtration is generated by the rank one $\cO_X$-module $F'_0\cM$. Let us identify $\fg$ with $\fg^*$, write $\cN$ for the nilpotent cone in $\fg$ and write $\mu: T^*X\to \cN$ for the moment map. We then have \begin{thm} \label{thm:split} The filtration $F'$ on $\cM$ coincides with the Hodge filtration $F$ and $\gr^{F}_\bullet\cM = \mu^*(\cO_\fp)$. \end{thm} \begin{cor} \label{split exactness} We have $\oh^k(\cB,\gr^{F}_\bullet\cM)=0$ for $k>0$ and $\Gamma(\cB,\gr^{F}_\bullet\cM) = \cO_{\cN\cap \fp}$. \end{cor} The rest of this section is devoted to the proof of this theorem and the corollary. We begin with \begin{lem} The filtered module $(\cM,F')$ is filtered self dual and $\gr^{F'}_\bullet\cM = \mu^*(\cO_\fp)$ is Cohen-Macaulay. \end{lem} It is perhaps helpful to recall that the associated graded of the Hodge filtration of any Hodge module is Cohen-Macaulay. The Cohen-Macaulay condition is necessary to have a good notion of filtered dual. \begin{proof} We begin with some preparatory statements. As the group is split, the Cartan subspace $\fa$ of $\fp$ is also a Cartan for $\fg$ and the little Weyl group of $\fa$ is the full Weyl group of $\fg$. 
Thus the restriction map \begin{equation} \label{invariant} \bC[\fg]^G \xrightarrow {\ \sim \ } \bC[\fp]^K \end{equation} is an isomorphism. By Kostant and Kostant-Rallis~\cite{KR}, the scheme-theoretic fibers of $\mf{g} \to \spec \mb{C}[\mf{g}]^G$ and $\mf{p} \to \spec \mb{C}[\mf{p}]^K$ are complete intersections and are reduced. By~\eqref{invariant} we see that \begin{equation*} \cO_\cN \otimes_{\cO_\fg}^\bL \cO_\fp \ = \ \cO_\cN \otimes \wedge^\bullet \fk = \cO_{\cN\cap\fp}\,. \end{equation*} We have $\mu^{-1}(\fp)=T^*_K\cB$, the union of conormal bundles of $K$-orbits. Thus, because the group is split, we have \begin{equation*} \codim_{T^*\cB} \mu^{-1}(\fp) = \dim K\,. \end{equation*} We conclude that the scheme $\mu^{-1}(\mf{p})$ is a complete intersection and its structure sheaf $\mu^*(\cO_\fp)$ has the following Koszul resolution \begin{equation} \label{koszul} \dots \to \cO_{T^*\cB}\{i\} \otimes \wedge^i \fk \to \cO_{T^*\cB}\{i-1\} \otimes \wedge^{i-1} \fk \to \dots\to \cO_{T^*\cB}\{1\} \otimes \fk \to \cO_{T^*\cB} \,. \end{equation} Let us now consider the filtered module $(\cM,F')$. It has the following filtered resolution \begin{equation} \label{resolution} \dots \to \cD_0\{i\} \otimes \wedge^i \fk \to \cD_0\{i-1\} \otimes \wedge^{i-1} \fk \to \dots\to \cD_0\{1\} \otimes \fk \to \cD_0 \,. \end{equation} We recall that $\cD_0\{i\}$ denotes the filtered module $\cD_0$ with filtration shifted so that it begins in degree $i$. To verify that the complex above is a resolution we pass to the associated graded complex. The graded resolution coincides with~\eqref{koszul}. Thus we conclude that~\eqref{resolution} is a filtered resolution of $(\cM,F')$ and, furthermore, that $\gr^{F'}_\bullet\cM = \mu^*(\cO_\fp)$. We now form the filtered dual $\bD(\cM,F')$ by making use of the resolution above. 
We obtain, writing $n=\dim(X)=\dim \fk$: \begin{equation*} \dots \to \cD_0\{-i\} \otimes \wedge^i \fk \to \dots\to \cD_0\{-n+1\} \otimes \wedge^{n-1} \fk\to \cD_0\{-n\} \otimes \wedge^n \fk \,. \end{equation*} Thus, we conclude that $\bD(\cM,F')=(\cM,F'\{-n\})$, i.e., that $(\cM,F')$ is filtered self dual. \end{proof} \begin{lem} If $F'_0\cM \subset F_0\cM$ then $F'_i\cM = F_i\cM$ for all $i$. \end{lem} \begin{proof} Our hypothesis implies that the identity map is a morphism of filtered modules $(\cM,F') \to (\cM,F)$, i.e., that $F'_i\cM \subset F_i\cM$ for all $i$. As both $(\cM,F')$ and $(\cM,F)$ are self dual filtered modules we obtain a morphism of filtered modules $(\cM,F) \to (\cM,F')$, i.e., that $F_i\cM \subset F'_i\cM$ for all $i$. Thus $F'_i\cM = F_i\cM$ for all $i$. \end{proof} To conclude the proof it remains to show that $F'_0\cM \subset F_0\cM$. The module $F'_0\cM$ is generated by the minimal $K$-type $\bC_0$. By~\cite[Theorem 4.5]{DV} the minimal $K$-type lies in $F_0\cM$. This concludes the proof of Theorem~\ref{thm:split}. \section{Proof of Theorem~\ref{main}} \label{tempered} Let us now consider a general tempered Hodge module $\cM$. Then, by the considerations in section~\ref{thc}, we have $\cM=j_{!*}\gamma$ where $\gamma$ is a clean local system on an orbit $Q$ with the special properties specified in that section. Using the notation in section~\ref{thc} we write $S=p(Q)$ for the closed $K$-orbit. Then $\bar Q=p^{-1}(S)$ and we have a $K$-equivariant smooth fibration $\bar p: \bar Q \to S$. We further write $i: \bar Q \to \mc{B}$ for the closed embedding and $\tilde j: Q \to \bar Q$ for the open embedding. Let us start by considering $\cN= \tilde j_{!*}(\gamma)$. Consider the fiber $\cB_L$ of $\bar p$ and the restrictions $\cN|_{\cB_L}$ and $\gamma_L = \gamma|_{Q\cap \mc{B}_L}$. As the restriction is non-characteristic, we have $\cN|_{\cB_L}=\tilde j_{!*}\gamma_L$. 
After adjusting the cohomological shift and the weights, the sheaf $\cN|_{\cB_L}=\tilde j_{!*}\gamma_L$ is the tempered spherical principal series sheaf considered in section~\ref{split} for $(\fl,K_L)$, and so we know that its Hodge filtration is generated by $F_0\cN|_{\cB_L}$. Thus the same is true about $\cN$ and we conclude that its Hodge filtration is generated by $F_0\cN$. Let us write $\cI$ for the ideal sheaf of $\bar Q$ in $\cB$. As $\cM=i_*\cN$ and $i$ is an inclusion of a closed smooth subvariety \[ \cM = i_*\cN = \cD_\cB/\cD_\cB\cI \otimes_{\cO_{\bar Q}} \cN\otimes_{\cO_{\bar Q}}\omega_{\bar Q/\cB} \,. \] Let us write $c=\codim{Q}$. Then, by the formula for filtered proper push-forwards, we have \begin{equation*} F_p\cM =F_pi_*\cN = \sum_{r+k\leq p-c} F_k\mc{D}_{\mc{B}, \lambda}/ F_k\mc{D}_{\mc{B}, \lambda}\,\cI_{\bar Q}\otimes_{\cO_{\bar Q}}F_r\cN\otimes_{\cO_{\bar Q}}\omega_{\bar Q/\cB}. \end{equation*} From this formula and the fact that the Hodge filtration of $\cN$ is generated by $F_0\cN$ we conclude that the Hodge filtration of $\cM$ is generated by $F_c\cM$. \section{Proof of Theorem \ref{signature}} We will prove Theorem \ref{signature} by appealing to the known unitarity of tempered $(\fg, K)$-modules. We first recall the relationship between $\fu_\mb{R}$- and $\fg_\mb{R}$-invariant forms, cf., \cite[\S 12]{ALTV}. Since the flag variety $\mc{B}$ and the universal Cartan $H$ are both canonically associated with $G$, the involution $\theta \colon G \to G$ induces compatible involutions on both. We will write $\delta \colon H \to H$ for the induced involution on $H$ and $\mf{h}$; note that $\delta$ preserves the positive roots in $\mb{X}^*(H)$ and, in the notation of \cite[\S 2.4]{DV}, is equal to $\theta_Q$ for any closed orbit $Q$. The involution $\theta \colon \mc{B} \to \mc{B}$ lifts non-canonically to the $H$-torsor $\tilde{\mc{B}}$, intertwining the action of $\delta$ on $H$; we fix such a lift in what follows. 
For any $(\mc{D}_\lambda, K)$-module $\mc{M}$, the pullback $\theta^*\mc{M}$ is a $(\mc{D}_{\delta \lambda}, K)$-module, and we have an isomorphism \[ \Gamma(\mc{M}) \overset{\sim}\to \Gamma(\theta^*\mc{M}) \] intertwining the involution $\theta \colon U(\mf{g}) \to U(\mf{g})$. For an irreducible $(\mc{D}_\lambda, K)$-module $j_{!*}\gamma$ associated with a $K$-orbit $Q$ and $(\mf{h}, H^{\theta_Q} = T^\theta)$-module $(\lambda, \Lambda)$, we have $\theta^*j_{!*}\gamma = j_{!*}\theta^*\gamma$, where $\theta^*\gamma$ is the local system on $\theta(Q)$ with parameter $(\delta \lambda, \delta\Lambda)$. If $j_{!*}\gamma$ is tempered, then $(\delta \lambda, \delta\Lambda) = (\lambda, \Lambda)$ and $\theta(Q) = Q$, so $\theta^*j_{!*}\gamma \cong j_{!*}\gamma$. We will fix such an isomorphism so that the induced map \[ \theta \colon \Gamma(j_{!*}\gamma) \to \Gamma(j_{!*}\gamma) \] is equal to the identity on the minimal $K$-type (which is unique since the representation is tempered). In particular, the above map is necessarily an involution. Suppose now that $\lambda \in \mf{h}^*_\mb{R}$ and that $j_{!*}\gamma$ is an irreducible tempered $(\mc{D}_\lambda, K)$-module, equipped with its polarization $S$ as a pure Hodge module. We therefore have the $\mf{u}_\mb{R}$-invariant form $\Gamma(S)$ on $\Gamma(j_{!*}\gamma)$; it follows immediately that the form \[ \langle u, v \rangle := \Gamma(S)(u, \theta v) \] is $\mf{g}_\mb{R}$-invariant. Since the tempered representation $\Gamma(j_{!*}\gamma)$ is unitary and $\Gamma(S)$ is positive definite on the minimal $K$-type by \cite[Theorem 4.3 and Proposition 4.7]{DV}, we deduce the following. \begin{prop} \label{prop:theta signature} For $\epsilon = \pm 1$, the polarization $\Gamma(S)$ is $\epsilon$-definite on the $\epsilon$-eigenspace $\Gamma(j_{!*}\gamma)^{\epsilon\theta}$. \end{prop} Indeed, for $v \in \Gamma(j_{!*}\gamma)^{\epsilon\theta}$ we have \[ \langle v, v \rangle = \Gamma(S)(v, \theta v) = \epsilon\, \Gamma(S)(v, v), \] and the $\mf{g}_\mb{R}$-invariant form $\langle \cdot, \cdot \rangle$ is positive definite by unitarity and our normalization on the minimal $K$-type. We now claim the following. 
\begin{prop} \label{prop:hodge parity} The associated graded of the Hodge filtration on $\Gamma(j_{!*}\gamma)^\theta$ (resp., $\Gamma(j_{!*}\gamma)^{-\theta}$) is concentrated in degrees congruent to $c$ (resp., $c+1$) modulo $2$. \end{prop} \begin{proof}[Proof of Theorem \ref{signature}] This follows immediately from Propositions \ref{prop:theta signature} and \ref{prop:hodge parity}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:hodge parity}] Consider first the case of a tempered spherical principal series. By Corollary \ref{split exactness}, \[ \gr^F_\bullet \Gamma(j_{!*}\gamma) = \mc{O}_{\mc{N} \cap \mf{p}} \] is naturally a graded quotient of $\Sym(\mf{p})$. Since $\theta$ acts on $\mf{p}$ with eigenvalue $-1$ by definition, the result in this case follows. For the general tempered case, consider as in \S\ref{tempered} the smooth fibration \[ \bar{p} \colon \bar{Q} \to S\] and the Hodge module $\tilde{j}_{!*}\gamma$ on $\bar{Q}$. We have \begin{equation} \label{eq:hodge parity 1} \Gamma(\mc{B}, \gr^F_\bullet j_{!*}\gamma) = \Gamma(S, \mc{F}\{c\}), \end{equation} where \[ \mc{F} = \Sym(\mc{N}_{S/\mc{B}_{Q_L}}) \otimes \omega_{S/\mc{B}_{Q_L}} \otimes \bar{p}_{\bigcdot} \gr^F_\bullet \tilde{j}_{!*}\gamma,\] $\{c\}$ denotes a grading shift, and $\bar{p}_{\bigcdot}$ denotes the sheaf-theoretic pushforward. Since $S$ is closed, it is fixed pointwise by $\theta$, so $\theta$ acts on the sheaf $\mc{F}$. Since $\bar{p}_{\bigcdot} \tilde{j}_{!*}\gamma$ is fibrewise a tempered spherical principal series for $L$, the $+1$ (resp., $-1$) eigenspace of $\gr^F_\bullet \tilde{j}_{!*}\gamma$ is concentrated in even (resp., odd) degrees, as shown above. Moreover, $\theta$ acts on the normal bundle $\mc{N}_{S/\mc{B}_{Q_L}}$ with eigenvalue $-1$, so the same is true of $\mc{F}$. The proposition now follows from \eqref{eq:hodge parity 1}. \end{proof}
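Let us record the elementary parity observation used twice above: if $\theta$ acts on a vector space $V$ by $-1$, then it acts on $\Sym^k(V)$ by $(-1)^k$, so that
\[
\big(\Sym(V)\big)^{\theta} = \bigoplus_{k \text{ even}} \Sym^k(V), \qquad \big(\Sym(V)\big)^{-\theta} = \bigoplus_{k \text{ odd}} \Sym^k(V),
\]
and the same eigenspace decomposition passes to any graded, $\theta$-stable quotient, such as $\mc{O}_{\mc{N} \cap \mf{p}}$.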
TITLE: What is the proof behind this identity used in proving the No-Cloning Theorem QUESTION [0 upvotes]: I am trying to follow an elementary proof of the No-Cloning theorem. The source I am using implies that $\langle \space(\lvert\psi\rangle \otimes |\psi\rangle) \space | \space (|\phi\rangle\otimes |\phi\rangle)\space \rangle = \langle\psi\rvert\phi\rangle^2$. I have limited experience with tensor calculus and so do not understand how this step is made. Why is this true? Alternatively, if anyone could direct me to a source that explains it, that would also be appreciated. N.B. This identity appears to be implied by the source rather than explicitly stated. If this is incorrect please do say. REPLY [2 votes]: This simply boils down to the induced definition of the scalar product on a tensor product space† of inner-product spaces $\mathcal{G}$ and $\mathcal{H}$: if $χ,ξ\in \mathcal{G}$ and $φ,ψ\in\mathcal{H}$, then $$ \langle χ\otimes φ, ξ\otimes ψ\rangle_{\mathcal{G}\otimes\mathcal{H}} := \langle χ,ξ\rangle_{\mathcal{G}} \cdot \langle φ,ψ\rangle_{\mathcal{H}}. $$ Why is that the definition? Well, you want this scalar product to have certain properties, which are easily fulfilled by that definition. In particular, both the tensor product and the scalar product must be bilinear (or sesquilinear for a complex scalar product). This already largely pins the definition down to the one given above, but I don't think it uniquely determines the prefactor. You can easily derive the exact definition, though, if you consider bases $\{g_i|i\in\mathscr{I}_\mathcal{G}\}\subset \mathcal{G}$ and $\{h_k|k\in\mathscr{I}_\mathcal{H}\} \subset \mathcal{H}$. These straightforwardly induce a basis of $\mathcal{G}\otimes\mathcal{H}$ as $$ \{g_i\otimes h_k | i\in\mathscr{I}_\mathcal{G},k\in\mathscr{I}_\mathcal{H}\}. $$ Now, we usually want all bases to be orthonormal with respect to the inner products on the corresponding spaces. 
For the tensor product, this means: $$ \langle g_i\otimes h_k,\, g_j\otimes h_l\rangle \stackrel!= δ_{ij}⋅δ_{kl}. $$ Therefore (written in summation convention) $$\begin{align} \langle χ\otimes φ, ξ\otimes ψ\rangle_{\mathcal{G}\otimes\mathcal{H}} =& \langle χ_i g_i\otimes φ_k h_k, ξ_jg_j\otimes ψ_lh_l\rangle \\=& χ_iφ_k\bar ξ_j\bar ψ_l \cdot \langle g_i\otimes h_k, g_j\otimes h_l\rangle \\=& χ_i\bar ξ_j \cdot φ_k\bar ψ_l \cdot δ_{ij}\cdot δ_{kl} \\=& χ_i\bar ξ_jδ_{ij}\cdot φ_k\bar ψ_l δ_{kl} \\=& \langle χ_i g_i, ξ_jg_j\rangle\cdot \langle φ_kh_k,ψ_l h_l\rangle \\=& \langle χ, ξ\rangle_{\mathcal{G}} \cdot \langle φ,ψ\rangle_{\mathcal{H}}. \end{align}$$ †Of course, this definition doesn't yet cover the entire space: a general vector in a tensor product can only be written as a sum over such $χ\otimes φ$-pairs, but the extension follows readily from the sesquilinearity property.
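For a quick numerical sanity check of the identity in the question, one can realize the tensor product of coordinate vectors concretely as the Kronecker product. This is a sketch using NumPy; note that `np.vdot(x, y)` conjugates its first argument, matching the convention of the derivation above.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_state(dim):
    """A random normalized complex vector in C^dim."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi = random_state(2)
phi = random_state(2)

# np.kron realizes the tensor product of coordinate vectors,
# and np.vdot(x, y) = sum_i conj(x_i) * y_i is the inner product.
lhs = np.vdot(np.kron(psi, psi), np.kron(phi, phi))
rhs = np.vdot(psi, phi) ** 2

assert np.isclose(lhs, rhs)  # <psi⊗psi | phi⊗phi> = <psi|phi>^2
```

The assertion holds for any choice of `psi` and `phi`, since `np.vdot(np.kron(a, b), np.kron(c, d))` factors as `np.vdot(a, c) * np.vdot(b, d)`, which is precisely the product formula derived above.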